**Crossing of cheques** Crossing of cheques: A crossed cheque is a cheque that has been marked with an instruction on the way it is to be redeemed. A common instruction is for the cheque to be deposited directly to an account with a bank and not to be immediately cashed by the holder over the bank counter. The format and wording vary between countries, but generally two parallel lines are placed either vertically across the cheque or on the top left-hand corner of the cheque. By using crossed cheques, cheque writers can effectively protect the instrument from being stolen or cashed by unauthorized persons. Cheques can be open (uncrossed) or crossed.

Types of crossing: General crossing. A generally crossed cheque bears two parallel transverse lines, optionally with the words 'and company' or '& Co.' (or any abbreviation of them) on the face of the cheque between the lines, usually at the top left corner or at any place in the approximate left half (in width) of the cheque. In the UK, the crossing may be placed across the cheque by the person who originally wrote it (the drawer), or it can legitimately be added by the person the cheque is payable to (the payee), or even by the bank that the cheque is being paid into. Generally crossed cheques can only be paid into a bank account, so that the beneficiary can be traced. Crossing alone does not affect the negotiability of the instrument.

Types of crossing: Account payee. Adding an 'account payee' crossing to a cheque increases its security in that the cheque cannot be cashed at a bank counter but must be paid into an account in exactly the same name as the payee or endorsee indicated on the cheque.

Types of crossing: Not negotiable. The words 'not negotiable' can be added to a crossing. The effect of such a crossing is that it removes the most important characteristic of a negotiable instrument (according to section 123): the holder of such a cheque cannot acquire a better title to it than the person from whom it was obtained.

Types of crossing: Restrictive or account payee crossings. Where some customary instruction is written between the two parallel transverse lines (constituting the crossing) that imposes certain restrictions on the collecting or paying banker, it is called a restrictive crossing. A crossing may have the name of a specific banker added between the lines; a cheque with such a crossing can only be paid into an account at that bank. The beneficiary bank can add an additional crossing to allow another bank, acting as its agent in collecting payment on cheques, to be paid the cheque on its behalf. For example, if "State Bank of India" is written between the lines, the restriction mandates that the cheque be paid only through State Bank of India (acting as collecting banker).

Consequence of a bank not complying with the crossing: A bank's failure to comply with a crossing amounts to a breach of contract with its customer. The bank may not be able to debit the drawer's account and may be liable to the true owner for the loss.
**Skip (audio playback)** Skip (audio playback): A skip occurs when a phonograph (gramophone), cassette tape or compact disc player malfunctions or is disturbed so as to play incorrectly, causing a break in sound or a jump to another part of the recording.

Vinyl gramophone records: Vinyl records are easily scratched, and vinyl readily acquires a static charge, attracting dust that is difficult to remove completely. Dust and scratches cause audio clicks and pops and, in extreme cases, they can cause the needle (stylus) to skip over a series of grooves, or worse yet, cause the needle to skip backwards, creating an unintentional locked groove that repeats the same 1.8 seconds (at 33⅓ RPM) or 1.3 seconds (at 45 RPM) of track over and over again (the duration of one revolution of the disc). Locked grooves are not uncommon and are even heard occasionally in broadcasts. The locked groove gave rise to the expression "broken record", referring to someone who continually repeats the same statement with little if any variation.

Compact discs: A skip or jump is when the laser of a CD player cannot read the faulty groove or block of data. Skips are usually caused by marks blocking the path of the beam to the disc, e.g. a finger mark, hair, dirt in general, or a scratch. Since the read mechanism has very little contact with the disc's surface and the data itself is not on the outer layer of the disc, the blockage is not a physical issue as with a record, but rather a reflective one.

Compact discs: Basic players. Early CD players were very basic in nature. A laser tracks the blocks of data from the centre of the disc outwards, while the disc itself revolves at a variable speed between a starting speed of 495 RPM and a minimum finishing speed of 212 RPM. Generally, one cycle constitutes one block of data. If there is a faulty block of data, the player may do one of the following:

- repeat the previous block of audio;
- skip the faulty block; or
- try and retry to read it, causing a stopping and starting of the music.

A player may utilise one or more of these techniques, depending on how faulty the data is. In the case of severe, irrecoverable damage to the data, the player may try to rescan the disc to relocate its position. In this case, the machine may make a series of audible chirping noises as the laser moves from the faulty block to the data information area and back again.

Compact discs: Later players. When CD players began to be built into battery-powered portable machines and vehicles, a skip could happen from even simple movement, such as walking or a vehicle jerking, so a strategy was needed to prevent this. After various techniques were tested and failed, the most successful and popular method to date proved to be spinning the disc faster in order to read a chunk of the data into memory while playing. This meant that the player itself could concentrate on reading, while the software controlling the buffers and memory distribution could also act as the audio feed. In the case of a minor error, the disc's rotation would again speed up to facilitate several attempts to read the data.

Compact discs: A technique was also developed for testing data to prevent severe skipping. These two techniques were largely successful, unless of course the data was damaged beyond repair, in which case the audio may stop, or skip as before.

CD-ROM drives: In a computer, a CD-ROM drive is governed by the program controlling it. In most cases, the BIOS has rudimentary access to the drive for boot purposes, while operating systems usually come bundled with their own drivers.
The drive itself has very little built-in instruction, apart from direct commands such as "spin up" and "read data".

Compact discs: When playing a CD in the computer, the media player of choice gives instructions to the CD drive, whether through the operating system's drivers or by accessing the device's low-level interface itself. Usually, similar to modern standalone players, the media player will read audio into memory for later playback, especially given the extreme speeds used by CD-ROM drives to access raw data on other discs. Because of this, if there is a fault during playback, the player will already be performing a checksum to verify that the data read is correct. If it is wrong, the audio is usually stopped, depending on the player.

Cassette tapes: Cassette tape players can cause skips when the tape being played is worn or in some other way damaged. Since the tape is rolled into a reel, how a skip affects playback depends on how the tape runs across the reel. Indeed, some early artists such as the Beatles deliberately rearranged tape reels in order to produce loops used in their recordings.

Computer audio: Electronic media on a computer can often skip. Such media may include compressed and uncompressed audio, and video containing audio. Generally this cannot happen to musical instruction files such as MIDI or MOD files; however, depending on the circumstances, single notes may become "jammed" when the note-off message is not received by the playback device.

Computer audio: Computer skips can be caused by lack of available RAM or processing power, damaged storage media (CD, hard drive, etc.), a crash in the playback software, or a corrupted, incomplete or damaged audio file. Depending on the player and the operating system, a skip usually consists of a 50 ms, 300 ms, 500 ms or one-second loop, depending on the size or length of the chunk of data that is currently loaded into memory.

Skipping as a musical component: Compact disc skipping is prevalent in glitch music.
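The read-ahead buffering strategy described above for portable players can be sketched in a few lines of code. The following is a minimal illustration, not any vendor's actual firmware: blocks are read faster than real time into a queue, faulty blocks are retried a few times and then skipped, and the audio output drains the queue at playback rate. All names (read_block, AntiSkipBuffer) and the simulated 1% error rate are invented for the example.

```python
from collections import deque
import random

BLOCK_OK, BLOCK_BAD = 0, 1

def read_block(disc, index):
    """Hypothetical low-level read; returns (status, data)."""
    if random.random() < 0.01:            # simulate an occasionally unreadable block
        return BLOCK_BAD, None
    return BLOCK_OK, disc[index]

class AntiSkipBuffer:
    """Read ahead into memory so brief read failures don't interrupt the audio feed."""

    def __init__(self, disc, capacity=64, max_retries=3):
        self.disc = disc
        self.queue = deque(maxlen=capacity)   # buffered audio blocks
        self.pos = 0                          # next block index on the disc
        self.max_retries = max_retries

    def fill(self):
        """Spin faster than real time: read blocks until the buffer is full."""
        while len(self.queue) < self.queue.maxlen and self.pos < len(self.disc):
            for _ in range(self.max_retries):      # retry a faulty block a few times
                status, data = read_block(self.disc, self.pos)
                if status == BLOCK_OK:
                    self.queue.append(data)
                    break
            else:
                pass  # irrecoverable: skip the faulty block rather than stall playback
            self.pos += 1

    def next_audio_block(self):
        """Drain one block at playback rate; None means the disc is exhausted."""
        self.fill()
        return self.queue.popleft() if self.queue else None

disc = [f"block-{i}" for i in range(1000)]
player = AntiSkipBuffer(disc)
while (block := player.next_audio_block()) is not None:
    pass  # feed `block` to the DAC here
```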
**KCND3** KCND3: Potassium voltage-gated channel subfamily D member 3, also known as Kv4.3, is a protein that in humans is encoded by the KCND3 gene. It contributes to the cardiac transient outward potassium current (Ito1), the main contributing current to the repolarizing phase 1 of the cardiac action potential.

Function: Voltage-gated potassium (Kv) channels represent the most complex class of voltage-gated ion channels from both functional and structural standpoints. Their diverse functions include regulating neurotransmitter release, heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, smooth muscle contraction, and cell volume. Four sequence-related potassium channel genes – shaker, shaw, shab, and shal – have been identified in Drosophila, and each has been shown to have human homolog(s).

Function: Kv4.3 is a member of the potassium channel, voltage-gated, shal-related subfamily, members of which form voltage-activated A-type potassium ion channels and are prominent in the repolarization phase of the action potential. The gene gives rise to two isoforms of different sizes, which are encoded by alternatively spliced transcript variants.

Clinical significance: A gain of function of Kv4.3 is believed to cause Brugada syndrome, although this has been shown only indirectly, through mutations in the beta subunit KCNE3 that cause a gain of function of Kv4.3.
**Q fever** Q fever: Q fever or query fever is a disease caused by infection with Coxiella burnetii, a bacterium that affects humans and other animals. This organism is uncommon, but may be found in cattle, sheep, goats, and other domestic mammals, including cats and dogs. The infection results from inhalation of a spore-like small-cell variant, and from contact with the milk, urine, feces, vaginal mucus, or semen of infected animals. Rarely, the disease is tick-borne. The incubation period can range from 9 to 40 days. Humans are vulnerable to Q fever, and infection can result from even a few organisms. The bacterium is an obligate intracellular pathogenic parasite.

Signs and symptoms: The incubation period is usually two to three weeks. The most common manifestation is flu-like symptoms: abrupt onset of fever, malaise, profuse perspiration, severe headache, muscle pain, joint pain, loss of appetite, upper respiratory problems, dry cough, pleuritic pain, chills, confusion, and gastrointestinal symptoms, such as nausea, vomiting, and diarrhea. About half of infected individuals exhibit no symptoms. During its course, the disease can progress to an atypical pneumonia, which can result in a life-threatening acute respiratory distress syndrome, usually occurring during the first four to five days of infection. Less often, Q fever causes (granulomatous) hepatitis, which may be asymptomatic or become symptomatic with malaise, fever, liver enlargement, and pain in the right upper quadrant of the abdomen. This hepatitis often results in the elevation of transaminase values, although jaundice is uncommon. Q fever can also, rarely, result in retinal vasculitis. The chronic form of Q fever is virtually identical to endocarditis (i.e. inflammation of the inner lining of the heart), which can occur months or decades following the infection. It is usually fatal if untreated. However, with appropriate treatment, the mortality falls to around 10%. A minority of Q fever survivors develop Q fever fatigue syndrome after acute infection, one of the more well-studied post-acute infection syndromes. Q fever fatigue syndrome is characterised by post-exertional malaise and debilitating fatigue. People with Q fever fatigue syndrome frequently meet the diagnostic criteria for myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). Symptoms often persist years after the initial infection.

Diagnosis: Diagnosis is usually based on serology (looking for an antibody response) rather than looking for the organism itself. Serology allows the detection of chronic infection by the appearance of high levels of the antibody against the virulent form of the bacterium. Molecular detection of bacterial DNA is increasingly used. Contrary to most obligate intracellular parasites, Coxiella burnetii can be grown in an axenic culture, but its culture is technically difficult and not routinely available in most microbiology laboratories. Q fever can cause endocarditis (infection of the heart valves), which may require transoesophageal echocardiography to diagnose. Q fever hepatitis manifests as an elevation of alanine transaminase and aspartate transaminase, but a definitive diagnosis is only possible on liver biopsy, which shows the characteristic fibrin ring granulomas.
Prevention: Research done in the 1960s–1970s by French Canadian-American microbiologist and virologist Paul Fiset was instrumental in the development of the first successful Q fever vaccine. Protection is offered by Q-Vax, a whole-cell, inactivated vaccine developed by an Australian vaccine manufacturing company, CSL Limited. The intradermal vaccination is composed of killed C. burnetii organisms. Skin and blood tests should be done before vaccination to identify pre-existing immunity, because vaccinating people who already have immunity can result in a severe local reaction. After a single dose of vaccine, protective immunity lasts for many years. Revaccination is not generally required; annual screening is typically recommended. In 2001, Australia introduced a national Q fever vaccination program for people working in "at risk" occupations. Vaccinated or previously exposed people may have their status recorded on the Australian Q Fever Register, which may be a condition of employment in the meat processing industry or in veterinary research. An earlier killed vaccine had been developed in the Soviet Union, but its side effects prevented its licensing abroad. Preliminary results suggest vaccination of animals may be a method of control. Published trials proved that use of a registered phase I vaccine (Coxevac) on infected farms is a tool of major interest to manage or prevent early or late abortion, repeat breeding, anoestrus, silent oestrus, metritis, and decreases in milk yield when C. burnetii is the major cause of these problems.

Treatment: Treatment of acute Q fever with antibiotics is very effective. Commonly used antibiotics include doxycycline, tetracycline, chloramphenicol, ciprofloxacin, and ofloxacin; the antimalarial drug hydroxychloroquine is also used. Chronic Q fever is more difficult to treat and can require up to four years of treatment with doxycycline and quinolones or doxycycline with hydroxychloroquine. If a person has chronic Q fever, doxycycline and hydroxychloroquine will be prescribed for at least 18 months. Q fever in pregnancy is especially difficult to treat because doxycycline and ciprofloxacin are contraindicated in pregnancy. The preferred treatment for pregnancy and children under the age of eight is co-trimoxazole.

Epidemiology: The pathogenic agent is found worldwide, with the exception of New Zealand. The bacterium is extremely hardy and virulent: a single organism is able to cause an infection. Common sources of infection are the inhalation of contaminated dust, contact with contaminated milk, meat, or wool, and particularly birthing products. Ticks can transfer the pathogenic agent to other animals. Transfer between humans seems extremely rare and has so far been described in very few cases. Some studies have shown more men to be affected than women, which may be attributed to different employment rates in typical professions.

Epidemiology: "At risk" occupations include: veterinary personnel; stockyard workers; farmers; sheep shearers; animal transporters; laboratory workers handling potentially infected veterinary samples or visiting abattoirs; people who cull and process kangaroos; and hide (tannery) workers.

History: Q fever was first described in 1935 by Edward Holbrook Derrick in slaughterhouse workers in Brisbane, Queensland.
The "Q" stands for "query" and was applied at a time when the causative agent was unknown; it was chosen over suggestions of abattoir fever and Queensland rickettsial fever, to avoid directing negative connotations at either the cattle industry or the state of Queensland.The pathogen of Q fever was discovered in 1937, when Frank Macfarlane Burnet and Mavis Freeman isolated the bacterium from one of Derrick's patients. It was originally identified as a species of Rickettsia. H.R. Cox and Gordon Davis elucidated the transmission when they isolated it from ticks found in the US state of Montana in 1938. It is a zoonotic disease whose most common animal reservoirs are cattle, sheep, and goats. Coxiella burnetii – named for Cox and Burnet – is no longer regarded as closely related to the Rickettsiae, but as similar to Legionella and Francisella, and is a Gammaproteobacterium. Society and culture: An early mention of Q fever was important in one of the early Dr. Kildare films (1939, Calling Dr. Kildare). Kildare's mentor Dr. Gillespie (Lionel Barrymore) tires of his protégé working fruitlessly on "exotic diagnoses" ("I think it's Q fever!") and sends him to work in a neighborhood clinic, instead.Q fever was also highlighted in an episode of the U.S. television medical drama House ("The Dig", season seven, episode 18). Society and culture: Biological warfare C. burnetii has been used to develop biological weapons.The United States investigated it as a potential biological warfare agent in the 1950s, with eventual standardization as agent OU. At Fort Detrick and Dugway Proving Ground, human trials were conducted on Whitecoat volunteers to determine the median infective dose (18 MICLD50/person i.h.) and course of infection. The Deseret Test Center dispensed biological Agent OU with ships and aircraft, during Project 112 and Project SHAD. As a standardized biological, it was manufactured in large quantities at Pine Bluff Arsenal, with 5,098 gallons in the arsenal in bulk at the time of demilitarization in 1970.C. burnetii is currently ranked as a "category B" bioterrorism agent by the CDC. It can be contagious, and is very stable in aerosols in a wide range of temperatures. Q fever microorganisms may survive on surfaces up to 60 days. It is considered a good agent in part because its ID50 (number of bacilli needed to infect 50% of individuals) is considered to be one, making it the lowest known. In animals: Q fever can affect many species of domestic and wild animals, including ruminants (cattle, sheep, goats, bison, deer species...), carnivores (dogs, cats, seals...), rodents, reptiles and birds. However, ruminants (cattle, goats, and sheep) are the most frequently affected animals, and can serve as a reservoir for the bacteria. Clinical signs In contrast to humans, though a respiratory and cardiac infection could be experimentally reproduced in cattle, the clinical signs mainly affect the reproductive system. Q fever in ruminants is, therefore, mainly responsible for abortions, metritis, retained placenta, and infertility. In animals: The clinical signs vary between species. In small ruminants (sheep and goats), it is dominated by abortions, premature births, stillbirths, and the birth of weak lambs or kids. One of the characteristics of abortions in goats is that they are very frequent and clustered in the first year or two after contamination of the farm. This is known as an abortion storm.In cattle, although abortions also occur, they are less frequent and more sporadic. 
The clinical picture in cattle is instead dominated by nonspecific signs such as placental retention, metritis, and consequent fertility disorders.

In animals: Epidemiology. With the exception of New Zealand, which is currently free of Q fever, the disease is present throughout the world. Numerous epidemiological surveys have been carried out. They have shown that about one in three cattle farms and one in four sheep or goat farms are infected, but wide variations are seen between studies and countries. In China, Iran, Great Britain, Germany, Hungary, the Netherlands, Spain, the US, Belgium, Denmark, Croatia, Slovakia, the Czech Republic, Serbia, Slovenia, and Jordan, for example, more than 50% of cattle herds were infected with Q fever. Infected animals shed the bacteria by three routes: genital discharge, faeces, and milk. Excretion is greatest at the time of parturition or abortion, and placentas and aborted fetuses are the main sources of bacteria, particularly in goats.

In animals: As C. burnetii is small and resistant in the environment, it is easily airborne and can be transmitted from one farm to another, even one several kilometres away.

In animals: Control. Biosecurity measures can be derived from the epidemiological data: the spread of manure from infected farms should be avoided in windy conditions, the level of hygiene must be very high during parturition, and fetal membranes and fetuses must be collected and destroyed as soon as possible. As for medical measures, a vaccine for cattle, goats and sheep exists. It reduces clinical expression such as abortions and decreases excretion of the bacteria by the animals, leading to control of Q fever in herds. In addition, vaccination of herds against Q fever has been shown to reduce the risk of human infection.
**Position (poker)** Position (poker): Position in poker refers to the order in which players are seated around the table and the related poker strategy implications. Players who act first are in "early position"; players who act later are in "late position"; players who act in between are in "middle position". A player "has position" on opponents acting before him and is "out of position" to opponents acting after him. Because players act in clockwise order, a player "has position" on opponents seated to his right, except when the opponent has the button and in certain cases in the first betting round of games with blinds.

Position in Texas hold 'em: The primary advantage held by a player in late position is that he will have more information with which to make better decisions than players in early position, who have to act first, without the benefit of this extra information. This positional advantage has led many players in heads-up play to raise on the button with an extremely wide range of hands. Also, as earlier opponents fold, the probability of a hand being the best goes up as the number of opponents goes down. The blinds are the least desirable positions because a player is forced to contribute to the pot and must act first on all betting rounds after the flop. Although the big blind has a big advantage on the first round of betting, it is on average the biggest money-losing position.

Texas hold 'em example: There are 10 players playing $4/$8 fixed limit. Alice pays the $2 small blind. Bob pays the $4 big blind. Carol is under the gun (first to act). If Carol has a hand like K♥ J♠, she may choose to fold. With 9 opponents remaining to act, there is approximately a 40% chance that at least one of them will have a better hand than Carol's, such as A-A, K-K, Q-Q, J-J, A-K, A-Q, A-J or K-Q. And even if no one does, seven of them (all but the two players in the blinds) will have position on Carol in the next three betting rounds.

Texas hold 'em example: Now suppose instead that David, in the cut-off position (to the right of the button), has the same K♥ J♠ and all players fold to him. In this situation, only three opponents are left to act, so the odds that one of them has a better hand are considerably lower (only around 16%). Moreover, two of those three (Alice and Bob) will be out of position to David on later betting rounds. A common play would be for David to raise and hope that the button (the only player who has position on David) folds. David's raise might simply steal the blinds if they don't have playable hands, but if they do play, David will be in good shape to take advantage of his position in later betting rounds.
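The quoted probabilities (about 40% against nine opponents and 16% against three) can be sanity-checked with a quick Monte Carlo simulation. The sketch below is an illustration under one simplifying assumption: "a better hand than Carol's" is taken to mean exactly the fixed list quoted above (A-A, K-K, Q-Q, J-J, A-K, A-Q, A-J, K-Q). All function and variable names are invented for the example.

```python
import random
from itertools import product

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r, s in product(RANKS, SUITS)]

# Starting hands the text counts as better than K-J.
PREMIUM_PAIRS = set("AKQJ")
PREMIUM_OFFPAIRS = {frozenset(p) for p in (("A", "K"), ("A", "Q"), ("A", "J"), ("K", "Q"))}

def is_premium(c1, c2):
    """True if the two hole cards form one of the listed premium hands."""
    r1, r2 = c1[0], c2[0]
    if r1 == r2:
        return r1 in PREMIUM_PAIRS
    return frozenset((r1, r2)) in PREMIUM_OFFPAIRS

def prob_any_premium(n_opponents, trials=200_000):
    """Estimate P(at least one of n opponents is dealt a premium hand)."""
    deck = [c for c in DECK if c not in ("Kh", "Js")]  # hero holds K♥ J♠
    hits = 0
    for _ in range(trials):
        cards = random.sample(deck, 2 * n_opponents)   # deal two cards per opponent
        if any(is_premium(cards[i], cards[i + 1]) for i in range(0, len(cards), 2)):
            hits += 1
    return hits / trials

print(f"9 opponents (under the gun): {prob_any_premium(9):.2f}")  # ~0.40
print(f"3 opponents (cut-off):       {prob_any_premium(3):.2f}")  # ~0.16
```

With enough trials the two printed estimates land near the 40% and 16% figures quoted above; the simulation also accounts for the slight dependence between opponents' hands, which a naive per-player calculation would ignore.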
**Serre's conjecture II (algebra)** Serre's conjecture II (algebra): In mathematics, Jean-Pierre Serre conjectured the following statement regarding the Galois cohomology of a simply connected semisimple algebraic group. Namely, he conjectured that if G is such a group over a perfect field F of cohomological dimension at most 2, then the Galois cohomology set H1(F, G) is zero.

Serre's conjecture II (algebra): A converse of the conjecture holds: if the field F is perfect and if the cohomology set H1(F, G) is zero for every semisimple simply connected algebraic group G, then the p-cohomological dimension of F is at most 2 for every prime p. The conjecture holds in the case where F is a local field (such as a p-adic field) or a global field with no real embeddings (such as Q(√−1)). This is a special case of the Kneser–Harder–Chernousov Hasse principle for algebraic groups over global fields. (Note that such fields do indeed have cohomological dimension at most 2.) The conjecture also holds when F is finitely generated over the complex numbers and has transcendence degree at most 2. The conjecture is also known to hold for certain groups G. For special linear groups, it is a consequence of the Merkurjev–Suslin theorem. Building on this result, the conjecture holds if G is a classical group. The conjecture also holds if G is one of certain kinds of exceptional group.
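For reference, the two directions described above can be written compactly in standard notation; this is only a restatement of the text, with cd denoting cohomological dimension.

```latex
% Serre's conjecture II: F a perfect field, G a simply connected
% semisimple algebraic group over F.
\[
  \operatorname{cd}(F) \leq 2 \quad \Longrightarrow \quad H^{1}(F, G) = 0.
\]
% Converse (also stated above): if H^1(F, G) = 0 for every semisimple
% simply connected G over a perfect field F, then cd_p(F) <= 2 for
% every prime p.
```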
**Resorcinol** Resorcinol: Resorcinol (or resorcin) is a phenolic compound. It is an organic compound with the formula C6H4(OH)2, one of three isomeric benzenediols, the 1,3-isomer (or meta-isomer). Resorcinol crystallizes from benzene as colorless needles that are readily soluble in water, alcohol, and ether, but insoluble in chloroform and carbon disulfide.

Production: Resorcinol is produced in several steps from benzene, starting with dialkylation with propylene to give 1,3-diisopropylbenzene. Oxidation and Hock rearrangement of this disubstituted arene gives acetone and resorcinol.

Production: Resorcinol is an expensive chemical, produced in only a very few locations around the world (to date only four commercial plants are known to be operative: in the United States, Germany, China and Japan), and as such it is the determining factor in the cost of PRF adhesives. Many additional routes to resorcinol exist. It was formerly produced by disulfonation of benzene followed by hydrolysis of the 1,3-disulfonate; this method has been discarded because it cogenerates so much sulfur-containing waste. Resorcinol can also be produced when any of a large number of resins (such as galbanum and asafoetida) are melted with potassium hydroxide, or by the distillation of Brazilwood extract. It may be synthesized by melting 3-iodophenol or phenol-3-sulfonic acid with potassium carbonate. Diazotization of 3-aminophenol or 1,3-diaminobenzene followed by hydrolysis provides yet another route. Many ortho- and para-compounds of the aromatic series (for example, the bromophenols and benzene-para-disulfonic acid) also yield resorcinol on fusion with potassium hydroxide.

Reactions: Partial hydrogenation of resorcinol gives dihydroresorcinol, also known as 1,3-cyclohexanedione. Resorcinol reduces Fehling's solution and ammoniacal silver solutions. It does not form a precipitate with lead acetate solution, as the isomeric pyrocatechol does. Iron(III) chloride colors its aqueous solution a dark violet, and bromine water precipitates tribromoresorcinol. These properties are what give it its use as a colouring agent for certain chromatography experiments.

Reactions: Sodium amalgam reduces it to dihydroresorcin, which when heated to 150 to 160 °C with concentrated barium hydroxide solution gives γ-acetylbutyric acid. When fused with potassium hydroxide, resorcinol yields phloroglucin, pyrocatechol, and diresorcinol. It condenses with acids or acid chlorides, in the presence of dehydrating agents, to oxyketones; for example, with zinc chloride and glacial acetic acid at 145 °C it yields resacetophenone, (HO)2C6H3COCH3. With the anhydrides of dibasic acids, it yields fluoresceins. When heated with calcium chloride–ammonia to 200 °C it yields meta-dioxydiphenylamine. With sodium nitrite it forms a water-soluble blue dye, which is turned red by acids and is used as a pH indicator under the name of lacmoid. It condenses readily with aldehydes, yielding with formaldehyde, on the addition of catalytic hydrochloric acid, methylene diresorcin, [(HO)C6H3(O)]2CH2. Reaction with chloral hydrate in the presence of potassium bisulfate yields the lactone of tetra-oxydiphenyl methane carboxylic acid. In alcoholic solution it condenses with sodium acetoacetate to form 4-methylumbelliferone. In addition to electrophilic aromatic substitution, resorcinol (and other polyols) undergoes nucleophilic substitution via the enone tautomer.
Nitration with concentrated nitric acid in the presence of cold concentrated sulfuric acid yields trinitroresorcinol (styphnic acid), an explosive.

Occurrence and use: Derivatives of resorcinol are found in different natural sources. Alkylresorcinols are found in rye, and polyresorcinols are found as pseudotannins in plants.

Occurrence and use: Adhesives. Resorcinol is mainly used in the production of resins. As a mixture with phenol, it condenses with formaldehyde to afford adhesives. Such resins are used as adhesives in the rubber industry, and others are used for wood glue. Related to its conversion to resins with formaldehyde, resorcinol is the starting material for resorcinarene rings.

Occurrence and use: Medical uses. It is present in over-the-counter topical acne treatments at 2% or less concentration, and in prescription treatments at higher concentrations. Monoacetylresorcinol, C6H4(OH)(O–COCH3), is used under the name of Euresol. Resorcinol is used in hidradenitis suppurativa, with limited evidence showing it can help with resolution of the lesions. It is one of the active ingredients in products such as Resinol, Vagisil, and Clearasil.

Occurrence and use: In the 1950s and early 1960s the British Army used it, in the form of a paste applied directly to the skin, to treat soldiers with chronic acne. One such place where this treatment was given was the Cambridge Military Hospital, Aldershot, England. It was not always successful. 4-Hexylresorcinol is an anesthetic found in throat lozenges.

Occurrence and use: Chemical uses. Resorcinol is used as a chemical intermediate for the synthesis of pharmaceuticals and other organic compounds. It is used in the production of diazo dyes and plasticizers and as a UV absorber in resins. It is an analytical reagent for the qualitative determination of ketoses (Seliwanoff's test). It is the starting material for the initiating explosive lead styphnate.

Related compounds: Resazurin, C12H7NO4, obtained by the action of nitrous acid on resorcinol, forms small dark red crystals possessing a greenish metallic glance. When dissolved in concentrated sulfuric acid and warmed to 210 °C, the solution, on pouring into water, yields a precipitate of resorufin, C12H7NO3, an oxyphenoxazone, which is insoluble in water but readily soluble in hot concentrated hydrochloric acid and in solutions of caustic alkalis. The alkaline solutions are of a rose-red color and show a cinnabar-red fluorescence. A tetrabromresorufin is used as a dyestuff under the name of Fluorescent Resorcin Blue.

Related compounds: Thioresorcinol is obtained by the action of zinc and hydrochloric acid on meta-benzenedisulfonyl chloride. It melts at 27 °C and boils at 243 °C. Resorcinol disulfonic acid, (HO)2C6H2(HSO3)2, is a deliquescent mass obtained by the action of sulfuric acid on resorcinol; it is readily soluble in water and ethanol. Resorcinol is also a common scaffold found in a class of anticancer agents, some of which (luminespib, ganetespib, KW-2478, and onalespib) were in clinical trials as of 2014. Part of the resorcinol structure binds to and inhibits the N-terminal domain of heat shock protein 90, which is a drug target for anticancer treatments.
History, etymology, and nomenclature: Austrian chemist Heinrich Hlasiwetz (1825–1875) is remembered for his chemical analysis of resorcinol and for his part, along with Ludwig Barth, in the first preparation of resorcinol, which was published in 1864. Benzene-1,3-diol is the name recommended by the International Union of Pure and Applied Chemistry (IUPAC) in its 1993 Recommendations for the Nomenclature of Organic Chemistry. Resorcinol is so named because of its derivation from ammoniated resin gum and its relation to the chemical orcinol.

Toxicity: Resorcinol has low toxicity, with an LD50 (rats, oral) > 300 mg/kg. It is less toxic than phenol. Resorcinol was named a substance of very high concern under the European Union's REACH regulation in 2022 because of its endocrine-disrupting properties.
**Stick-built construction** Stick-built construction: A stick-built home is a wooden house constructed entirely or largely on-site; that is, built on the site it is intended to occupy upon its completion, rather than in a factory or similar facility. The term contrasts such a dwelling with mobile homes and modular homes, which are assembled in a factory and transported to the site entirely or mostly complete, and hence are not "stick-built".

Stick-built construction: Stick-built homes are also those built using a more traditional method of construction rather than a modular one. The "sticks" mentioned usually refer specifically to the superstructure of the walls and roof.

Stick-built construction: Most stick-built homes have much in common. They are usually built with lumber, though it is possible to use metal poles for the construction as well; this is more expensive, more time-consuming and generally harder for the homeowner to deal with once constructed. These homes also have many of the common features associated with most homes, such as shingles and drywall.
**Platelet-activating factor receptor** Platelet-activating factor receptor: The platelet-activating factor receptor (PAF-R) is a G-protein coupled receptor which binds platelet-activating factor. It is encoded in the human by the PTAFR gene. The PAF receptor shows structural characteristics of the rhodopsin (MIM 180380) gene family and binds platelet-activating factor (PAF). PAF is a phospholipid (1-O-alkyl-2-acetyl-sn-glycero-3-phosphorylcholine) that has been implicated as a mediator in diverse pathologic processes, such as allergy, asthma, septic shock, arterial thrombosis, and inflammatory processes. Its pathogenetic role in chronic kidney failure has also been reported recently.

Ligands: Agonists: platelet-activating factor. Antagonists: Apafant (WEB-2086), Israpafant (Y-24180), Lexipafant, Rupatadine.
**Sodium fluoride** Sodium fluoride: Sodium fluoride (NaF) is an inorganic compound with the formula NaF. It is a colorless or white solid that is readily soluble in water. It is used in trace amounts in the fluoridation of drinking water to prevent tooth decay, and in toothpastes and topical pharmaceuticals for the same purpose. In 2020, it was the 265th most commonly prescribed medication in the United States, with more than 1 million prescriptions. It is also used in metallurgy and in medical imaging.

Uses: Dental caries. Fluoride salts are often added to municipal drinking water (as well as to certain food products in some countries) for the purpose of maintaining dental health. The fluoride enhances the strength of teeth by the formation of fluorapatite, a naturally occurring component of tooth enamel. Although sodium fluoride is used to fluoridate water and is the standard by which other water-fluoridation compounds are gauged, hexafluorosilicic acid (H2SiF6) and its salt sodium hexafluorosilicate (Na2SiF6) are more commonly used additives in the United States.

Uses: Osteoporosis. Fluoride supplementation has been extensively studied for the treatment of postmenopausal osteoporosis. This supplementation does not appear to be effective; even though sodium fluoride increases bone density, it does not decrease the risk of fractures.

Uses: Medical imaging. In medical imaging, fluorine-18-labelled sodium fluoride (USP, sodium fluoride F18) is one of the oldest tracers used in positron emission tomography (PET), having been in use since the 1960s. Relative to conventional bone scintigraphy carried out with gamma cameras or SPECT systems, PET offers more sensitivity and spatial resolution. Fluorine-18 has a half-life of 110 min, which requires it to be used promptly once produced; this logistical limitation hampered its adoption in the face of the more convenient technetium-99m-labelled radiopharmaceuticals. However, fluorine-18 is generally considered to be a superior radiopharmaceutical for skeletal imaging. In particular, it has a high and rapid bone uptake accompanied by very rapid blood clearance, which results in a high bone-to-background ratio in a short time. Additionally, the annihilation photons produced by decay of 18F have a high energy of 511 keV compared to the 140 keV photons of 99mTc.

Uses: Chemistry. Sodium fluoride has a variety of specialty chemical applications in synthesis and extractive metallurgy. It reacts with electrophilic chlorides including acyl chlorides, sulfur chlorides, and phosphorus chloride. Like other fluorides, sodium fluoride finds use in desilylation in organic synthesis. Sodium fluoride can be used to produce fluorocarbons via the Finkelstein reaction; this process has the advantage of being simple to perform on a small scale but is rarely used on an industrial scale due to the existence of more effective techniques (e.g. electrofluorination, the Fowler process).

Uses: Biology. Sodium fluoride is sometimes added at relatively high concentrations (~20 mM) to protein lysis buffers in order to inhibit endogenous phosphatases and thereby protect phosphorylated protein sites. Sodium pyrophosphate and sodium orthovanadate are also used for this purpose.

Uses: Other uses. Sodium fluoride is used as a cleaning agent (e.g., as a "laundry sour"). It can be used in a nuclear molten salt reactor. Over a century ago, sodium fluoride was used as a stomach poison for plant-feeding insects.
Inorganic fluorides such as fluorosilicates and sodium fluoride complex magnesium ions as magnesium fluorophosphate. They inhibit enzymes such as enolase that require Mg2+ as a prosthetic group. Thus, fluoride poisoning prevents phosphate transfer in oxidative metabolism.

Safety: The lethal dose for a 70 kg (154 lb) human is estimated at 5–10 g. Fluorides, particularly aqueous solutions of sodium fluoride, are rapidly and quite extensively absorbed by the human body. Fluorides interfere with electron transport and calcium metabolism. Calcium is essential for maintaining cardiac membrane potentials and for regulating coagulation. High ingestion of fluoride salts or hydrofluoric acid may result in fatal arrhythmias due to profound hypocalcemia. Chronic over-absorption can cause hardening of bones, calcification of ligaments, and buildup on teeth. Fluoride can cause irritation or corrosion to eyes, skin, and nasal membranes. Sodium fluoride is classed as toxic by both inhalation (of dusts or aerosols) and ingestion. In high enough doses, it has been shown to affect the heart and circulatory system. For occupational exposures, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have established occupational exposure limits of 2.5 mg/m3 over an eight-hour time-weighted average. In the higher doses used to treat osteoporosis, plain sodium fluoride can cause pain in the legs and incomplete stress fractures when the doses are too high; it also irritates the stomach, sometimes so severely as to cause peptic ulcer disease. Slow-release and enteric-coated versions of sodium fluoride do not have significant gastric side effects, and have milder and less frequent complications in the bones. In the lower doses used for water fluoridation, the only clear adverse effect is dental fluorosis, which can alter the appearance of children's teeth during tooth development; this is mostly mild and is unlikely to represent any real effect on aesthetic appearance or on public health. Chronic ingestion of 1 ppm of fluoride in drinking water can cause mottling of the teeth (fluorosis), and an exposure of 1.7 ppm will produce mottling in 30%–50% of patients.

Chemical structure: Sodium fluoride is an inorganic ionic compound, dissolving in water to give separated Na+ and F− ions. Like sodium chloride, it crystallizes in a cubic motif where both Na+ and F− occupy octahedral coordination sites; its lattice spacing, approximately 462 pm, is smaller than that of sodium chloride (564 pm).

Occurrence: The mineral form of NaF, villiaumite, is moderately rare. It is known from plutonic nepheline syenite rocks.

Production: NaF is prepared by neutralizing hydrofluoric acid or hexafluorosilicic acid (H2SiF6), both byproducts of the reaction of fluorapatite (Ca5(PO4)3F) from phosphate rock during the production of superphosphate fertilizer. Neutralizing agents include sodium hydroxide and sodium carbonate. Alcohols are sometimes used to precipitate the NaF:

HF + NaOH → NaF + H2O

From solutions containing HF, sodium fluoride precipitates as the bifluoride salt sodium bifluoride (NaHF2). Heating the latter releases HF and gives NaF:

HF + NaF ⇌ NaHF2

In a 1986 report, the annual worldwide consumption of NaF was estimated to be several million tonnes.
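Because both salts adopt the rock-salt structure, the quoted lattice spacings translate directly into interionic distances; taking the quoted values as the cubic lattice constants a, the nearest-neighbour cation–anion separation is a/2, as the following worked relation shows.

```latex
% Nearest-neighbour separation in a rock-salt lattice with lattice constant a:
\[
  d_{\mathrm{Na\text{-}F}} = \frac{a_{\mathrm{NaF}}}{2}
    \approx \frac{462\ \mathrm{pm}}{2} = 231\ \mathrm{pm},
  \qquad
  d_{\mathrm{Na\text{-}Cl}} = \frac{a_{\mathrm{NaCl}}}{2}
    \approx \frac{564\ \mathrm{pm}}{2} = 282\ \mathrm{pm}.
\]
```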
**Bulverism** Bulverism: Bulverism is a type of ad hominem rhetorical fallacy that combines circular reasoning and the genetic fallacy with presumption or condescension. The Bulverist assumes a speaker's argument is invalid or false and then explains why the speaker came to make that mistake or to be so silly (even if the opponent's claim is actually right) by attacking the speaker or the speaker's motive. Bulverism: The term Bulverism was coined by C. S. Lewis after an imaginary character to poke fun at a serious error in thinking that, he alleged, frequently occurred in a variety of religious, political, and philosophical debates. Similar to Antony Flew's "subject/motive shift", Bulverism is a fallacy of irrelevance. One accuses an argument of being wrong on the basis of the arguer's identity or motive, but these are irrelevant to the argument's validity or truth. Source of the concept: Lewis wrote about this in a 1941 essay, which was later expanded and published in 1944 in The Socratic Digest under the title "Bulverism". This was reprinted both in Undeceptions and the more recent anthology God in the Dock in 1970. He explains the origin of this term: Suppose I think, after doing my accounts, that I have a large balance at the bank. And suppose you want to find out whether this belief of mine is "wishful thinking." You can never come to any conclusion by examining my psychological condition. Your only chance of finding out is to sit down and work through the sum yourself. When you have checked my figures, then, and then only, will you know whether I have that balance or not. If you find my arithmetic correct, then no amount of vapouring about my psychological condition can be anything but a waste of time. If you find my arithmetic wrong, then it may be relevant to explain psychologically how I came to be so bad at my arithmetic, and the doctrine of the concealed wish will become relevant—but only after you have yourself done the sum and discovered me to be wrong on purely arithmetical grounds. It is the same with all thinking and all systems of thought. If you try to find out which are tainted by speculating about the wishes of the thinkers, you are merely making a fool of yourself. You must first find out on purely logical grounds which of them do, in fact, break down as arguments. Afterwards, if you like, go on and discover the psychological causes of the error. Source of the concept: You must show that a man is wrong before you start explaining why he is wrong. The modern method is to assume without discussion that he is wrong and then distract his attention from this (the only real issue) by busily explaining how he became so silly. In the course of the last fifteen years I have found this vice so common that I have had to invent a name for it. I call it "Bulverism". Some day I am going to write the biography of its imaginary inventor, Ezekiel Bulver, whose destiny was determined at the age of five when he heard his mother say to his father—who had been maintaining that two sides of a triangle were together greater than a third—"Oh you say that because you are a man." "At that moment", E. Bulver assures us, "there flashed across my opening mind the great truth that refutation is no necessary part of argument. Assume that your opponent is wrong, and explain his error, and the world will be at your feet. Attempt to prove that he is wrong or (worse still) try to find out whether he is wrong or right, and the national dynamism of our age will thrust you to the wall." 
That is how Bulver became one of the makers of the Twentieth Century.

Threat and remedy: The special threat of this fallacy lies in the fact that it applies equally to the person who errs and to that person's opponent. Taken to its logical consequence, it implies that all arguments are unreliable and hence undermines all rational thought. Lewis says, "Until Bulverism is crushed, reason can play no effective part in human affairs. Each side snatches it early as a weapon against the other; but between the two reason itself is discredited." The remedy, according to Lewis, is to accept that some reasoning is not tainted by the reasoner. Some arguments are valid and some conclusions true, regardless of the identity and motives of the one who argues them.
**SimScale** SimScale: SimScale is a computer-aided engineering (CAE) software product based on cloud computing. SimScale was developed by SimScale GmbH and allows computational fluid dynamics, finite element analysis and thermal simulations. The backend of the platform uses open source codes:

- FEA: Code_Aster and CalculiX
- CFD: OpenFOAM

The cloud-based platform of SimScale allows users to run more simulations, and in turn iterate more design changes, compared to traditional local computer-based systems.

Features: The thermal module allows uncoupled thermo-mechanical, conjugate heat transfer and convective heat transfer simulations.

Industrial applications: Japan-based Tokyowheel — a company that engineers technical carbon fiber racing wheels for competitive cyclists — used SimScale's CFD software component to determine the most aerodynamic wheel profile. QRC Technologies performed thermal simulations on SimScale to test multiple variations of their RF tester.

Marketing: On 2 December 2015, a community plan was announced making the platform accessible free of charge, based on a new investment round led by Union Square Ventures. It includes a one-time allotment of 3000 computation hours and 500 GB of storage for any registered user. Simulations and projects created by a user registered under the plan are accessible to all other users within the public project library. SimScale has also organized several free webinars, including a 3D Printer Workshop, an F1 Aerodynamics Workshop, and a Simulation in Biomedical Engineering Workshop.
**Physical strength** Physical strength: Physical strength is the measure of a human's exertion of force on physical objects. Increasing physical strength is the goal of strength training.

Overview: An individual's physical strength is determined by two factors: the cross-sectional area of muscle fibers recruited to generate force and the intensity of the recruitment. An individual with a high proportion of type I slow-twitch muscle fibers will be relatively weaker than a similar individual with a high proportion of type II fast-twitch fibers, but will have greater endurance. The genetic inheritance of muscle fiber type sets the outermost boundaries of physical strength possible (barring the use of enhancing agents such as testosterone), although the unique position within this envelope is determined by training. An individual's muscle fiber ratios can be determined through a muscle biopsy. Other considerations are the ability to recruit muscle fibers for a particular activity, joint angles, and the length of each limb. For a given cross-section, shorter limbs are able to lift more weight. The ability to gain muscle also varies from person to person, based mainly upon the genes dictating the amounts of hormones secreted, but also on sex, age, health, and adequate nutrients in the diet. A one-repetition maximum test is the most accurate way to determine maximum muscular strength.

Strength capability: There are various ways to measure the physical strength of a person or population. Strength capability analysis is usually done in the field of ergonomics, where a particular task (e.g., lifting a load, pushing a cart, etc.) and/or a posture is evaluated and compared to the capabilities of the section of the population that the task is intended for. The external reactive moments and forces on the joints are usually used in such cases. The strength capability of a joint is denoted by the amount of moment that the muscle force can create at the joint to counter the external moment.

Strength capability: Skeletal muscles produce reactive forces and moments at the joints. To avoid injury or fatigue when a person is performing a task, such as pushing or lifting a load, the external moments created at the joints due to the load at the hand and the weight of the body segments must ideally be less than the muscular moment strengths at the joint.

Strength capability: One of the first sagittal-plane models to predict strength was developed by Chaffin in 1969. Based on this model, the external moments at each joint must not exceed the muscle strength moments at that joint: Mj/L ≤ Sj for every joint j, where Sj is the muscle strength moment at joint j, and Mj/L is the external moment at joint j due to load L and the body segments preceding the joint in the top-down analysis.

Strength capability: Top-down analysis is the method of calculating the reactive moments and forces at each joint, starting at the hand and working through to the ankle and foot. In a 6-segment model, the joints considered are the elbow, shoulder, L5/S1 disc of the spine, hip, knee and ankle. It is common to ignore the wrist joint in manual calculations. Software intended for such calculation uses the wrist joint as well, dividing the lower arm into hand and forearm segments.

Prediction of static strength: Static strength prediction is the method of predicting the strength capabilities of a person or a population (based on anthropometry) for a particular task and/or posture (an isometric contraction).
To predict capability, manual calculations are usually performed using the top-down analysis on a six- or seven-link model, based on available information about the case, and then compared to standard guidelines, such as the one provided by the National Institute for Occupational Safety and Health.

Strength in the animal kingdom: Animals with large mass are able to produce larger amounts of force on average. Some of the strongest animals include blue whales and elephants. The strongest primate is the gorilla. The strongest marine animal is the blue whale.
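To make the top-down static analysis described above concrete, here is a minimal sketch of a planar calculation for a horizontally extended arm holding a load: the external moment Mj/L is accumulated joint by joint from the hand inwards and compared against an assumed strength moment Sj at each joint. This is an illustration only, not Chaffin's model or NIOSH guidance; the segment masses, lengths, centre-of-mass fractions and strength values are invented for the example.

```python
from dataclasses import dataclass

G = 9.81  # gravitational acceleration, m/s^2

@dataclass
class Segment:
    joint: str    # name of the joint at this segment's proximal (torso-side) end
    mass: float   # segment mass, kg
    length: float # distal-to-proximal joint distance, m
    com: float    # centre of mass as a fraction of length from the proximal joint

def top_down_moments(segments, load_kg):
    """
    Planar top-down static analysis for a horizontally extended limb.
    `segments` is ordered distal to proximal (forearm+hand, then upper arm);
    the load hangs from the distal end of the first segment. Returns the
    external moment M_j/L (N*m) at each proximal joint.
    """
    moments = {}
    moment = 0.0             # external moment about the current (distal) joint
    force = load_kg * G      # total downward force acting distal to that joint
    for seg in segments:
        # Carry the running moment across this segment, add the lever of the
        # distal force and the moment of this segment's own weight about its
        # proximal joint.
        moment += force * seg.length + seg.mass * G * seg.com * seg.length
        force += seg.mass * G     # this segment's weight loads the next joint
        moments[seg.joint] = moment
    return moments

# Invented anthropometry: forearm+hand, then upper arm, holding a 10 kg load.
arm = [Segment("elbow", mass=1.7, length=0.44, com=0.5),
       Segment("shoulder", mass=2.1, length=0.33, com=0.45)]
strength = {"elbow": 75.0, "shoulder": 100.0}   # assumed strength moments S_j, N*m

for joint, m in top_down_moments(arm, load_kg=10.0).items():
    verdict = "OK" if m <= strength[joint] else "EXCEEDED"   # M_j/L <= S_j criterion
    print(f"{joint}: external {m:.1f} N*m vs strength {strength[joint]:.0f} N*m -> {verdict}")
```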
**1,2-Dichloroethene** 1,2-Dichloroethene: 1,2-Dichloroethene, commonly called 1,2-dichloroethylene or 1,2-DCE, is the name for a pair of organochlorine compounds with the molecular formula C2H2Cl2. Both are colorless liquids with a sweet odor. The compound can exist as either of two geometric isomers, cis-1,2-dichloroethene or trans-1,2-dichloroethene, and is often used as a mixture of the two. The isomers have modest solubility in water. These compounds have some applications as degreasing solvents. In contrast to most cis-trans pairs, the Z isomer (cis) is more stable than the E isomer (trans), by 0.4 kcal/mol.

Production and use: cis-DCE, the Z isomer, is obtainable by the controlled chlorination of acetylene: C2H2 + Cl2 → C2H2Cl2. Industrially, both isomers arise as byproducts of the production of vinyl chloride, which is produced on a vast scale. Unlike 1,1-dichloroethylene, the 1,2-dichloroethylene isomers do not polymerize. trans-DCE has applications including electronics cleaning, precision cleaning, and certain metal cleaning applications.

Reactions: Both isomers participate in Kumada coupling reactions. trans-1,2-Dichloroethylene participates in cycloaddition reactions.

Safety: These compounds have "moderate oral toxicity to rats".

Environmental aspects: The dichloroethylene isomers occur in some polluted waters and soils. Significant attention has been paid to their further degradation, e.g. by iron particles.
**Wet stacking** Wet stacking: Wet stacking is a condition in diesel engines in which unburned fuel passes into the exhaust system. The word "stacking" comes from the term "stack" for an exhaust pipe or chimney stack; the oily exhaust pipe is therefore a "wet stack".

Wet stacking: This condition can have several causes. The most common cause is idling the engine for long intervals, which does not generate enough heat in the cylinder for a complete burn. "Idling" here may mean running at full rated operating speed, but with very little load applied. Another cause is excessive fueling, which may result from weak or leaky injectors, fuel settings turned up too high, or over-fueling for the given rpm. Cold-weather running, or other conditions that prevent the engine from reaching proper operating temperature, can cause a buildup of fuel due to incomplete combustion and so result in wet stacking. In diesel generators, it is usually because the diesel engine is running at only a small percentage of its rated output; for efficient combustion, a diesel engine should be run at no less than about 60 percent of its rated power output.

Wet stacking is detectable by the presence of a black ooze around the exhaust manifold, piping and turbocharger, if fitted. It can be mistaken for lubricating oil in some cases, but it consists of the "heavy ends" of the diesel fuel, which do not burn when the combustion temperature is too low. The heavier, more oily components of diesel fuel contain more stored energy than a comparable quantity of gasoline, but diesel requires adequate loading of the engine to keep the combustion temperature high enough to make use of it. Often, one can hear a slight miss in the engine due to fuel buildup. When the engine is first placed under a load after long periods of idling and wet stacking, it may blow out some black exhaust as it burns that excess fuel off. Continuous black exhaust from the stack under a constant load is also an indication that some of the fuel is not being burned. Additionally, wet stacking can result in a buildup of diesel fuel in the engine which does not combust due to the low temperature, reducing fuel economy. This fuel leaks past the cylinders and dilutes the engine oil. If not frequently changed, this diluted oil can lead to increased wear on the cylinders and premature engine failure.
**Locator/Identifier Separation Protocol** Locator/Identifier Separation Protocol: Locator/ID Separation Protocol (LISP) (RFC 6830) is a "map-and-encapsulate" protocol developed by the Internet Engineering Task Force LISP Working Group. The basic idea behind the separation is that the Internet architecture combines two functions, routing locators (where a client is attached to the network) and identifiers (who the client is), in one number space: the IP address. LISP supports the separation of the IPv4 and IPv6 address space following a network-based map-and-encapsulate scheme (RFC 1955). In LISP, both identifiers and locators can be IP addresses or arbitrary elements like a set of GPS coordinates or a MAC address.

Historical origin: The Internet Architecture Board's October 2006 Routing and Addressing Workshop renewed interest in the design of a scalable routing and addressing architecture for the Internet. Key issues driving this renewed interest include concerns about the scalability of the routing system and the impending exhaustion of IPv4 address space. Since the IAB workshop, several proposals have emerged that attempt to address the concerns expressed at the workshop. All of these proposals are based on a common concept: the separation of locator and identifier in the numbering of Internet devices, often termed the "Loc/ID split".

Historical origin: Current Internet Protocol architecture. The current namespace architecture used by the Internet Protocol uses IP addresses for two separate functions:

- as an end-point identifier, to uniquely identify a network interface within its local network addressing context;
- as a locator for routing purposes, to identify where a network interface is located within a larger routing context.

LISP: There are several advantages to decoupling location and identifier, and to LISP specifically:

- improved routing scalability;
- BGP-free multihoming in active-active configuration;
- address family traversal: IPv4 over IPv4, IPv4 over IPv6, IPv6 over IPv6, IPv6 over IPv4;
- inbound traffic engineering;
- mobility;
- simple deployability;
- no host changes needed;
- customer-driven VPN provisioning replacing MPLS-VPN;
- network virtualization;
- customer-operated encrypted VPN based on LISP/GETVPN, avoiding IPsec scalability problems;
- high availability for seamless communication sessions through (constraint-based) multihoming.

A recent discussion of several LISP use cases may be found in the literature. The IETF has an active working group establishing standards for LISP. As of 2016, the LISP specifications are on the experimental track. The working group started to move the core specifications onto the standards track in 2017; as of June 2021, three revisions (of RFC 6830, RFC 6833, and RFC 8113) are ready for publication as RFCs, but they await completion of work on a revision of RFC 6834 and the LISP Security Framework.

LISP: Terminology. Routing Locator (RLOC): An RLOC is an IPv4 or IPv6 address of an egress tunnel router (ETR). An RLOC is the output of an EID-to-RLOC mapping lookup. Endpoint ID (EID): An EID is an IPv4 or IPv6 address used in the source and destination address fields of the first (innermost) LISP header of a packet. Egress Tunnel Router (ETR): An ETR is a device that is the tunnel endpoint; it accepts an IP packet where the destination address in the "outer" IP header is one of its own RLOCs. ETR functionality does not have to be limited to a router device; a server host can be the endpoint of a LISP tunnel as well.
Ingress Tunnel Router (ITR): An ITR is a device that is the tunnel start point; it receives IP packets from site end-systems on one side and sends LISP-encapsulated IP packets, across the Internet to an ETR, on the other side. Proxy ETR (PETR): A LISP PETR implements ETR functions on behalf of non-LISP sites. A PETR is typically used when a LISP site needs to send traffic to non-LISP sites but the LISP site is connected through a service provider that does not accept nonroutable EIDs as packet sources. Proxy ITR (PITR): A PITR is used for inter-networking between non-LISP and LISP sites; a PITR acts like an ITR but does so on behalf of non-LISP sites which send packets to destinations at LISP sites. xTR: An xTR refers to a device which functions both as an ITR and an ETR (which is typical), when the direction of data flow is not part of the context description. LISP: Re-encapsulating Tunnel Router (RTR): An RTR is used for connecting LISP-to-LISP communications within environments where direct connectivity is not supported. Examples include: 1) joining LISP sites connected to "disjointed locator spaces" (for example, a LISP site with IPv4-only RLOC connectivity and a LISP site with IPv6-only RLOC connectivity); and 2) creating a data plane 'anchor point' by a LISP-speaking device behind a NAT box to send and receive traffic through the NAT device. The LISP mapping system: In the Locator/Identifier Separation Protocol, the network elements (routers) are responsible for looking up the mapping between endpoint identifiers (EID) and routing locators (RLOC); this process is invisible to Internet end-hosts. The mappings are stored in a distributed database called the mapping system, which responds to lookup queries. The LISP beta network initially used a BGP-based mapping system called LISP ALternative Topology (LISP+ALT), but this has since been replaced by a DNS-like indexing system called DDT, inspired by LISP-TREE. The protocol was designed so that a new mapping system can be plugged in should a different design prove to have benefits. Some proposals have already emerged and have been compared. A toy sketch of the lookup-and-encapsulate flow appears after the implementation list below. Implementations:
- Cisco has released public IOS, IOS XR, IOS XE and NX-OS images which support LISP.
- A team of researchers from the Université catholique de Louvain and T-Labs/TU Berlin have written a FreeBSD implementation called OpenLISP.
- The LIP6 lab of UPMC, France, has implemented a fully featured control plane (MS/MR, DDT, xTR) for OpenLISP.
- Historically, LISPmob was an open source implementation of LISP for Linux, OpenWRT and Android maintained at the Polytechnic University of Catalonia. It could act as an xTR or a LISP Mobile Node. This implementation has since been further developed into a full open source LISP router called the "Open Overlay Router" (OOR).
- AVM added LISP support in firmware for their FRITZ!Box devices starting from FRITZ!OS version 6.00.
- LANCOM Systems supports LISP in its router operating system.
- HPE supports LISP in their Comware 7 platform based routers (under the marketing names FlexNetwork MSR and VSR). This platform is developed by H3C Technologies and sold in China under their own logo.
- OpenDaylight supports LISP flow mappings.
- ONOS develops a distributed LISP control plane as an SDN application.
- Lispers.net provides an open source, feature-complete implementation of LISP.
- fd.io also supports LISP via the Overlay Network Engine (ONE).
- A simple LISP Mapping System implementation is also available in Java.
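The EID-to-RLOC lookup and encapsulation flow described above can be illustrated with a short sketch. The following Python fragment is purely illustrative: the prefix table, addresses, and field names are invented for the example, and a real deployment resolves mappings through the distributed mapping system (ALT or DDT) and uses the binary packet formats of RFC 6830, not a local dictionary.

```python
import ipaddress

# Toy EID-to-RLOC mapping table; a real ITR queries the mapping system instead.
MAP_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): "203.0.113.7",   # RLOC of site A's ETR
    ipaddress.ip_network("10.2.0.0/16"): "198.51.100.9",  # RLOC of site B's ETR
}

def lookup_rloc(eid: str) -> str:
    """EID-to-RLOC lookup: longest-prefix match over the mapping table."""
    addr = ipaddress.ip_address(eid)
    matches = [net for net in MAP_TABLE if addr in net]
    if not matches:
        raise LookupError(f"no RLOC mapping for EID {eid}")
    best = max(matches, key=lambda net: net.prefixlen)
    return MAP_TABLE[best]

def itr_encapsulate(inner_packet: dict) -> dict:
    """ITR role: wrap the EID-addressed packet in an RLOC-addressed outer header."""
    return {
        "outer_src": "192.0.2.1",                       # this ITR's own RLOC
        "outer_dst": lookup_rloc(inner_packet["dst"]),  # destination site's ETR
        "payload": inner_packet,                        # inner EID header kept intact
    }

# The ETR at 198.51.100.9 strips the outer header and delivers the inner
# packet to 10.2.3.4 inside its own site.
print(itr_encapsulate({"src": "10.1.2.3", "dst": "10.2.3.4"}))
```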
LISP beta network: A testbed has been developed to gain real-life experience with LISP. Participants include Google, Facebook, NTT, Level3, InTouch N.V. and the Internet Systems Consortium. As of January 2014, around 600 companies, universities, and individual contributors from 34 countries were involved. The geographical distribution of participating routers, and the prefixes they are responsible for, can be observed on the LISPmon project website (updated daily). The multi-company LISP-community initiative LISP4.net/LISP6.net publishes relevant information about this beta network on http://www.lisp4.net/ and http://www.lisp6.net/. The LISP beta network has not been maintained since March 2020. LISP-Lab consortium research network: The LISP-Lab project, coordinated by UPMC/LIP6, aims at building a LISP network experimentation platform built exclusively from open source LISP nodes (OpenLISP) acting as ITR/ETR tunnelling routers, MS/MR mapping servers/resolvers, DDT root and Proxy ITR/ETR. Partners include two academic institutions (UPMC, TPT), two cloud networking SMEs (Alphalink, NSS), two network operators (Renater, Orange), two SMEs in access/edge networking (Border 6, Ucopia) and one Internet eXchange Point (Rezopole). The platform was to be opened to external partners in 2014/2015 and is already interconnected to the LISP beta network with an OpenLISP DDT root. Future use of LISP: ICAO is considering Ground-Based LISP as a candidate technology for the next-generation Aeronautical Telecommunications Network (ATN). The solution is under further development as part of the SESAR (Single European Sky ATM Research) FCI activities. Other approaches: Several other approaches to separating the two functions and allowing the Internet to scale better have been proposed, for instance GSE/8+8 as a network-based solution, and SHIM6, HIP and ILNP as host-based solutions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Instruments (software)** Instruments (software): Instruments (formerly Xray) is an application performance analyzer and visualizer, integrated into Xcode 3.0 and later versions of Xcode. It is built on top of the DTrace tracing framework from OpenSolaris, which was ported to Mac OS X v10.5 and is available in all following versions of macOS. Instruments (software): Instruments shows a timeline displaying any event occurring in the application, such as CPU activity variation, memory allocation, and network and file activity, together with graphs and statistics. Groups of events are monitored via customizable "instruments", which can record user-generated events and replay (emulate) them exactly, as many times as needed, so a developer can see the effect of code changes without actually doing the repetitive work. The Instrument Builder feature allows the creation of custom analysis instruments. Features: Built-in instruments can track:
- CPU activity of processes and threads
- Memory allocation and release, garbage collection, and memory leaks
- File reads, writes, and locks
- Network activity and traffic; this instrument works like Activity Monitor but also stores the data for future reference
- Graphics and the inner workings of OpenGL and Metal
- Energy diagnostics and "dead" objects
- UI automation and Core Animation
- User events, such as keyboard keys pressed and mouse moves and clicks, with exact times
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dogic** Dogic: The Dogic is an icosahedron-shaped puzzle like the Rubik's Cube. The 5 triangles meeting at its tips may be rotated, or 5 entire faces (including the triangles) around the tip may be rotated. It has a total of 80 movable pieces to rearrange, compared to the 20 movable pieces in the Rubik's Cube. History: The Dogic was patented by Zsolt and Robert Vecsei in Hungary on 20 October 1993. The patent was granted 28 July 1998 (HU214709). It was originally sold by VECSO in two variants under the names "Dogic" and "Dogic 2", but was only produced in quantities far short of the demand. In 2004, Uwe Mèffert acquired the plastic molds from the original manufacturer at the request of puzzle fans and collectors worldwide, and made another production run of the Dogics. These Dogics were first shipped in January 2005 and were sold in his puzzle shop, Meffert's, until September 2010, when the lack of interest in the Dogics made Uwe Mèffert stop the production run. According to Uwe Mèffert, 2,000 units have been produced by him. Description: The basic design of the Dogic is an icosahedron cut into 60 triangular pieces around its 12 tips and 20 face centers. All 80 pieces can move relative to each other. There are also a good number of internal moving pieces inside the puzzle, which are necessary to keep it in one piece as its surface pieces are rearranged. There are two types of twist that it can undergo: a shallow twist which rotates the 5 triangles around a single tip, and a deep twist which rotates 5 entire faces (including the triangles around the tip) around the tip. The shallow twist moves the triangles between faces but keeps them around the same tip; the deeper twist moves the triangles between the 5 tips lying at the base of the rotated faces but keeps them on the same faces. Each triangle has a single color, while the face centers may have up to 3 colors, depending on the particular coloring scheme employed. Solutions: The solutions for the different versions of the Dogic differ. Solutions: The 12-color Dogic is the more challenging version, where the face centers must be rearranged to match the colors of the face centers in adjacent faces. The triangles must then match the corresponding colors in the face centers. The face centers are mathematically equivalent to the corner pieces of the Megaminx, and so the same algorithms may be used for solving either. The triangles are relatively easy to solve once the face centers are in place, because the 5 triangles per tip are identical in color and may be freely interchanged. Solutions: The 10-color Dogic is slightly less challenging, since there is no unique solved state: the face centers may be randomly placed relative to each other, and the result would still look 'solved'. However, it may still be desirable to put them in aesthetically pleasing arrangements, such as pairing up faces of the same color. The triangles are slightly more tricky to solve than in the 12-color Dogic, because adjacent triangles in the solved state are not the same color and so cannot be freely interchanged. Solutions: The 5-color and 2-color Dogics are even less of a challenge, since there is a large number of identical pieces. These simpler versions cater to those puzzle fans who are not yet at the skill level to manage the full complexity of the 12-color Dogic. 
Number of combinations: Due to different numbers of visually identical pieces in the two versions of the puzzle, they each have a different number of possible combinations. There are 60 tip pieces and 20 centres with 3 orientations, giving a theoretical maximum of 60!·20!·3^20 positions. This limit is not reached on either puzzle, due to the reducing factors detailed below. Number of combinations: 12-color Dogic Only even permutations of centres are possible (2). The orientation of the first 19 centres determines the orientation of the last centre (3). Some tip pieces are indistinguishable (5!^12). The orientation of the puzzle does not matter (60): all 60 possible positions and orientations of the first center are equivalent because of the lack of fixed reference points. This leaves 60!·20!·3^20 / (2·3·(5!)^12·60) = 59!·20!·3^19 / (2·(5!)^12) ≈ 2.20×10^82 positions for the 12-color Dogic. Number of combinations: The precise figure is 21 991 107 793 244 335 592 538 616 581 443 187 569 604 232 889 165 919 156 829 382 848 981 603 083 878 400 000 (roughly 22 sesvigintillion on the short scale or 22 tredecilliard on the long scale). 10-color Dogic Only even permutations of the centres are possible (2). Centre orientation does not matter (3^20). Ten of the centres are visually identical to the other ten (2^10). Some tip pieces are indistinguishable (6!^10). The orientation of the puzzle does not matter (60). This leaves 60!·20!·3^20 / (2·3^20·2^10·(6!)^10·60) = 59!·20! / (2^11·(6!)^10) ≈ 4.40×10^66 positions for the 10-color Dogic. The precise figure is 4 400 411 583 858 825 100 777 127 453 704 140 502 784 413 155 112 522 644 357 120 000 000 (roughly 4.4 unvigintillion on the short scale or 4.4 undecillion on the long scale).
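Both counts can be reproduced from the stated reducing factors with exact integer arithmetic; a minimal Python check:

```python
from math import factorial

RAW = factorial(60) * factorial(20) * 3**20  # theoretical maximum

# 12-color: even centre permutations (2), last centre orientation (3),
# five identical triangles per tip ((5!)^12), whole-puzzle orientation (60)
twelve = RAW // (2 * 3 * factorial(5)**12 * 60)

# 10-color: even centre permutations (2), centre orientation (3^20),
# pairwise-identical centres (2^10), identical tips ((6!)^10), orientation (60)
ten = RAW // (2 * 3**20 * 2**10 * factorial(6)**10 * 60)

print(f"{twelve:.3e}")  # ~2.199e+82, matching the quoted figure
print(f"{ten:.3e}")     # ~4.400e+66
```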
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Esquisse d'un Programme** Esquisse d'un Programme: "Esquisse d'un Programme" (Sketch of a Programme) is a famous proposal for long-term mathematical research made by the German-born French mathematician Alexander Grothendieck in 1984. He pursued the sequence of logically linked ideas in his important project proposal from 1984 until 1988, but his proposed research continues to be of major interest in several branches of advanced mathematics. Grothendieck's vision provides inspiration today for several developments in mathematics, such as the extension and generalization of Galois theory, which is currently being extended based on his original proposal. Brief history: Submitted in 1984, the Esquisse d'un Programme was a proposal by Alexander Grothendieck for a position at the Centre National de la Recherche Scientifique. The proposal was not successful, but Grothendieck obtained a special position where, while keeping his affiliation at the University of Montpellier, he was paid by the CNRS and released from his teaching obligations. Grothendieck held this position from 1984 till 1988. The proposal was not formally published until 1997, because the author "could not be found, much less his permission requested". The outlines of dessins d'enfants, or "children's drawings", and "anabelian geometry" contained in this manuscript continue to inspire research; thus, "Anabelian geometry is a proposed theory in mathematics, describing the way the algebraic fundamental group G of an algebraic variety V, or some related geometric object, determines how V can be mapped into another geometric object W, under the assumption that G is not an abelian group, in the sense of being strongly noncommutative. The word anabelian (an alpha privative an- before abelian) was introduced in Esquisse d'un Programme. While the work of Grothendieck was for many years unpublished, and unavailable through the traditional formal scholarly channels, the formulation and predictions of the proposed theory received much attention, and some alterations, at the hands of a number of mathematicians. Those who have researched in this area have obtained some expected and related results, and in the 21st century the beginnings of such a theory started to be available." Abstract of Grothendieck's programme: ("Sommaire") 1. The proposal and enterprise ("Envoi"). 2. Teichmüller's Lego-game and the Galois group of Q over Q ("Un jeu de “Lego-Teichmüller” et le groupe de Galois de Q sur Q"). 3. Number fields associated with dessins d'enfant ("Corps de nombres associés à un dessin d'enfant"). 4. Regular polyhedra over finite fields ("Polyèdres réguliers sur les corps finis"). 5. General topology or a 'moderated topology' ("Haro sur la topologie dite 'générale', et réflexions heuristiques vers une topologie dite 'modérée'"). 6. Differentiable theories and moderated theories ("'Théories différentiables' (à la Nash) et 'théories modérées'"). 7. Pursuing Stacks ("À la Poursuite des Champs"). 8. Two-dimensional geometry ("Digressions de géométrie bidimensionnelle"). 9. Summary of proposed studies ("Bilan d'une activité enseignante"). 10. Epilogue. Notes: Suggested further reading for the interested mathematical reader is provided in the References section. 
Abstract of Grothendieck's programme: Extensions of Galois's theory for groups: Galois groupoids, categories and functors Galois developed a powerful, fundamental algebraic theory in mathematics that provides very efficient computations for certain algebraic problems by utilizing the algebraic concept of groups, now known as the theory of Galois groups; such computations were not possible before, and in many cases are much more effective than 'direct' calculations without using groups. To begin with, Alexander Grothendieck stated in his proposal: "Thus, the group of Galois is realized as the automorphism group of a concrete, pro-finite group which respects certain structures that are essential to this group." This fundamental Galois group theory has been considerably expanded, at first to groupoids, as proposed in Alexander Grothendieck's Esquisse d'un Programme (EdP), and this extension has now been partially carried out for groupoids; the latter are being developed further, beyond groupoids to categories, by several groups of mathematicians. Here, we shall focus only on the well-established and fully validated extensions of Galois' theory. Thus, EdP also proposed and anticipated, along the lines of Grothendieck's earlier IHÉS seminars (SGA1 to SGA4) held in the 1960s, the development of even more powerful extensions of the original Galois theory for groups by utilizing categories, functors and natural transformations, as well as further expansion of the manifold of ideas presented in Alexander Grothendieck's Descent Theory. The notion of motive has also been pursued actively. This was developed into the motivic Galois group, Grothendieck topology and Grothendieck category. Such developments were recently extended in algebraic topology via representable functors and the fundamental groupoid functor.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ermakov–Lewis invariant** Ermakov–Lewis invariant: Many quantum mechanical Hamiltonians are time dependent. Methods to solve problems with an explicit time dependence remain an open subject nowadays. It is important to look for constants of motion, or invariants, for problems of this kind. For the (time-dependent) harmonic oscillator it is possible to write several invariants, among them the Ermakov–Lewis invariant, which is developed below. The time-dependent harmonic oscillator Hamiltonian reads
$$\hat{H}=\tfrac{1}{2}\left[\hat{p}^{2}+\Omega^{2}(t)\,\hat{q}^{2}\right].$$
It is well known that an invariant for this type of interaction has the form
$$\hat{I}=\tfrac{1}{2}\left[\left(\frac{\hat{q}}{\rho}\right)^{2}+\left(\rho\hat{p}-\dot{\rho}\hat{q}\right)^{2}\right],$$
where $\rho$ obeys the Ermakov equation
$$\ddot{\rho}+\Omega^{2}(t)\,\rho=\rho^{-3}.$$
The above invariant is the so-called Ermakov–Lewis invariant. It is easy to show that $\hat{I}$ may be related to the time-independent harmonic oscillator Hamiltonian via a unitary transformation $\hat{T}$, whose generators involve $\ln\rho$, as
$$\tfrac{1}{2}\left[\hat{p}^{2}+\hat{q}^{2}\right]=\hat{T}\hat{I}\hat{T}^{\dagger}.$$
This allows an easy form to express the solution of the Schrödinger equation for the time-dependent Hamiltonian. The first exponential in the transformation is the so-called squeeze operator. This approach may simplify problems such as the quadrupole ion trap, where an ion is trapped in a harmonic potential with time-dependent frequency. The transformation presented here is then useful to take such effects into account. The geometric meaning of this invariant can be realized within the quantum phase space.
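At the classical level (unit mass, $p = \dot q$), the invariance of $I$ can be checked directly. Differentiating and substituting the equation of motion $\ddot q = -\Omega^2(t)\,q$ together with the Ermakov equation gives:

```latex
\frac{dI}{dt}
  = \frac{q\dot q}{\rho^{2}} - \frac{q^{2}\dot\rho}{\rho^{3}}
  + \left(\rho\dot q - \dot\rho q\right)\left(\rho\ddot q - \ddot\rho q\right),
\qquad
\rho\ddot q - \ddot\rho q
  = -\Omega^{2}\rho q - \left(-\Omega^{2}\rho + \rho^{-3}\right) q
  = -\frac{q}{\rho^{3}},
```

so the second term becomes $-q\dot q/\rho^2 + q^2\dot\rho/\rho^3$, the two contributions cancel, and $dI/dt = 0$ for any $\Omega(t)$.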
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alcohol myopia** Alcohol myopia: Alcohol myopia is a cognitive-physiological theory on alcohol use disorder in which many of alcohol's social and stress-reducing effects, which may underlie its addictive capacity, are explained as a consequence of alcohol's narrowing of perceptual and cognitive functioning. The alcohol myopia model posits that, rather than disinhibiting, alcohol produces a myopia effect that causes users to pay more attention to salient environmental cues and less attention to less salient cues. Alcohol's myopic effects therefore cause intoxicated people to respond almost exclusively to their immediate environment. This "nearsightedness" limits their ability to consider the future consequences of their actions as well as to regulate their reactive impulses. Alcohol's ability to alter behavior and decision-making stems from its impact on synaptic transmission at GABA receptors. Alcohol's effects at the synaptic level dampen the brain's processing ability and limit attentional capacity. Overall, the alcohol myopia theory proposes that intoxicated individuals will act rashly and will choose overly simple solutions to complex problems. Three classes of myopia: Alcohol's myopic effects on the drinker's cognitive processes can be characterized into three classes: self-inflation, relief, and excess. Three classes of myopia: Self-inflation Alcohol consumption alters the drinker's self-image by "enhancing feelings of self-appraisal and even narcissism". Alcohol inhibits the sophisticated levels of mental processing that are necessary to recognize personal flaws. The 'tunnel vision' effect of alcohol myopia, which limits the attentional capacity of the drinker, causes individuals to focus on favorable and superficial characteristics of themselves. Overall, the self-inflating effect of alcohol can increase the drinker's self-confidence and therefore lead them to engage in activities or social situations that would normally make them nervous or uncomfortable when sober. Three classes of myopia: Relief Alcohol can alleviate the drinker's feelings of stress or anxiety. Alcohol myopia leads those under the influence of alcohol to see the world through a nearsighted lens; in other words, consumption of alcohol will lead individuals to temporarily forget about previous worries or problems, for these feelings lie outside of the restricted set of immediate cues that the drinker can respond to. By depriving the individual of the attentional capacity necessary to process undesirable thoughts, alcohol myopia can bring the drinker a sense of relief. Three classes of myopia: Excess Alcohol exaggerates the drinker's perception of the world around them. The drinker's response to this exaggerated world manifests in erratic and dramatic behaviors. Under the influence of alcohol, individuals are incapable of sufficiently processing the long-term consequences of their actions; they respond to immediate and salient cues in the moment. In this way, drunk individuals can be described as "slaves to the present moment". Alcohol is believed to disinhibit urges normally considered socially unacceptable. The sober brain is able to utilize the frontal cortex to make executive decisions and restrain these impulses. However, the drunk brain is unable to regulate the urges for excessive behavior. By leading the brain to overreact to present cues and disregard the implications of one's actions, alcohol often provokes aggressive behavior. 
Alcohol consumption can result in a "Jekyll and Hyde" effect in individuals who are typically amiable when sober but are perhaps predisposed to aggressive behavior. Additionally, alcohol has a dramatic connection to criminal behavior, rage, physical destruction, and sexual assault. It is important to note, however, that alcohol myopia's effects on excessive behavior do not incite aggression in all drinkers. In some intoxicated individuals, excess simply manifests itself in their becoming significantly more talkative, flirtatious, or adventurous. Additionally, in situations in which inhibitory cues are the most salient, the individual may behave in a more prudent or passive manner than they would when sober. Alcohol's effects on neurotransmission: Alcohol is classified as a sedative-hypnotic drug. Alcohol produces a sedative effect by acting on receptors of the inhibitory neurotransmitter GABA. GABA receptors contain a binding site for GABA itself, a chloride ion channel, and an additional binding site for alcohol molecules. GABA produces its normal inhibitory effects on cell activity by reducing a neuron's firing rate. When a GABA molecule attaches to its binding site, it activates the receptor, resulting in an inflow of chloride ions. The increase in the concentration of negative charge inside the cell hyperpolarizes the membrane. This hyperpolarization decreases the likelihood that the membrane will send an action potential to neighboring neurons; the difference of charge across the membrane has increased, while it would need to decrease in order to reach the threshold necessary to propagate an action potential. Alcohol acts as a positive allosteric modulator and therefore amplifies the transmitter's inhibitory effects. When alcohol molecules bind to their site on the GABA receptor, they lengthen the time that the receptor's chloride ion pore remains open, resulting in an even greater hyperpolarization of the membrane. Additionally, the binding of alcohol causes the GABA transmitter to bind to its receptors more frequently, and therefore augments the transmitter's ability to inhibit cell activity. Overall, alcohol's interactions with GABA receptors decrease neuronal firing across the body and inhibit cortical activation. Behavioral changes associated with alcohol myopia stem from the inhibitory effects of this reduction in firing and activation. The inhibition conflict: One effect of alcohol myopia is that it amplifies rash responses in intoxicated individuals. Alcohol does not directly affect the emotions and actions of inebriated people, but does so indirectly via its involvement in the inhibition conflict. The inhibition conflict: Inhibition conflict is a cognitive function that allows people to make decisions based both on immediate stimuli and on stimuli that require a higher level of processing. In sober individuals, situations that produce an inhibition conflict consist of one set of salient cues (external stimuli) that stimulate a certain response and other cues (internal stimuli such as possible negative consequences or societal standards and norms) that inhibit the salient cues and therefore prevent rash action. Those influenced by alcohol myopia are unable to comprehend this second set of cues, as the condition narrows an individual's capacity for higher-level cognitive functioning. 
Therefore, these individuals tend to act rashly without consideration for the consequences of their actions. Studies have been conducted to test the effects of alcohol on the intensity of males' aggressive responses to external stimuli, demonstrating the role of inhibition conflict in alcohol myopia. Male subjects under the influence of alcohol often ignored inhibitory cues, both in laboratory settings and in real-life situations. In the lab, participants who were given alcohol were more likely to respond to unpleasant tones (external stimuli) violently, despite internal cues advising them against aggression. Surveys also demonstrated that, while intoxicated, men are more likely to address the salient cue of anger with aggressive behavior towards their partners. The results of these studies demonstrate that men experiencing the effects of alcohol myopia were unable to process the consequences of their actions, and continued to act aggressively despite those consequences. Alcohol had effectively limited their interpretation of salient cues and prevented them from interpreting cues that would inhibit aggressive action. The inhibition conflict: Women have also exhibited the effects of alcohol myopia's ability to disrupt the inhibition conflict. Research conducted in 2002 determined that there was a positive relationship between college females' level of intoxication and their decisions to engage in risky sexual behavior. Results showed that a majority of college-aged females who had been drinking chose not to address risk topics before sexual intercourse with a partner. Alcohol myopia can explain this relationship. The inebriated females' ability to analyze internal cues warning them of the risks of sex was inhibited by alcohol, while alcohol caused them to become more responsive to the salient cue of arousal. Risky behavior: Alcohol myopia has been shown to increase the likelihood that a person will engage in risky behavior. The increased risk-taking brought on by alcohol myopia often ends with aversive consequences for the person acting dangerously or for those affected by the intoxicated person's actions. Those under the influence of alcohol myopia are often unaware of the consequences of their behavior as well as its risky nature. It has been shown that alcohol myopia causes people to function like those with maladaptive risky behaviors, often caused by behavior disorders or a personal history of substance use. Higher doses of alcohol intensify these effects of myopia. People under the influence of alcohol myopia act in a risky manner because of the myopia's inhibiting effects on their ability to analyze the probable outcomes of their actions. Alcohol activates dopaminergic circuits in the midbrain that also regulate the brain's analysis and recognition of the outcomes of an action. It is not yet clear exactly how alcohol affects these dopaminergic circuits. The following behaviors are influenced by risk-taking when a person is experiencing the effects of alcohol myopia. Risky behavior: Personal goals Alcohol myopia has also been found to affect one's level of commitment to a personal goal. Individual commitment to a goal depends on the level of personal desire and the feasibility of the goal. A person's ability to appropriately interpret feasibility is inhibited by alcohol myopia. This is because desire is a more salient stimulus than feasibility, causing those experiencing alcohol myopia to ignore the less salient stimulus of feasibility. 
Because one is less inhibited by the prospect of unfeasible goals, those under the influence of alcohol myopia tend to feel more committed to their goals than sober individuals. Studies testing the relationship between intoxication and level of commitment to goals support the theory that increased goal commitment (regardless of feasibility) is a side effect of alcohol myopia. Risky behavior: Sexual arousal Alcohol myopia causes individuals to become increasingly aware of sexual arousal and more likely to respond rashly to the arousal stimulus. The decision about how to respond to sexual arousal involves cognitive function that synthesizes both impelling cues (those that draw attention to the benefits of an action) and inhibiting cues (those that focus on the consequences of an action). The alcohol myopia theory suggests that intoxicated individuals will be more likely to engage in risky sexual behavior. Intoxicated males subject to high levels of sexual arousal were more likely to engage in unprotected sex than sober males subject to the same levels of arousal. This is because the impelling cues (sexual arousal) are often more immediate than the inhibitory cues (safety precautions), and those affected by alcohol myopia are limited to cognitive processing of the more immediate cues and often ignore the inhibitory cues. Risky behavior: The extent of alcohol myopia's effect on one's decisions about how to react to sexual arousal depends upon the level of conflict one feels. The more intense the personal conflict over whether or not to use a condom, the greater the effect alcohol has on the final decision to engage in risky sexual behavior. Intoxicated males who had felt heavily conflicted about condom use were least likely to use a condom. Those intoxicated men who had been less conflicted about using a condom were more likely to engage in safe sex. Therefore, some intoxicated individuals can actually be less likely to engage in risky sexual behavior than their sober counterparts, given appropriate cues. The effects of alcohol myopia on the response to sexual arousal also depend on the level of sexual arousal. When sexual arousal levels were high, a greater percentage of men reported not using a condom than when arousal levels were low. This goes back to the importance of salience in alcohol myopia: the more salient the external cue (in this case, higher levels of sexual arousal were more salient than lower levels), the more likely it is for alcohol to inhibit comprehension of the consequences of an action. Risky behavior: Drunk driving The alcohol myopia model proposes that intoxication increases the likelihood that an individual will decide to drive in an unsafe situation. The drinker is unable to properly weigh the future consequences of his or her decision to drive; “inhibitory cues that prohibit driving are less likely to be considered because they lack salience and immediacy." Meanwhile, the intoxicated individual responds to the immediate motivations to drive. For example, the drinker will focus on the rewards of getting home quickly and not having to pay for a cab. Therefore, under the influence of alcohol, driving becomes the simplest and most compelling option. Studies show that when questioned, intoxicated individuals reported “greater intentions to drink and drive...and fewer moral obligations against drinking and driving” than they did when sober.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**International Academy of Pathology** International Academy of Pathology: The International Academy of Pathology, originally called the International Association of Medical Museums (IAMM), is an institution dedicated "to the advancement of Pathology". It was established in 1906 by Dr. William Osler and Maude Abbott. Its first documented meeting occurred on May 6, 1907. In 1955, the IAMM was renamed the International Academy of Pathology (IAP).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TRPA1** TRPA1: Transient receptor potential cation channel, subfamily A, member 1, also known as transient receptor potential ankyrin 1, TRPA1, or the wasabi receptor, is a protein that in humans is encoded by the TRPA1 gene (and in mice and rats by the Trpa1 gene). TRPA1 is an ion channel located on the plasma membrane of many human and animal cells. This ion channel is best known as a sensor for pain, cold and itch in humans and other mammals, as well as a sensor for environmental irritants giving rise to other protective responses (tears, airway resistance, and cough). Function: TRPA1 is a member of the transient receptor potential channel family. TRPA1 contains 14 N-terminal ankyrin repeats and is believed to function as a mechanical and chemical stress sensor. Studies of this protein indicate a role in the detection, integration and initiation of pain signals in the peripheral nervous system. It can be activated at sites of tissue injury or inflammation directly by endogenous mediators, or indirectly as a downstream target via signaling from a number of distinct G-protein coupled receptors (GPCRs), such as bradykinin. Function: Recent studies indicate that TRPA1 is activated by a number of reactive compounds (allyl isothiocyanate, cinnamaldehyde, farnesyl thiosalicylic acid, formalin, hydrogen peroxide, 4-hydroxynonenal, acrolein, and tear gases) and non-reactive compounds (nicotine, PF-4840154) and is thus considered a "chemosensor" in the body. TRPA1 is co-expressed with TRPV1 on nociceptive primary afferent C-fibers in humans. This sub-population of peripheral C-fibers is considered an important sensor of nociception in humans, and their activation will under normal conditions give rise to pain. Indeed, TRPA1 is considered an attractive pain target: TRPA1 knockout mice showed near-complete attenuation of nocifensive behaviors to formalin, tear gas and other reactive chemicals, and TRPA1 antagonists are effective in blocking pain behaviors induced by inflammation (complete Freund's adjuvant and formalin). Function: Although it is not firmly confirmed whether noxious cold sensation is mediated by TRPA1 in vivo, several recent studies clearly demonstrated cold activation of TRPA1 channels in vitro. In the heat-sensitive loreal pit organs of many snakes, TRPA1 is responsible for the detection of infrared radiation; snakes have this type of receptor in their pit organs to help them detect infrared radiation. However, frogs such as Hyalinobatrachium fleischmanni can reflect infrared light with their skin and, if the environment also reflects infrared light, the frogs will not be discovered by predators. Structure: In 2016, cryo-electron microscopy was employed to obtain a three-dimensional structure of TRPA1. This work revealed that the channel assembles as a homotetramer, and possesses several structural features that hint at its complex regulation by irritants, cytoplasmic second messengers (e.g., calcium), cellular co-factors (e.g., inorganic anions like polyphosphates), and lipids (e.g., PIP2). Most notably, the site of covalent modification and activation for electrophilic irritants was localized to a tertiary structural feature on the membrane-proximal intracellular face of the channel, which has been termed the 'allosteric nexus', and which is composed of a cysteine-rich linker domain and the eponymous TRP domain. 
Breakthrough research combining cryo-electron microscopy and electrophysiology later elucidated the molecular mechanism of how the channel functions as a broad-spectrum irritant detector. With respect to electrophiles, which activate the channel by covalent modification of two cysteines in the allosteric nexus, it was shown that these reactive species act step-wise to modify two critical cysteine residues in the allosteric nexus. Upon covalent attachment, the allosteric nexus adopts a conformational change that is propagated to the channel's pore, dilating it to permit cation influx and subsequent cellular depolarization. With respect to activation by the second messenger calcium, the structure of the channel in complex with calcium localized the binding site for this ion, and functional studies demonstrated that this site controls the various effects of calcium on the channel, namely potentiation, desensitization, and receptor operation. Clinical significance: In 2008, it was observed that caffeine suppresses the activity of human TRPA1, but it was found that mouse TRPA1 channels expressed in sensory neurons cause an aversion to drinking caffeine-containing water, suggesting that TRPA1 channels mediate the perception of caffeine. TRPA1 has also been implicated in airway irritation by cigarette smoke and cleaning supplies, and in the skin irritation experienced by some smokers trying to quit by using nicotine replacement therapies such as inhalers, sprays, or patches. Clinical significance: A missense mutation of TRPA1 was found to be the cause of a hereditary episodic pain syndrome. A family from Colombia suffers from debilitating upper-body pain starting in infancy that is usually triggered by fasting or fatigue (illness, cold temperature, and physical exertion being contributory factors). A gain-of-function mutation in the fourth transmembrane domain causes the channel to be overly sensitive to pharmacological activation. Metabolites of paracetamol (acetaminophen) have been demonstrated to bind to TRPA1 receptors, which may desensitize the receptors in the way capsaicin does in the spinal cord of mice, causing an antinociceptive effect. This is suggested as the antinociceptive mechanism for paracetamol. Oxalate, a metabolite of the anticancer drug oxaliplatin, has been demonstrated to inhibit prolyl hydroxylase, which endows cold-insensitive human TRPA1 with pseudo cold sensitivity (via reactive oxygen generation from mitochondria). This may cause a characteristic side effect of oxaliplatin (cold-triggered acute peripheral neuropathy). Ligand binding: TRPA1 can be considered one of the most promiscuous TRP ion channels, as it seems to be activated by a large number of noxious chemicals found in many plants, foods, cosmetics and pollutants. Activation of the TRPA1 ion channel by the olive oil phenolic compound oleocanthal appears to be responsible for the pungent or "peppery" sensation in the back of the throat caused by olive oil. Although several nonelectrophilic agents such as thymol and menthol have been reported as TRPA1 agonists, most of the known activators are electrophilic chemicals that have been shown to activate the TRPA1 receptor via the formation of a reversible covalent bond with cysteine residues present in the ion channel. Another example of a nonelectrophilic agent is the anesthetic propofol, which is known to cause pain on injection into a vein, a side effect attributed to TRPV1 and TRPA1 activation. 
For a broad range of electrophilic agents, chemical reactivity in combination with a lipophilicity enabling membrane permeation is crucial to the TRPA1 agonistic effect. A dibenz[b,f][1,4]oxazepine derivative substituted with a carboxylic methyl ester at position 10 has been reported to be a potent TRPA1 agonist (EC50 = 0.13 μM, or pEC50 = 6.90). The pyrimidine PF-4840154 is a potent, non-covalent activator of both the human (EC50 = 23 nM) and rat (EC50 = 97 nM) TRPA1 channels. This compound elicits nociception in a mouse model through TRPA1 activation. Furthermore, PF-4840154 is superior to allyl isothiocyanate, the pungent component of mustard oil, for screening purposes. Other TRPA1 channel activators include JT-010 and ASP-7663, while channel blockers include A-967079, HC-030031 and AM-0902. Ligand binding: The eicosanoids formed in the ALOX12 (i.e., arachidonate 12-lipoxygenase) pathway of arachidonic acid metabolism, 12S-hydroperoxy-5Z,8Z,10E,14Z-eicosatetraenoic acid (i.e., 12S-HpETE; see 12-Hydroxyeicosatetraenoic acid) and the hepoxilins (Hx), HxA3 (i.e., 8R/S-hydroxy-11,12-oxido-5Z,9E,14Z-eicosatrienoic acid) and HxB3 (i.e., 10R/S-hydroxy-11,12-oxido-5Z,8Z,14Z-eicosatrienoic acid) (see Hepoxilin#Pain perception), directly activate TRPA1 and thereby contribute to the hyperalgesia and tactile allodynia responses of mice to skin inflammation. In this animal model of pain perception, the hepoxilins released in the spinal cord directly activate TRPA1 (and also TRPV1) receptors to augment the perception of pain. 12S-HpETE, which is the direct precursor of HxA3 and HxB3 in the ALOX12 pathway, may act only after being converted to these hepoxilins. The epoxide 5,6-epoxy-8Z,11Z,14Z-eicosatrienoic acid (5,6-EET), made by the metabolism of arachidonic acid by any one of several cytochrome P450 enzymes (see Epoxyeicosatrienoic acid), likewise directly activates TRPA1 to amplify pain perception. Studies with mice, guinea pigs, and human tissues indicate that another arachidonic acid metabolite, prostaglandin E2, operates through its prostaglandin EP3 G-protein coupled receptor to trigger cough responses. Its mechanism of action does not appear to involve direct binding to TRPA1 but rather the indirect activation and/or sensitization of TRPA1 as well as TRPV1 receptors. A genetic polymorphism in the EP3 receptor (rs11209716) has been associated with ACE inhibitor-induced cough in humans. More recently, a peptide toxin termed the wasabi receptor toxin from the Australian black rock scorpion (Urodacus manicatus) was discovered; it was shown to bind TRPA1 non-covalently in the same region as electrophiles and to act as a gating modifier toxin for the receptor, stabilizing the channel in an open conformation. TRPA1 inhibition: A number of small molecule inhibitors (antagonists) have been discovered which block the function of TRPA1. At the cellular level, assays that measure agonist-activated inhibition of TRPA1-mediated calcium fluxes and electrophysiological assays have been used to characterize the potency, species specificity and mechanism of inhibition. 
While the earliest inhibitors, such as HC-030031, had lower potency (micromolar inhibition) and limited TRPA1 specificity, the more recent discovery of highly potent inhibitors with low-nanomolar inhibition constants, such as A-967079 and ALGX-2542, with high selectivity among other members of the TRP superfamily and a lack of interaction with other targets, has provided valuable tool compounds and candidates for future drug development. Resolvin D1 (RvD1) and RvD2 (see resolvins) and maresin 1 are metabolites of the omega-3 fatty acid docosahexaenoic acid. They are members of the specialized proresolving mediators (SPMs) class of metabolites that function to resolve diverse inflammatory reactions and diseases in animal models and, it is proposed, in humans. These SPMs also damp pain perception arising from various inflammation-based causes in animal models. The mechanism behind their pain-dampening effect involves the inhibition of TRPA1, probably (in at least certain cases) by an indirect effect wherein they activate another receptor located on neurons or on nearby microglia or astrocytes. CMKLR1, GPR32, FPR2, and NMDA receptors have been proposed as the receptors through which SPMs may operate to down-regulate TRPs and thereby pain perception. Ligand examples: Agonists: 4-Oxo-2-nonenal, Allicin, Allyl isothiocyanate, ASP-7663, Cannabidiol, Cannabichromene, Gingerol, Icilin, Polygodial, Propofol, Hepoxilins A3 and B3, 12S-Hydroperoxy-5Z,8Z,10E,14Z-eicosatetraenoic acid, 4,5-Epoxyeicosatrienoic acid, Cinnamaldehyde, PF-4840154, 2-Arachidonoylglycerol, Anandamide, N-Arachidonoyl dopamine, Palmitoylethanolamide, Cannabidiolic acid, Cannabidivarin, Cannabigerol, Cannabigerolic acid, Cannabigerovarin, Tetrahydrocannabivarin, and Tetrahydrocannabivarin acid. Gating modifiers: WaTx. Antagonists: HC-030031, GRC17536, A-967079, ALGX-2513, ALGX-2541, ALGX-2563, ALGX-2561, and ALGX-2542.
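Potencies in this section are quoted either as an EC50 (the molar concentration producing half-maximal activation) or as a pEC50, its negative base-10 logarithm. A minimal Python check of that conversion, using values quoted above (the small difference for the dibenzoxazepine derivative presumably reflects rounding of the published EC50):

```python
from math import log10

def pec50(ec50_molar: float) -> float:
    """pEC50 is the negative decadic logarithm of the EC50 in mol/L."""
    return -log10(ec50_molar)

print(round(pec50(0.13e-6), 2))  # dibenzoxazepine derivative: 6.89 (quoted as 6.90)
print(round(pec50(23e-9), 2))    # PF-4840154 on human TRPA1: 7.64
```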
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kawamata–Viehweg vanishing theorem** Kawamata–Viehweg vanishing theorem: In algebraic geometry, the Kawamata–Viehweg vanishing theorem is an extension of the Kodaira vanishing theorem, on the vanishing of coherent cohomology groups, to logarithmic pairs, proved independently by Viehweg and Kawamata in 1982. The theorem states that if L is a big nef line bundle (for example, an ample line bundle) on a complex projective manifold with canonical line bundle K, then the coherent cohomology groups H^i(L⊗K) vanish for all positive i.
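In symbols, writing $X$ for the complex projective manifold, $K_X$ for its canonical line bundle, and $L$ for the nef and big line bundle, the conclusion reads (stated here for the smooth case only):

```latex
H^{i}\left(X,\; L \otimes K_X\right) = 0 \qquad \text{for all } i > 0.
```

Taking $L$ ample recovers the Kodaira vanishing theorem, since ample line bundles are in particular nef and big.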
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ThreadSafe** ThreadSafe: ThreadSafe is a source code analysis tool that identifies application risks and security vulnerabilities associated with concurrency in Java code bases, using whole-program interprocedural analysis. ThreadSafe is used to identify and avoid software failures in concurrent applications running in complex environments. Features: ThreadSafe detects the following Java concurrency defects (a minimal deadlock sketch appears after this section):
- Race conditions, which lead to incorrect or unpredictable behaviour that is difficult to reproduce in a debugger
- Deadlocks, caused by circular waits between threads waiting for shared resources
- Unpredictable results, caused by incorrect handling of concurrent collections, bad error handling, or mixed object synchronization
- Performance bottlenecks, caused by incorrect API usage, redundant synchronization, and unnecessary use of shared mutable state
ThreadSafe is integrated with the Eclipse software development environment and with the SonarQube software quality management platform. Contextual information is provided within the development environment to assist the developer with the investigation and resolution of concurrency issues, directly in the code. A command-line version is available for users of IDEs other than Eclipse and for build-process integration. Checking adherence to standards: ThreadSafe detects violations of the concurrency-related rules in the CERT Oracle Secure Coding Standard for Java.
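ThreadSafe itself analyzes Java, but the defect classes above are language-agnostic. As a purely illustrative sketch (not ThreadSafe output or its API, and in Python rather than Java for brevity), this is the lock-ordering pattern behind the deadlock class; an interprocedural analysis flags it by proving that two threads can acquire the same pair of locks in opposite orders:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:      # holds A, then waits for B
        with lock_b:
            ...

def worker_2():
    with lock_b:      # holds B, then waits for A: a circular wait
        with lock_a:  # running both workers concurrently can deadlock
            ...

# The standard fix, and effectively what an analyzer checks for, is a single
# global acquisition order: every thread takes lock_a before lock_b. Note the
# hazard exists even on runs where the unlucky interleaving never triggers.
```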
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intermediate fibers** Intermediate fibers: Intermediate fibers, also known as fast oxidative-glycolytic fibers, are fast-twitch muscle fibers that have been converted via endurance training. These fibers are slightly larger in diameter, have more mitochondria as well as a greater blood supply, and have more endurance than typical fast-twitch fibers. Most of the body's muscles are composed of these intermediate fibers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sudden ionospheric disturbance** Sudden ionospheric disturbance: A sudden ionospheric disturbance (SID) is any one of several ionospheric perturbations resulting from abnormally high ionization/plasma density in the D region of the ionosphere, caused by a solar flare and/or a solar particle event (SPE). The SID results in a sudden increase in radio-wave absorption that is most severe in the upper medium frequency (MF) and lower high frequency (HF) ranges, and as a result often interrupts or interferes with telecommunications systems. Discovery: The Dellinger effect, or sometimes the Mögel–Dellinger effect, is another name for a sudden ionospheric disturbance. The effect was discovered by John Howard Dellinger around 1935 and was also described by the German physicist Hans Mögel (1900–1944) in 1930. The fadeouts are characterized by sudden onset and a recovery that takes minutes or hours. Cause: When a solar flare occurs on the Sun, a blast of intense ultraviolet (UV) and X-ray (sometimes even gamma-ray) radiation hits the dayside of the Earth after a propagation time of about 8 minutes. This high-energy radiation is absorbed by atmospheric particles, raising them to excited states and knocking electrons free in the process of photoionization. The low-altitude ionospheric layers (D region and E region) immediately increase in density over the entire dayside. The ionospheric disturbance enhances VLF radio propagation, and scientists on the ground can use this enhancement to detect solar flares: by monitoring the signal strength of a distant VLF transmitter, sudden ionospheric disturbances are recorded and indicate when solar flares have taken place. The small geomagnetic effect in the lower ionosphere appears as a small hook on magnetic records and is therefore called the "geomagnetic crochet effect" or "sudden field effect". Effects on radio waves: Short-wave radio waves (in the HF range) are absorbed by the increased ionization in the low-altitude D region of the ionosphere, causing a complete blackout of radio communications. This is called a short-wave fadeout (SWF). These fadeouts last for a few minutes to a few hours and are most severe in the equatorial regions where the Sun is most directly overhead. Although high frequency signals suffer a fadeout because of the enhanced D layer, the sudden ionospheric disturbance enhances long-wave (VLF) radio propagation, and SIDs are observed and recorded by monitoring the signal strength of a distant VLF transmitter. Effects on radio waves: A whole array of sub-classes of SIDs exists, detectable by different techniques at various wavelengths: the short-wave fadeout (SWF), the SPA (sudden phase anomaly), SFD (sudden frequency deviation), SCNA (sudden cosmic noise absorption), SEA (sudden enhancement of atmospherics), etc.
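Amateur SID monitoring follows exactly the scheme described above: log the received amplitude of a distant VLF transmitter and flag sudden daytime enhancements. The following Python sketch shows the idea; the threshold, window length, and synthetic data are illustrative assumptions, not any standard detection criterion:

```python
def detect_sids(amplitudes, window=60, jump_db=3.0):
    """Flag sample indices where signal strength jumps above the recent baseline.

    amplitudes: VLF signal-strength samples in dB (e.g. one per 10 s).
    window:     number of samples used for the running baseline.
    jump_db:    enhancement over baseline treated as a candidate SID.
    """
    events = []
    for i in range(window, len(amplitudes)):
        baseline = sum(amplitudes[i - window:i]) / window
        if amplitudes[i] - baseline >= jump_db:
            events.append(i)
    return events

# Synthetic example: quiet signal near 40 dB, flare-like enhancement to 46 dB.
samples = [40.0] * 100 + [46.0] * 20 + [41.0] * 50
print(detect_sids(samples)[:3])  # first flagged indices of the enhancement
```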
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kelman's source characteristics** Kelman's source characteristics: Kelman's source characteristics identify three characteristics of successful marketing communications sources: source credibility, source attractiveness, and source power. Source attractiveness: An attractive source is one that the receiver can identify with, or aspire to. The receiver identifies with the message from such a source; e.g., "slice of life" advertising for products such as washing powder regularly features actors in situations that are intended to reflect the lives of the target segment. Source power: A powerful source is intended to bring about compliance in the receiver. An example would be a police officer giving an anti-drink-driving message.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Super soft X-ray source** Super soft X-ray source: A luminous supersoft X-ray source (SSXS, or SSS) is an astronomical source that emits only low-energy (i.e., soft) X-rays. Soft X-rays have energies in the 0.09 to 2.5 keV range, whereas hard X-rays are in the 1–20 keV range. SSSs emit few or no photons with energies above 1 keV, and most have effective temperatures below 100 eV. This means that the radiation they emit is highly ionizing and is readily absorbed by the interstellar medium. Most SSSs within our own galaxy are hidden by interstellar absorption in the galactic disk. They are readily evident in external galaxies, with ~10 found in the Magellanic Clouds and at least 15 seen in M31. As of early 2005, more than 100 SSSs had been reported in ~20 external galaxies, the Large Magellanic Cloud (LMC), Small Magellanic Cloud (SMC), and the Milky Way (MW). Those with luminosities below ~3 × 10^38 erg/s are consistent with steady nuclear burning in accreting white dwarfs (WDs) or post-novae. There are a few SSSs with luminosities ≥ 10^39 erg/s. Super soft X-rays are believed to be produced by steady nuclear fusion on a white dwarf's surface of material pulled from a binary companion, the so-called close-binary supersoft source (CBSS). This requires a flow of material sufficiently high to sustain the fusion; contrast this with the nova, where less flow causes the material to fuse only sporadically. Super soft X-ray sources can evolve into type Ia supernovae, where a sudden fusion of material destroys the white dwarf, and into neutron stars, through collapse. Super soft X-ray sources were first discovered by the Einstein Observatory. Further discoveries were made by ROSAT. Many different classes of objects emit supersoft X-radiation (emission dominantly below 0.5 keV). Luminous supersoft X-ray sources: Luminous super soft X-ray sources have a characteristic blackbody temperature of a few tens of eV (~20–100 eV) and a bolometric luminosity of ~10^38 erg/s (below ~3 × 10^38 erg/s). Apparently, luminous SSXSs can have equivalent blackbody temperatures as low as ~15 eV and luminosities ranging from 10^36 to 10^38 erg/s. The number of luminous SSSs in the disks of ordinary spiral galaxies such as the MW and M31 is estimated to be on the order of 10^3. Milky Way SSXSs: SSXSs have now been discovered in our galaxy and in the globular cluster M3. MR Velorum (RX J0925.7-4758) is one of the rare MW super soft X-ray binaries. "The source is heavily reddened by interstellar material, making it difficult to observe in the blue and ultraviolet." The period determined for MR Velorum, at ~4.03 d, is considerably longer than that of other supersoft systems, which is usually less than a day. Close-binary supersoft source (CBSS): The CBSS model invokes steady nuclear burning on the surface of an accreting white dwarf (WD) as the generator of the prodigious super soft X-ray flux. As of 1999, eight SSXSs had orbital periods between ~4 hr and 1.35 d: RX J0019.8+2156 (MW), RX J0439.8-6809 (MW halo near LMC), RX J0513.9-6951 (LMC), RX J0527.8-6954 (LMC), RX J0537.7-7034 (LMC), CAL 83 (LMC), CAL 87 (LMC), and 1E 0035.4-7230 (SMC). Symbiotic binary: A symbiotic binary star is a variable binary star system in which a red giant has expanded its outer envelope and is shedding mass quickly, and another hot star (often a white dwarf) is ionizing the gas. Three symbiotic binaries as of 1999 were SSXSs: AG Dra (BB, MW), RR Tel (WD, MW), and RX J0048.4-7332 (WD, SMC). 
Noninteracting white dwarfs: The youngest, hottest WD, KPD 0005+5106, is very close to 100,000 K, of type DO, and is the first single WD recorded as an X-ray source with ROSAT. Cataclysmic variables: "Cataclysmic variables (CVs) are close binary systems consisting of a white dwarf and a red-dwarf secondary transferring matter via Roche lobe overflow." Both fusion- and accretion-powered cataclysmic variables have been observed to be X-ray sources. The accretion disk may be prone to instability leading to dwarf nova outbursts: a portion of the disk material falls onto the white dwarf, and the cataclysmic outbursts occur when the density and temperature at the bottom of the accumulated hydrogen layer rise high enough to ignite nuclear fusion reactions, which rapidly burn the hydrogen layer to helium. Cataclysmic variables: Apparently the only SSXS nonmagnetic cataclysmic variable is V Sagittae: a bolometric luminosity of (1–10) × 10^37 erg/s, a binary including a blackbody (BB) accretor at T < 80 eV, and an orbital period of 0.514195 d. The accretion disk can become thermally stable in systems with high mass-transfer rates (Ṁ). Such systems are called nova-like (NL) stars, because they lack the outbursts characteristic of dwarf novae. VY Scl cataclysmic variables: Among the NL stars is a small group which shows a temporary reduction or cessation of Ṁ from the secondary. These are the VY Scl-type stars or anti-dwarf novae. VY Scl cataclysmic variables: V751 Cyg V751 Cyg (BB, MW) is a VY Scl CV; it has a bolometric luminosity of 6.5 × 10^36 erg/s and emits soft X-rays at quiescence. The discovery of weak soft X-ray emission from V751 Cyg at minimum presents a challenge, as this is unusual for CVs, which commonly display weak hard X-ray emission at quiescence. The high luminosity (6.5 × 10^36 erg/s) is particularly hard to understand in the context of VY Scl stars generally, because observations suggest that the binaries become simple red dwarf + white dwarf pairs at quiescence (the disk mostly disappears). "A high luminosity in soft X-rays poses an additional problem of understanding why the spectrum is of only modest excitation." The ratio He II λ4686/Hβ did not exceed ~0.5 in any of the spectra recorded up to 2001, which is typical for accretion-powered CVs and does not approach the ratio of 2 commonly seen in supersoft binaries (CBSS). Pushing the edge of acceptable X-ray fits toward lower luminosity suggests that the luminosity should not exceed ~2 × 10^33 erg/s, which gives only ~4 × 10^31 erg/s of reprocessed light in the WD, about equal to the secondary's expected nuclear luminosity. Magnetic cataclysmic variables: X-rays from magnetic cataclysmic variables are common because accretion provides a continuous supply of coronal gas. A plot of the number of systems vs. orbital period shows a statistically significant minimum for periods between 2 and 3 hr, which can probably be understood in terms of the effects of magnetic braking when the companion star becomes completely convective and the usual dynamo (which operates at the base of the convective envelope) can no longer give the companion a magnetic wind to carry off angular momentum. The rotation has been attributed to asymmetric ejection of planetary nebulae and winds, and the fields to in situ dynamos. Orbit and rotation periods are synchronized in strongly magnetized WDs; those with no detectable field are never synchronized. 
Magnetic cataclysmic variables: With temperatures in the range 11,000 to 15,000 K, all the WDs with the most extreme fields are far too cool to be detectable EUV/X-ray sources, e.g., Grw +70°8247, LB 11146, SBS 1349+5434, PG 1031+234, and GD 229. Most highly magnetic WDs appear to be isolated objects, although G 23–46 (7.4 MG) and LB 1116 (670 MG) are in unresolved binary systems.

RE J0317-853 is the hottest magnetic WD at 49,250 K, with an exceptionally intense magnetic field of ~340 MG and an implied rotation period of 725.4 s. Between 0.1 and 0.4 keV, RE J0317-853 was detectable by ROSAT, but not in the higher energy band from 0.4 to 2.4 keV. RE J0317-853 lies 16 arcsec from the blue WD LB 9802 but is apparently not physically associated with it. A centered dipole field is not able to reproduce the observations, but an off-center dipole with 664 MG at the south pole and 197 MG at the north pole does.

Until recently (1995) only PG 1658+441 possessed an effective temperature > 30,000 K; its polar field strength is only 3 MG. The ROSAT Wide Field Camera (WFC) source RE J0616-649 has a ~20 MG field. PG 1031+234 has a surface field that spans the range from ~200 MG to nearly 1000 MG and rotates with a period of 3h24m. The magnetic fields in CVs are confined to a narrow range of strengths, with a maximum of 70–80 MG for RX J1938.4-4623. As of 1999, none of the single magnetic WDs had been seen as an X-ray source, although the fields are of direct relevance to the maintenance of coronae in main sequence stars.

PG 1159 stars: PG 1159 stars are a group of very hot, often pulsating WDs, for which the prototype is PG 1159, whose atmospheres are dominated by carbon and oxygen. PG 1159 stars reach luminosities of ~10³⁸ erg/s but form a rather distinct class. RX J0122.9-7521 has been identified as a galactic PG 1159 star.

Nova: There are three SSXSs with bolometric luminosities of ~10³⁸ erg/s that are novae: GQ Mus (BB, MW), V1974 Cyg (WD, MW), and Nova LMC 1995 (WD). As of 1999 the orbital period of Nova LMC 1995, if it is a binary, was not known. U Sco, a recurrent nova unobserved by ROSAT as of 1999, is a WD (74–76 eV) with Lbol ~ (8–60) × 10³⁶ erg/s and an orbital period of 1.2306 d.

Planetary nebula: In the SMC, 1E 0056.8-7154 is a WD with a bolometric luminosity of 2 × 10³⁷ erg/s that has a planetary nebula associated with it.

Super soft active galactic nuclei: Supersoft active galactic nuclei reach luminosities up to 10⁴⁵ erg/s.

Large amplitude outbursts: Large amplitude outbursts of super soft X-ray emission have been interpreted as tidal disruption events.
**Verge3D** Verge3D: Verge3D is a real-time renderer and a toolkit used for creating interactive 3D experiences running on websites.

Overview: Verge3D enables users to convert content from 3D modelling tools (Blender, 3ds Max, and Maya are currently supported) for viewing in a web browser. Verge3D was created by the same core group of software engineers that previously created the Blend4Web framework.

Features: Verge3D uses WebGL for rendering. It incorporates components of the Three.js library and exposes its API to application developers. Puzzles: Application functionality can be added via JavaScript, either by writing code directly or by using Puzzles, Verge3D's visual programming environment based on Google Blockly. Puzzles is aimed primarily at non-programmers, allowing quick creation of interactive scenarios in a drag-and-drop fashion. App Manager and web publishing: App Manager is a lightweight web-based tool for creating, managing and publishing Verge3D projects, running on top of the local development server. The Verge3D Network service integrated in the App Manager allows for publishing Verge3D applications via Amazon S3 and EC2 cloud services.

Features: PBR: For purposes of authoring materials, a glTF 2.0-compliant physically based rendering pipeline is offered alongside the standard shader-based approach. PBR textures can be authored using external texturing software such as Substance Painter, for which Verge3D offers the corresponding export preset. Besides the glTF 2.0 model, Verge3D supports the physical materials of 3ds Max and Maya (with Autodesk Arnold as reference) and Blender's real-time Eevee materials.

Features: glTF and DCC software integration: Verge3D integrates directly with Blender, 3ds Max, and Maya, enabling users to create 3D geometry, materials, and animations inside the software, then export them in the JSON-based glTF format. The Sneak Peek feature allows for exporting and viewing scenes from the DCC tool environment. Facebook 3D posts: For Facebook publishing, Verge3D offers a specific GLB export option. The exported GLB files are displayed in, and can be opened from, the App Manager. Asset compression: Exported files can optionally use LZMA compression, resulting in a reduction in file size of up to 6x. UI and website layouts: Interface layouts, created using external WYSIWYG editors, can be linked with Puzzles to trigger changes to a 3D scene being rendered in the browser, and vice versa. Animation: Verge3D supports skeletal animation, including animation of bipeds and character rigs, and allows for animation of material parameters. Model parts can also be set up to be dragged by the user. Physics: The physics module can be linked separately to enable collision detection, dynamically moving objects, support for characters and vehicles, springs, ropes, and cloth simulation. As of version 2.11, simple physics simulations can be created and controlled without coding via Puzzles, the visual programming system used by Verge3D. AR/VR: The 2.10 update added support for WebXR, an in-development open technology designed to enable virtual reality and augmented reality experiences to be displayed in web browsers. It works both with headsets with controllers, like the HTC Vive and Oculus Rift, and with those without, like Google Cardboard. AR/VR experiences can be enabled via Puzzles or JavaScript.

Workflow: Verge3D's workflow differs substantially from other mainstream WebGL frameworks. Development of a new Verge3D application usually starts with modeling, texturing and animating 3D objects.
The models are assembled in the 3D authoring tool. The scene file is then used as the basis for a Verge3D project initialized from the App Manager. An interactive scenario is optionally added using the Puzzles editor. A Verge3D application can be previewed in the web browser at any development stage using the App Manager. The finished web application can be deployed on the Verge3D Network, on Facebook, or on the user's website.

Notable uses: NASA's Jet Propulsion Laboratory used Verge3D to create an interactive 3D visualization of the Mars InSight lander. The web application allows for exploring and interacting with a real-time model of the spacecraft, with the possibility to move different parts and unfurl the solar panels.

Notable uses: NASA's older interactive web application Experience Curiosity was ported to Verge3D from Blend4Web. The application makes it possible to operate the rover, control its cameras and the robotic arm, and reproduces some of the prominent events of the Mars Science Laboratory mission. Route 66 Digital's Escape Room used Verge3D and Blender. This interactive short explores how users can navigate 3D spaces and interact with objects without the need for instruction.
**Conductor support system** Conductor support system: On offshore oil platforms, conductor support systems, also known as conductor supported systems or satellite platforms, are small unmanned installations consisting of little more than a well bay and a small process plant. They are designed to operate in conjunction with a static production platform, to which they are connected by flow lines and/or umbilical cables.

Conductor support system: Traditionally, these jacket-type structures have been installed and used in shallow to medium water depths of up to 40–60 meters. The conductor supported system uses the inherent strength of the well conductors to support both the wells and the topside structure. The conductor supported system is particularly suited to areas with more benign environmental conditions; however, it is a common development option even in hurricane- and cyclone-prone areas such as the Gulf of Mexico or Australia's Carnarvon Basin. The well conductors act as both structural, weight-supporting piles and flowlines for the produced fluids from the well. They are drilled and installed with a drilling jackup rig using conventional drilling/lifting techniques. A leading proponent of this cost-effective style of offshore development was Apache Energy, which commissioned numerous conductor supported wellhead platforms in the Carnarvon Basin to feed its so-called "String of Pearls" discoveries. Notably, in this basin, the small-field designs such as conductor supported platforms and monopods often lie in proximity to the massive offshore liquefied natural gas fields of the North West Shelf, Gorgon, Wheatstone and Pluto developments.
**Baby blue** Baby blue: Baby blue is a tint of azure, one of the pastel colors. The first recorded use of baby blue as a color name in English was in 1892.

Variations of baby blue: Beau blue: Beau blue is a light tone of baby blue. "Beau" means "beautiful" in French. The source of this color is the color called beau blue in the Plochere Color System, a color system formulated in 1948 that is widely used by interior designers. Baby blue eyes: Baby blue eyes is a rich tone of baby blue. The source of this color is the color called baby blue eyes in the Plochere Color System, a color system formulated in 1948 that is widely used by interior designers. Little boy blue: Little boy blue is a deep tone of baby blue. The source of this color is the "Pantone Textile Paper eXtended (TPX)" color list, color #16-4132 TPX—Little Boy Blue.

Baby blue in human culture: In Western culture, the color baby blue is often associated with baby boys (and baby pink with baby girls), particularly in clothing, linen, and shoes.

Baby blue in human culture: In the late 1960s, philosopher Alan Watts, who lived in Sausalito, a suburb of San Francisco, suggested that police cars be painted baby blue and white instead of black and white. This proposal was implemented in San Francisco from the late 1970s until the late 1980s. Watts also suggested that the police should wear baby blue uniforms because, he asserted, this would make them less likely to commit acts of police brutality than if they were wearing the usual dark blue uniforms. This proposal was never implemented.

Baby blue in human culture: Baby blue is an official color used in the flag of Argentina.
**Hasse derivative** Hasse derivative: In mathematics, the Hasse derivative is a generalisation of the derivative which allows the formulation of Taylor's theorem in coordinate rings of algebraic varieties.

Definition: Let $k[X]$ be a polynomial ring over a field $k$. The $r$-th Hasse derivative of $X^n$ is
$$D^{(r)} X^n = \binom{n}{r} X^{n-r}$$
if $n \geq r$, and zero otherwise. In characteristic zero we have
$$D^{(r)} = \frac{1}{r!} \left( \frac{d}{dX} \right)^{r}.$$

Properties: The Hasse derivative is a generalized derivation on $k[X]$ and extends to a generalized derivation on the function field $k(X)$, satisfying an analogue of the product rule
$$D^{(r)}(fg) = \sum_{i=0}^{r} D^{(i)}(f)\, D^{(r-i)}(g)$$
and an analogue of the chain rule. Note that the $D^{(r)}$ are not themselves derivations in general, but are closely related. A form of Taylor's theorem holds for a function $f$ defined in terms of a local parameter $t$ on an algebraic variety:
$$f = \sum_r D^{(r)}(f) \cdot t^r.$$
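To make the definition concrete, here is a small Python sketch (entirely my own illustration; the function name is hypothetical). It applies the binomial-coefficient rule to a coefficient list and shows the key payoff of the Hasse derivative: over a field of characteristic p, where 1/r! may not exist, $D^{(r)}$ is still well defined.

```python
# Illustrative sketch (names are mine, not from the article): the r-th Hasse
# derivative sends X^n to C(n, r) * X^(n-r), which stays meaningful over a
# field of characteristic p even when dividing by r! would not.
from math import comb

def hasse_derivative(coeffs, r, p=None):
    """coeffs[n] = coefficient of X^n. Returns the coefficient list of D^(r)f,
    reduced mod p when p is given (i.e., over a field of characteristic p)."""
    out = [0] * max(len(coeffs) - r, 1)
    for n, c in enumerate(coeffs):
        if n >= r:
            out[n - r] = comb(n, r) * c
            if p:
                out[n - r] %= p
    return out

# Over F_5, the ordinary 5th derivative of X^5 is 5! = 120 = 0 (mod 5),
# and 1/5! does not even exist; the Hasse derivative gives C(5,5) = 1,
# which is exactly the coefficient the Taylor expansion above requires.
print(hasse_derivative([0, 0, 0, 0, 0, 1], 5, p=5))  # -> [1]
```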
**Cadmium** Cadmium: Cadmium is a chemical element with the symbol Cd and atomic number 48. This soft, silvery-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it demonstrates oxidation state +2 in most of its compounds, and like mercury, it has a lower melting point than the transition metals in groups 3 through 11. Cadmium and its congeners in group 12 are often not considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate.

Cadmium: Cadmium occurs as a minor component in most zinc ores and is a byproduct of zinc production. Cadmium was used for a long time as a corrosion-resistant plating on steel, and cadmium compounds are used as red, orange, and yellow pigments, to color glass, and to stabilize plastic. Cadmium use is generally decreasing because it is toxic (it is specifically listed in the European Restriction of Hazardous Substances Directive) and nickel-cadmium batteries have been replaced with nickel-metal hydride and lithium-ion batteries. One of its few new uses is in cadmium telluride solar panels.

Cadmium: Although cadmium has no known biological function in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms.

Characteristics: Physical properties: Cadmium is a soft, malleable, ductile, silvery-white divalent metal. It is similar in many respects to zinc but forms complex compounds. Unlike most other metals, cadmium is resistant to corrosion and is used as a protective plate on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes.

Characteristics: Chemical properties: Although cadmium usually has an oxidation state of +2, it also exists in the +1 state. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. Cadmium burns in air to form brown amorphous cadmium oxide (CdO); the crystalline form of this compound is dark red and changes color when heated, similar to zinc oxide. Hydrochloric acid, sulfuric acid, and nitric acid dissolve cadmium by forming cadmium chloride (CdCl2), cadmium sulfate (CdSO4), or cadmium nitrate (Cd(NO3)2). The oxidation state +1 can be produced by dissolving cadmium in a mixture of cadmium chloride and aluminium chloride, forming the Cd₂²⁺ cation, which is similar to the Hg₂²⁺ cation in mercury(I) chloride:

Characteristics: Cd + CdCl2 + 2 AlCl3 → Cd2(AlCl4)2. The structures of many cadmium complexes with nucleobases, amino acids, and vitamins have been determined.

Characteristics: Isotopes: Naturally occurring cadmium is composed of eight isotopes. Two of them are radioactive, and three are expected to decay but have not done so under laboratory conditions. The two natural radioactive isotopes are 113Cd (beta decay, half-life 7.7×10¹⁵ y) and 116Cd (two-neutrino double beta decay, half-life 2.9×10¹⁹ y). The other three are 106Cd, 108Cd (both double electron capture), and 114Cd (double beta decay); only lower limits on these half-lives have been determined. At least three isotopes – 110Cd, 111Cd, and 112Cd – are stable.
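To get a feel for what half-lives of this magnitude mean, the short Python sketch below (my own illustration, not from the source) applies the standard exponential-decay relation N/N₀ = 2^(−t/t½) to the 113Cd half-life quoted above.

```python
# Illustrative arithmetic only (not from the source): what a half-life of
# 7.7e15 years (113Cd, quoted above) implies via N/N0 = 2**(-t / t_half).
HALF_LIFE_113CD_YEARS = 7.7e15

def surviving_fraction(t_years, half_life_years):
    """Fraction of an initial sample remaining after t years."""
    return 2.0 ** (-t_years / half_life_years)

age_of_universe_years = 1.38e10  # assumed round value for the example
f = surviving_fraction(age_of_universe_years, HALF_LIFE_113CD_YEARS)
print(f"113Cd remaining after the age of the universe: {f:.8f}")  # ~0.99999876
# The exponent t / t_half is only ~1.8e-6, so essentially no 113Cd has
# decayed yet -- consistent with it still occurring in natural cadmium.
```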
Among the isotopes that do not occur naturally, the most long-lived are 109Cd with a half-life of 462.6 days, and 115Cd with a half-life of 53.46 hours. All of the remaining radioactive isotopes have half-lives of less than 2.5 hours, and the majority have half-lives of less than 5 minutes. Cadmium has 8 known meta states, the most stable being 113mCd (t½ = 14.1 years), 115mCd (t½ = 44.6 days), and 117mCd (t½ = 3.36 hours). The known isotopes of cadmium range in atomic mass from 94.950 u (95Cd) to 131.946 u (132Cd). For isotopes lighter than 112 u, the primary decay mode is electron capture and the dominant decay product is element 47 (silver). Heavier isotopes decay mostly through beta emission, producing element 49 (indium). One isotope of cadmium, 113Cd, absorbs neutrons with high selectivity: with very high probability, neutrons with energy below the cadmium cut-off will be absorbed; those higher than the cut-off will be transmitted. The cadmium cut-off is about 0.5 eV, and neutrons below that level are deemed slow neutrons, distinct from intermediate and fast neutrons. Cadmium is created via the s-process in low- to medium-mass stars with masses of 0.6 to 10 solar masses, over thousands of years. In that process, a silver atom captures a neutron and then undergoes beta decay.

History: Cadmium (Latin cadmia, Greek καδμεία meaning "calamine", a cadmium-bearing mixture of minerals that was named after the Greek mythological character Κάδμος, Cadmus, the founder of Thebes) was discovered in contaminated zinc compounds sold in pharmacies in Germany in 1817 by Friedrich Stromeyer. Karl Samuel Leberecht Hermann simultaneously investigated the discoloration in zinc oxide and found an impurity, first suspected to be arsenic because of the yellow precipitate it formed with hydrogen sulfide. Additionally, Stromeyer discovered that one supplier sold zinc carbonate instead of zinc oxide. Stromeyer found the new element as an impurity in zinc carbonate (calamine), and, for 100 years, Germany remained the only important producer of the metal. The metal was named after the Latin word for calamine, because it was found in this zinc ore. Stromeyer noted that some impure samples of calamine changed color when heated but pure calamine did not. He was persistent in studying these results and eventually isolated cadmium metal by roasting and reducing the sulfide. The potential of cadmium yellow as a pigment was recognized in the 1840s, but the lack of cadmium limited this application.

Even though cadmium and its compounds are toxic in certain forms and concentrations, the British Pharmaceutical Codex from 1907 states that cadmium iodide was used as a medication to treat "enlarged joints, scrofulous glands, and chilblains". In 1907, the International Astronomical Union defined the international ångström in terms of a red cadmium spectral line (wavelength 6438.46963 Å). This was adopted by the 7th General Conference on Weights and Measures in 1927. In 1960, the definitions of both the metre and ångström were changed to use krypton.

After industrial-scale production of cadmium started in the 1930s and 1940s, the major application of cadmium was the coating of iron and steel to prevent corrosion; in 1944, 62%, and in 1956, 59% of the cadmium in the United States was used for plating.
In 1956, 24% of the cadmium in the United States was used for a second application: red, orange, and yellow pigments made from sulfides and selenides of cadmium. The stabilizing effect of cadmium chemicals like the carboxylates cadmium laurate and cadmium stearate on PVC led to an increased use of those compounds in the 1970s and 1980s. The demand for cadmium in pigments, coatings, stabilizers, and alloys declined as a result of environmental and health regulations in the 1980s and 1990s; in 2006, only 7% of total cadmium consumption was used for plating, and only 10% was used for pigments.

History: At the same time, these decreases in consumption were compensated by a growing demand for cadmium for nickel-cadmium batteries, which accounted for 81% of the cadmium consumption in the United States in 2006.

Occurrence: Cadmium makes up about 0.1 ppm of Earth's crust. It is much rarer than zinc, which makes up about 65 ppm. No significant deposits of cadmium-containing ores are known. The only cadmium mineral of importance, greenockite (CdS), is nearly always associated with sphalerite (ZnS). This association is caused by the geochemical similarity between zinc and cadmium, with no geological process likely to separate them. Thus, cadmium is produced mainly as a byproduct of mining, smelting, and refining sulfidic ores of zinc and, to a lesser degree, lead and copper. Small amounts of cadmium, about 10% of consumption, are produced from secondary sources, mainly from dust generated by recycling iron and steel scrap. Production in the United States began in 1907, but wide use began after World War I.

Metallic cadmium can be found in the Vilyuy River basin in Siberia. Rocks mined for phosphate fertilizers contain varying amounts of cadmium, resulting in a cadmium concentration of as much as 300 mg/kg in the fertilizers and a high cadmium content in agricultural soils. Coal can contain significant amounts of cadmium, which ends up mostly in coal fly ash. Cadmium in soil can be absorbed by crops such as rice and cocoa. The Chinese Ministry of Agriculture measured in 2002 that 28% of the rice it sampled had excess lead and 10% had excess cadmium above the limits defined by law. Consumer Reports tested 28 brands of dark chocolate sold in the United States in 2022 and found cadmium in all of them, with 13 exceeding the California Maximum Allowable Dose level. Some plants such as willow trees and poplars have been found to clean both lead and cadmium from soil.

Typical background concentrations of cadmium do not exceed 5 ng/m³ in the atmosphere, 2 mg/kg in soil, 1 μg/L in freshwater, and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/L may be stable in water having low total solute concentrations and pH, and can be difficult to remove by conventional water treatment processes.

Production: Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc ore concentrates from sulfidic zinc ores contain up to 1.4% cadmium. In the 1970s, the output of cadmium was 6.5 pounds (2.9 kg) per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid.
Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated from the electrolysis solution. The British Geological Survey reports that in 2001, China was the top producer of cadmium, with almost one-sixth of the world's production, closely followed by South Korea and Japan.

Applications: Cadmium is a common component of electric batteries, pigments, coatings, and electroplating.

Applications: Batteries: In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel-cadmium batteries. Nickel-cadmium cells have a nominal cell potential of 1.2 V. The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). The European Union put a limit on cadmium in electronics in 2004 of 0.01%, with some exceptions, and in 2006 reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver-cadmium battery.

Applications: Electroplating: Cadmium electroplating, consuming 6% of global production, is used in the aircraft industry to reduce corrosion of steel components. The coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to tensile strengths above 1300 MPa (200 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition).

Applications: Titanium embrittlement from cadmium-plated tool residues resulted in the banishment of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium.

Applications: Nuclear fission: Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium.

Applications: Televisions: QLED TVs have begun to include cadmium in their construction. Some companies are looking to reduce the environmental and human-exposure impact of the material during television production. Anticancer drugs: Complexes based on heavy metals have great potential for the treatment of a wide variety of cancers, but their use is often limited due to toxic side effects. However, the field is advancing, and promising new cadmium complexes with reduced toxicity have been discovered. Compounds: Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums.

Applications: Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red.
To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that, during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin, even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%.

In PVC, cadmium was used as a heat, light, and weathering stabilizer. Cadmium stabilizers have now been completely replaced with barium-zinc, calcium-zinc and organo-tin stabilizers. Cadmium is used in many kinds of solder and bearing alloys because it has a low coefficient of friction and good fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal.

Applications: Semiconductors: Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and are used in some motion detectors.

Applications: Laboratory uses: Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy as well as various laboratory applications requiring laser light at these wavelengths. Cadmium selenide quantum dots emit bright luminescence under UV excitation (from a He-Cd laser, for example). The color of this luminescence can be green, yellow or red depending on the particle size. Colloidal solutions of those particles are used for imaging of biological tissues and solutions with a fluorescence microscope. In molecular biology, cadmium is used to block voltage-dependent calcium channels from fluxing calcium ions, as well as in hypoxia research to stimulate proteasome-dependent degradation of HIF-1α. Cadmium-selective sensors based on the fluorophore BODIPY have been developed for imaging and sensing of cadmium in cells. One powerful method for monitoring cadmium in aqueous environments involves electrochemistry: by employing a self-assembled monolayer, one can obtain a cadmium-selective electrode with ppt-level sensitivity.

Biological role and research: Cadmium has no known function in higher organisms and is considered toxic. Cadmium is considered an environmental pollutant that poses a health hazard to living organisms. Administration of cadmium to cells causes oxidative stress and increases the levels of antioxidants produced by cells to protect against macromolecular damage. However, a cadmium-dependent carbonic anhydrase has been found in some marine diatoms. The diatoms live in environments with very low zinc concentrations, and cadmium performs the function normally carried out by zinc in other anhydrases. This was discovered with X-ray absorption near edge structure (XANES) spectroscopy. Cadmium is preferentially absorbed in the kidneys of humans. Up to about 30 mg of cadmium is commonly inhaled throughout human childhood and adolescence. Cadmium is under research regarding its toxicity in humans, potentially elevating risks of cancer, cardiovascular disease, and osteoporosis.
Environment: The biogeochemistry of cadmium and its release to the environment has been the subject of review, as has the speciation of cadmium in the environment.

Safety: The bioinorganic aspects of cadmium toxicity have been reviewed by individuals and organizations. The most dangerous form of occupational exposure to cadmium is inhalation of fine dust and fumes, or ingestion of highly soluble cadmium compounds. Inhalation of cadmium fumes can result initially in metal fume fever, but may progress to chemical pneumonitis, pulmonary edema, and death. Cadmium is also an environmental hazard. Human exposure is primarily from fossil fuel combustion, phosphate fertilizers, natural sources, iron and steel production, cement production and related activities, nonferrous metals production, and municipal solid waste incineration. Other sources of cadmium include bread, root crops, and vegetables.

Safety: There have been a few instances of general population poisoning as the result of long-term exposure to cadmium in contaminated food and water. Research into an estrogen mimicry that may induce breast cancer is ongoing as of 2012. In the decades leading up to World War II, mining operations contaminated the Jinzū River in Japan with cadmium and traces of other toxic metals. As a consequence, cadmium accumulated in the rice crops along the riverbanks downstream of the mines. Some members of the local agricultural communities who consumed the contaminated rice developed itai-itai disease and renal abnormalities, including proteinuria and glucosuria. The victims of this poisoning were almost exclusively post-menopausal women with low iron and low body stores of other minerals. Similar general population cadmium exposures in other parts of the world have not resulted in the same health problems, because those populations maintained sufficient iron and other mineral levels. Thus, although cadmium is a major factor in the itai-itai disease in Japan, most researchers have concluded that it was one of several factors.

Cadmium is one of six substances banned by the European Union's Restriction of Hazardous Substances (RoHS) directive, which regulates hazardous substances in electrical and electronic equipment but allows for certain exemptions and exclusions from the scope of the law. The International Agency for Research on Cancer has classified cadmium and cadmium compounds as carcinogenic to humans. Although occupational exposure to cadmium is linked to lung and prostate cancer, there is still uncertainty about the carcinogenicity of cadmium at low environmental exposures. Recent data from epidemiological studies suggest that intake of cadmium through diet is associated with a higher risk of endometrial, breast, and prostate cancer, as well as with osteoporosis in humans. A recent study has demonstrated that endometrial tissue is characterized by higher levels of cadmium in current and former smoking females.

Cadmium exposure is associated with a large number of illnesses including kidney disease, early atherosclerosis, hypertension, and cardiovascular diseases. Although studies show a significant correlation between cadmium exposure and the occurrence of disease in human populations, a molecular mechanism has not yet been identified. One hypothesis holds that cadmium is an endocrine disruptor, and some experimental studies have shown that it can interact with different hormonal signaling pathways.
For example, cadmium can bind to the estrogen receptor alpha and affect signal transduction along the estrogen and MAPK signaling pathways at low doses. The tobacco plant absorbs and accumulates heavy metals such as cadmium from the surrounding soil into its leaves. Following tobacco smoke inhalation, these are readily absorbed into the body of users. Tobacco smoking is the most important single source of cadmium exposure in the general population. An estimated 10% of the cadmium content of a cigarette is inhaled through smoking. Absorption of cadmium through the lungs is more effective than through the gut; as much as 50% of the cadmium inhaled in cigarette smoke may be absorbed.

Safety: On average, cadmium concentrations in the blood of smokers are 4 to 5 times greater than in non-smokers, and in the kidney 2–3 times greater than in non-smokers. Despite the high cadmium content in cigarette smoke, there seems to be little exposure to cadmium from passive smoking. In a non-smoking population, food is the greatest source of exposure. High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. Most plants bio-accumulate metal toxins such as cadmium; when composted to form organic fertilizers, they yield a product that can often contain high amounts of metal toxins (e.g., over 0.5 mg for every kilogram of fertilizer). Fertilizers made from animal dung (e.g., cow dung) or urban waste can contain similar amounts of cadmium. The cadmium added to the soil from fertilizers (rock phosphates or organic fertilizers) becomes bioavailable and toxic only if the soil pH is low (i.e., in acidic soils).

Safety: Zinc, copper, calcium, and iron ions, and selenium with vitamin C, are used to treat cadmium intoxication, though it is not easily reversed.

Safety: Regulations: Because of the adverse effects of cadmium on the environment and human health, the supply and use of cadmium is restricted in Europe under the REACH Regulation. The EFSA Panel on Contaminants in the Food Chain specifies that 2.5 μg/kg body weight is a tolerable weekly intake for humans; the Joint FAO/WHO Expert Committee on Food Additives has declared 7 μg/kg body weight to be the provisional tolerable weekly intake level (a quick arithmetic sketch of these limits follows at the end of this article). The state of California requires a food label to carry a warning about potential exposure to cadmium on products such as cocoa powder. The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) for cadmium at a time-weighted average (TWA) of 0.005 mg/m³. The National Institute for Occupational Safety and Health (NIOSH) has not set a recommended exposure limit (REL) and has designated cadmium as a known human carcinogen. The IDLH (immediately dangerous to life and health) level for cadmium is 9 mg/m³.

Safety: In addition to mercury, the presence of cadmium in some batteries has led to the requirement of proper disposal (or recycling) of batteries.

Safety: Product recalls: In May 2006, a sale of the seats from Arsenal F.C.'s old stadium, Highbury in London, England, was cancelled when the seats were discovered to contain trace amounts of cadmium. Reports of high levels of cadmium use in children's jewelry in 2010 led to a US Consumer Product Safety Commission investigation.
The U.S. CPSC issued specific recall notices for cadmium content in jewelry sold by Claire's and Wal-Mart stores.

Safety: In June 2010, McDonald's voluntarily recalled more than 12 million promotional Shrek Forever After 3D Collectible Drinking Glasses because of the cadmium levels in the paint pigments on the glassware. The glasses were manufactured by Arc International of Millville, New Jersey, USA.
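As referenced in the Regulations paragraph above, the per-kilogram intake limits are easy to scale to a whole-body figure. The Python snippet below is my own illustration (the 70 kg body weight is an assumed example value, not from the source).

```python
# Illustrative arithmetic only: scaling the per-kilogram weekly intake
# limits quoted in the Regulations paragraph to a whole-body figure.
# The 70 kg body weight is an assumed example value, not from the source.
EFSA_TWI_UG_PER_KG = 2.5    # EFSA tolerable weekly intake, ug Cd per kg
JECFA_PTWI_UG_PER_KG = 7.0  # JECFA provisional tolerable weekly intake

body_weight_kg = 70.0
print(f"EFSA limit:  {EFSA_TWI_UG_PER_KG * body_weight_kg:.0f} ug per week")
print(f"JECFA limit: {JECFA_PTWI_UG_PER_KG * body_weight_kg:.0f} ug per week")
# -> 175 ug and 490 ug of cadmium per week for a 70 kg adult.
```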
**Mannan endo-1,6-alpha-mannosidase** Mannan endo-1,6-alpha-mannosidase: Mannan endo-1,6-α-mannosidase (EC 3.2.1.101, exo-1,6-β-mannanase, endo-α-1→6-D-mannanase, endo-1,6-β-mannanase, mannan endo-1,6-β-mannosidase, 1,6-α-D-mannan mannanohydrolase) is an enzyme with systematic name 6-α-D-mannan mannanohydrolase. It catalyses the random hydrolysis of (1→6)-α-D-mannosidic linkages in unbranched (1→6)-mannans.
**Benign metastasizing leiomyoma** Benign metastasizing leiomyoma: Benign metastasizing leiomyoma is a rare condition characterized by the growth of uterine leiomyoma in other regions of the body, especially the lungs.
**Nonlinear complementarity problem** Nonlinear complementarity problem: In applied mathematics, a nonlinear complementarity problem (NCP) with respect to a mapping $f : \mathbb{R}^n \to \mathbb{R}^n$, denoted by NCP$f$, is to find a vector $x \in \mathbb{R}^n$ such that
$$x \geq 0, \qquad f(x) \geq 0, \qquad \text{and} \qquad x^{\mathsf T} f(x) = 0,$$
where $f$ is a smooth mapping. The case of a discontinuous mapping was discussed by Habetler and Kostreva (1978).
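To make the complementarity conditions concrete, here is a small self-contained Python check (all names and example values are mine, purely illustrative); it verifies the three conditions for the affine special case f(x) = Mx + q, known as a linear complementarity problem.

```python
# Illustrative check (example values are mine): an NCP asks for x >= 0 with
# f(x) >= 0 and x . f(x) = 0, i.e., each coordinate pair (x_i, f_i(x)) has
# at least one zero ("complementarity").
import numpy as np

def is_ncp_solution(x, f, tol=1e-9):
    """True if x satisfies x >= 0, f(x) >= 0, and x^T f(x) = 0 (within tol)."""
    fx = f(x)
    return (x >= -tol).all() and (fx >= -tol).all() and abs(x @ fx) <= tol

# A linear example f(x) = M x + q (an LCP, the affine special case of NCP).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -3.0])
f = lambda x: M @ x + q

print(is_ncp_solution(np.array([1.0, 1.0]), f))  # True: f(x) = 0 here
print(is_ncp_solution(np.array([0.0, 0.0]), f))  # False: f(0) = q < 0
```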
**Arogenate dehydratase** Arogenate dehydratase: Arogenate dehydratase (ADT) (EC 4.2.1.91) is an enzyme that catalyzes the chemical reaction L-arogenate → L-phenylalanine + H2O + CO2. Certain forms of the protein have the potential to catalyze a second reaction, L-prephenate → phenylpyruvate + H2O + CO2. This enzyme participates in phenylalanine, tyrosine, and tryptophan biosynthesis.

Nomenclature: This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is L-arogenate hydro-lyase (decarboxylating; L-phenylalanine-forming). Other names in common use include: arogenate dehydratase, L-arogenate hydro-lyase (decarboxylating), cyclohexadienyl dehydratase, carbocyclohexadienyl dehydratase, pheC, and ADT.

Reaction: The carboxyl and hydroxyl groups attached to the 2,5-cyclohexadiene ring are eliminated from L-arogenate, departing as carbon dioxide and water. The 2,5-cyclohexadiene ring becomes a phenyl ring, and L-phenylalanine is formed.

Reaction: Certain forms of ADT have been shown to exhibit some prephenate dehydratase (PDT) activity in addition to the standard ADT activity described above. Known as cyclohexadienyl dehydratases or carbocyclohexadienyl dehydratases (listed above), these forms of the enzyme catalyze the same type of reaction (a decarboxylation and a dehydration) on prephenate: the carboxyl and hydroxyl groups attached to the 2,5-cyclohexadiene ring are removed, leaving phenylpyruvate.

Function: ADT catalyzes a reaction characterized by two major changes in the structure of the substrate, these being a decarboxylation and a dehydration; the enzyme removes a carboxyl group and a water molecule (respectively). Both potential products of these reactions (L-phenylalanine and phenylpyruvate) occur at or near the end of the biosynthetic pathway. Total synthesis of L-arogenate has been reported.

Structure: The structures of arogenate dehydratases are described as having, for the most part, three major sections. ADTs contain an N-terminal transit peptide, a PDT-like domain, and an ACT (aspartokinase, chorismate mutase, TyrA) domain.

Homologues: Homologues for ADT have been isolated in Arabidopsis thaliana (rabbit-ear cress), Nicotiana sylvestris (tobacco), Spinacia oleracea (spinach), Petunia hybrida, Sorghum bicolor, Oryza sativa, and Pinus pinaster, which are all considered higher-order plants. Erwinia herbicola and Pseudomonas aeruginosa are known to have homologues for cyclohexadienyl dehydratase. Of the plants with ADT homologues, Arabidopsis thaliana, Petunia hybrida, and Pinus pinaster are known to have paralogues of the gene (six, three, and nine, respectively).
**Hilbert projection theorem** Hilbert projection theorem: In mathematics, the Hilbert projection theorem is a famous result of convex analysis that says that for every vector $x$ in a Hilbert space $H$ and every nonempty closed convex set $C \subseteq H$, there exists a unique vector $m \in C$ for which $\lVert c - x \rVert$ is minimized over the vectors $c \in C$; that is, such that $\lVert m - x \rVert \leq \lVert c - x \rVert$ for every $c \in C$.

Finite dimensional case: Some intuition for the theorem can be obtained by considering the first-order condition of the optimization problem. Consider a finite-dimensional real Hilbert space $H$ with a subspace $C$ and a point $x$. If $m \in C$ is a minimizer or minimum point of the function $N : C \to \mathbb{R}$ defined by $N(c) := \lVert c - x \rVert$ (which is the same as the minimum point of $c \mapsto \lVert c - x \rVert^2$), then the derivative must be zero at $m$. In matrix derivative notation,
$$\partial \lVert c - x \rVert^2 = \partial \langle c - x,\, c - x \rangle = 2 \langle c - x,\, \partial c \rangle.$$
Since $\partial c$ is a vector in $C$ that represents an arbitrary tangent direction, it follows that $m - x$ must be orthogonal to every vector in $C$.

Statement: Detailed elementary proof. Proof by reduction to a special case: it suffices to prove the theorem in the case of $x = 0$, because the general case follows from the statement below by replacing $C$ with $C - x$.

Properties: Expression as a global minimum. The statement and conclusion of the Hilbert projection theorem can be expressed in terms of global minimums of the following functions. Their notation will also be used to simplify certain statements.

Properties: Given a non-empty subset $C \subseteq H$ and some $x \in H$, define a function
$$d_{C,x} : C \to [0, \infty), \qquad d_{C,x}(c) := \lVert c - x \rVert.$$
A global minimum point of $d_{C,x}$, if one exists, is any point $m$ in $\operatorname{domain} d_{C,x} = C$ such that $d_{C,x}(m) \leq d_{C,x}(c)$ for all $c \in C$, in which case $d_{C,x}(m) = \lVert m - x \rVert$ is equal to the global minimum value of the function $d_{C,x}$, which is
$$\inf_{c \in C} \lVert c - x \rVert.$$

Effects of translations and scalings: When this global minimum point $m$ exists and is unique, denote it by $\min(C, x)$; explicitly, the defining properties of $\min(C, x)$ (if it exists) are
$$\min(C, x) \in C \qquad \text{and} \qquad \lVert \min(C, x) - x \rVert \leq \lVert c - x \rVert \quad \text{for all } c \in C.$$
The Hilbert projection theorem guarantees that this unique minimum point exists whenever $C$ is a non-empty closed and convex subset of a Hilbert space. However, such a minimum point can also exist in non-convex or non-closed subsets as well; for instance, as long as $C$ is non-empty, if $x \in C$ then $\min(C, x) = x$.

Properties: If $C \subseteq H$ is a non-empty subset, $s$ is any scalar, and $x, x_0 \in H$ are any vectors, then
$$\min(s C + x_0,\, s x + x_0) = s \min(C, x) + x_0,$$
which implies
$$\min(s C, s x) = s \min(C, x) \qquad \text{and} \qquad \min(x_0 + C,\, x_0 + x) = x_0 + \min(C, x).$$

Examples: The following counter-example demonstrates a continuous linear isomorphism $A : H \to H$ for which $\min(A(C), A(x)) \neq A(\min(C, x))$. Endow $H := \mathbb{R}^2$ with the dot product, let $x_0 := (0, 1)$, and for every real $s \in \mathbb{R}$, let $L_s := \{(x, s x) : x \in \mathbb{R}\}$ be the line of slope $s$ through the origin, where it is readily verified that $\min(L_s, x_0) = \frac{s}{1 + s^2}(1, s)$. Pick a real number $r \neq 0$ and define $A : \mathbb{R}^2 \to \mathbb{R}^2$ by $A(x, y) := (r x, y)$ (so this map scales the $x$-coordinate by $r$ while leaving the $y$-coordinate unchanged). Then $A$ is an invertible continuous linear operator that satisfies $A(L_s) = L_{s/r}$ and $A(x_0) = x_0$, so that $\min(A(L_s), A(x_0)) = \frac{s}{r^2 + s^2}(r, s)$ and $A(\min(L_s, x_0)) = \frac{s}{1 + s^2}(r, s)$. Consequently, if $C := L_s$ with $s \neq 0$ and $r \neq \pm 1$, then $\min(A(C), A(x_0)) \neq A(\min(C, x_0))$.
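The counter-example above is easy to verify numerically. The NumPy sketch below is my own illustration (the function name and the sample values s = 2, r = 3 are assumptions, not from the source); it computes the orthogonal projection onto the line $L_s$, confirms the closed-form $\min(L_s, x_0)$, and shows that applying $A$ before versus after projecting gives different points.

```python
# Numerical sanity check (my own illustration) of the example above: project
# x0 = (0, 1) onto the line L_s and verify min(L_s, x0) = s/(1+s^2) (1, s),
# then confirm the scaling map A(x, y) = (r x, y) does not commute with it.
import numpy as np

def project_onto_line(direction, x):
    """Orthogonal projection of x onto span{direction} (closed and convex)."""
    d = direction / np.linalg.norm(direction)
    return (x @ d) * d

s, r = 2.0, 3.0
x0 = np.array([0.0, 1.0])

m = project_onto_line(np.array([1.0, s]), x0)
print(np.allclose(m, s / (1 + s**2) * np.array([1.0, s])))  # True

A = np.diag([r, 1.0])                                 # A(x, y) = (r x, y)
proj_then_A = A @ m                                   # A(min(L_s, x0))
A_then_proj = project_onto_line(A @ np.array([1.0, s]), A @ x0)  # min(A(L_s), A(x0))
print(np.allclose(proj_then_A, A_then_proj))  # False: projection is not A-equivariant
```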
**Natalie Ahn** Natalie Ahn: Natalie G. Ahn is a professor of chemistry and biochemistry at the University of Colorado at Boulder. Her research is focused on understanding the mechanisms of cell signaling, with a speciality in phosphorylation and cancers. Ahn's work uses the tools of "classical chemistry" to work on understanding the genetic code and how genetics affects life processes. She has been a professor at the University of Colorado at Boulder since 2003, where she is a distinguished professor. She was a Howard Hughes Medical Institute investigator between 1994 and 2014. In 2018, she was elected to the National Academy of Sciences and named a fellow of the American Academy of Arts and Sciences.

Biography: Past education, research, and awards: Ahn earned her bachelor's degree in chemistry from the University of Washington, Seattle in 1979. Ahn conducted research in Lyle Jensen's lab, focusing on X-ray crystallography. Her participation in this research aided in a better understanding of protein folding and in visualizing the 3-D structure of proteins by using computational techniques with X-ray crystallography. Additionally, Ahn worked as an undergraduate research assistant in David Teller's lab, which investigated protein hydrodynamics: the study of the motion of proteins relative to the aqueous environment within which they can be either suspended or dissolved.

In 1985, she received her PhD in chemistry at the University of California, Berkeley, where she worked with Judith Klinman, studying enzymology. Ahn's first postdoctoral job was studying hormone receptor binding at the University of Washington with Christoph de Haen. Ahn then moved to Edwin Krebs's lab, where she began her career in signal transduction. In this lab, Ahn was "one of the first to describe MAP kinases and MAP kinase kinases." She started working at the University of Colorado Boulder in 1992. In 1993, Ahn became part of the Searle Scholars Program, which funds young scientists' work. She was one of eight project collaborators who won a grant from the W.M. Keck Foundation for identifying proteins in a single cell type. In 2012, she was named College Professor of Distinction at the University of Colorado. In 2014, she became part of the Subcellular Pan-Omics for Advanced Rapid Threat Assessment (SPARTA) team, a biochemical project supported by the Defense Advanced Research Projects Agency (DARPA).

Biography: Current research: Ahn is currently working at the University of Colorado, conducting research on cell signaling and information, proteomics, and molecular biophysics. Specific topics of her research include: Proteomics and Signal Transduction: The lab's goal is to investigate new mechanisms responsible for regulation and cell signaling. To do this, Ahn uses mass spectrometry for protein profiling in combination with biochemical and cellular approaches to better understand a cell's response to signaling pathways.

Biography: In addition, Ahn investigates the internal motions of protein kinases, specifically studying the coupling of protein dynamics and catalytic function.

Biography: Ahn studies the development of cancer by examining "signaling pathways that are activated in melanoma and influence cancer progression and cell behavior." Wnt5A signaling: Wnt5A is responsible for controlling embryonic body axis formation and can be found at high levels in melanomas, resulting in cell invasion.
Ahn and her lab discovered the "Wnt5a receptor-actin-myosin-polarity (WRAMP) structure," which aids in directional cell movements by triggering membrane retraction. Ahn was able to determine the WRAMP structure using organelle proteomics.

Biography: B-Raf signaling: In half of melanoma cells, the B-Raf protein contains a missense mutation (the V600E mutation), which is responsible for cell transformation, invasion, and metastasis. In order to profile phosphoproteins, Ahn uses negative precursor ion mass spectrometry to discover and count phosphopeptides. Ahn has identified numerous different proteins with this method and, with this information, has studied how cell mechanisms used in cancer therapy are affected by protein-protein signaling.

Biography: Proteomics Technologies: In her lab, Ahn uses multidimensional liquid chromatography-MS/MS to identify over 8,000 proteins in each sequence of MS. Ahn's goal is to make these 2-D-LC-MS/MS techniques more accurate and sensitive in assigning proteins.

Biography: Protein Kinase Dynamics: In her lab, Ahn uses hydrogen-exchange mass spectrometry (HX-MS) to discover and better understand protein motions in the interior of kinases, where energy fluctuations occur. Ahn's goal is to understand how different protein dynamics are able to regulate catalytic activity in specific enzymes, most notably protein kinases. Ahn's research focuses on ERK2 MAP kinases, which provide an ideal model due to the clear link between their activity and protein dynamics.

Biography: Community service: Ahn was elected president of the American Society for Biochemistry and Molecular Biology (ASBMB) in the summer of 2015. She was previously a member of the council. She began attending the ASBMB annual meeting while still a PhD student at the University of California, Berkeley, and gave her first public research talk at one of these meetings.

Selected works: Ahn, N. G. (1993). "The MAP kinase cascade. Discovery of a new signal transduction pathway". Molecular and Cellular Biochemistry. 127–128: 201–9. PMID 7935352.
**Acute lymphoblastic leukemia** Acute lymphoblastic leukemia: Acute lymphoblastic leukemia (ALL) is a cancer of the lymphoid line of blood cells characterized by the development of large numbers of immature lymphocytes. Symptoms may include feeling tired, pale skin color, fever, easy bleeding or bruising, enlarged lymph nodes, or bone pain. As an acute leukemia, ALL progresses rapidly and is typically fatal within weeks or months if left untreated.

In most cases, the cause is unknown. Genetic risk factors may include Down syndrome, Li–Fraumeni syndrome, or neurofibromatosis type 1. Environmental risk factors may include significant radiation exposure or prior chemotherapy. Evidence regarding electromagnetic fields or pesticides is unclear. Some hypothesize that an abnormal immune response to a common infection may be a trigger. The underlying mechanism involves multiple genetic mutations that result in rapid cell division. The excessive immature lymphocytes in the bone marrow interfere with the production of new red blood cells, white blood cells, and platelets. Diagnosis is typically based on blood tests and bone marrow examination.

ALL is typically treated initially with chemotherapy aimed at bringing about remission. This is then followed by further chemotherapy, typically over a number of years. Treatment usually also includes intrathecal chemotherapy, since systemic chemotherapy can have limited penetration into the central nervous system, and the central nervous system is a common site for relapse of acute lymphoblastic leukemia. Treatment can also include radiation therapy if spread to the brain has occurred. Stem cell transplantation may be used if the disease recurs following standard treatment. Additional treatments, such as chimeric antigen receptor T-cell immunotherapy, are being used and further studied.

ALL affected about 876,000 people globally in 2015 and resulted in about 111,000 deaths. It occurs most commonly in children, particularly those between the ages of two and five. In the United States it is the most common cause of cancer and death from cancer among children. ALL is notable for being the first disseminated cancer to be cured. Survival for children increased from under 10% in the 1960s to 90% in 2015. Survival rates remain lower for babies (50%) and adults (35%).

Signs and symptoms: Initial symptoms can be nonspecific, particularly in children. Over 50% of children with leukemia had one or more of five features: a palpable liver (64%), a palpable spleen (61%), pale complexion (54%), fever (53%), and bruising (52%). Additionally, recurrent infections, feeling tired, arm or leg pain, and enlarged lymph nodes can be prominent features.
The B symptoms, such as fever, night sweats, and weight loss, are often present as well. Central nervous system (CNS) symptoms, such as cranial neuropathies due to meningeal infiltration, are identified in less than 10% of adults and less than 5% of children, particularly in mature B-cell ALL (Burkitt leukemia), at presentation. The signs and symptoms of ALL are variable and include: generalized weakness and feeling tired; anemia; dizziness; headache, vomiting, lethargy, neck stiffness, or cranial nerve palsies (CNS involvement); frequent or unexplained fever and infection; weight loss and/or loss of appetite; excessive and unexplained bruising; bone pain and joint pain (caused by the spread of "blast" cells to the surface of the bone or into the joint from the marrow cavity); breathlessness; enlarged lymph nodes, liver, and/or spleen; pitting edema (swelling) in the lower limbs and/or abdomen; petechiae, which are tiny red spots or lines in the skin due to low platelet levels; testicular enlargement; and a mediastinal mass.

Cause: The cancerous cell in ALL is the lymphoblast. Normal lymphoblasts develop into mature, infection-fighting B-cells or T-cells, also called lymphocytes. Signals in the body control the number of lymphocytes so neither too few nor too many are made. In ALL, both the normal development of some lymphocytes and the control over the number of lymphoid cells become defective.

ALL emerges when a single lymphoblast gains many mutations to genes that affect blood cell development and proliferation. In childhood ALL, this process begins at conception with the inheritance of some of these genes. These genes, in turn, increase the risk that more mutations will occur in developing lymphoid cells. Certain genetic syndromes, like Down syndrome, have the same effect. Environmental risk factors are also needed to help create enough genetic mutations to cause disease. Evidence for the role of the environment is seen in childhood ALL among twins, where only 10–15% of genetically identical twin pairs are concordant for ALL. Since they have the same genes, different environmental exposures explain why one twin gets ALL and the other does not.

Infant ALL is a rare variant that occurs in babies less than one year old. KMT2A (formerly MLL) gene rearrangements are most common and occur in the embryo or fetus before birth. These rearrangements result in increased expression of blood cell development genes by promoting gene transcription and through epigenetic changes. In contrast to childhood ALL, environmental factors are not thought to play a significant role. Aside from the KMT2A rearrangement, only one extra mutation is typically found. Environmental exposures are not needed to help create more mutations.

Cause: Risk factors: Genetics: Common inherited risk factors include mutations in ARID5B, CDKN2A/2B, CEBPE, IKZF1, GATA3, PIP4K2A and, more rarely, TP53. These genes play important roles in cellular development, proliferation, and differentiation. Individually, most of these mutations confer a low risk of ALL. Significant risk of disease occurs when a person inherits several of these mutations together. The uneven distribution of genetic risk factors may help explain differences in disease rates among ethnic groups. For instance, the ARID5B mutation is less common in ethnic African populations. Several genetic syndromes also carry an increased risk of ALL.
These include: Down syndrome, Fanconi anemia, Bloom syndrome, X-linked agammaglobulinemia, severe combined immunodeficiency, Shwachman–Diamond syndrome, Kostmann syndrome, neurofibromatosis type 1, ataxia-telangiectasia, paroxysmal nocturnal hemoglobinuria, and Li–Fraumeni syndrome. Fewer than 5% of cases are associated with a known genetic syndrome. Rare mutations in ETV6 and PAX5 are associated with a familial form of ALL with autosomal dominant patterns of inheritance.

Cause: Environmental: The environmental exposures that contribute to the emergence of ALL are contentious and a subject of ongoing debate. High levels of radiation exposure from nuclear fallout are a known risk factor for developing leukemia. Evidence on whether lesser radiation, as from x-ray imaging during pregnancy, increases the risk of disease remains inconclusive. Studies that have identified an association between x-ray imaging during pregnancy and ALL found only a slightly increased risk. Exposure to strong electromagnetic radiation from power lines has also been associated with a slightly increased risk of ALL. This result is questioned, as no causal mechanism linking electromagnetic radiation with cancer is known. High birth weight (greater than 4000 g or 8.8 lbs) is also associated with a small increased risk. The mechanism connecting high birth weight to ALL is also not known. Evidence suggests that secondary leukemia can develop in individuals treated with certain types of chemotherapy, such as epipodophyllotoxins and cyclophosphamide.

Cause: Infections: There is some evidence that a common infection, such as influenza, may indirectly promote the emergence of ALL. The delayed-infection hypothesis states that ALL results from an abnormal immune response to infection in a person with genetic risk factors. Delayed development of the immune system due to limited disease exposure may result in excessive production of lymphocytes and an increased mutation rate during an illness. Several studies have identified lower rates of ALL among children with greater exposure to illness early in life. Very young children who attend daycare have lower rates of ALL. Evidence from many other studies looking at disease exposure and ALL is inconclusive. Some researchers have linked this to the hygiene hypothesis.

Mechanism: Several characteristic genetic changes lead to the creation of a leukemic lymphoblast. These changes include chromosomal translocations, intrachromosomal rearrangements, changes in the number of chromosomes in leukemic cells, and additional mutations in individual genes. Chromosomal translocations involve moving a large region of DNA from one chromosome to another. This move can result in placing a gene from one chromosome that promotes cell division into a more actively transcribed area on another chromosome. The result is a cell that divides more often. An example of this is the translocation of C-MYC, a gene that encodes a transcription factor that leads to increased cell division, next to the immunoglobulin heavy- or light-chain gene enhancers, leading to increased C-MYC expression and increased cell division. Other large changes in chromosomal structure can result in the placement of two genes directly next to each other. The result is the combination of two usually separate proteins into a new fusion protein. This protein can have a new function that promotes the development of cancer.
Examples of this include the ETV6–RUNX1 fusion gene that combines two factors that promote blood cell development and the BCR–ABL1 fusion gene of the Philadelphia chromosome. BCR–ABL1 encodes an always-activated tyrosine kinase that causes frequent cell division. These mutations produce a cell that divides more often, even in the absence of growth factors. Other genetic changes in B-cell ALL include changes to the number of chromosomes within the leukemic cells. Gaining at least five additional chromosomes, called high hyperdiploidy, occurs relatively commonly. Less often, chromosomes are lost, called hypodiploidy, which is associated with a poorer prognosis. Additional common genetic changes in B-cell ALL involve non-inherited mutations to PAX5 and IKZF1. In T-cell ALL, LYL1, TAL1, TLX1, and TLX3 rearrangements can occur. ALL results when enough of these genetic changes are present in a single lymphoblast. In childhood ALL, for example, one fusion gene translocation is often found along with six to eight other ALL-related genetic changes. The initial leukemic lymphoblast copies itself into an excessive number of new lymphoblasts, none of which can develop into functioning lymphocytes. These lymphoblasts build up in the bone marrow and may spread to other sites in the body, such as lymph nodes, the mediastinum, the spleen, the testicles, and the brain, leading to the common symptoms of the disease. Diagnosis: Diagnosing ALL begins with a thorough medical history, physical examination, complete blood count, and blood smears. While many symptoms of ALL can be found in common illnesses, persistent or unexplained symptoms raise suspicion of cancer. Because many features on the medical history and exam are not specific to ALL, further testing is often needed. A large number of white blood cells and lymphoblasts in the circulating blood can be suspicious for ALL because they indicate a rapid production of lymphoid cells in the marrow. Higher counts typically point to a worse prognosis. While white blood cell counts at initial presentation can vary significantly, circulating lymphoblast cells are seen on peripheral blood smears in the majority of cases. A bone marrow biopsy provides conclusive proof of ALL, typically with >20% of all cells being leukemic lymphoblasts. A lumbar puncture (also known as a spinal tap) can determine whether the spinal column and brain have been invaded. Brain and spinal column involvement can be diagnosed either through confirmation of leukemic cells in the lumbar puncture or through clinical signs of CNS leukemia as described above. Laboratory tests that might show abnormalities include blood count, kidney function, electrolyte, and liver enzyme tests. Pathological examination, cytogenetics (in particular the presence of the Philadelphia chromosome), and immunophenotyping establish whether the leukemic cells are myeloblastic (neutrophils, eosinophils, or basophils) or lymphoblastic (B lymphocytes or T lymphocytes). Cytogenetic testing on the marrow samples can help classify disease and predict how aggressive the disease course will be. Different mutations have been associated with shorter or longer survival. Immunohistochemical testing may reveal TdT or CALLA antigens on the surface of leukemic cells. TdT is a protein expressed early in the development of pre-T and pre-B cells, whereas CALLA is an antigen found in 80% of ALL cases and also in the "blast crisis" of CML.
Diagnosis: Medical imaging (such as ultrasound or CT scanning) can find invasion of other organs, commonly the lung, liver, spleen, lymph nodes, brain, kidneys, and reproductive organs. Diagnosis: Immunophenotyping In addition to cell morphology and cytogenetics, immunophenotyping, a laboratory technique used to identify the proteins that cells express on their surface, is a key component in the diagnosis of ALL. The preferred method of immunophenotyping is flow cytometry. In the malignant lymphoblasts of ALL, expression of terminal deoxynucleotidyl transferase (TdT) on the cell surface can help differentiate malignant lymphocyte cells from reactive lymphocytes, white blood cells that are reacting normally to an infection in the body. On the other hand, myeloperoxidase (MPO), a marker for the myeloid lineage, is typically not expressed. Because precursor B cells and precursor T cells look the same, immunophenotyping can help differentiate the subtype of ALL and the level of maturity of the malignant white blood cells. The subtypes of ALL are determined by immunophenotype, according to the stages of maturation. Diagnosis: An extensive panel of monoclonal antibodies to cell surface markers, particularly CD (cluster of differentiation) markers, is used to classify cells by lineage, with distinct immunological markers associated with B-cell and T-cell ALL. Diagnosis: Cytogenetics Cytogenetic analysis has shown different proportions and frequencies of genetic abnormalities in cases of ALL from different age groups. This information is particularly valuable for classification and can in part explain the different prognoses of these groups. In genetic analysis, cases can be stratified according to ploidy, the number of sets of chromosomes in the cell, and according to specific genetic abnormalities, such as translocations. Hyperdiploid cells are defined as cells with more than 50 chromosomes, while hypodiploid cells are defined as cells with fewer than 44 chromosomes. Hyperdiploid cases tend to carry a good prognosis while hypodiploid cases do not. For example, the most common specific abnormality in childhood B-ALL is the t(12;21) ETV6–RUNX1 translocation, in which the RUNX1 gene, encoding a protein involved in transcriptional control of hemopoiesis, has been translocated and repressed by the ETV6–RUNX1 fusion protein. Diagnosis: Classification French-American-British Historically, prior to 2008, ALL was classified using the French-American-British (FAB) system, which relied heavily on morphological assessment. The FAB system takes into account information on size, cytoplasm, nucleoli, basophilia (color of cytoplasm), and vacuolation (bubble-like properties). Diagnosis: While some clinicians still use the FAB scheme to describe tumor cell appearance, much of this classification has been abandoned because of its limited impact on treatment choice and prognostic value. World Health Organization In 2008, the World Health Organization classification of acute lymphoblastic leukemia was developed in an attempt to create a classification system that was more clinically relevant and could produce meaningful prognostic and treatment decisions.
This system recognized differences in genetic, immunophenotypic, molecular, and morphological features found through cytogenetic and molecular diagnostic tests. This subtyping helps determine the prognosis and the most appropriate treatment for each specific case of ALL. Diagnosis: The WHO subtypes related to ALL are:
- B-lymphoblastic leukemia/lymphoma
  - Not otherwise specified (NOS)
  - with recurrent genetic abnormalities
  - with t(9;22)(q34.1;q11.2); BCR–ABL1
  - with t(v;11q23.3); KMT2A rearranged
  - with t(12;21)(p13.2;q22.1); ETV6–RUNX1
  - with t(5;14)(q31.1;q32.3); IL3–IGH
  - with t(1;19)(q23;p13.3); TCF3–PBX1
  - with hyperdiploidy
  - with hypodiploidy
- T-lymphoblastic leukemia/lymphoma
- Acute leukemias of ambiguous lineage
  - Acute undifferentiated leukemia
  - Mixed phenotype acute leukemia (MPAL) with t(9;22)(q34.1;q11.2); BCR–ABL1
  - MPAL with t(v;11q23.3); KMT2A rearranged
  - MPAL, B/myeloid, NOS
  - MPAL, T/myeloid, NOS
Treatment: The aim of treatment is to induce a lasting remission, defined as the absence of detectable cancer cells in the body (usually less than 5% blast cells in the bone marrow). Over the past several decades, there have been strides to increase the efficacy of treatment regimens, resulting in increased survival rates. Possible treatments for acute leukemia include chemotherapy, steroids, radiation therapy, intensive combined treatments (including bone marrow or stem cell transplants), targeted therapy, and/or growth factors. Chemotherapy Chemotherapy is the initial treatment of choice, and most people with ALL receive a combination of medications. There are no surgical options because of the body-wide distribution of the malignant cells. In general, cytotoxic chemotherapy for ALL combines multiple antileukemic drugs tailored to each person. Chemotherapy for ALL consists of three phases: remission induction, intensification, and maintenance therapy. Treatment: Adult chemotherapy regimens mimic those of childhood ALL; however, they are linked with a higher risk of disease relapse with chemotherapy alone. Two subtypes of ALL (B-cell ALL and T-cell ALL) require special considerations when it comes to selecting an appropriate treatment regimen in adults with ALL. B-cell ALL is often associated with cytogenetic abnormalities (specifically, t(8;14), t(2;8), and t(8;22)), which require aggressive therapy consisting of brief, high-intensity regimens. T-cell ALL responds best to cyclophosphamide-containing agents. Treatment: Radiation therapy Radiation therapy (or radiotherapy) is used on painful bony areas, in areas of high disease burden, or as part of the preparations for a bone marrow transplant (total body irradiation). In the past, physicians commonly utilized radiation in the form of whole-brain radiation for central nervous system prophylaxis, to prevent the occurrence and/or recurrence of leukemia in the brain. Recent studies showed that CNS chemotherapy provided results that were as favorable but with fewer developmental side effects. As a result, the use of whole-brain radiation has become more limited. Most specialists in adult leukemia have abandoned the use of radiation therapy for CNS prophylaxis, instead using intrathecal chemotherapy. Treatment: Biological therapy Selection of biological targets on the basis of their combinatorial effects on the leukemic lymphoblasts can lead to clinical trials for improvement in the effects of ALL treatment. Tyrosine-kinase inhibitors (TKIs), such as imatinib, are often incorporated into the treatment plan for people with Bcr-Abl1+ (Ph+) ALL.
However, this subtype of ALL is frequently resistant to the combination of chemotherapy and TKIs, and allogeneic stem cell transplantation is often recommended upon relapse. Treatment: Immunotherapy Chimeric antigen receptors (CARs) have been developed as a promising immunotherapy for ALL. This technology uses a single-chain variable fragment (scFv) designed to recognize the cell surface marker CD19 as a method of treating ALL. Treatment: CD19 is a molecule found on all B-cells and can be used as a means of distinguishing the potentially malignant B-cell population. In this therapy, mice are immunized with the CD19 antigen and produce anti-CD19 antibodies. Hybridomas developed from mouse spleen cells fused to a myeloma cell line can be developed as a source for the cDNA encoding the CD19-specific antibody. The cDNA is sequenced, and the sequences encoding the variable heavy and variable light chains of these antibodies are cloned together using a small peptide linker. This resulting sequence encodes the scFv. This can be cloned into a transgene, encoding what will become the endodomain of the CAR. Varying arrangements of subunits serve as the endodomain, but they generally consist of the hinge region that attaches to the scFv, a transmembrane region, the intracellular region of a costimulatory molecule such as CD28, and the intracellular domain of CD3-zeta containing ITAM repeats. Other sequences frequently included are 4-1BB and OX40. The final transgene sequence, containing the scFv and endodomain sequences, is then inserted into immune effector cells that are obtained from the person and expanded in vitro. In trials these have been T-cells capable of cytotoxicity. Inserting the DNA into the effector cell can be accomplished by several methods. Most commonly, this is done using a lentivirus that encodes the transgene. Pseudotyped, self-inactivating lentiviruses are an effective method for the stable insertion of a desired transgene into the target cell. Other methods include electroporation and transfection, but these are limited in their efficacy as transgene expression diminishes over time. Treatment: The gene-modified effector cells are then transplanted back into the person. Typically this process is done in conjunction with a conditioning regimen such as cyclophosphamide, which has been shown to potentiate the effects of infused T-cells. This effect has been attributed to the creation of an immunologic space within which the cells populate. The process as a whole results in an effector cell, typically a T-cell, that can recognize a tumor cell antigen in a manner that is independent of the major histocompatibility complex and that can initiate a cytotoxic response. Treatment: In 2017, tisagenlecleucel was approved by the FDA as a CAR-T therapy for people with acute B-cell lymphoblastic leukemia who did not respond adequately to other treatments or have relapsed. In a 22-day process, the "drug" is customized for each person. T cells purified from each person are modified by a virus that inserts genes encoding a chimeric antigen receptor, one that recognizes leukemia cells, into their DNA. Treatment: Relapsed ALL Typically, people who experience a relapse in their ALL after initial treatment have a poorer prognosis than those who remain in complete remission after induction therapy.
It is unlikely that recurrent leukemia will respond favorably to the standard chemotherapy regimen that was initially implemented; instead, these people should be trialed on reinduction chemotherapy followed by allogeneic bone marrow transplantation. These people in relapse may also receive blinatumomab, as it has been shown to increase remission rates and overall survival rates, without increased toxic effects. Low-dose palliative radiation may also help reduce the burden of tumor inside or outside the central nervous system and alleviate some symptoms. Treatment: There has also recently been evidence for, and approval of the use of, dasatinib, a tyrosine kinase inhibitor. It has shown efficacy in cases of people with Ph1-positive and imatinib-resistant ALL, but more research needs to be done on long-term survival and time to relapse. Treatment: Side effects Chemotherapies or stem cell transplantations may require a platelet transfusion to prevent bleeding. Moreover, patients undergoing a stem cell transplantation can develop graft-versus-host disease (GvHD). Mesenchymal stromal cells have been evaluated as a way to prevent GvHD. The evidence is very uncertain about the effect of mesenchymal stromal cells, when used to treat graft-versus-host disease after a stem cell transplantation, on all-cause mortality and on the complete disappearance of chronic and acute graft-versus-host disease. Mesenchymal stromal cells may result in little to no difference in all-cause mortality, relapse of malignant disease, and incidence of acute and chronic graft-versus-host disease when used prophylactically. Treatment: Supportive therapy Adding physical exercises to the standard treatment for adult patients with haematological malignancies like ALL may result in little to no difference in mortality, quality of life, and physical functioning. These exercises may result in a slight reduction in depression. Furthermore, aerobic physical exercises probably reduce fatigue. The evidence is very uncertain about the effect on anxiety and serious adverse events. Treatment: Gene therapy Brexucabtagene autoleucel (Tecartus) was approved for the treatment of adults with relapsed or refractory B-cell precursor acute lymphoblastic leukemia in October 2021. Each dose of brexucabtagene autoleucel is a customized treatment created using the recipient's own immune system to help fight the lymphoma. The recipient's T cells, a type of white blood cell, are collected and genetically modified to include a new gene that facilitates the targeting and killing of the lymphoma cells. These modified T cells are then infused back into the recipient. Prognosis: Prior to the development of chemotherapy regimens and hematopoietic stem cell transplant, children survived a median of 3 months, largely due to either infection or bleeding. Since the advent of chemotherapy, the prognosis for childhood leukemia has improved greatly, and children with ALL are estimated to have a 95% probability of achieving a successful remission after 4 weeks of initiating treatment. People in pediatric care with ALL in developed countries have a greater than 80% five-year survival rate. It is estimated that 60–80% of adults undergoing induction chemotherapy achieve complete remission after 4 weeks, and those over the age of 70 have a cure rate of 5%. Prognosis: However, there are differing prognoses for ALL among individuals depending on a variety of factors: Gender: Females tend to fare better than males.
Ethnicity: Caucasians are more likely to develop acute leukemia than African-Americans, Asians, or Hispanics; they also tend to have a better prognosis than non-Caucasians. Prognosis: Age at diagnosis: children 1–10 years of age are most likely to develop ALL and to be cured of it. Cases in older people are more likely to result from chromosomal abnormalities (e.g., the Philadelphia chromosome) that make treatment more difficult and prognoses poorer. Older people are also likely to have co-morbid medical conditions that make it even more difficult to tolerate ALL treatment. Prognosis: A white blood cell count at diagnosis of greater than 30,000 (B-ALL) or 100,000 (T-ALL) cells per microliter is associated with worse outcomes, and cancer spreading into the central nervous system (brain or spinal cord) also has worse outcomes. Prognosis: Other prognostic factors include the morphological, immunological, and genetic subtypes; the person's response to initial treatment, including a longer length of time required (greater than 4 weeks) to reach complete remission; early relapse of ALL; minimal residual disease; and genetic disorders, such as Down syndrome, and other chromosomal abnormalities (aneuploidy and translocations). Cytogenetics, the study of characteristic large changes in the chromosomes of cancer cells, is an important predictor of outcome. Some cytogenetic subtypes have a worse prognosis than others. These include: People with t(9;22)-positive ALL (30% of adult ALL cases) and other Bcr-abl-rearranged leukemias are more likely to have a poor prognosis, but survival rates may rise with treatment consisting of chemotherapy and Bcr-abl tyrosine kinase inhibitors. Prognosis: A translocation between chromosomes 4 and 11 occurs in about 4% of cases and is most common in infants under 12 months. Hyperdiploidy (>50 chromosomes) and t(12;21) are good prognostic factors and also make up 50% of pediatric ALL cases. Unclassified ALL is considered to have an intermediate prognosis risk, somewhere in between the good and poor risk categories. Epidemiology: ALL affected about 876,000 people and resulted in 111,000 deaths globally in 2015. It occurs in both children and adults, with the highest rates seen between the ages of three and seven years. Around 75% of cases occur before the age of 6, with a secondary rise after the age of 40. It is estimated to affect 1 in 1500 children. Accounting for the broad age profiles of those affected, ALL newly occurs in about 1.7 per 100,000 people per year. ALL represents approximately 20% of adult and 80% of childhood leukemias, making it the most common childhood cancer. Although 80 to 90% of children will have a long-term complete response with treatment, it remains the leading cause of cancer-related deaths among children. 85% of cases are of B-cell lineage, with an equal number of cases in males and females. The remaining 15% are of T-cell lineage, with a male predominance. Epidemiology: Globally, ALL typically occurs more often in Caucasians, Hispanics, and Latin Americans than in Africans. In the US, ALL is more common in children of Caucasian (36 cases/million) and Hispanic (41 cases/million) descent than in those of African (15 cases/million) descent. Pregnancy: Leukemia is rarely associated with pregnancy, affecting only about 1 in 10,000 pregnant women. The management of leukemia in a pregnant woman depends primarily on the type of leukemia.
Acute leukemias normally require prompt, aggressive treatment, despite significant risks of pregnancy loss and birth defects, especially if chemotherapy is given during the developmentally sensitive first trimester.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Genetic load** Genetic load: Genetic load is the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. The average individual taken from a population with a low genetic load will generally, when grown in the same conditions, have more surviving offspring than the average individual from a population with a high genetic load. Genetic load can also be seen as reduced fitness at the population level compared to what the population would have if all individuals had the reference high-fitness genotype. High genetic load may put a population in danger of extinction. Fundamentals: Consider n genotypes A1, …, An, which have the fitnesses w1, …, wn and frequencies p1, …, pn, respectively. Ignoring frequency-dependent selection, the genetic load L may be calculated as: L = (wmax − w̄) / wmax, where wmax is either some theoretical optimum, or the maximum fitness observed in the population. In calculating the genetic load, w1, …, wn must be actually found in at least a single copy in the population, and w̄ is the average fitness calculated as the mean of all the fitnesses weighted by their corresponding frequencies: w̄ = p1w1 + p2w2 + … + pnwn, where the ith genotype is Ai and has the fitness and frequency wi and pi respectively. Fundamentals: One problem with calculating genetic load is that it is difficult to evaluate either the theoretically optimal genotype, or the maximally fit genotype actually present in the population. This is not a problem within mathematical models of genetic load, or for empirical studies that compare the relative value of genetic load in one setting to genetic load in another. Causes: Deleterious mutation Deleterious mutation load is the main contributing factor to genetic load overall. The Haldane–Muller theorem of mutation–selection balance says that the load depends only on the deleterious mutation rate and not on the selection coefficient. Specifically, relative to an ideal genotype of fitness 1, the mean population fitness is exp(−U), where U is the total deleterious mutation rate summed over many independent sites. The intuition for the lack of dependence on the selection coefficient is that while a mutation with stronger effects does more harm per generation, its harm is felt for fewer generations. Causes: A slightly deleterious mutation may not stay in mutation–selection balance but may instead become fixed by genetic drift when its selection coefficient is less than one divided by the effective population size. In asexual populations, the stochastic accumulation of mutation load is called Muller's ratchet, and occurs in the absence of beneficial mutations, when after the most-fit genotype has been lost, it cannot be regained by genetic recombination. Deterministic accumulation of mutation load occurs in asexuals when the deleterious mutation rate exceeds one per replication. Sexually reproducing species are expected to have lower genetic loads. This is one hypothesis for the evolutionary advantage of sexual reproduction. Purging of deleterious mutations in sexual populations is facilitated by synergistic epistasis among deleterious mutations. High load can lead to a small population size, which in turn increases the accumulation of mutation load, culminating in extinction via mutational meltdown. The accumulation of deleterious mutations in humans has been of concern to many geneticists, including Hermann Joseph Muller, James F.
Crow, Alexey Kondrashov, W. D. Hamilton, and Michael Lynch. Causes: Beneficial mutation In sufficiently genetically loaded populations, new beneficial mutations create fitter genotypes than those previously present in the population. When load is calculated as the difference between the fittest genotype present and the average, this creates a substitutional load. The difference between the theoretical maximum (which may not actually be present) and the average is known as the "lag load". Motoo Kimura's original argument for the neutral theory of molecular evolution was that if most differences between species were adaptive, this would exceed the speed limit to adaptation set by the substitutional load. However, Kimura's argument confused the lag load with the substitutional load, using the former when it is the latter that in fact sets the maximal rate of evolution by natural selection. More recent "travelling wave" models of rapid adaptation derive a term called the "lead" that is equivalent to the substitutional load, and find that it is a critical determinant of the rate of adaptive evolution. Causes: Inbreeding Inbreeding increases homozygosity. In the short run, an increase in inbreeding increases the probability with which offspring get two copies of a recessive deleterious allele, lowering fitness via inbreeding depression. In a species that habitually inbreeds, e.g. through self-fertilization, a proportion of recessive deleterious alleles can be purged. Likewise, in a small population of humans practicing endogamy, deleterious alleles can either overwhelm the population's gene pool, causing it to become extinct, or alternatively, make it fitter. Causes: Recombination/segregation Combinations of alleles that have evolved to work well together may not work when recombined with a different suite of coevolved alleles, leading to outbreeding depression. Segregation load occurs in the presence of overdominance, i.e. when heterozygotes are more fit than either homozygote. In such a case, the heterozygous genotype gets broken down by Mendelian segregation, resulting in the production of homozygous offspring. Therefore, there is segregation load as not all individuals have the theoretical optimum genotype. Recombination load arises through unfavorable combinations across multiple loci that appear when favorable linkage disequilibria are broken down. Recombination load can also arise by combining deleterious alleles subject to synergistic epistasis, i.e. whose damage in combination is greater than that predicted from considering them in isolation. Causes: Migration Migration load is the result of nonnative organisms that are not adapted to a particular environment coming into that environment. If they breed with individuals who are adapted to that environment, their offspring will not be as fit as they would have been if both of their parents had been adapted to that particular environment. Migration load can also occur in asexually reproducing species, but in this case, purging of low-fitness genotypes is more straightforward.
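As a concrete illustration of the Fundamentals section above, the following is a minimal sketch in Python of the load calculation L = (wmax − w̄)/wmax and the Haldane–Muller mutation load 1 − exp(−U); the fitnesses, frequencies, and mutation rate are illustrative values, not data from any real population.

```python
# A minimal sketch of the genetic load formulas above. All numbers are
# illustrative, not measurements from any real population.
import math

def genetic_load(fitnesses, frequencies, w_max=None):
    """L = (w_max - w_bar) / w_max. If w_max is None, the fittest
    genotype actually present in the population is the reference;
    pass a value to use a theoretical optimum instead."""
    w_bar = sum(p * w for p, w in zip(frequencies, fitnesses))  # mean fitness
    if w_max is None:
        w_max = max(fitnesses)
    return (w_max - w_bar) / w_max

# Three genotypes with fitnesses 1.0, 0.9, 0.7 at frequencies 0.5, 0.3, 0.2:
print(genetic_load([1.0, 0.9, 0.7], [0.5, 0.3, 0.2]))  # 0.09

# Haldane-Muller: at mutation-selection balance, mean fitness relative to
# the mutation-free genotype is exp(-U), so the mutation load is 1 - exp(-U),
# independent of the selection coefficients of the individual mutations.
U = 0.1  # total deleterious mutation rate per genome per generation
print(1 - math.exp(-U))  # ~0.095
```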
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Isobutyraldehyde** Isobutyraldehyde: Isobutyraldehyde is the chemical compound with the formula (CH3)2CHCHO. It is an aldehyde, isomeric with n-butyraldehyde (butanal). Isobutyraldehyde is made, often as a side-product, by the hydroformylation of propene. Its odour is described as that of wet cereal or straw. It undergoes the Cannizzaro reaction even though it has an alpha hydrogen atom. It is a colorless volatile liquid. Synthesis: Isobutyraldehyde is produced industrially by the hydroformylation of propene. Several million tons are produced annually. Synthesis: Biological routes In the context of butanol fuel, isobutyraldehyde is of interest as a precursor to isobutanol. E. coli, as well as several other organisms, has been genetically modified to produce isobutanol. α-Ketoisovalerate, derived from oxidative deamination of valine, is prone to decarboxylation to give isobutyraldehyde, which is susceptible to reduction to the alcohol: (CH3)2CHC(O)CO2H → (CH3)2CHCHO + CO2 (CH3)2CHCHO + NADH + H+ → (CH3)2CHCH2OH + NAD+ Other routes It can also be produced using engineered bacteria. Strong mineral acids catalyse the rearrangement of methallyl alcohol to isobutyraldehyde. Reactions: Hydrogenation of the aldehyde gives isobutanol. Oxidation gives methacrolein or methacrylic acid. Condensation with formaldehyde gives hydroxypivaldehyde.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shaft-driven bicycle** Shaft-driven bicycle: A shaft-driven bicycle is a bicycle that uses a drive shaft instead of a chain to transmit power from the pedals to the wheel. Shaft drives were introduced in the 1880s, but were mostly supplanted by chain-driven bicycles due to the gear ranges possible with sprockets and derailleurs. Around the 2000s, due to advancements in internal gear technology, a small number of modern shaft-driven bicycles have been introduced. Shaft-driven bicycle: Shaft-driven bikes have a large bevel gear where a conventional bike would have its chain ring. This meshes with another bevel gear mounted on the drive shaft. The use of bevel gears allows the axis of the drive torque from the pedals to be turned through 90 degrees. The drive shaft then has another bevel gear near the rear wheel hub which meshes with a bevel gear on the hub, where the rear sprocket would be on a conventional bike, cancelling out the first change of axis in the drive torque. Shaft-driven bicycle: The 90-degree change of the drive plane that occurs at the bottom bracket and again at the rear hub uses bevel gears for the most efficient performance, though other mechanisms could be used, e.g. Hobson's joints, worm gears or crossed helical gears. The drive shaft is often mated to a hub gear, which is an internal gear system housed inside the rear hub. Manufacturers of internal hubs suitable for use with shaft drive systems include NuVinci, Rohloff, Shimano, SRAM, and Sturmey-Archer. History: The first shaft drives for cycles appear to have been invented independently in the United States and Britain. In 1880, the Orbicycle (which was actually a tricycle) by Thomas Moore, powered by a shaft drive, was sold in London, England. A. Fearnhead, of 354 Caledonian Road, North London, developed one in 1890 and received a patent in October 1891. His prototype shaft was enclosed within a tube running along the top of the chainstay; later models were enclosed within the actual chainstay. In the United States, Walter Stillman filed for a patent on a shaft-driven bicycle on Dec. 10, 1890, which was granted on July 21, 1891. The shaft drive was not well accepted in Britain, so in 1894 Fearnhead took it to the United States, where Colonel Pope of the Columbia firm bought the exclusive American rights. Belatedly, the British makers took it up, with Humber in particular plunging heavily on the deal. Curiously enough, the greatest of all the Victorian cycle engineers, Professor Archibald Sharp, was against shaft drive; in his classic 1896 book "Bicycles and Tricycles", he wrote "The Fearnhead Gear ... if bevel-wheels could be accurately and cheaply cut by machinery, it is possible that gears of this description might supplant, to a great extent, the chain-drive gear; but the fact that the teeth of the bevel-wheels cannot be accurately milled is a serious obstacle to their practical success". In the United States, they had been made by the League Cycle Company as early as 1893. Soon after, the French company Metropole marketed their Acatane. By 1897 Columbia began aggressively to market the chainless bicycle it had acquired from the League Cycle Company. Chainless bicycles were moderately popular in 1898 and 1899, although sales were still much smaller than those of regular bicycles, primarily due to the high cost. They were also somewhat less efficient than regular bicycles: there was roughly an 8 percent loss in the gearing, in part due to limited manufacturing technology at the time.
The rear wheel was also more difficult to remove to change flats. Many of these deficiencies have been overcome in the past century. History: In 1902, The Hill-Climber Bicycle Mfg. Company sold a three-speed shaft-driven bicycle in which the shifting was implemented with three sets of bevel gears. While a small number of chainless bicycles were available, for the most part, shaft-driven bicycles disappeared from view for most of the 20th century. There is, however, still a niche market for chainless bikes, especially for commuters, and there are a number of manufacturers who offer them either as part of a larger range or as a primary specialization. Notable examples are Biomega in Denmark and Brik in the Netherlands. Comparison of shaft vs chain: Shaft drives operate at a very consistent rate of efficiency and performance, without adjustments or maintenance, though their efficiency has been lower than that of a properly adjusted and lubricated chain, possibly because of insufficiently precise machining or alignment of the bevel gears. Shaft drives are typically more complex to disassemble when repairing flat rear tires, and the manufacturing cost is typically higher. Comparison of shaft vs chain: A fundamental issue with bicycle shaft-drive systems is the requirement to transmit the torque of the rider through bevel gears with much smaller radii than typical bicycle sprockets. This requires both high-quality gears and heavier frame construction. Since shaft drives require gear hubs for shifting, they accrue all the benefits and drawbacks associated with such mechanisms. Most of the advantages claimed for a shaft drive can be realized by using a fully enclosed chain case. Some of the other issues addressed by the shaft drive, such as protection for clothing and from ingress of dirt, can be met through the use of chain guards. The reduced need for adjustment in shaft-drive bikes also applies to a similar extent to chain- or belt-driven hub-geared bikes. Not all hub gear systems are shaft compatible.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mars suit** Mars suit: A Mars suit or Mars space suit is a space suit for EVAs on the planet Mars. Compared to a suit designed for space-walking in the near vacuum of low Earth orbit, Mars suits have a greater focus on actual walking and a need for abrasion resistance. Mars' surface gravity is 37.8% of Earth's, approximately 2.3 times that of the Moon, so weight is a significant concern, but there are fewer thermal demands compared to open space. At the surface the suits would contend with the atmosphere of Mars, which has a pressure of about 0.6 to 1 kilopascal (0.087 to 0.145 psi). On the surface, radiation exposure is a concern, especially solar flare events, which can dramatically increase the amount of radiation over a short time. Mars suit: Some of the issues a Mars suit for surface operations would face include having enough oxygen for the person, as the air is mostly carbon dioxide and is also at a much lower pressure than Earth's atmosphere at sea level. Other issues include the Martian dust, low temperatures, and radiation. Overview: One design for a Mars suit from the 2010s, the NASA Z-2 suit, would have electroluminescent patches to help crew members identify one another. Three types of tests planned for the Z-2 include tests in a vacuum chamber, tests in NASA's Neutral Buoyancy Laboratory (a large pool for mimicking zero-g), and tests in a rocky desert area. (See also: Z series space suits.) The Mars 2020 Perseverance rover has a materials test that is hoped will aid Mars suit development, the SHERLOC experiment; it includes a test target with space suit materials. The test will measure how these suit materials are affected by the Martian environment. Six materials have been chosen for testing: Orthofabric, Teflon, nGimat-coated Teflon, Dacron, Vectran, and polycarbonate. The test will help select the best materials for future Mars space suits. Orthofabric is a polymeric material composed of a weave of GORE-TEX fibers, Nomex, and Kevlar-29. NASA tested possible Mars space suit materials by exposing them to Mars-equivalent ultraviolet (UV) radiation for 2500 hours, and then studied how the materials were affected. One of the concerns for the Mars suits is how materials respond to chemically reactive Mars dust and exposure to ultraviolet, especially over the lengths of time and amounts of use the suits are expected to endure. One researcher working on a design for Mars surface EVA suits was inspired in part by Medieval armor suits. Some ideas for Mars suits are a heads-up display projected in the visor, built-in communications equipment, life support, and a voice-recognition assistant. Examples of design concerns include high-speed winds filled with abrasive Mars dust. Overview: Other design concerns include radiation such as cosmic rays, low temperatures down to −130 °C (−202 °F; 143 K), and exposure to ultraviolet light. One Mars mission design aspect is whether the Mars suits should also be made to work in space, or should be for the surface only. Overview: Designs The Biosuit is a mechanical counterpressure suit, resulting in a body-hugging form. In this type of suit, the pressure would come from the structure and elasticity of the material, whereas with prior space-worn suits the pressure comes from pressurized gas, like a filled balloon. The gas pressure can make a flexible suit very rigid, like an inflated balloon. The Aouda.X suit by the Austrian Space Forum is a space suit simulator for planetary surfaces.
The suit ventilates with ambient air, but has a host of features to help simulate a space suit as well as to test enhancing technologies like a heads-up display inside the helmet. Overview: The AX-5 was part of a line of hard-suits developed at NASA Ames. Current suits are either soft or hybrid suits and use a lower-pressure pure oxygen atmosphere, which means people going on EVA must pre-breathe oxygen to avoid getting decompression sickness. A hard-suit can use a high-pressure atmosphere, eliminating the need to pre-breathe, without being as hard to move in as a high-pressure soft suit would be. Overview: A simulated Mars suit was used for the HI-SEAS Earth-based spaceflight analog tests of the 2010s in Hawaii, USA. Mars suit design has been used as a topic for technology education. Comparison to Apollo lunar suit: The Apollo lunar EVA suit was called the Extravehicular Mobility Unit (EMU). Besides the pressure suit, this included the Portable Life Support System (backpack) and an emergency Oxygen Purge System (OPS) which provided 30 minutes of oxygen for emergencies. The combined system weighed 212 pounds on Earth, but only 35.1 pounds on the Moon. Environmental design requirements: The most critical factors for immediate survivability and comfort on the Martian surface are to provide: sufficient pressure to prevent the boiling of body fluids; supply of oxygen and removal of carbon dioxide and water vapor for breathing; temperature control; and protection from cosmic radiation. Environmental design requirements: Pressure The atmospheric pressure on Mars varies with elevation and seasons, but there is not enough pressure to sustain life without a pressure suit. The lowest pressure the human body can tolerate, known as the Armstrong limit, is the pressure at which water boils (vaporizes) at the temperature of a human body, which is about 6.3 kilopascals (0.91 psi). The average surface pressure on Mars has been measured to be only about one-tenth of this, 0.61 kilopascals (0.088 psi). The highest pressure, at the lowest surface elevation, the bottom of Hellas Basin, is 1.24 kilopascals (0.180 psi), about twice the average. There is a seasonal variation over the Martian year (about two Earth years) as carbon dioxide (95.9% of the atmosphere) is sequentially frozen out, then sublimated back into the atmosphere when it is warmer, causing a global 0.2-kilopascal (0.029 psi) rise and fall in pressure. But the Martian atmosphere contains only 0.13–0.14% oxygen, compared to 20.9% in Earth's atmosphere. Thus breathing the Martian atmosphere is impossible for almost any organism; oxygen must be supplied, at a pressure in excess of the Armstrong limit. Environmental design requirements: Breathing Humans take in oxygen and expel carbon dioxide and water vapor when they breathe, and typically breathe between 12 and 20 times per minute at rest and up to 45 times per minute under high activity. At standard sea level conditions on Earth of 101.33 kilopascals (14.697 psi), humans are breathing 20.9% oxygen, at a partial pressure of 21.2 kilopascals (3.07 psi). This is the required oxygen supply corresponding to normal Earth conditions.
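The partial-pressure arithmetic above is simple enough to check directly; the sketch below uses only the figures quoted in this section, applying Dalton's law (the partial pressure of a gas is its fraction times the total pressure).

```python
# A minimal check of the partial-pressure figures quoted above.
ARMSTRONG_LIMIT_KPA = 6.3      # total pressure below which body-temperature water boils
MARS_MEAN_SURFACE_KPA = 0.61   # average Martian surface pressure

def partial_pressure(total_kpa, fraction):
    """Dalton's law: partial pressure = gas fraction x total pressure."""
    return total_kpa * fraction

# Earth at sea level: 20.9% oxygen in 101.33 kPa of air.
print(partial_pressure(101.33, 0.209))                  # ~21.2 kPa, as quoted

# Mars ambient: ~0.13% oxygen in 0.61 kPa of atmosphere.
print(partial_pressure(MARS_MEAN_SURFACE_KPA, 0.0013))  # ~0.0008 kPa

# The total Martian pressure is itself an order of magnitude below the
# Armstrong limit, so a pressure suit is needed regardless of oxygen.
print(MARS_MEAN_SURFACE_KPA < ARMSTRONG_LIMIT_KPA)      # True
```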
Humans generally require supplemental oxygen at altitudes above 15,000 feet (4.6 km), so the absolute minimum safe oxygen requirement is a partial pressure of 11.94 kilopascals (1.732 psi). For reference, the Apollo EMU used an operating pressure of 25.5 kilopascals (3.70 psi) on the Moon. Exhaled breath on Earth normally contains about 4% carbon dioxide and 16% oxygen, along with 78% nitrogen, plus about 0.2 to 0.3 liters of water. Carbon dioxide slowly becomes increasingly toxic in high concentrations, and must be scrubbed from the breathing gas. One concept for scrubbing carbon dioxide from breathing air is to use re-usable amine bead carbon dioxide scrubbers. While one carbon dioxide scrubber filters the astronaut's air, the other can vent scrubbed carbon dioxide to the Mars atmosphere; once that process is completed, the scrubbers swap roles. Another, more traditional way to remove carbon dioxide from air is a lithium hydroxide canister; however, these need to be replaced periodically. Carbon dioxide removal systems are a standard part of habitable spacecraft designs, although their specifics vary. Another idea for removing carbon dioxide is to use a zeolite molecular sieve, from which the carbon dioxide can later be removed. If nitrogen is used to increase pressure, as on the ISS, it is inert to humans but can cause decompression sickness. Space suits typically operate at low pressure to make their balloon-like structure easier to move in, so astronauts must spend a long time getting the nitrogen out of their system. The Apollo missions used a pure oxygen atmosphere in space, but not on the ground, to reduce the risk of fire. There is also interest in hard suits that can handle higher internal pressures yet remain flexible, so astronauts do not have to get the nitrogen out of their system before going on a spacewalk. Environmental design requirements: Temperature There can be large temperature swings on Mars; for example, at the equator, daytime temperature may reach 21 °C (70 °F) in the Martian summer, and drop down to −73 °C (−100 °F) at night. According to a 1958 NASA report, long-term human comfort requires temperatures in the 4 to 35 °C (40 to 95 °F) range at 50% humidity. Environmental design requirements: Radiation On Earth, in developed nations, humans are exposed to about 0.6 rads (6 mGy) per year, and aboard the International Space Station about 8 rads (80 mGy) per year. Humans can tolerate up to about 200 rads (2 Gy) of radiation without incurring permanent damage; however, any radiation exposure carries risk, so there is a focus on keeping exposure as low as possible. On the surface of Mars there are two main types of radiation: a steady dose from a variety of sources, and solar proton events that can cause a dramatic increase in the amount of radiation for a short time. Solar flare events can cause a lethal dose to be delivered in hours if astronauts are caught unprotected, and this is a concern of NASA for human operations in space and on the surface of Mars. Mars does not have a large magnetic field in the same way as Earth, which shields the Earth from radiation, especially from solar flares. For example, the solar event which occurred on August 7, 1972, between the Apollo 16 and Apollo 17 missions, produced so much radiation, including a wave of accelerated particles like protons, that NASA became concerned about what would happen if such an event were to occur while astronauts were in space.
If the astronauts get too much radiation, it increases their lifetime cancer risk, and they can get radiation poisoning. Exposure to ionizing radiation can also cause cataracts, a problem with the eye. The atmosphere of Mars is much thinner than Earth's, so it does not stop as much radiation. The effect of radiation on medications taken on the mission is also of concern, especially if it alters their medical qualities. Additional design requirements: Operating in a Mars suit on the surface creates a series of concerns for the human body, including an altered gravitational environment, a confined and isolated situation, a hostile exterior environment and closed environment inside, radiation, and extreme distance from Earth. An important consideration for the breathing air inside the suit is that toxic gases do not get into the air supply. Reduced gravity environments can alter the distribution of fluids inside the body. One point of concern is changes in fine motor skill, especially if they interfere with the ability to use computer interfaces. Additional design requirements: Visors and UV A thin layer of gold on the visor plastic bubble of current space helmets shields the face from harmful parts of the Sun's spectrum. Visor designs in general aim to let the astronaut see while blocking ultraviolet light and heat, in addition to meeting the pressure requirements. Ultraviolet light has been detected reaching the surface of Mars. Martian carbon dioxide tends to block ultraviolet light of wavelengths shorter than about 190 nm; above that, there is less blocking, depending on the amount of dust and Rayleigh scattering. Significant amounts of UVB and UVC light are noted to reach the surface of Mars. Additional design requirements: Toilet and vomiting A human consideration for suits is the need to go to the bathroom. Various methods have been employed in suits; in the Shuttle era, NASA used maximum absorbency garments to enable stays of 10 hours in space and in partial pressure suits. Another concern is vomiting, which has an increased occurrence in spaceflight. Additional design requirements: Martian dust Another consideration is what would happen if astronauts somehow breathe in Mars dust. The health effect of Mars dust is a concern, based on known information about it, which includes that it may be abrasive and/or reactive. Studies have been done with quartz dust, and comparisons have been made to lunar dust exposure. An Apollo 17 astronaut complained of hay fever-like symptoms after his Moon walk. The lunar dust was known to cling to the space suits and be taken in with the astronauts when they came in to the Apollo Lunar Module. Use: An article in the journal Nature noted that, due to the reduced gravity, the dynamics of walking on Mars would be different than on Earth. This is because people fall forward as part of their gait when moving, the motion of the center of body mass resembling that of an inverted pendulum. Compared to the Earth, all else being equal, it would take about half the amount of work to move; however, a comfortable walking speed on Mars would be 3.4 km per hour rather than 5.5 km per hour on Earth. This data was produced by simulating Martian gravity aboard an aircraft following a flight profile that causes this type of acceleration. The acceleration of gravity at the surface of Mars is calculated to be about 3.7 m/s².
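One way to see where the quoted 3.4 km/h figure comes from is dynamic similarity of the inverted-pendulum gait: if walkers keep the same Froude number v²/(g·L), comfortable walking speed scales with the square root of gravity. The scaling argument here is an interpretation on my part; the source reports only the two speeds from the aircraft simulation.

```python
# A rough check of the walking-speed figures above, assuming the
# inverted-pendulum gait implies equal Froude numbers v^2/(g*L), so
# that walking speed scales with sqrt(g). This scaling assumption is
# an interpretation, not something stated in the source.
import math

G_EARTH = 9.81  # m/s^2
G_MARS = 3.7    # m/s^2, as quoted above

v_earth_kmh = 5.5
v_mars_kmh = v_earth_kmh * math.sqrt(G_MARS / G_EARTH)
print(f"{v_mars_kmh:.1f} km/h")  # ~3.4 km/h, matching the quoted figure
```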
It is not known if this reduced gravity causes the same kind of reduced muscle mass and biological effects as seen when living in microgravity aboard the ISS for several months. The gravity at the surface is about 38% of Earth's. Rock climbing tests with a low-pressure IVA (intra-vehicular activity) suit were conducted in Oregon, USA. The difficulty of grasping rock with gloves, including moving the fingers and gaining friction with the rock, was noted, and ice climbing axes were helpful for climbing surfaces. Mountaineering on Mars may be needed when the terrain exceeds the abilities of a rover vehicle, or to access a target of interest, or simply to get home to a base. One common mountaineering need is a highly mobile short-stay shelter to use for overnight stays when climbing, such as a tent, and an equivalent for Mars might support the ability to get out of a space suit. Suit design for climbing would likely be impacted by the needs of climbing, including suit flexibility, especially in the hands, and also durability. Another issue is the expected amount of use for the suits in probable human mission designs. For example, as of the late 2010s there had been over 500 EVAs since the start of spaceflight, whereas a single mission to Mars is expected to need 1000 EVAs. Typical Mars mission plans note that a person wearing a Mars suit would need to enter a pressurized rover through an airlock. Alternatively, a Mars suit would need to be worn on crewed unpressurized rovers to provide life support. There are several different options for an egress and entry airlock for a space suit, and one of these is to repressurize the entire compartment, as on the Apollo lunar lander. Some other ideas are the suitport, crewlock, and transit airlock. Need: The NASA Authorization Act of 2017 directed NASA to get humans near or on the surface of Mars by the early 2030s. Suitport for Mars: Mars space suits have been explored for integration with an airlock design that combines an airlock with suit entry and egress from another vehicle, commonly known as a suitport. This has been considered as a way to integrate a crewed pressurized Mars rover with Mars space suit EVAs. The idea is that a person would slide into the suit through an airlock opening while the exterior of the suit is outside the vehicle and exposed to the Martian environment. Then, the hatch would be closed, sealing off the interior of the vehicle, and the person would be supported by the suit's life support system. NASA tested the Z-1 space suit for extraterrestrial surface EVA with a suitport design in the 2010s. In the NASA Z-1 design there is a hatch at the rear of the space suit that can be docked with a suitable vehicle or structure.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cubomania** Cubomania: Cubomania is a Surrealist technique of making collages by cutting an image into squares and reassembling the squares at random, without regard for the original image, to create something new. The technique was invented by the Romanian surrealist Gherasim Luca. Luca introduced cubomania at two exhibitions in Bucharest, in 1945 and 1946, and in small publications. Luca positioned cubomania as a mix of Karl Marx's and André Breton's ideas. It was a critique of the alleged objectivity of social conditions and rejected tyranny over liberty. It has been described as a "statistical method". Penelope Rosemont and Joseph Jablonski have suggested that cubomania can "subvert the enslaving 'message' of advertising and to free images from repressive contexts."
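The procedure is mechanical enough to sketch in code. Below is a minimal Python version using the Pillow imaging library; the file name and tile size are arbitrary placeholder choices, not part of Luca's method.

```python
# A minimal sketch of cubomania: cut an image into equal squares and
# reassemble them in random order. "input.jpg" and the 100-pixel tile
# size are placeholder choices, not part of Luca's method.
import random
from PIL import Image

def cubomania(path, tile=100, seed=None):
    img = Image.open(path)
    cols, rows = img.width // tile, img.height // tile
    # Cut the image into tile x tile squares (edge remainders are dropped).
    tiles = [img.crop((c * tile, r * tile, (c + 1) * tile, (r + 1) * tile))
             for r in range(rows) for c in range(cols)]
    # Reassemble "at random, without regard for the original image".
    random.Random(seed).shuffle(tiles)
    out = Image.new(img.mode, (cols * tile, rows * tile))
    for i, square in enumerate(tiles):
        out.paste(square, ((i % cols) * tile, (i // cols) * tile))
    return out

cubomania("input.jpg").save("cubomania.jpg")
```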
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Apps to analyse COVID-19 sounds** Apps to analyse COVID-19 sounds: Apps to analyse COVID-19 sounds are mobile software applications designed to collect respiratory sounds and aid diagnosis in response to the COVID-19 pandemic. Numerous applications are in development, with different institutions and companies taking various approaches to privacy and data collection. Current efforts are aimed at gathering data. In a later stage, it is possible that sound apps will have the capacity (and ethical approvals) to provide information back to users. In order to develop and train signal analysis approaches, large datasets are required. History: The COVID-19 outbreak was announced as a global pandemic by the World Health Organization in March 2020 and has affected a growing number of people globally. In this context, advanced artificial intelligence techniques are being considered as tools in aiding our response to the global health crisis. Other COVID-19 apps, which offer solutions for user tracking, have been developed. At the same time, a number of approaches that try to use respiratory sounds and artificial intelligence to determine whether the disease can be diagnosed have been proposed. A few studies are available as preprint (i.e. not yet peer-reviewed) documents. Methodologies: The potential for using speech and sound analysis by artificial intelligence to help in this scenario has recently been overviewed, by surveying which types of related or contextually significant phenomena can be automatically assessed from speech or sound. These include the automatic recognition and monitoring of breathing, dry and wet coughing or sneezing sounds, speech under cold, eating behaviour, sleepiness, or pain. Methodologies: Additionally, the potential use-cases of intelligent speech analysis for patients diagnosed with COVID-19 have also been presented. In particular, by analysing speech recordings from these patients, an audio-only-based model was constructed to automatically categorise the health state of patients along four dimensions: the severity of illness, sleep quality, fatigue, and anxiety. This work shows promise in estimating the severity of illness. Methodologies: Machine learning methods have been explored to recognize and diagnose coughs from different diseases. These included a low-complexity, automated recognition and diagnostic tool for screening respiratory infections that utilizes convolutional neural networks (CNNs) to detect cough within environmental audio and diagnose three potential illnesses (i.e. bronchitis, bronchiolitis and pertussis) based on their unique cough audio features. A large-scale crowdsourced dataset of respiratory sounds has been collected to aid diagnosis of COVID-19: coughs and breathing sounds are sufficient to distinguish users affected by COVID-19 from those affected by asthma and from healthy controls. Behind these studies is the ambition that automated systems to screen for respiratory diseases based on voice, raw cough or other sound data would have positive medical applications in both clinical and public health arenas.
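As a rough illustration of the CNN-based approach described above, here is a minimal PyTorch sketch of a cough classifier operating on log-mel spectrogram patches; the architecture, input shape, and two-class setup are illustrative assumptions, not the actual models from the cited studies.

```python
# A minimal sketch of a CNN cough classifier of the kind described above,
# in PyTorch. Architecture, input shape (log-mel spectrogram patches), and
# the two-class setup are illustrative, not the cited studies' models.
import torch
import torch.nn as nn

class CoughCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Two small convolution/pooling stages extract local time-frequency features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global pooling plus a linear layer produces per-class logits.
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):  # x: (batch, 1, mel_bins, frames)
        return self.head(self.features(x))

# A batch of 8 random 64x128 "spectrogram" patches -> per-class logits.
logits = CoughCNN()(torch.randn(8, 1, 64, 128))
print(logits.shape)  # torch.Size([8, 2])
```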
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Subdirect product** Subdirect product: In mathematics, especially in the areas of abstract algebra known as universal algebra, group theory, ring theory, and module theory, a subdirect product is a subalgebra of a direct product that depends fully on all its factors without however necessarily being the whole direct product. The notion was introduced by Birkhoff in 1944 and has proved to be a powerful generalization of the notion of direct product. Definition: A subdirect product is a subalgebra (in the sense of universal algebra) A of a direct product ΠiAi such that every induced projection (the composite pjs: A → Aj of a projection pj: ΠiAi → Aj with the subalgebra inclusion s: A → ΠiAi) is surjective. A direct (subdirect) representation of an algebra A is a direct (subdirect) product isomorphic to A. An algebra is called subdirectly irreducible if it is not subdirectly representable by "simpler" algebras. Subdirect irreducibles are to subdirect products of algebras roughly as primes are to multiplication of integers. Examples: Any distributive lattice L is subdirectly representable as a subalgebra of a direct power of the two-element distributive lattice. This can be viewed as an algebraic formulation of the representability of L as a set of sets closed under the binary operations of union and intersection, via the interpretation of the direct power itself as a power set. In the finite case such a representation is direct (i.e. the whole direct power) if and only if L is a complemented lattice, i.e. a Boolean algebra. Examples: The same holds for any semilattice when "semilattice" is substituted for "distributive lattice" and "subsemilattice" for "sublattice" throughout the preceding example. That is, every semilattice is representable as a subdirect power of the two-element semilattice. The chain of natural numbers together with infinity, as a Heyting algebra, is subdirectly representable as a subalgebra of the direct product of the finite linearly ordered Heyting algebras. The situation with other Heyting algebras is treated in further detail in the article on subdirect irreducibles. Examples: The group of integers under addition is subdirectly representable by any (necessarily infinite) family of arbitrarily large finite cyclic groups. In this representation, 0 is the sequence of identity elements of the representing groups, 1 is a sequence of generators chosen from the appropriate group, and integer addition and negation are the corresponding group operations in each group applied coordinate-wise. The representation is faithful (no two integers are represented by the same sequence) because of the size requirement, and the projections are onto because every coordinate eventually exhausts its group. Examples: Every vector space over a given field is subdirectly representable by the one-dimensional space over that field, with the finite-dimensional spaces being directly representable in this way. (For vector spaces, as for abelian groups, direct product with finitely many factors is synonymous with direct sum with finitely many factors, whence subdirect product and subdirect sum are also synonymous for finitely many factors.) Subdirect products are used to represent many small perfect groups in (Holt & Plesken 1989).
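The integers example lends itself to a small computational illustration. The sketch below, using an arbitrary finite sample of cyclic-group orders, checks that each induced projection is surjective and that the representation is faithful on a window of width lcm(moduli); in the full representation, faithfulness on all of Z follows from the arbitrarily large group orders.

```python
# A small illustration of the integers example above: represent each
# integer by its residues in a family of cyclic groups Z/m. The moduli
# are an arbitrary finite sample chosen for this sketch.
import math

moduli = [2, 3, 4, 5, 7, 11]

def represent(n):
    return tuple(n % m for m in moduli)

# Each induced projection is surjective: every residue class is hit.
for i, m in enumerate(moduli):
    assert {represent(n)[i] for n in range(m)} == set(range(m))

# Faithful on any window of width lcm(moduli) = 4620; with arbitrarily
# large moduli, as in the full family, no two integers ever collide.
width = math.lcm(*moduli)
assert len({represent(n) for n in range(width)}) == width
print("projections onto; faithful on a window of width", width)
```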
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Serial (radio and television)** Serial (radio and television): In television and radio programming, a serial is a show that has a continuing plot that unfolds in a sequential episode-by-episode fashion. Serials typically follow main story arcs that span entire television seasons or even the complete run of the series, and sometimes spinoffs, which distinguishes them from episodic television that relies on more stand-alone episodes. Worldwide, the soap opera is the most prominent form of serial dramatic programming. In the UK, the first serials were direct adaptations of well-known literary works, usually consisting of a small number of episodes. Serials rely on keeping the full nature of the story hidden and revealing elements episode by episode, to encourage spectators to tune in to every episode to follow the plot. Often these shows employ recapping segments at the beginning and cliffhangers at the end of each episode. The invention of recording devices such as VCRs and DVRs, along with the growing popularity of streaming services, has made following this type of show easier, which has resulted in increased success and popularity. Prior to the advent of DVRs, television networks shunned serials in prime time as they made broadcast programming reruns more difficult, and television producers shunned them because they were tougher to place in broadcast syndication years down the road. Serial (radio and television): Serials contrast with episodic television, with plots relying on a more independent stand-alone format. Procedural drama television programs are commonly episodic, sometimes including a serial subplot. Shorter serial programs known as telenovelas (and earlier, radionovelas), originating and often produced in Spanish- and Portuguese-speaking Latin America, have become popular worldwide. Terminology: The term "serial" refers to the intrinsic property of a series – namely its order. In literature, the term is used as a noun to refer to a format (within a genre) by which a story is told in contiguous (typically chronological) installments in sequential issues of a single periodical publication. Terminology: More generally, "serial" is applied in library and information science to materials "in any medium issued under the same title in a succession of discrete parts, usually numbered (or dated) and appearing at regular or irregular intervals with no predetermined conclusion." The term has been used for a radio or television production with a continuously evolving, unified plot and set of characters, spread over multiple episodes. In the United States, daytime soap operas have long had a serial structure. Television mini-series also commonly come in a serial form. Starting in the mid-1970s, series with soap opera-like stories began to be aired in prime time (e.g. Dallas, Dynasty). In the 1990s, shows like The X-Files and The Sopranos began to use a more serial structure, and now there is a much wider range of shows in serial form. History: The serial began with the advent of movie serials of the early 20th century. With the emergence of television and the subsequent decline of the movie-going audience, production of movie serials ceased due to decreasing revenues. But the serial lived on, moving instead to the small screen and the world of broadcast syndication television reruns.
History: Soap operas The television serial format as known today originated in radio, in the form of children's adventure shows and daily 15-minute programs known as soap operas (so called because many of these shows were sponsored by soap companies, such as Colgate-Palmolive and Procter & Gamble). Soap operas were specifically engineered to appeal to women (with the intention of increasing sales of soap). They usually ran from Monday through Friday at the same time every day. A show called The Smith Family, which ran only one night a week on WENR in Chicago during the early 1930s, was credited as the "great-granddaddy of the soap operas" by radio historian Francis Chase, Jr. One of the other shows that helped pioneer the daytime soap opera/serial was The Guiding Light, which debuted on NBC radio in 1937 and then switched to CBS Television in 1952. The Guiding Light's final episode aired on September 18, 2009, after a total of 15,762 episodes had aired on CBS. Some of the characters in soap operas have been portrayed as long-suffering (a common theme even in some of today's serials, along with the social and economic issues of the day). Children's adventure serials were more like film serials, with continuing characters involved in exploits and episodes that often ended in a cliffhanger situation; Westerns were a particularly popular format for children's serials on the radio. History: Guiding Light and such other daytime television serials as Search for Tomorrow, Love of Life, The Secret Storm, As the World Turns, The Edge of Night, The Doctors, Another World, Dark Shadows, One Life to Live, and All My Children were popular in the Golden and Silver Ages of television and still are today. History: Aside from the social issues, the style and presentation of these shows have changed. Whereas in the 1950s and 1960s the drama was underscored with traditional organ music, and in the 1970s and the 1980s a full orchestra provided the score, the daytime dramas of today use cutting-edge synth-driven music (in a way, music for soaps has come full circle, from the keyboard to the keyboard). History: The nighttime serials are a different story, though the concept is also nothing new. In the 1960s, ABC aired the first real breakthrough nighttime serial, Peyton Place, inspired by the novel and theatrical film of the same name. After its cancellation, the format went somewhat dormant until Norman Lear produced Mary Hartman, Mary Hartman in 1976. In 1977, ABC created another comedy soap (aptly called Soap). Although the show was controversial for its time (with a homosexual character among its cast roster), it was (and still is today) a cult classic. History: The success of Dallas popularized serial storylines on prime-time television. Its end-of-season cliffhangers, such as "Who shot J. R.?" and "Bobby in the Shower?", influenced other shows like Dynasty (ABC's answer to Dallas), Knots Landing, Falcon Crest, The Colbys, Flamingo Road, Hotel, The Yellow Rose, Bare Essence, and Berrenger's. There were some serial shows, such as Hill Street Blues and St. Elsewhere, that did not officially fit into this category but were nonetheless ratings hits season after season. History: While the last of the 1980s nighttime soaps ended during the first years of the following decade, a second wave came with series like Beverly Hills, 90210, Melrose Place, Models, Inc., Savannah and Central Park West.
But as the 1990s came to a close, the primetime soap as an official format gradually faded away, and it largely remained dormant in the United States as of the middle of the first decade of the 21st century. History: Other dramas Serialized storytelling can also be seen in other dramas. Heavily serialized dramas include Star Trek: Deep Space Nine, Babylon 5, The Sopranos, Twin Peaks, 24, Battlestar Galactica, Breaking Bad and its spin-off Better Call Saul, Dexter, The Wire, The Return of the Spirit and Downton Abbey. History: Series such as Buffy the Vampire Slayer, Veronica Mars, Homicide: Life on the Street, The Good Wife, and The X-Files fall somewhere between, featuring a new case each week that is resolved by the end of the episode, but also having an overarching mystery that receives focus in many episodes. The more serialized its storytelling, the less likely a show is to fare well in repeats. The format demands that episodes be run in order, without which story arcs stretching over many episodes may be difficult for new viewers to delve into. Desperate Housewives also falls into this category: each season involves a new mystery that spans the entire season (and on one occasion, half the season), with hints planted throughout the episodes until the climax in the finale. History: To a lesser extent, series such as House and Fringe may also feature ongoing story arcs, but episodes are more self-encapsulated, and so these series fall into a more conventional drama category. Fringe has experimented with "myth-alones", a hybrid that attempts to advance the story arc in a self-contained episode. It has also been noted that the use of cliffhangers is still prevalent in adventure shows; however, they are now typically used just before a commercial break, and the viewer need only wait a few minutes to see the resolution. In addition, many series have made extensive use of the traditional end-of-episode cliffhanger format. This is most common in season finales, which often end in a cliffhanger that is only resolved in the next season's premiere. History: Over the course of its run, a show may change its focus. Matt Cherniss, executive vice president of programming at Fox, says: "Sometimes early on, being a little more episodic allows more people into the room. And as the show goes on, by its nature, it might find itself becoming a little more serialized." Early in their runs, shows such as Lost, Buffy, Angel, Dollhouse and Torchwood put greater emphasis on the "story-of-the-week", but over time story arcs began to dominate. In contrast, Alias became more focused on standalone stories in later seasons, because of pressure from network executives. Effect of a serial model on commercial success: Complex story arcs may have a negative effect on ratings by making entry more difficult for new viewers as well as confusing fans who have missed an episode. Networks see them as riskier than dramas that focus on a self-contained story of the week. Tom O'Neil of the Los Angeles Times notes: "They're chancy because these shows are hard to join midway through." As of 2012, CBS had not aired a serial drama in many years, in part because of the success of its non-serial procedurals. Marketing for Star Trek: Strange New Worlds (2022) emphasized its episodes being standalone, which cast and crew described as being similar to Star Trek: The Original Series. Scott Collins of the Los Angeles Times stated that "serialized storytelling ...
though popular with hard-core fans and many critics, requires more dedication from viewers and has almost certainly tamped down ratings for many shows". He quoted an ad executive who stated that close-ended story lines "[make] it easier for new viewers to tune in and figure out what's going on". According to Dick Wolf, serialized elements also make it more difficult for viewers to return to a show if they have missed some episodes. Cheers co-creator Les Charles regrets helping to make serialization common: "[W]e may have been partly responsible for what's going on now, where if you miss the first episode or two, you are lost. You have to wait until you can get the whole thing on DVD and catch up with it. If that blood is on our hands, I feel kind of badly about it. It can be very frustrating." Another problem is that many fans prefer to record these shows and binge-watch the whole season in one session. These viewers are not included in TV ratings, as they are much less likely to watch commercials than live viewers. The move away from live viewing and toward DVR or internet-streaming services has hurt many shows' prospects, because commercials are fewer or absent, can be fast-forwarded through, or are out of date by the time of viewing. Concerned about the toll on ratings of complex story arcs, networks sometimes ask showrunners to reduce serialization. Network executives believe that standalone episodes serve as a better jumping-on point for new viewers, although this may result in a conflict with regular watchers, who tend to prefer more focus on story arcs. Alias began as a more serialized show but later became more stand-alone under network pressure. During season 3 of the re-imagined Battlestar Galactica, showrunner Ronald D. Moore was also pressured to make episodes more stand-alone. This move resulted in negative criticism from both fans and critics, and Moore revealed in the Season 3 finale podcast that the network finally accepted that standalone episodes simply did not work for the story he was trying to tell. Effect of a serial model on commercial success: Moore has also stated that the network was reluctant to greenlight Caprica mainly because story-arc-heavy series notoriously have difficulty in picking up new viewers, as compared to a series composed of mostly standalone episodes. According to Todd A. Kessler, the second season of Damages will be less serialized in order to render the show more accessible to new viewers. Tim Kring, creator of Heroes, has also suggested that his show may move away from serialized storytelling: "I think the show needs to move towards [standalone episodes] in order to survive." Networks also discourage complex story arcs because they are less successful in reruns, and because standalone episodes can be rerun without concern for order. Entertainment Weekly and the Chicago Tribune have expressed concern that declining ratings may lead to a major reduction in serialized storytelling. To highlight the situation, in the 2006–2007 season, no fewer than five high-concept serials were introduced, including Jericho, Kidnapped, Vanished, The Nine, and Drive, all of which experienced fairly quick cancellation due to low ratings. In 2010 and 2011, more high-profile, high-cost serials failed to achieve success, including V, The Event, and FlashForward. Effect of a serial model on commercial success: Some reviewers have also noted that serialized dramas are at a disadvantage at major awards shows such as the Primetime Emmy Awards.
Such shows generally have to submit an atypical self-contained episode in order to gain recognition. Despite this, since 2000, every winner of the Primetime Emmy Award for Outstanding Drama Series has been a serial drama: The West Wing (2000–2003), The Sopranos (2004, 2007), Lost (2005), 24 (2006), Mad Men (2008–2011), Homeland (2012), Breaking Bad (2013–2014), Game of Thrones (2015–2016, 2018–2019), The Handmaid's Tale (2017), and Succession (2020). Effect of a serial model on commercial success: In terms of DVD sales, however, strongly serialized shows often perform better than shows which are strongly procedural. 24 (Season 6), Lost (Season 4), Heroes (Season 2), True Blood and even ratings minnow Battlestar Galactica (Season 4.0) sell significantly more units than hit procedurals such as CSI (Season 6), NCIS (Seasons 3 and 5), CSI: Miami (Seasons 4 and 5) and Criminal Minds (Seasons 2 and 3). Effect of a serial model on commercial success: Serialized shows tend to develop a more dedicated fanbase interested in exploring the show online as well as becoming customers of additional merchandising.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SMC protein** SMC protein: SMC complexes represent a large family of ATPases that participate in many aspects of higher-order chromosome organization and dynamics. SMC stands for Structural Maintenance of Chromosomes. Classification: Eukaryotic SMCs Eukaryotes have at least six SMC proteins in individual organisms, and they form three distinct heterodimers with specialized functions: A pair of SMC1 and SMC3 constitutes the core subunits of the cohesin complexes involved in sister chromatid cohesion. SMC1 and SMC3 also function in the repair of DNA double-strand breaks in the process of homologous recombination. Likewise, a pair of SMC2 and SMC4 acts as the core of the condensin complexes implicated in chromosome condensation. SMC2 and SMC4 also function in DNA repair: condensin I plays a role in single-strand break repair but not in double-strand break repair, while the opposite is true for condensin II, which plays a role in homologous recombination. Classification: A dimer composed of SMC5 and SMC6 functions as part of a yet-to-be-named complex implicated in DNA repair and checkpoint responses. Each complex contains a distinct set of non-SMC regulatory subunits. Some organisms have variants of SMC proteins. For instance, mammals have a meiosis-specific variant of SMC1, known as SMC1β. The nematode Caenorhabditis elegans has an SMC4 variant that has a specialized role in dosage compensation. The following table shows the SMC protein names for several model organisms and vertebrates: Prokaryotic SMCs SMC proteins are conserved from bacteria to humans. Most bacteria have a single SMC protein in individual species that forms a homodimer. SMC proteins have recently been shown to aid the segregation of the daughter cells' DNA at the origin of replication, guaranteeing proper segregation. In a subclass of Gram-negative bacteria, including Escherichia coli, a distantly related protein known as MukB plays an equivalent role. Molecular structure: Primary structure SMC proteins are 1,000–1,500 amino acids long. They have a modular structure composed of the following domains: Walker A ATP-binding motif; coiled-coil region I; hinge region; coiled-coil region II; Walker B ATP-binding motif; signature motif. Secondary and tertiary structure SMC dimers form a V-shaped molecule with two long coiled-coil arms. To make such a unique structure, an SMC protomer is self-folded through anti-parallel coiled-coil interactions, forming a rod-shaped molecule. At one end of the molecule, the N-terminal and C-terminal domains form an ATP-binding domain. The other end is called a hinge domain. Two protomers then dimerize through their hinge domains and assemble a V-shaped dimer. The coiled-coil arms are ~50 nm long. Such long "antiparallel" coiled coils are very rare and found only among SMC proteins (and their relatives such as Rad50). The ATP-binding domain of SMC proteins is structurally related to that of ABC transporters, a large family of transmembrane proteins that actively transport small molecules across cellular membranes. It is thought that the cycle of ATP binding and hydrolysis modulates the cycle of closing and opening of the V-shaped molecule. Still, the detailed mechanisms of action of SMC proteins remain to be determined. Molecular structure: Aggregation of SMC SMC proteins have the potential to form a larger ring-like structure. The ability to create different architectural arrangements allows for various regulations of functions.
Some of the possible configurations are double rings, filaments, and rosettes. Double rings are four SMC proteins bound at the heads and hinges, forming a ring. Filaments are a chain of alternating SMCs. Rosettes are rose-like structures with terminal segments in the inner region and hinges in the outer region. Genes: The following human genes encode SMC proteins: SMC1A, SMC1B, SMC2, SMC3, SMC4, SMC5, SMC6.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Silkypix Developer Studio** Silkypix Developer Studio: Silkypix Developer Studio is commercial and proprietary raw image processing software. It is often bundled with cameras from manufacturers such as Fujifilm, Panasonic and Pentax.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pectin** Pectin: Pectin (Ancient Greek: πηκτικός pēktikós: "congealed" and "curdled") is a heteropolysaccharide, a structural acid contained in the primary lamella, in the middle lamella, and in the cell walls of terrestrial plants. The principal chemical component of pectin is galacturonic acid (a sugar acid derived from galactose); pectin itself was isolated and described by Henri Braconnot in 1825. Commercially produced pectin is a white-to-light-brown powder, produced from citrus fruits for use as an edible gelling agent, especially in jams and jellies, dessert fillings, medications, and sweets; as a food stabiliser in fruit juices and milk drinks; and as a source of dietary fiber. Biology: Pectin is composed of complex polysaccharides that are present in the primary cell walls of a plant, and are abundant in the green parts of terrestrial plants. Biology: Pectin is the principal component of the middle lamella, where it binds cells together. Pectin is deposited by exocytosis into the cell wall via vesicles produced in the Golgi apparatus. The amount, structure and chemical composition of pectin differ among plants, within a plant over time, and in various parts of a plant. Pectin is an important cell wall polysaccharide that allows primary cell wall extension and plant growth. During fruit ripening, pectin is broken down by the enzymes pectinase and pectinesterase, in which process the fruit becomes softer as the middle lamellae break down and cells become separated from each other. A similar process of cell separation caused by the breakdown of pectin occurs in the abscission zone of the petioles of deciduous plants at leaf fall. Pectin is a natural part of the human diet, but does not contribute significantly to nutrition. The daily intake of pectin from fruits and vegetables can be estimated to be around 5 g if approximately 500 g of fruits and vegetables are consumed per day. Biology: In human digestion, pectin binds to cholesterol in the gastrointestinal tract and slows glucose absorption by trapping carbohydrates. Pectin is thus a soluble dietary fiber. In non-obese diabetic (NOD) mice, pectin has been shown to increase the incidence of diabetes. A study found that after consumption of fruit, the concentration of methanol in the human body increased by as much as an order of magnitude, due to the degradation of natural pectin (which is esterified with methanol) in the colon. Pectin has been observed to have some function in repairing the DNA of some types of plant seeds, usually desert plants. Pectinaceous surface pellicles, which are rich in pectin, create a mucilage layer that holds in dew, which helps the cell repair its DNA. Consumption of pectin has been shown to slightly (3–7%) reduce blood LDL cholesterol levels. The effect depends upon the source of pectin; apple and citrus pectins were more effective than orange pulp fibre pectin. The mechanism appears to be an increase of viscosity in the intestinal tract, leading to a reduced absorption of cholesterol from bile or food. In the large intestine and colon, microorganisms degrade pectin and liberate short-chain fatty acids that have a positive influence on health (prebiotic effect). Chemistry: Pectins, also known as pectic polysaccharides, are rich in galacturonic acid. Several distinct polysaccharides have been identified and characterised within the pectic group. Homogalacturonans are linear chains of α-(1–4)-linked D-galacturonic acid.
Substituted galacturonans are characterised by the presence of saccharide appendant residues (such as D-xylose or D-apiose in the respective cases of xylogalacturonan and apiogalacturonan) branching from a backbone of D-galacturonic acid residues. Rhamnogalacturonan I pectins (RG-I) contain a backbone of the repeating disaccharide: 4)-α-D-galacturonic acid-(1,2)-α-L-rhamnose-(1. From many of the rhamnose residues, sidechains of various neutral sugars branch off. The neutral sugars are mainly D-galactose, L-arabinose and D-xylose, with the types and proportions of neutral sugars varying with the origin of the pectin. Another structural type of pectin is rhamnogalacturonan II (RG-II), which is a less frequent, complex, highly branched polysaccharide. Rhamnogalacturonan II is classified by some authors within the group of substituted galacturonans, since the rhamnogalacturonan II backbone is made exclusively of D-galacturonic acid units. Isolated pectin has a molecular weight of typically 60,000 to 130,000 g/mol, varying with origin and extraction conditions. In nature, around 80 percent of the carboxyl groups of galacturonic acid are esterified with methanol. This proportion is decreased to a varying degree during pectin extraction. Pectins are classified as high- versus low-methoxy pectins (HM-pectins versus LM-pectins for short), with more or less than half of all the galacturonic acid esterified, respectively. The ratio of esterified to non-esterified galacturonic acid determines the behaviour of pectin in food applications – HM-pectins can form a gel under acidic conditions in the presence of high sugar concentrations, while LM-pectins form gels by interaction with divalent cations, particularly Ca2+, according to the idealized ‘egg box’ model, in which ionic bridges are formed between calcium ions and the ionised carboxyl groups of the galacturonic acid. In high-methoxy pectins at a soluble solids content above 60% and a pH value between 2.8 and 3.6, hydrogen bonds and hydrophobic interactions bind the individual pectin chains together. These bonds form as water is bound by sugar, forcing pectin strands to stick together. They form a three-dimensional molecular net that creates the macromolecular gel. The gelling mechanism is called a low-water-activity gel or sugar-acid-pectin gel. While low-methoxy pectins need calcium to form a gel, they can do so at lower soluble solids and higher pH than high-methoxy pectins. Normally, low-methoxy pectins form gels over a pH range from 2.6 to 7.0 and with a soluble solids content between 10 and 70%. The non-esterified galacturonic acid units can be either free acids (carboxyl groups) or salts with sodium, potassium, or calcium. The salts of partially esterified pectins are called pectinates; if the degree of esterification is below 5 percent, the salts are called pectates, and the insoluble acid form is called pectic acid. Some plants, such as sugar beet, potatoes and pears, contain pectins with acetylated galacturonic acid in addition to methyl esters. Acetylation prevents gel formation but increases the stabilising and emulsifying effects of pectin. Chemistry: Amidated pectin is a modified form of pectin. Here, some of the galacturonic acid is converted with ammonia to carboxylic acid amide. These pectins are more tolerant of the varying calcium concentrations that occur in use. Thiolated pectin exhibits substantially improved gelling properties, since this thiomer is able to crosslink via disulfide bond formation.
These high gelling properties are advantageous for various pharmaceutical applications and for applications in the food industry. To prepare a pectin gel, the ingredients are heated, dissolving the pectin. Upon cooling below the gelling temperature, a gel starts to form. If gel formation is too strong, syneresis or a granular texture results, while weak gelling leads to excessively soft gels. Chemistry: Amidated pectins behave like low-ester pectins but need less calcium and are more tolerant of excess calcium. Also, gels from amidated pectin are thermoreversible; they can be heated and solidify again after cooling, whereas conventional pectin gels, once melted, remain liquid. High-ester pectins set at higher temperatures than low-ester pectins. However, gelling reactions with calcium increase as the degree of esterification falls. Similarly, lower pH values or higher soluble solids (normally sugars) increase gelling speeds. Suitable pectins can therefore be selected for jams and jellies, or for higher-sugar confectionery jellies. Sources and production: Pears, apples, guavas, quinces, plums, gooseberries, and oranges and other citrus fruits contain large amounts of pectin, while soft fruits, like cherries, grapes, and strawberries, contain small amounts. Typical levels of pectin in fresh fruits and vegetables are: apples, 1–1.5%; apricots, 1%; cherries, 0.4%; oranges, 0.5–3.5%; carrots, 1.4%; citrus peels, 30%; rose hips, 15%. The main raw materials for pectin production are dried citrus peels or apple pomace, both by-products of juice production. Pomace from sugar beets is also used to a small extent. Sources and production: From these materials, pectin is extracted by adding hot dilute acid at pH values from 1.5 to 3.5. During several hours of extraction, the protopectin loses some of its branching and chain length and goes into solution. After filtering, the extract is concentrated in a vacuum and the pectin is then precipitated by adding ethanol or isopropanol. An old technique of precipitating pectin with aluminium salts is no longer used (apart from alcohols and polyvalent cations, pectin also precipitates with proteins and detergents). Sources and production: Alcohol-precipitated pectin is then separated, washed, and dried. Treating the initial pectin with dilute acid leads to low-esterified pectins. When this process includes ammonium hydroxide (NH3(aq)), amidated pectins are obtained. After drying and milling, pectin is usually standardised with sugar, and sometimes calcium salts or organic acids, to optimise performance in a particular application. Uses: The main use for pectin is as a gelling agent, thickening agent and stabiliser in food. The classical application is giving the jelly-like consistency to jams or marmalades, which would otherwise be sweet juices. Pectin also reduces syneresis in jams and marmalades and increases the gel strength of low-calorie jams. For household use, pectin is an ingredient in gelling sugar (also known as "jam sugar"), where it is diluted to the right concentration with sugar and some citric acid to adjust pH. In some countries, pectin is also available as a solution or an extract, or as a blended powder, for home jam making. Uses: For conventional jams and marmalades that contain above 60% sugar and soluble fruit solids, high-ester pectins are used. With low-ester pectins and amidated pectins, less sugar is needed, so that diet products can be made.
The water extract of aiyu seeds is traditionally used in Taiwan to make aiyu jelly; the extract gels without heating due to low-ester pectins from the seeds and the bivalent cations from the water. Pectin is used in confectionery jellies to give a good gel structure and a clean bite, and to confer a good flavour release. Pectin can also be used to stabilise acidic protein drinks, such as drinking yogurt, to improve the mouth-feel and the pulp stability in juice-based drinks, and as a fat substitute in baked goods. Uses: Typical levels of pectin used as a food additive are between 0.5 and 1.0% – about the same amount of pectin as in fresh fruit. In medicine, pectin increases the viscosity and volume of stool, so it is used against constipation and diarrhea. Until 2002, it was one of the main ingredients used in Kaopectate – a medication to combat diarrhea – along with kaolinite. It has been used in gentle heavy-metal removal from biological systems. Pectin is also used in throat lozenges as a demulcent. Uses: In cosmetic products, pectin acts as a stabiliser. Pectin is also used in wound-healing preparations and speciality medical adhesives, such as colostomy devices. Uses: Sriamornsak revealed that pectin could be used in various oral drug delivery platforms, e.g. controlled-release systems, gastro-retentive systems, colon-specific delivery systems and mucoadhesive delivery systems, owing to its non-toxicity and low cost. It was found that pectin from different sources provides different gelling abilities, due to variations in molecular size and chemical composition. Like other natural polymers, a major problem with pectin is inconsistency in reproducibility between samples, which may result in poor reproducibility in drug delivery characteristics. Uses: In ruminant nutrition, depending on the extent of lignification of the cell wall, pectin is up to 90% digestible by bacterial enzymes. Ruminant nutritionists recommend that the digestibility and energy concentration in forages be improved by increasing pectin concentration in the forage. In cigars, pectin is considered an excellent substitute for vegetable glue, and many cigar smokers and collectors use pectin for repairing damaged tobacco leaves on their cigars. Uses: Yablokov et al., writing in Chernobyl: Consequences of the Catastrophe for People and the Environment and quoting research conducted by the Ukrainian Center of Radiation Medicine and the Belarusian Institute of Radiation Medicine and Endocrinology, concluded, regarding pectin's radioprotective effects, that "adding pectin preparations to the food of inhabitants of the Chernobyl-contaminated regions promotes an effective excretion of incorporated radionuclides" such as cesium-137. The authors reported on the positive results of using pectin food additive preparations in a number of clinical studies conducted on children in severely polluted areas, with up to 50% improvement over control groups. During the Second World War, Allied pilots were provided with maps printed on silk for navigation in escape and evasion efforts. The printing process at first proved nearly impossible because the several layers of ink immediately ran, blurring outlines and rendering place names illegible, until the inventor of the maps, Clayton Hutton, mixed a little pectin with the ink; at once the pectin coagulated the ink and prevented it from running, allowing small topographic features to be clearly visible.
Legal status: In the Joint FAO/WHO Expert Committee Report on Food Additives and in the European Union, no numerical acceptable daily intake (ADI) has been set, as pectin is considered safe. In the United States, pectin is generally recognised as safe for human consumption. In the International Numbering System (INS), pectin has the number 440. In Europe, pectins are differentiated into the E numbers E440(i) for non-amidated pectins and E440(ii) for amidated pectins. There are specifications in all national and international legislation defining its quality and regulating its use. History: Pectin was first isolated and described in 1825 by Henri Braconnot, though the action of pectin to make jams and marmalades was known long before. To obtain well-set jams from fruits that had little or only poor-quality pectin, pectin-rich fruits or their extracts were mixed into the recipe. During the Industrial Revolution, the makers of fruit preserves turned to producers of apple juice to obtain dried apple pomace, which was cooked to extract pectin. Later, in the 1920s and 1930s, factories were built that commercially extracted pectin from dried apple pomace, and later citrus peel, in regions that produced apple juice in both the US and Europe. Pectin was first sold as a liquid extract, but is now most often used as a dried powder, which is easier than a liquid to store and handle.
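The HM/LM distinction and the gelling windows quoted in the Chemistry section lend themselves to a small worked example. The Python sketch below simply encodes the thresholds stated above (degree of esterification above or below 50%; HM gelling at pH 2.8–3.6 with soluble solids above 60%; LM gelling at pH 2.6–7.0, solids 10–70%, with calcium present). The functions themselves are hypothetical conveniences for illustration, not an established model.

```python
def classify_pectin(degree_of_esterification_pct: float) -> str:
    """HM if more than half of the galacturonic acid is esterified, else LM."""
    return "HM" if degree_of_esterification_pct > 50 else "LM"

def can_gel(kind: str, ph: float, soluble_solids_pct: float, calcium: bool) -> bool:
    """Rule-of-thumb gelling windows, as quoted in the article text."""
    if kind == "HM":   # sugar-acid-pectin gel at high solids and low pH
        return 2.8 <= ph <= 3.6 and soluble_solids_pct > 60
    # LM pectins gel via calcium 'egg box' bridges over a wider window.
    return calcium and 2.6 <= ph <= 7.0 and 10 <= soluble_solids_pct <= 70

kind = classify_pectin(65)   # e.g. a typical high-ester jam pectin
print(kind, can_gel(kind, ph=3.1, soluble_solids_pct=65, calcium=False))  # HM True
```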
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Damn Small Linux** Damn Small Linux: Damn Small Linux (DSL) is a discontinued computer operating system for the x86 family of personal computers. It is free and open-source software under the terms of the GNU GPL and other free and open source licenses. It was designed to run graphical user interface applications on older PC hardware, for example, machines with 486 and early Pentium microprocessors and very little random-access memory (RAM). DSL is a Live CD with a size of 50 megabytes (MB). What originally began as an experiment to see how much software could fit in 50 MB eventually became a full Linux distribution. It can be installed on storage media with small capacities, like bootable business cards, USB flash drives, various memory cards, and Zip drives. History: DSL was originally conceived and maintained by John Andrews. For five years the community included Robert Shingledecker, who created the MyDSL system, the DSL Control Panel and other features. After issues with the main developers, Robert was, by his account, exiled from the project. He currently continues his work on Tiny Core Linux, which he created in April 2008. DSL was originally based on Model-K, a 22 MB stripped-down version of Knoppix, but soon after was based on Knoppix proper, allowing much easier remastering and improvements. System requirements: DSL supports only x86 PCs. The minimum system requirements are a 486 processor and 8 MB of RAM. DSL has been demonstrated browsing the web with Dillo, running simple games and playing music on systems with a 486 processor and 16 MB of RAM. The system requirements are higher for running Mozilla Firefox and optional add-ons such as the OpenOffice.org office suite. Features: As of July 2014, version 4.4.10 of DSL, released November 18, 2008, was current. It includes the following software: Text editors: Beaver, Nano, Vim File managers: DFM, emelFM Graphics: mtPaint (raster graphics editor), xzgv (image viewer) Multimedia: gphone, XMMS with MPEG-1 and Video CD (VCD) support Office: Siag Office (spreadsheet program), Ted (word processor) with spell checker, Xpdf (viewer for Portable Document Format (PDF) documents) Internet: Web browsers: Dillo, Firefox, Netrik Sylpheed (E-mail client) naim (AOL Instant Messenger (AIM), ICQ, and IRC client) AxyFTP (File Transfer Protocol (FTP) client), BetaFTPD (FTP server) Monkey (web server) Server Message Block (SMB) client Rdesktop (Remote Desktop Protocol (RDP) client), Virtual Network Computing (VNC) viewer Others: Dynamic Host Configuration Protocol (DHCP) client, Secure Shell (SSH) and secure copy protocol (SCP) client and server; Point-to-Point Protocol (PPP), Point-to-Point Protocol over Ethernet (PPPoE), Asymmetric Digital Subscriber Line (ADSL) support; FUSE, Network File System (NFS), SSH Filesystem (SSHFS) support; UnionFS; generic and Ghostscript printing support; PC card, Universal Serial Bus (USB), Wi-Fi support; calculator, games, system monitor; many command-line tools. DSL has built-in scripts to download and install the Advanced Packaging Tool (APT). Once APT is enabled, a user can install packages from Debian's repositories. Also, DSL hosts software ranging from large applications like OpenOffice.org and GNU Compiler Collection (GCC), to smaller ones such as aMSN, by means of the MyDSL system, which allows convenient one-click download and installation of software. Files hosted on MyDSL are called extensions. As of June 2008, the MyDSL servers were hosting over 900 applications, plugins, and other extensions.
Boot options: Boot options are also called "cheat codes" in DSL. Automatic hardware detection may fail, or the user may want to use something other than the default settings (language, keyboard, VGA, failsafe graphics, text mode...). DSL allows the user to enter one or more cheat codes at the boot prompt. If nothing is entered, DSL will boot with the default options. Cheat codes affect many auto-detection and hardware options. Many cheat codes also affect the GUI. The list of cheat codes can be seen at boot time and also in the DSL Wiki. The MyDSL system: MyDSL is handled and maintained mostly by Robert Shingledecker and hosted by many organizations, such as ibiblio and Belgium's BELNET. There are two areas of MyDSL: regular and testing. The regular area contains extensions that have been proven stable enough for everyday use and is broken down into different areas such as apps, net, system, and uci (Universal Compressed ISO; extensions in .uci format are mounted as a separate file system to minimize RAM use). The testing area is for newly submitted extensions that theoretically work well enough, but may have any number of bugs. Versions and ports: Release timeline Flavours The standard flavour of DSL is the Live CD. There are also other versions available: 'Frugal' installation: DSL's 'cloop' image is installed, as a single file, to a hard disk partition. This is likely more reliable and secure than a traditional hard drive installation, since the cloop image cannot be directly modified; any changes made are only stored in memory and discarded upon rebooting. Versions and ports: 'dsl-version-embedded.zip': Includes QEMU for running DSL inside Windows or Linux. 'dsl-version-initrd.iso': Integrates the normally separate cloop image into the initrd image; this allows network booting using PXE. As with a regular toram boot, it requires at least 128 MB of RAM. 'dsl-version-syslinux.iso': Boots using syslinux floppy image emulation instead of isolinux, for very old PCs that cannot boot with isolinux. 'dsl-version-vmx.zip': A virtual machine hard drive image that can be run in VirtualBox, VMware Workstation or VMware Player. Versions and ports: DSL-N: A larger version of DSL that exceeds the 50 MB limit of business-card CDs. DSL-N uses version 2 of the GTK+ widget toolkit and version 2.6 of the Linux kernel. The latest release of DSL-N, 0.1RC4, is 95 MB in size. It is not actively maintained. One can also boot DSL using a boot floppy created from one of the available floppy images ('bootfloppy.img'; 'bootfloppy-grub.img'; 'bootfloppy-usb.img'; or 'pcmciabootfloppy.img') on very old computers where the BIOS does not support the El Torito Bootable CD Specification. The DSL kernel is loaded from the floppy disk into RAM, after which the kernel runs DSL from the CD or USB drive. Versions and ports: Ports and derivatives DSL was ported to the Xbox video game console as X-DSL. X-DSL requires a modified Xbox. It can run as a Live CD or be installed to the Xbox hard drive. Users have also run X-DSL from a USB flash drive, using the USB adaptor included with Phantasy Star Online, which plugs into the memory card slot and includes one USB 1.1 port. X-DSL boots into an X11-based GUI; the Xbox controller can be used to control the mouse pointer and enter text using a virtual keyboard. X-DSL has a Fluxbox desktop, with programs for e-mail, web browsing, word processing and playing music. X-DSL can be customized by downloading extensions from the same MyDSL servers as DSL.
Versions and ports: Linux distributions derived from Damn Small Linux include Hikarunix, a CD image released in 2005 for playing the game of Go, and Damn Vulnerable Linux. Live USB: A Live USB of Damn Small Linux can be created manually or with applications like UNetbootin; see List of tools to create Live USB systems for a full list. Status: Due to infighting among the project's originators and main developers, DSL development seemed to be at a standstill for a long time, and the future of the project was uncertain, much to the dismay of many of the users. On July 8, 2012, John Andrews (the original developer) announced that a new release was being developed. The DSL website, including the forums, which were once inaccessible, was back as well. The first RC of the new 4.11 was released on August 3, 2012, followed by a second one on September 26. The damnsmalllinux.org site was inaccessible again from sometime in 2015 to February 2016. As of March 27, 2016, it was again accessible for some time, but as of February 10, 2019, it was inaccessible yet again. As of 2021, it was accessible.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Foam latex** Foam latex: Foam latex or latex foam rubber is a lightweight form of latex containing bubbles known as cells, created from liquid latex. The foam is generally created through the Dunlop or Talalay process, in which a liquid latex is foamed and then cured in a mold, from which the finished foam is extracted. Structural enhancements are applied to a foam by making different choices of polymers used for the foam or through the use of fillers in the foam. Historically, natural rubber latex has been used for the foam, but a similar commercial contender is styrene-butadiene latex, which is especially designed for use in latex foams. Mineral fillers may also be used for the enhancement of properties like stability, load bearing, or flame resistance, but these fillers often come at the cost of lowered tensile strength and extension at break, which are generally desirable properties in the product. Latex foam has properties of energy absorption, thermal conductivity, and compression that make it suitable for many commercial applications like upholstery, soundproofing, thermal insulation (especially in construction), and transportation of goods. Foam latex is also used in masks and facial prosthetics to change a person's outward appearance. The Wizard of Oz was one of the first films to make extensive use of foam latex prosthetics in the 1930s. Since then, it has been a staple of film, television, and stage productions, in addition to use in a number of other fields. Single-use plastics and polymer foams are often disposed of in landfills, and there is a growing concern about the amount of space this waste takes up. In an effort to make the foams more environmentally friendly, research is being done into fillers that can achieve the same enhancements as mineral fillers while also increasing the biodegradability of the product. Examples of such fillers include eggshell powders and rice husk powders. Structure: Latex foam is a form of latex that is lightweight and expanded. Cellular air bubbles are created inside liquid latex, and they can be shaped into different shapes and sizes. The extension of the foam is defined by the amount of air inside these cells. Lower-density, more extended foams tend to have cells which are more polyhedral, while less extended foams tend to have more spherical cells. While the density of the foam (ρf) can be measured, a more important property is the density of the foam relative to the density of the original latex base (ρs), expressed as φ = ρf / ρs. Polymer foams will also have some ratio of closed cells to open cells (air bubbles which have been burst open), which can be measured through the water permeability of the foam. Creation: To create foam latex, a liquid latex base is mixed with various additives and whipped into a foam, then poured or injected into a mold and baked in an oven to cure. The main components of foam latex are the latex base, a foaming agent (to help it whip into a froth), a gelling agent (to convert the liquid foam into a gel), and a curing agent (to turn the gelled foam latex into a solid when baked). A number of additional additives can also be added depending on the required use of the foam. Creation: Dunlop Process The Dunlop process can be performed in batch form and in a continuous form. The following is a description of the batch process. The ingredients for the latex foam, including the choice of liquid latex, compounding agents, and stabilizers, are prepared for use.
Deammoniated liquid latex is mixed with the stabilizer and other ingredients, added either as dispersions or as emulsions depending on their solubility in water. The compound is gently stirred and allowed to mix. Fillers may be added at this point. The compound may be left to mature for 24 hours. A Hobart mixer whips the compound to cause it to foam, incorporating differently sized bubbles into it and allowing it to expand to a desired size. The whipping speed is then reduced, and the bubbles assume a more regular size. A foam stabilizer can be added at this stage. Creation: A gelling agent is added next, and then the compound is poured into a mold, where it is allowed to gel and cure over time. Uniformity is a highly sought-after property commercially, and performing the Dunlop process in a continuous manner rather than in batches helps increase the uniformity of the produced foams. Other advantages of the continuous process are the decreased labor cost and the lowered waste product from the mold. The continuous process uses a machine with different chambers for the creation and foaming of the mixture, the addition of fillers, and molding and curing. Creation: Talalay Process The ingredients for the latex foam, including the choice of liquid latex, compounding agents, and stabilizers, are prepared for use. Deammoniated liquid latex is mixed with the stabilizer and other ingredients, added either as dispersions or as emulsions depending on their solubility in water. The compound is gently stirred and allowed to mix. Fillers may be added at this point. The compound may be left to mature for 24 hours. Through the decomposition of hydrogen peroxide by yeast, bubbles are created which cause the foaming of the compound inside the specialized mold. A vacuum is applied to the mold to promote expansion. The compound is then quickly frozen to fix the air bubbles. Finally, the compound is allowed to cure and is removed from the mold. The use of carbon dioxide in place of a chemical gelling agent makes the process more environmentally friendly, but the Talalay process is still not widely used for specialized latex foams industrially. Properties: Expansion and density In general, latex foams have a lower density than the original polymer they are made of. This density can be measured directly by taking a volume and a mass measurement of the material. For a volume measurement of irregularly shaped foam, the foam pieces can be coated with wax and inserted into a known volume of water to measure the volume change in the container. The purpose of the wax is to prevent water permeation into the foam, which may lead to a lower perceived volume (and a higher perceived density as a result) if not accounted for. The density of a foam decreases as the expansion of the foam increases. Expansion, in turn, relates to the amount of air inside the cells of the foam: the more air inside the cells, the larger the expansion. Properties: Compression Latex foams demonstrate a stress-strain curve with three regions when compressed. This relates to the resistive force exerted by the foam when a load or force is applied to it. The shape of the different regions of the curve reflects important qualities of the foam relating to its compression or relaxation stress and strain behaviour. First, the foam shows a linear Hookean increase in stress. This happens because the gas contained in foam cells is compressed, and the walls of the cells maintain their structure.
In the second region, the cell walls are being crushed, and no additional stress is experienced, so the stress plateaus. In the third region, the foam increases in density as crushed cell-wall material is compressed into itself. This leads to a steep increase in stress in the region of densification. Properties: Resistance to dynamic fatigue Relating to the longevity of the material, the resistance to dynamic fatigue is tested by recursively compressing a foam and allowing it to relax. The resistance of the foam to dynamic fatigue can then be measured either by visually observing the structure of the cells to note what proportion of cell walls has broken or ruptured, or by measuring the change in physical properties like the thickness of the material. Properties: Thermal Conductivity The low thermal conductivity of latex foams is affected by four factors: heat conduction through the polymer, heat conduction through the gas within the air bubbles, convection of the gas inside the cells (less important for small to medium-sized cells), and radiation through the foam. There are several ways conductivity can be reduced through these factors: lowering the temperature to lower heat radiation; decreasing the cell size to decrease convection and radiation (due to more reflections within the cell walls); decreasing the foam density to decrease conduction through the solid polymer; and replacing the air with a less conductive gas inside the cells. Properties: Energy Absorption Energy absorption is a particularly important quality of latex foam. Properties: Most energy absorption occurs in the first and second regions of the stress-strain curve. In less elastomeric polymers, the cell walls are more brittle and therefore can be crushed more easily. In this case, most of the absorption occurs in the second region of the curve, caused by the deformation and crushing of cell walls. This means that each cell can only contribute once to such absorption (that is, cells get crushed and are therefore used up). For a more elastomeric polymer, the cell walls are more flexible and can take more impact. The cell wall in this case may bend and the cell becomes squeezed, but the cell will eventually return to its original shape. Most energy absorption therefore occurs in the first region of the stress-strain graph. The foam can also handle more instances of impact, as the cells do not become depleted as easily; this reusability is also a significant environmental improvement. Classification and Additives: Choice of Polymer Traditional polymer choices Historically, natural rubber latex was used, and foams were produced using the Dunlop process. Styrene-butadiene rubber latex rose to prominence once high-solids concentrates, designed specifically for foaming, began to be sold on the market. The properties of this polymer are fairly similar to those of natural rubber latex, so the competition between the two choices is mostly economic. Classification and Additives: Polymer choices for variation in properties Other kinds of polymers are chosen for how their properties affect the properties of the foam in turn. For example, polychloroprene foam rubber is more difficult to burn and provides a less flammable alternative to traditional latex foam. Acrylonitrile-butadiene latex foam rubber is resistant to swelling in hydrocarbon oils. Classification and Additives: Fillers Structural fillers These are fillers meant to increase the stability and load-bearing capabilities of the foam latex while increasing expansion and therefore lowering the cost of materials.
However, adding fillers also affects the desirable properties of the latex foam, such as by decreasing extension at break and resistance to repeated occurrences of stress and relaxation. Mineral fillers like kaolinite clays and calcium carbonates can be added to the latex foam during the whipping phase (in the batch process) or the mixing phase (in the continuous process). Wet-ground micas can similarly be added into the latex during foaming, and they tend to have a lower impact on tensile strength and extension at break. However, micas tend to cause more shrinkage of the product at the unmolding stage. Classification and Additives: Flame Retardants Since latex foams are a fire hazard, there are efforts to incorporate fillers into the foams to decrease their flammability. Such fillers include chlorinated paraffin hydrocarbons, antimony trioxide, zinc borate, and hydrated aluminum oxide. Classification and Additives: Naturally Sourced Fillers These are materials that improve the structural properties of latex foam while also making it more environmentally friendly through increased biodegradability. A particular interest is using organic waste products to create these fillers. Eggshell powder is an example of such a filler, which can be added into the latex foam to manipulate the properties of the product and increase its environmental friendliness. Similarly to mineral fillers, eggshell powder increases the compression stress, compression set, hardness, and density of the foam while decreasing tensile strength and extension at break. This filler also decreases the thermal stability of the material produced, but adding resin, another possible organic filler, was found to increase the tensile strength of eggshell-powder-filled natural rubber foam. Another proposed filler with similar properties is rice husk powder, which increases the load-bearing properties of the foam while decreasing tensile strength and extension at break. It was also found to increase the biodegradability of the foam, for improved control over post-consumer waste of these products. Applications: Transportation Due to their energy absorption properties, latex foams are useful for transportation applications, such as in packaging to decrease impact on the shipped product or in vehicle upholstery. While packaging foams may be single-use with low resistance to dynamic fatigue, upholstery tends to benefit from being denser and more resistant to fatigue, as it absorbs lower impacts but needs to do so more repeatedly. Applications: Furniture Latex foams can be used in items like bedding, upholstery, and pillows for cushioning purposes, due to the stress-strain behaviour they exhibit under load. Soundproofing Due to the air bubbles they contain, latex foams carry some soundproofing properties. In particular, both natural rubber and styrene-butadiene latex foams are found to be good at soundproofing, but styrene-butadiene foams tend to be better for this purpose. Separation of oil and water Oil pollution in water bodies is a major environmental concern. Separating oil and water is helpful both to clean the water and to recover the oil. Latex foams are hydrophobic and absorbent, in addition to being resilient and recyclable, and can therefore be used to absorb the oil in water-oil mixtures to separate them. Applications: Sports, arts, and recreation Foam latex is used in masks and facial prosthetics to change a person's outward appearance.
The Wizard of Oz was one of the first films to make extensive use of foam latex prosthetics in the 1930s. Theatrical latex foam is a specialized latex foam which is softer than commercial latex foam. It can be used in various arts and crafts including puppetry and costumes because of its ability to pick up small details of painting as well as its strength. Miss Piggy, Statler and Waldorf in Jim Henson's The Muppet Show, as well as characters in Henson's next production, The Dark Crystal, were some of the first puppets created from latex foams used on a large scale. Artists such as Lordi and GWAR wear costumes that include this material. Latex foam is also widespread in the manufacture of modern soccer goalkeeper gloves. The material has proven to be the most effective way of allowing players to grip the football in wet and dry playing conditions, as well as providing damping properties which help in catching. A variety of treatments are applied to latex foam to produce different types of foam with varying properties to assist performance. Some, for example, are designed to offer a high level of grip; whereas others are designed to offer maximum durability.
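The relative-density relation φ = ρf / ρs and the wax-coating displacement measurement described under Properties can be turned into a small worked example. The Python sketch below is illustrative only; the function name and the sample numbers are invented for the demonstration.

```python
def relative_density(mass_g: float, volume_cm3: float, rho_base_g_cm3: float) -> float:
    """phi = rho_foam / rho_base, per the relation quoted in the article."""
    rho_foam = mass_g / volume_cm3      # density of the foam sample
    return rho_foam / rho_base_g_cm3    # relative to the original latex base

# Hypothetical sample: a wax-coated piece displacing 250 cm^3 of water
# and weighing 20 g, from a latex base of density 0.92 g/cm^3.
phi = relative_density(mass_g=20.0, volume_cm3=250.0, rho_base_g_cm3=0.92)
print(f"relative density phi = {phi:.3f}")  # lower phi -> more expanded foam
```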
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spanish Sign Language** Spanish Sign Language: Spanish Sign Language (Spanish: Lengua de Signos Española, LSE) is a sign language used mainly by deaf people in Spain and the people who live with them. Although there are no fully reliable statistics, it is estimated that there are over 100,000 signers, 20 to 30% of whom use it as a second language. Spanish Sign Language: From a strictly linguistic point of view, Spanish Sign Language refers to a sign language variety employed in an extensive central-interior area of the Iberian Peninsula, with Madrid as its cultural and linguistic epicenter, and with other varieties used in regions such as Asturias, Aragon, Murcia, parts of western Andalusia and the area near the Province of Burgos. Mutual intelligibility with the rest of the sign languages used in Spain is generally high owing to a largely shared lexicon. However, Catalan Sign Language, Valencian Sign Language and the Spanish Sign Language dialects used in eastern Andalusia, the Canary Islands, Galicia and the Basque Country are the most lexically distinctive (between 10 and 30% difference in the use of nouns, depending on the case). Only the Catalan and Valencian Sign Languages share less than 75% of their vocabulary with the rest of the Spanish dialects, which makes them particularly marked, distinct dialects or even languages separate from Spanish Sign Language, depending on the method used to distinguish a language from a dialect. Some linguists consider these two and Spanish Sign Language to be three variants of a single polymorphic sign language.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Costovertebral angle** Costovertebral angle: The costovertebral angle (Latin: arcus costovertebralis) is the acute angle formed on either side of the human back between the twelfth rib and the vertebral column. The kidney lies directly below this area, so this is the place where percussion (Latin: succussio renalis) elicits pain when the person has kidney inflammation. The presence of pain is recorded as a positive Murphy's punch sign or as costovertebral angle tenderness.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mermin–Wagner theorem** Mermin–Wagner theorem: In quantum field theory and statistical mechanics, the Mermin–Wagner theorem (also known as the Mermin–Wagner–Hohenberg theorem, Mermin–Wagner–Berezinskii theorem, or Coleman theorem) states that continuous symmetries cannot be spontaneously broken at finite temperature in systems with sufficiently short-range interactions in dimensions d ≤ 2. Intuitively, this means that long-range fluctuations can be created with little energy cost, and since they increase the entropy, they are favored. Mermin–Wagner theorem: This is because if such a spontaneous symmetry breaking occurred, then the corresponding Goldstone bosons, being massless, would have an infrared-divergent correlation function. The absence of spontaneous symmetry breaking in d ≤ 2 dimensional systems was rigorously proved by David Mermin and Herbert Wagner (1966) and Pierre Hohenberg (1967) in statistical mechanics, and by Sidney Coleman (1973) in quantum field theory. That the theorem does not apply to discrete symmetries can be seen in the two-dimensional Ising model. Introduction: Consider the free scalar field $\varphi$ of mass $m$ in two Euclidean dimensions. Its propagator is $G(x)=\langle\varphi(x)\,\varphi(0)\rangle=\int\frac{d^2k}{(2\pi)^2}\,\frac{e^{ik\cdot x}}{k^2+m^2}.$ For small $m$, $G$ is a solution to Laplace's equation with a point source, $\nabla^2 G=\delta(x),$ because the propagator is the reciprocal of $\nabla^2$ in $k$-space. To use Gauss's law, define the electric-field analog to be $\mathbf{E}=\nabla G$. The divergence of this field vanishes away from the source, so by integrating over a large Gaussian ring in two dimensions, $E=\frac{1}{2\pi r},$ and hence $G(r)=\frac{1}{2\pi}\log r,$ so the function $G$ has a logarithmic divergence both at small and at large $r$. Introduction: The interpretation of the divergence is that the field fluctuations cannot stay centred around a mean. If you start at a point where the field has the value 1, the divergence tells you that as you travel far away, the field drifts arbitrarily far from the starting value. This makes a two-dimensional massless scalar field slightly tricky to define mathematically: if you define the field by a Monte Carlo simulation, it doesn't stay put; it slides to infinitely large values with time. Introduction: This happens in one dimension too, when the field is a one-dimensional scalar field: a random walk in time. A random walk also moves arbitrarily far from its starting point, so that a one-dimensional or two-dimensional scalar does not have a well-defined average value. Introduction: If the field is an angle, θ, as it is in the Mexican hat model where the complex field $A=Re^{i\theta}$ has an expectation value but is free to slide in the θ direction, the angle θ will be random at large distances. This is the Mermin–Wagner theorem: there is no spontaneous breaking of a continuous symmetry in two dimensions.
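The drift described above is easy to reproduce numerically. The following minimal sketch (not part of the original article; the step size and sample counts are arbitrary choices) treats the one-dimensional case, where the field value traced over time is a random walk: its root-mean-square displacement grows like the square root of the elapsed time, so the field never settles around a mean.

```python
import math
import random

def rms_displacement(steps, walkers=2000):
    """RMS distance from the origin after `steps` unit steps, over many walkers."""
    total = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))
        total += x * x
    return math.sqrt(total / walkers)

for steps in (100, 400, 1600):
    # RMS displacement ~ sqrt(steps): quadrupling the time doubles the drift,
    # so the "average field value" keeps sliding away, as described above.
    print(steps, round(rms_displacement(steps), 1))
```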
XY model transition: While the Mermin–Wagner theorem prevents any spontaneous symmetry breaking on a global scale, ordering transitions of Kosterlitz–Thouless type may still be allowed. This is the case for the XY model, where the continuous (internal) O(2) symmetry on a spatial lattice of dimension d ≤ 2, i.e. the (spin-)field's expectation value, remains zero for any finite temperature (quantum phase transitions remain unaffected). However, the theorem does not prevent the existence of a phase transition in the sense of a diverging correlation length ξ. To this end, the model has two phases: a conventional disordered phase at high temperature, with dominating exponential decay of the correlation function $\exp(-r/\xi)$ for $r/\xi\gg 1$, and a low-temperature phase with quasi-long-range order, where $G(r)$ decays according to some power law for "sufficiently large" but finite distance r ($a\ll r\ll\xi$, with a the lattice spacing). Heisenberg model: We will present an intuitive way to understand the mechanism that prevents symmetry breaking in low dimensions, through an application to the Heisenberg model, that is, a system of n-component spins $\mathbf{S}_i$ of unit length $|\mathbf{S}_i|=1$, located at the sites of a d-dimensional square lattice with nearest-neighbour coupling J. Its Hamiltonian is $H=-J\sum_{\langle i,j\rangle}\mathbf{S}_i\cdot\mathbf{S}_j.$ Heisenberg model: The name of this model comes from its rotational symmetry. Consider the low-temperature behavior of this system and assume that there exists a spontaneously broken symmetry, that is, a phase where all spins point in the same direction, e.g. along the x-axis. Then the O(n) rotational symmetry of the system is spontaneously broken, or rather reduced to the O(n − 1) symmetry under rotations around this direction. We can parametrize the field in terms of independent fluctuations $\sigma_\alpha$ around this direction as follows: $\mathbf{S}_i=\Big(\sqrt{1-\textstyle\sum_\alpha\sigma_{i\alpha}^2},\;\{\sigma_{i\alpha}\}\Big),$ Heisenberg model: with $|\sigma_\alpha|\ll 1$, and Taylor expand the resulting Hamiltonian. We have $\mathbf{S}_i\cdot\mathbf{S}_j=\sqrt{\Big(1-\sum_\alpha\sigma_{i\alpha}^2\Big)\Big(1-\sum_\alpha\sigma_{j\alpha}^2\Big)}+\sum_\alpha\sigma_{i\alpha}\sigma_{j\alpha}=1-\tfrac12\sum_\alpha\big(\sigma_{i\alpha}^2+\sigma_{j\alpha}^2\big)+\sum_\alpha\sigma_{i\alpha}\sigma_{j\alpha}+\mathcal{O}(\sigma^4)=1-\tfrac12\sum_\alpha\big(\sigma_{i\alpha}-\sigma_{j\alpha}\big)^2+\dots$ whence $H=H_0+\tfrac12 J\sum_{\langle i,j\rangle}\sum_\alpha\big(\sigma_{i\alpha}-\sigma_{j\alpha}\big)^2+\cdots$ Ignoring the irrelevant constant term $H_0=-JNd$ and passing to the continuum limit (given that we are interested in the low-temperature phase where long-wavelength fluctuations dominate), we get $H=\tfrac12 J\int d^dx\sum_\alpha\big(\nabla\sigma_\alpha\big)^2+\dots$ The field fluctuations $\sigma_\alpha$ are called spin waves and can be recognized as Goldstone bosons. Indeed, they are n − 1 in number and they have zero mass, since there is no mass term in the Hamiltonian. Heisenberg model: To find if this hypothetical phase really exists, we have to check if our assumption is self-consistent, that is, if the expectation value of the magnetization, calculated in this framework, is finite as assumed. To this end we need to calculate the first-order correction to the magnetization due to the fluctuations. This is the procedure followed in the derivation of the well-known Ginzburg criterion. Heisenberg model: The model is Gaussian to first order, and so the momentum-space correlation function is proportional to $k^{-2}$. Thus the real-space two-point correlation function for each of these modes is $\langle\sigma_\alpha(r)\,\sigma_\alpha(0)\rangle=\frac{1}{\beta J}\int^{1/a}\frac{d^dk}{(2\pi)^d}\,\frac{e^{ik\cdot r}}{k^2},$ where a is the lattice spacing. The average magnetization is $\langle S^1\rangle=1-\tfrac12\sum_\alpha\langle\sigma_\alpha^2\rangle+\dots$ and the first-order correction can now easily be calculated: $\sum_\alpha\langle\sigma_\alpha^2(0)\rangle=(n-1)\,\frac{1}{\beta J}\int^{1/a}\frac{d^dk}{(2\pi)^d}\,\frac{1}{k^2}.$ Heisenberg model: The integral above is proportional to $\int^{1/a}k^{d-3}\,dk$ and so it is finite for d > 2 but divergent for d ≤ 2 (logarithmically for d = 2). This divergence signifies that the fluctuations $\sigma_\alpha$ are large, so that the expansion in the parameter $|\sigma_\alpha|\ll 1$ performed above is not self-consistent. One can naturally expect then that beyond that approximation the average magnetization is zero. Heisenberg model: We thus conclude that for d ≤ 2 our assumption that there exists a phase of spontaneous magnetization is incorrect for all T > 0, because the fluctuations are strong enough to destroy the spontaneous symmetry breaking. This is a general result, the Mermin–Wagner–Hohenberg theorem: there is no phase with spontaneous breaking of a continuous symmetry for T > 0 in d ≤ 2 dimensions.
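A quick numeric check of this divergence is possible. The sketch below is our own illustration, not from the article: the infrared cutoff 2π/L for a system of linear size L, the ultraviolet cutoff π/a with a = 1, and the integration step count are all illustrative assumptions.

```python
import math

def fluctuation_integral(d, L, a=1.0, n=200000):
    """Midpoint-rule estimate of the spin-wave integral of k**(d-3)
    between an infrared cutoff ~1/L and an ultraviolet cutoff ~1/a."""
    k_min, k_max = 2 * math.pi / L, math.pi / a
    dk = (k_max - k_min) / n
    return sum((k_min + (i + 0.5) * dk) ** (d - 3) for i in range(n)) * dk

for L in (10, 100, 1000):
    print(L, [round(fluctuation_integral(d, L), 2) for d in (1, 2, 3)])
# As L grows, the d=1 value grows linearly and the d=2 value grows like log L,
# while the d=3 value saturates: the correction stays finite only for d > 2.
```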
The result can also be extended to other geometries, such as Heisenberg films with an arbitrary number of layers, as well as to other lattice systems (Hubbard model, s-f model). Generalizations: Much stronger results than the absence of magnetization can actually be proved, and the setting can be substantially more general. In particular: The Hamiltonian can be invariant under the action of an arbitrary compact, connected Lie group G. Generalizations: Long-range interactions can be allowed (provided that they decay fast enough; necessary and sufficient conditions are known). In this general setting, the Mermin–Wagner theorem admits the following strong form (stated here in an informal way): All (infinite-volume) Gibbs states associated to this Hamiltonian are invariant under the action of G. When the assumption that the Lie group be compact is dropped, a similar result holds, but with the conclusion that infinite-volume Gibbs states do not exist. Generalizations: Finally, there are other important applications of these ideas and methods, most notably to the proof that there cannot be non-translation-invariant Gibbs states in two-dimensional systems. A typical such example would be the absence of crystalline states in a system of hard disks (with possibly additional attractive interactions). It has been proved, however, that interactions of hard-core type can in general lead to violations of the Mermin–Wagner theorem. History: As early as 1930, Felix Bloch argued, by diagonalizing the Slater determinant for fermions, that magnetism in 2D should not exist. Some easy arguments, which are summarized below, were given by Rudolf Peierls based on entropic and energetic considerations. Lev Landau also did some work on symmetry breaking in two dimensions. History: Energetic argument One reason for the lack of global symmetry breaking is that one can easily excite long-wavelength fluctuations which destroy perfect order. "Easily excited" means that the energy of those fluctuations tends to zero for large enough systems. Consider a magnetic model (e.g. the XY model) in one dimension: a chain of magnetic moments of length L. In the harmonic approximation, the forces (torques) between neighbouring moments increase linearly with the angle of twisting $\gamma_i$, which implies that the energy due to twisting increases quadratically, $E_i\propto\gamma_i^2$. The total energy is the sum over all twisted pairs of magnetic moments, $E_{\text{tot}}\propto\sum_i\gamma_i^2$. If one considers the excited mode with the lowest energy in one dimension (see figure), then the moments on the chain of length L are tilted by 2π along the chain. The relative angle between neighbouring moments is the same for all pairs of moments in this mode and equals $\gamma_i=2\pi/N$ if the chain consists of N magnetic moments. It follows that the total energy of this lowest mode is $E_{\text{tot}}\propto N\cdot\gamma_i^2=N\,\frac{4\pi^2}{N^2}=\frac{4\pi^2}{N},$ which decreases like 1/L with increasing system size and tends to zero in the thermodynamic limit $L\to\infty$, $N\to\infty$, $L/N=\text{const}$. History: For arbitrarily large systems it follows that the lowest modes cost no energy and will be thermally excited; simultaneously, the long-range order on the chain is destroyed. In two dimensions (a plane), the number of magnetic moments is proportional to the area, $N\propto L^2$. The energy of the lowest excited mode is then $E_{\text{tot}}\propto L^2\,\frac{4\pi^2}{L^2},$ which tends to a constant in the thermodynamic limit; thus the modes will be excited at sufficiently large temperatures. In three dimensions, the number of magnetic moments is proportional to the volume, $V=L^3$, and the energy of the lowest mode is $E_{\text{tot}}\propto L^3\,\frac{4\pi^2}{L^2}\propto L.$ It diverges with system size and will thus not be excited for large enough systems. Long-range order is not affected by this mode, and global symmetry breaking is allowed.
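The scaling in this counting argument can be tabulated directly. In the minimal sketch below (our own illustration; all prefactors and the lattice spacing are set to 1), the softest mode twists by 2π across a sample of linear size L, so each of the N ~ L^d moments contributes an energy ~ (2π/L)²:

```python
import math

def lowest_mode_energy(L, d):
    """Energy scale of the softest spin wave: N * (2*pi/L)**2 with N = L**d
    (prefactors suppressed; lattice spacing set to 1)."""
    return L ** d * (2 * math.pi / L) ** 2

for d in (1, 2, 3):
    print(d, [round(lowest_mode_energy(L, d), 3) for L in (10, 100, 1000)])
# d=1: tends to zero    -> the mode is excited for free and order is destroyed
# d=2: tends to a constant -> excited at sufficiently large temperature
# d=3: diverges with L  -> the mode freezes out and long-range order survives
```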
History: Entropic argument An entropic argument against perfect long-range order in crystals with D < 3 is as follows (see figure): consider a chain of atoms/particles with an average particle distance of ⟨a⟩. Thermal fluctuations between particle 0 and particle 1 will lead to fluctuations of the average particle distance of the order of $\xi_{0,1}$; thus the distance is given by $a=\langle a\rangle\pm\xi_{0,1}$. The fluctuations between particles −1 and 0 will be of the same size: $|\xi_{-1,0}|=|\xi_{0,1}|$. We assume that the thermal fluctuations are statistically independent (which is evident if we consider only nearest-neighbour interaction), so the fluctuations between particle −1 and particle +1 (at double the distance) have to be summed statistically independently (incoherently): $\xi_{-1,1}=\sqrt{2}\,\xi_{0,1}$. For particles at N times the average distance, the fluctuations increase with the square root, $\xi_{0,N}=\sqrt{N}\,\xi_{0,1}$, if neighbouring fluctuations are summed independently. Although the average distance ⟨a⟩ is well defined, the deviations from a perfect periodic chain increase with the square root of the system size. In three dimensions, one has to walk along three linearly independent directions to cover the whole space; in a cubic crystal, this is effectively along the space diagonal, to get from particle 0 to particle 3. As one can easily see in the figure, there are six different ways to do this. This implies that the fluctuations along the six different pathways cannot be statistically independent, since they pass the same particles at positions 0 and 3. The fluctuations of the six different ways now have to be summed coherently and will be of the order of ξ – independent of the size of the cube. The fluctuations stay finite and lattice sites are well defined. For the case of two dimensions, Herbert Wagner and David Mermin proved rigorously that fluctuation distances increase logarithmically with system size, $\xi\propto\ln L$. This is frequently called the logarithmic divergence of displacements. Crystals in 2D: The image shows a (quasi-)two-dimensional crystal of colloidal particles. These are micrometre-sized particles dispersed in water and sedimented on a flat interface, so they can perform Brownian motion only within a plane. The sixfold crystalline order is easy to detect on a local scale, since the logarithmic increase of displacements is rather slow. The deviations from the (red) lattice axis are easy to detect too, shown here as green arrows. The deviations are basically given by the elastic lattice vibrations (acoustic phonons). A direct experimental proof of Mermin–Wagner–Hohenberg fluctuations would be if the displacements increased logarithmically with the distance of a locally fitted coordinate frame (blue). This logarithmic divergence goes along with an algebraic (slow) decay of positional correlations. The spatial order of a 2D crystal is called quasi-long-range (see also the hexatic phase for the phase behaviour of 2D ensembles).
Crystals in 2D: Interestingly, significant signatures of Mermin–Wagner–Hohenberg fluctuations have been found not in crystals but in disordered amorphous systems. This work did not investigate the logarithmic displacements of lattice sites (which are difficult to quantify for a finite system size), but the magnitude of the mean squared displacement of the particles as a function of time. This way, the displacements are analysed not in space but in the time domain. The theoretical background is given by D. Cassi, as well as F. Merkl and H. Wagner. This work analyses the recurrence probability of random walks and spontaneous symmetry breaking in various dimensions. The finite recurrence probability of a random walk in one and two dimensions is dual to the lack of perfect long-range order in one and two dimensions, while the vanishing recurrence probability of a random walk in 3D is dual to the existence of perfect long-range order and the possibility of symmetry breaking. Limits: Real magnets usually do not have a continuous symmetry, since the spin-orbit coupling of the electrons imposes an anisotropy. For atomic systems like graphene, one can show that monolayers of cosmological (or at least continental) size would be necessary to measure a significant amplitude of the fluctuations. A recent discussion of the Mermin–Wagner–Hohenberg theorem and its limitations is given by Bertrand Halperin. Limits: The most severe physical limitation consists of finite-size effects in 2D, because the suppression due to infrared fluctuations is only logarithmic in the size. The sample would have to be larger than the observable universe for a 2D superconducting transition to be suppressed below ~100 K. For magnetism, there is a roughly order-of-magnitude suppression of Tc, which still allows magnetic order in 2D samples at ~10 K. However, because disorder and interlayer coupling compete with finite-size effects at restoring order, it cannot be said a priori which of them is responsible for the observation of magnetic ordering in a given 2D sample. Remarks: The discrepancy between the Mermin–Wagner–Hohenberg theorem (ruling out long-range order in 2D) and the first computer simulations (Alder & Wainwright), which indicated crystallization in 2D, once motivated Michael Kosterlitz and David Thouless to work on topological phase transitions in 2D. This work was awarded the 2016 Nobel Prize in Physics (together with Duncan Haldane).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Geomagnetic pole** Geomagnetic pole: The geomagnetic poles are antipodal points where the axis of a best-fitting dipole intersects the surface of Earth. This theoretical dipole is equivalent to a powerful bar magnet at the center of Earth, and comes closer than any other point-dipole model to describing the magnetic field observed at Earth's surface. In contrast, the magnetic poles of the actual Earth are not antipodal; that is, the line on which they lie does not pass through Earth's center. Geomagnetic pole: Owing to motion of fluid in the Earth's outer core, the actual magnetic poles are constantly moving (secular variation). However, over thousands of years, their direction averages to the Earth's rotation axis. On the order of once every half a million years, the poles reverse (i.e., north switches place with south), although the time frame of this switching can be anywhere from every 10 thousand years to every 50 million years. The poles also swing daily in an oval of around 50 miles (80 km) in diameter, due to the solar wind deflecting the magnetic field. Although the geomagnetic pole is only theoretical and cannot be located directly, it is arguably of more practical relevance than the magnetic (dip) pole. This is because the poles describe a great deal about the Earth's magnetic field, determining for example where auroras can be observed. The dipole model of the Earth's magnetic field consists of the location of the geomagnetic poles and the dipole moment, which describes the strength of the field. Definition: As a first-order approximation, the Earth's magnetic field can be modeled as a simple dipole (like a bar magnet), tilted about 9.6° with respect to the Earth's rotation axis (which defines the Geographic North and Geographic South Poles) and centered at the Earth's center. The North and South Geomagnetic Poles are the antipodal points where the axis of this theoretical dipole intersects the Earth's surface. Thus, unlike the actual magnetic poles, the geomagnetic poles always have an equal degree of latitude and supplementary degrees of longitude (2017: Lat. 80.5°N, 80.5°S; Long. 72.8°W, 107.2°E). If the Earth's magnetic field were a perfect dipole, the field lines would be vertical to the surface at the Geomagnetic Poles, and they would align with the North and South Magnetic Poles, with the North Magnetic Pole at the south end of the dipole. However, the approximation is imperfect, and so the Magnetic and Geomagnetic Poles lie some distance apart. Location: Like the North Magnetic Pole, the North Geomagnetic Pole attracts the north pole of a bar magnet and so is, in a physical sense, actually a magnetic south pole. It is the center of the 'open' magnetic field lines which connect to the interplanetary magnetic field and provide a direct route for the solar wind to reach the ionosphere. As of 2020, it is located at 80.65°N, 72.68°W, on Ellesmere Island, Nunavut, Canada, compared with 2015, when it was located at 80.37°N, 72.62°W, also on Ellesmere Island. The South Geomagnetic Pole is the point where the axis of this best-fitting tilted dipole intersects the Earth's surface in the southern hemisphere. As of 2020, it is located at 80.65°S, 107.32°E, whereas in 2005 it was calculated to be located at 79.74°S, 108.22°E, near Vostok Station.
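Because the two geomagnetic poles are antipodal by construction, each one can be recovered from the other. A small sketch (the function is our own illustration; longitudes are in degrees east, so 72.68°W is −72.68) using the 2020 estimates quoted above:

```python
def antipode(lat, lon):
    """Return the point diametrically opposite (lat, lon), in degrees."""
    anti_lon = lon + 180.0 if lon <= 0 else lon - 180.0
    return -lat, anti_lon

# North Geomagnetic Pole, 2020 estimate: 80.65 N, 72.68 W
print(tuple(round(v, 2) for v in antipode(80.65, -72.68)))
# -> (-80.65, 107.32): the quoted South Geomagnetic Pole location
```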
Because the Earth's actual magnetic field is not an exact dipole, the (calculated) North and South Geomagnetic Poles do not coincide with the North and South Magnetic Poles. If the Earth's magnetic field were exactly dipolar, the north pole of a magnetic compass needle would point directly at the North Geomagnetic Pole. In practice it does not, because the geomagnetic field that originates in the core has a more complex non-dipolar part, and magnetic anomalies in the Earth's crust also contribute to the local field. The locations of the geomagnetic poles are calculated by a statistical fit to measurements of the Earth's field by satellites and in geomagnetic observatories. The fit can use the International Geomagnetic Reference Field (covering a wide time-span in history) or the U.S. World Magnetic Model (covering only a five-year period). Movement: The geomagnetic poles move over time because the geomagnetic field is produced by motion of the molten iron alloys in the Earth's outer core (see geodynamo). Over the past 150 years, the poles have moved westward at a rate of 0.05° to 0.1° per year, and closer to the true poles at 0.01° per year. Over several thousand years, the average location of the geomagnetic poles coincides with the geographical poles. Paleomagnetists have long relied on the geocentric axial dipole (GAD) hypothesis, which states that – aside from during geomagnetic reversals – the time-averaged position of the geomagnetic poles has always coincided with the geographic poles. There is considerable paleomagnetic evidence supporting this hypothesis. Geomagnetic reversal: Over the life of the Earth, the orientation of Earth's magnetic field has reversed many times, with geomagnetic north becoming geomagnetic south and vice versa – an event known as a geomagnetic reversal. Evidence of geomagnetic reversals can be seen at mid-ocean ridges where tectonic plates move apart. As magma seeps out of the mantle and solidifies to become new ocean floor, the magnetic minerals in it are magnetized in the direction of the magnetic field. The study of this remanence is called paleomagnetism. Thus, starting at the most recently formed ocean floor, one can read out the direction of the magnetic field in previous times as one moves farther away to older ocean floor.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aortic orifice** Aortic orifice: The aortic orifice (aortic opening) is a circular opening in front of and to the right of the left atrioventricular orifice, from which it is separated by the anterior cusp of the bicuspid valve. It is guarded by the aortic semilunar valve. The portion of the ventricle immediately below the aortic orifice is termed the aortic vestibule, and has fibrous instead of muscular walls.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RELB** RELB: Transcription factor RelB is a protein that in humans is encoded by the RELB gene. Interactions: RELB has been shown to interact with NFKB2, NFKB1, and C22orf25. Activation and function: In resting cells, RelB is sequestered in the cytoplasm by the NF-κB precursor protein p100. A select set of TNF-R superfamily members, including the lymphotoxin β-receptor (LTβR), BAFF-R, CD40 and RANK, activate the non-canonical NF-κB pathway. In this pathway, NIK stimulates the processing of p100 into p52, which in association with RelB appears in the nucleus as RelB:p52 NF-κB heterodimers. RelB:p52 activates the expression of homeostatic lymphokines, which instruct lymphoid organogenesis and determine the trafficking of naive lymphocytes in the secondary lymphoid organs. Activation and function: Recent studies have suggested that the functional non-canonical NF-κB pathway is modulated by canonical NF-κB signalling. For example, syntheses of the constituents of the non-canonical pathway, viz. RelB and p52, are controlled by canonical IKK2-IκB-RelA:p50 signalling. Moreover, the generation of canonical and non-canonical dimers, viz. RelA:p50 and RelB:p52, within the cellular milieu is mechanistically interlinked. These analyses suggest that an integrated NF-κB system network underlies the activation of both RelA- and RelB-containing dimers, and that a malfunctioning canonical pathway will lead to an aberrant cellular response through the non-canonical pathway as well. Activation and function: Most intriguingly, a recent study identified that TNF-induced canonical signalling subverts non-canonical RelB:p52 activity in inflamed lymphoid tissues, limiting lymphocyte ingress. Mechanistically, TNF inactivated NIK in LTβR-stimulated cells and induced the synthesis of Nfkb2 mRNA encoding p100; together these potently accumulated unprocessed p100, which attenuated RelB activity. A role of p100/Nfkb2 in dictating lymphocyte ingress in inflamed lymphoid tissue may have broad physiological implications.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Laminated fabric** Laminated fabric: A laminated fabric is a two (or more) layer construction with a polymer film bonded to a fabric. Laminated fabrics are used in rainwear, automotive, and other applications. Windstopper is an example of such fabrics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Callout** Callout: In publishing, a callout or call-out is a short string of text connected by a line, arrow, or similar graphic to a feature of an illustration or technical drawing, giving information about that feature. The term is also used to describe a short piece of text set in larger type than the rest of the page and intended to attract attention. Callout: In documents that need to be translated, a neutral callout is often used. By using numbers or letters as callouts in combination with an image caption, translation is more efficient, since the same graphic can be used in all languages. A similar device in word processing is a special text box, with or without a small "tail", that can be pointed to different locations on a document. In the utility industry, a callout is an instruction to report for emergency or special work at an unusual time or place. Arts: In music, call-out hooks are small portions of a song, usually seven to ten seconds of a song's hook, used by radio stations "in market research to assist in gauging the popularity of a song by the recognizability of its hook".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Six phases of a big project** Six phases of a big project: The six phases of a big project is a cynical take on the outcome of large projects, with an unspoken assumption about their seemingly inherent tendency towards chaos. It can be seen as a parody of the traditional process groups in a project lifecycle. The list is reprinted in slightly different variations in any number of project management books as a cautionary tale. Six phases of a big project: One such example gives the phases as: Unbounded enthusiasm, Total disillusionment, Panic, hysteria and overtime, Frantic search for the guilty, Punishment of the innocent, and Reward for the uninvolved.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Box mangle** Box mangle: The box mangle is said to have been invented in the 17th century. It consisted of a heavy frame containing a large box filled with rocks, resting on a series of long wooden rollers. Damp laundry could be laid flat under the rollers, or wound round the rollers, sometimes enclosed in a sheet in order to keep the laundry clean. When the rollers were full, one or two people pulled on levers or turned cranks to move the heavy box back and forth over the rollers. The mangle's primary purpose was to press household linen and clothing smooth. This was a mechanical version of the hand-held mangle boards and rollers/pins used in many parts of northern Europe. Nowadays the word mangle suggests a wringing device for removing water from laundry in some English-speaking countries, but the box mangle was used for pressing and smoothing, and was an alternative to hot ironing for larger items. Flat items, like sheets and tablecloths, usually needed no further ironing. The box mangle was a large and expensive affair and required a fair amount of labor to operate. It was often used by very large households, commercial laundries, or by self-employed mangle women who served their local area. In the 19th century new designs made it easier to operate, and before the middle of the century the upright, space-saving type, with cloth pressed between two rollers, had become familiar. In the late 19th century the commercial steam laundry replaced the box mangle with the steam mangle, turned by steam power.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intraoral camera** Intraoral camera: Intraoral cameras (IOCs) are cameras used by dentists or doctors to show a patient the interior of their mouth, as an alternative to using a mirror. They were first introduced in 1989 and are now widely used in dental offices. IOCs allow the patient to see a clear picture of the inside of their mouth, aiding the dentist in consulting with them on various treatment options. Images can be saved to a patient's file for future reference. Features: The wand form factor is the industry standard: lightweight, compact, and maneuverable in the patient's mouth. Features: Various design options are also available: wireless or corded, with PC-USB, VGA, RCA, or S-Video connectivity; lightweight construction (approx. 0.25 lb / 110 g); LED lighting; fixed or variable focus mechanisms (dial and slide); magnification up to 100×; angle of view of 0° or 90°; a 45° mirror attachment; a periodontal pocket probe attachment with a scale for measurement; an attachment for single-tooth closeups; fingertip image capture or foot switches; SD card storage; and specialized imaging software.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Encyclopedia of Triangle Centers** Encyclopedia of Triangle Centers: The Encyclopedia of Triangle Centers (ETC) is an online list of thousands of points or "centers" associated with the geometry of a triangle. It is maintained by Clark Kimberling, Professor of Mathematics at the University of Evansville. Encyclopedia of Triangle Centers: As of 14 June 2023, the list identifies 54,031 triangle centers. Each point in the list is identified by an index number of the form X(n)—for example, X(1) is the incenter. The information recorded about each point includes its trilinear and barycentric coordinates and its relation to lines joining other identified points. Links to The Geometer's Sketchpad diagrams are provided for key points. The Encyclopedia also includes a glossary of terms and definitions. Encyclopedia of Triangle Centers: Each point in the list is assigned a unique name. In cases where no particular name arises from geometrical or historical considerations, the name of a star is used instead. For example, the 770th point in the list is named point Acamar. Notable points: The first 10 points listed in the Encyclopedia are X(1) the incenter, X(2) the centroid, X(3) the circumcenter, X(4) the orthocenter, X(5) the nine-point center, X(6) the symmedian point, X(7) the Gergonne point, X(8) the Nagel point, X(9) the mittenpunkt, and X(10) the Spieker center; many other notable points also have entries. Similar, albeit shorter, lists exist for quadri-figures (quadrilaterals and systems of four lines) and polygon geometry.
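Since each entry records trilinear coordinates, a point can be placed on a concrete triangle by converting trilinears to barycentric weights. A minimal sketch (our own illustration, not code from ETC): trilinears x : y : z become barycentrics a·x : b·y : c·z, which then weight the vertices.

```python
import math

def trilinear_to_cartesian(tri, A, B, C):
    """Map trilinear coordinates x:y:z to a Cartesian point of triangle ABC."""
    a = math.dist(B, C)  # side lengths opposite vertices A, B, C
    b = math.dist(C, A)
    c = math.dist(A, B)
    x, y, z = tri
    w = (a * x, b * y, c * z)  # barycentric weights
    s = sum(w)
    return (
        (w[0] * A[0] + w[1] * B[0] + w[2] * C[0]) / s,
        (w[0] * A[1] + w[1] * B[1] + w[2] * C[1]) / s,
    )

# X(1), the incenter, has trilinears 1 : 1 : 1. For the 3-4-5 right triangle
# below, the incenter lands at (1, 1), one inradius from both legs.
print(trilinear_to_cartesian((1, 1, 1), (0, 0), (4, 0), (0, 3)))
```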
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radiation stress** Radiation stress: In fluid dynamics, the radiation stress is the depth-integrated – and thereafter phase-averaged – excess momentum flux caused by the presence of the surface gravity waves, which is exerted on the mean flow. The radiation stresses behave as a second-order tensor. The radiation stress tensor describes the additional forcing due to the presence of the waves, which changes the mean depth-integrated horizontal momentum in the fluid layer. As a result, varying radiation stresses induce changes in the mean surface elevation (wave setup) and the mean flow (wave-induced currents). For the mean energy density in the oscillatory part of the fluid motion, the radiation stress tensor is important for its dynamics in the case of an inhomogeneous mean-flow field. The radiation stress tensor, as well as several of its implications for the physics of surface gravity waves and mean flows, was formulated in a series of papers by Longuet-Higgins and Stewart in 1960–1964. Radiation stress derives its name from the analogous effect of radiation pressure for electromagnetic radiation. Physical significance: The radiation stress – the mean excess momentum flux due to the presence of the waves – plays an important role in the explanation and modeling of various coastal processes: Wave setup and setdown – the radiation stress consists in part of a radiation pressure, exerted at the free-surface elevation of the mean flow. If the radiation stress varies spatially, as it does in the surf zone where the wave height is reduced by wave breaking, this results in changes of the mean surface elevation called wave setup (in case of an increased level) and setdown (for a decreased water level); Wave-driven current, especially a longshore current in the surf zone – for oblique incidence of waves on a beach, the reduction in wave height inside the surf zone (by breaking) introduces a variation of the shear-stress component Sxy of the radiation stress over the width of the surf zone. This provides the forcing of a wave-driven longshore current, which is of importance for sediment transport (longshore drift) and the resulting coastal morphology; Bound long waves or forced long waves, part of the infragravity waves – for wave groups the radiation stress varies along the group. As a result, a non-linear long wave propagates together with the group, at the group velocity of the modulated short waves within the group, whereas, according to the dispersion relation, a long wave of this length should propagate at its own – higher – phase velocity. The amplitude of this bound long wave varies with the square of the wave height, and is only significant in shallow water; Wave–current interaction – in varying mean-flow fields, the energy exchanges between the waves and the mean flow, as well as the mean-flow forcing, can be modeled by means of the radiation stress. Definitions and values derived from linear wave theory: One-dimensional wave propagation For uni-directional wave propagation – say in the x-coordinate direction – the component of the radiation stress tensor of dynamical importance is $S_{xx}$. It is defined as: $S_{xx}=\overline{\int_{-h}^{\eta}\left(p+\rho\tilde{u}^2\right)\,dz}-\tfrac12\,\rho g\,(h+\bar{\eta})^2,$ where $p(x,z,t)$ is the fluid pressure, $\tilde{u}(x,z,t)$ is the horizontal x-component of the oscillatory part of the flow velocity vector, $z$ is the vertical coordinate, $t$ is time, $z=-h(x)$ is the bed elevation of the fluid layer, and $z=\eta(x,t)$ is the surface elevation.
Further, $\rho$ is the fluid density and $g$ is the acceleration by gravity, while an overbar denotes phase averaging. The last term on the right-hand side, $\tfrac12\rho g(h+\bar\eta)^2$, is the integral of the hydrostatic pressure over the still-water depth. Definitions and values derived from linear wave theory: To lowest (second) order, the radiation stress $S_{xx}$ for traveling periodic waves can be determined from the properties of surface gravity waves according to Airy wave theory: $S_{xx}=\left(2\,\frac{c_g}{c_p}-\frac12\right)E,$ where $c_p$ is the phase speed and $c_g$ is the group speed of the waves. Further, $E$ is the mean depth-integrated wave energy density (the sum of the kinetic and potential energy) per unit of horizontal area. From the results of Airy wave theory, to second order, the mean energy density $E$ equals: $E=\tfrac12\,\rho g a^2=\tfrac18\,\rho g H^2,$ with $a$ the wave amplitude and $H=2a$ the wave height. Note this equation is for periodic waves: in random waves the root-mean-square wave height $H_\text{rms}$ should be used, with $H_\text{rms}=H_{m0}/\sqrt{2}$, where $H_{m0}$ is the significant wave height. Then $E=\tfrac1{16}\,\rho g H_{m0}^2$. Definitions and values derived from linear wave theory: Two-dimensional wave propagation For wave propagation in two horizontal dimensions the radiation stress $\mathsf{S}$ is a second-order tensor with components: $\mathsf{S}=\begin{pmatrix}S_{xx}&S_{xy}\\S_{yx}&S_{yy}\end{pmatrix}.$ With, in a Cartesian coordinate system $(x,y,z)$: $S_{xx}=\overline{\int_{-h}^{\eta}\left(p+\rho\tilde{u}^2\right)dz}-\tfrac12\,\rho g\,(h+\bar\eta)^2,\qquad S_{xy}=\overline{\int_{-h}^{\eta}\rho\,\tilde{u}\,\tilde{v}\,dz}=S_{yx},\qquad S_{yy}=\overline{\int_{-h}^{\eta}\left(p+\rho\tilde{v}^2\right)dz}-\tfrac12\,\rho g\,(h+\bar\eta)^2,$ where $\tilde{u}$ and $\tilde{v}$ are the horizontal x- and y-components of the oscillatory part $\tilde{\mathbf{u}}(x,y,z,t)$ of the flow velocity vector. Definitions and values derived from linear wave theory: To second order – in wave amplitude $a$ – the components of the radiation stress tensor for progressive periodic waves are: $S_{xx}=\left[\frac{k_x^2}{k^2}\,\frac{c_g}{c_p}+\left(\frac{c_g}{c_p}-\frac12\right)\right]E,\qquad S_{xy}=S_{yx}=\frac{k_x k_y}{k^2}\,\frac{c_g}{c_p}\,E\qquad\text{and}\qquad S_{yy}=\left[\frac{k_y^2}{k^2}\,\frac{c_g}{c_p}+\left(\frac{c_g}{c_p}-\frac12\right)\right]E,$ where $k_x$ and $k_y$ are the x- and y-components of the wavenumber vector $\mathbf{k}$, with length $k=|\mathbf{k}|=\sqrt{k_x^2+k_y^2}$ and the vector $\mathbf{k}$ perpendicular to the wave crests. The phase and group speeds, $c_p$ and $c_g$ respectively, are the lengths of the phase and group velocity vectors: $c_p=|\mathbf{c}_p|$ and $c_g=|\mathbf{c}_g|$. Dynamical significance: The radiation stress tensor is an important quantity in the description of the phase-averaged dynamical interaction between waves and mean flows. Here, the depth-integrated dynamical conservation equations are given, but – in order to model three-dimensional mean flows forced by or interacting with surface waves – a three-dimensional description of the radiation stress over the fluid layer is needed. Dynamical significance: Mass transport velocity Propagating waves induce a – relatively small – mean mass transport in the wave propagation direction, also called the wave (pseudo-)momentum. To lowest order, the wave momentum $\mathbf{M}_w$ is, per unit of horizontal area: $\mathbf{M}_w=\frac{\mathbf{k}}{k}\,\frac{E}{c_p},$ which is exact for progressive waves of permanent form in irrotational flow. Above, $c_p$ is the phase speed relative to the mean flow: $c_p=\frac{\sigma}{k},$ with $\sigma=\omega-\mathbf{k}\cdot\bar{\mathbf{v}}$ the intrinsic angular frequency, as seen by an observer moving with the mean horizontal flow velocity $\bar{\mathbf{v}}$, while $\omega$ is the apparent angular frequency of an observer at rest (with respect to 'Earth'). The difference $\mathbf{k}\cdot\bar{\mathbf{v}}$ is the Doppler shift. The mean horizontal momentum $\mathbf{M}$, also per unit of horizontal area, is the mean value of the integral of momentum over depth: $\mathbf{M}=\overline{\int_{-h}^{\eta}\rho\,\mathbf{v}\,dz}=\rho\,(h+\bar\eta)\,\bar{\mathbf{v}}+\mathbf{M}_w,$ with $\mathbf{v}(x,y,z,t)$ the total flow velocity at any point below the free surface $z=\eta(x,y,t)$. The mean horizontal momentum $\mathbf{M}$ is also the mean of the depth-integrated horizontal mass flux, and consists of two contributions: one by the mean current and the other ($\mathbf{M}_w$) due to the waves.
Dynamical significance: Now the mass transport velocity $\bar{\mathbf{u}}$ is defined as: $\bar{\mathbf{u}}=\frac{\mathbf{M}}{\rho\,(h+\bar\eta)}=\bar{\mathbf{v}}+\frac{\mathbf{M}_w}{\rho\,(h+\bar\eta)}.$ Observe that first the depth-integrated horizontal momentum is averaged, before the division by the mean water depth $(h+\bar\eta)$ is made. Mass and momentum conservation Vector notation The equation of mean mass conservation is, in vector notation: $\frac{\partial}{\partial t}\left[\rho\,(h+\bar\eta)\right]+\nabla\cdot\left[\rho\,(h+\bar\eta)\,\bar{\mathbf{u}}\right]=0,$ with $\bar{\mathbf{u}}$ including the contribution of the wave momentum $\mathbf{M}_w$. Dynamical significance: The equation for the conservation of horizontal mean momentum is: $\frac{\partial}{\partial t}\left[\rho\,(h+\bar\eta)\,\bar{\mathbf{u}}\right]+\nabla\cdot\left[\rho\,(h+\bar\eta)\,\bar{\mathbf{u}}\otimes\bar{\mathbf{u}}+\mathsf{S}+\tfrac12\rho g\,(h+\bar\eta)^2\,\mathsf{I}\right]=\rho g\,(h+\bar\eta)\,\nabla h+\boldsymbol{\tau}_w-\boldsymbol{\tau}_b,$ where $\bar{\mathbf{u}}\otimes\bar{\mathbf{u}}$ denotes the tensor product of $\bar{\mathbf{u}}$ with itself, and $\boldsymbol{\tau}_w$ is the mean wind shear stress at the free surface, while $\boldsymbol{\tau}_b$ is the bed shear stress. Further, $\mathsf{I}$ is the identity tensor, with components given by the Kronecker delta $\delta_{ij}$. Note that the right-hand side of the momentum equation provides the non-conservative contributions of the bed slope $\nabla h$, as well as the forcing by the wind and the bed friction. Dynamical significance: In terms of the horizontal momentum $\mathbf{M}$ the above equations become: $\frac{\partial}{\partial t}\left[\rho\,(h+\bar\eta)\right]+\nabla\cdot\mathbf{M}=0,\qquad \frac{\partial\mathbf{M}}{\partial t}+\nabla\cdot\left[\bar{\mathbf{u}}\otimes\mathbf{M}+\mathsf{S}+\tfrac12\rho g\,(h+\bar\eta)^2\,\mathsf{I}\right]=\rho g\,(h+\bar\eta)\,\nabla h+\boldsymbol{\tau}_w-\boldsymbol{\tau}_b.$ Component form in Cartesian coordinates In a Cartesian coordinate system, the mass conservation equation becomes: $\frac{\partial}{\partial t}\left[\rho\,(h+\bar\eta)\right]+\frac{\partial}{\partial x}\left[\rho\,(h+\bar\eta)\,\bar{u}_x\right]+\frac{\partial}{\partial y}\left[\rho\,(h+\bar\eta)\,\bar{u}_y\right]=0,$ with $\bar{u}_x$ and $\bar{u}_y$ respectively the x- and y-components of the mass transport velocity $\bar{\mathbf{u}}$. The horizontal momentum equations are: $\frac{\partial}{\partial t}\left[\rho\,(h+\bar\eta)\,\bar{u}_x\right]+\frac{\partial}{\partial x}\left[\rho\,(h+\bar\eta)\,\bar{u}_x\bar{u}_x+S_{xx}+\tfrac12\rho g\,(h+\bar\eta)^2\right]+\frac{\partial}{\partial y}\left[\rho\,(h+\bar\eta)\,\bar{u}_x\bar{u}_y+S_{xy}\right]=\rho g\,(h+\bar\eta)\,\frac{\partial h}{\partial x}+\tau_{w,x}-\tau_{b,x},$ and $\frac{\partial}{\partial t}\left[\rho\,(h+\bar\eta)\,\bar{u}_y\right]+\frac{\partial}{\partial x}\left[\rho\,(h+\bar\eta)\,\bar{u}_y\bar{u}_x+S_{yx}\right]+\frac{\partial}{\partial y}\left[\rho\,(h+\bar\eta)\,\bar{u}_y\bar{u}_y+S_{yy}+\tfrac12\rho g\,(h+\bar\eta)^2\right]=\rho g\,(h+\bar\eta)\,\frac{\partial h}{\partial y}+\tau_{w,y}-\tau_{b,y}.$ Dynamical significance: Energy conservation For an inviscid flow the mean mechanical energy of the total flow – that is, the sum of the energy of the mean flow and the fluctuating motion – is conserved. However, the mean energy of the fluctuating motion itself is not conserved, nor is the energy of the mean flow. The mean energy $E$ of the fluctuating motion (the sum of the kinetic and potential energies) satisfies: $\frac{\partial E}{\partial t}+\nabla\cdot\left[\left(\bar{\mathbf{u}}+\mathbf{c}_g\right)E\right]+\mathsf{S}:\left(\nabla\otimes\bar{\mathbf{u}}\right)=\boldsymbol{\tau}_w\cdot\bar{\mathbf{u}}-\boldsymbol{\tau}_b\cdot\bar{\mathbf{u}}-\varepsilon,$ where ":" denotes the double-dot product and $\varepsilon$ denotes the dissipation of mean mechanical energy (for instance by wave breaking). The term $\mathsf{S}:\left(\nabla\otimes\bar{\mathbf{u}}\right)$ is the exchange of energy with the mean motion, due to wave–current interaction. The mean horizontal wave-energy transport $\left(\bar{\mathbf{u}}+\mathbf{c}_g\right)E$ consists of two contributions: $\bar{\mathbf{u}}\,E$, the transport of wave energy by the mean flow, and $\mathbf{c}_g\,E$, the mean energy transport by the waves themselves, with the group velocity $\mathbf{c}_g$ as the wave-energy transport velocity. In a Cartesian coordinate system, the above equation for the mean energy $E$ of the flow fluctuations becomes: $\frac{\partial E}{\partial t}+\frac{\partial}{\partial x}\left[\left(\bar{u}_x+c_{g,x}\right)E\right]+\frac{\partial}{\partial y}\left[\left(\bar{u}_y+c_{g,y}\right)E\right]+S_{xx}\,\frac{\partial\bar{u}_x}{\partial x}+S_{xy}\left(\frac{\partial\bar{u}_y}{\partial x}+\frac{\partial\bar{u}_x}{\partial y}\right)+S_{yy}\,\frac{\partial\bar{u}_y}{\partial y}=\left(\tau_{w,x}-\tau_{b,x}\right)\bar{u}_x+\left(\tau_{w,y}-\tau_{b,y}\right)\bar{u}_y-\varepsilon.$ Dynamical significance: So the radiation stress changes the wave energy $E$ only in the case of a spatially inhomogeneous current field $(\bar{u}_x,\bar{u}_y)$.
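For a concrete number, the one-dimensional value $S_{xx}=(2\,c_g/c_p-\tfrac12)E$ can be evaluated directly from Airy theory. The sketch below is our own illustration (the wave height, period, depth, and the damped fixed-point solver for the dispersion relation $\omega^2=gk\tanh(kh)$ are all assumptions, not part of the article):

```python
import math

def wavenumber(T, h, g=9.81):
    """Solve the Airy dispersion relation omega**2 = g*k*tanh(k*h) for k,
    using a damped fixed-point iteration started from the deep-water guess."""
    omega = 2.0 * math.pi / T
    k = omega ** 2 / g
    for _ in range(200):
        k = 0.5 * (k + omega ** 2 / (g * math.tanh(k * h)))
    return k

def radiation_stress_xx(H, T, h, rho=1025.0, g=9.81):
    """S_xx = (2*cg/cp - 1/2) * E with E = rho*g*H**2/8 (linear theory)."""
    k = wavenumber(T, h, g)
    cg_over_cp = 0.5 * (1.0 + 2.0 * k * h / math.sinh(2.0 * k * h))
    E = rho * g * H ** 2 / 8.0
    return (2.0 * cg_over_cp - 0.5) * E

# Illustrative input: 1 m high, 8 s waves in 5 m of water.
print(round(radiation_stress_xx(1.0, 8.0, 5.0)), "N/m")
```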
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lyotropy** Lyotropy: Lyotropy (a portmanteau of lyo- "dissolve" and -tropic "change") refers to concentration-dependent physical effects in solutions, and often more specifically to ion-specific behavior in aqueous solutions. History: Ions in aqueous solutions display ion-specific behavior that has commonly been exemplified by the Hofmeister series. Stemming from observations by Franz Hofmeister in the 1870s with egg white lysozyme, lyotropic effects led to a classification of ions by their abilities to salt in or salt out proteins. Because of the positive charge of lysozyme, the original series turned out to be different from the series for most proteins. Thus, the series can change depending on the protein in solution and the concentrations of the ions in solution. Lyotropy, like the Hofmeister series, classifies ions and their abilities to salt in or salt out proteins. History: In 1936, Voet investigated lyotropic behavior to quantify the effects of salt action on molecules and predict the behavior using mathematical models. Using agar and gelatin, he formulated an equation to predict the salting-out action of different ions for other colloids. Lyotropic numbers, Nlyo, based on this work appear to be related to the charge density of the ions. Lyotropic activity also influences the swelling of gels, surface tension, the rate of saponification processes, the viscosity of salt solutions, and heats of hydration. Ion pairing equilibria in sea water: The current understanding of ion-pairing equilibria in an aqueous environment can also be traced to the Eigen-Tamm model, which introduced the use of two equilibrium states for ion pairs: the contact ion pair (CIP) and the separated ion pair (SIP). As an early application of ion-pairing equilibria, Kester and Pytkowicz studied the role of sulfate and divalent cation ion-pairing in seawater. This ion-specific behavior was also elucidated through Collins' law of matching water affinities, which describes the strength of ion-pairing in terms of ion size and counterion, while also incorporating coordination state and entropy. Modern computational approaches to the simulation of ion-pairing involve molecular dynamics simulations and ab initio calculations that often incorporate polarizable continuum solvent models. Implications: Following the law of matching water affinities, chaotrope-chaotrope and kosmotrope-kosmotrope pairs prefer the CIP state, whereas chaotrope-kosmotrope pairs prefer SIP or unpaired states. Another important lyotropic effect is the pairing of ions to charged headgroups of biological molecules. Vlachy et al. proposed that from chaotropic to kosmotropic headgroups, the ordering follows carboxylate, sulfate, and sulfonate groups. In this context, a sodium ion (Na+) will prefer a carboxylate, and a potassium ion (K+) will prefer a sulfonate, which has important partitioning effects in biological systems. Protein solubility depends on pH and salt concentration, where small changes in the local environment can lead to Hofmeister series reversals. In aqueous solutions of glycans, lyotropic ion-pairing effects often dominate molecular interactions by controlling salt-bridge binding. Modern computational approaches to salt-bridge formation in proteins demonstrate mechanisms underlying the favorable arginine-arginine pairing, which is due to a reduction in electrostatic repulsion from pi-stacking interactions. In carbohydrates, electrostatics and ion pairing are the dominant mechanisms for molecular interactions.
Electrolyotropy: Electrolyotropy incorporates Donnan-potential spatial gradients and ion-specific pairing, and is used to determine the distribution of the ions and the electric potential by modeling charges as being either fixed or free. A canonical example is a surface-tethered polyelectrolyte brush with a variety of different fixed charged groups interacting with free ions and ion pairs to minimize the Gibbs free energy. Using streaming-current measurements in a microslit electrokinetic system (MES), electrolyotropic theory can be used to determine stoichiometric dissociation constants of ion pairing. This approach proves useful in characterizing complex polyelectrolytes and mixtures of ions in solutions like those found in biological systems. In fact, pH and salt concentrations directly affect stoichiometric dissociation constants. Application of electrolyotropic theory has been proposed as a model of mucosal tissue.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CING (biomolecular NMR structure)** CING (biomolecular NMR structure): In biomolecular structure, CING stands for the Common Interface for NMR structure Generation and is known for structure and NMR data validation. NMR spectroscopy provides diverse data on the solution structure of biomolecules. CING combines many external programs and internalized algorithms to direct an author of a new structure, or a biochemist interested in an existing structure, to regions of the molecule that might be problematic in relation to the experimental data. CING (biomolecular NMR structure): The source code is maintained open to the public at Google Code. A secure web interface, iCing, is available for new data. Applications: 9000+ validation reports for existing Protein Data Bank structures in NRG-CING. CING has been applied to automatic predictions in the CASD-NMR experiment, with results available at CASD-NMR. Validated NMR data: protein or nucleic acid structure (together called biomolecular structure); chemical shift (Nuclear Overhauser effect); distance restraint; dihedral angle restraint; RDC or residual dipolar coupling restraint; NMR (cross-)peak. Software: The following software is used internally or externally by CING: 3DNA, Collaborative Computing Project for NMR, CYANA, DSSP, MOLMOL, Matplotlib, NMRPipe, PROCHECK/Aqua, POV-Ray, ShiftX, TALOS+, WHAT_CHECK, Wattos, XPLOR-NIH, and Yasara. Algorithms: salt bridge, disulfide bridge, outlier. Funding: The NRG-CING project was supported by the European Community grants 213010 (eNMR) and 261572 (WeNMR).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Effluent** Effluent: Effluent is wastewater from sewers or industrial outfalls that flows directly into surface waters, either untreated or after being treated at a facility. The term has slightly different meanings in certain contexts, and effluent may contain various pollutants depending on the source. Definition: Effluent is defined by the United States Environmental Protection Agency (EPA) as "wastewater–treated or untreated–that flows out of a treatment plant, sewer, or industrial outfall. Generally refers to wastes discharged into surface waters". The Compact Oxford English Dictionary defines effluent as "liquid waste or sewage discharged into a river or the sea". Wastewater is not usually described as effluent while being recycled, re-used, or treated until it is released to surface water. Wastewater percolated or injected into groundwater may not be described as effluent if the soil is assumed to perform treatment by filtration or ion exchange, although concealed flow through fractured bedrock, lava tubes, limestone caves, or gravel in ancient stream channels may allow relatively untreated wastewater to emerge as springs. Description: Effluent in the artificial sense is in general considered to be water pollution, such as the outflow from a sewage treatment facility or an industrial wastewater discharge. An effluent sump pump, for instance, pumps waste from toilets installed below a main sewage line. In the context of wastewater treatment plants, effluent that has been treated is sometimes called secondary effluent, or treated effluent. This cleaner effluent is then used to feed the bacteria in biofilters. In the context of a thermal power station and other industrial facilities, the output of the cooling system may be referred to as the effluent cooling water, which is noticeably warmer than the environment and is called thermal pollution. In chemical engineering practice, effluent is the stream exiting a chemical reactor. Effluent may carry pollutants such as fats, oils and greases; solvents, detergents and other chemicals; heavy metals; other solids; and food waste. Possible sources include a wide range of manufacturing industries, mining industries, oil and gas extraction, and service industries. Treatment: There are several kinds of wastewater, each treated at the appropriate type of treatment plant. Domestic wastewater (also called municipal wastewater or sewage) is processed at a sewage treatment plant. For industrial wastewater, treatment either takes place in a separate industrial wastewater treatment facility or in a sewage treatment plant (usually after some form of pre-treatment). Other types of wastewater treatment plants include agricultural wastewater treatment and leachate treatment plants. Treatment: Treating wastewater efficiently is challenging, but improved technology allows for enhanced removal of specific materials, increased re-use of water, and energy production from waste. Pollution control regulation: United States effluent guidelines In the United States, the Clean Water Act requires all direct effluent discharges to surface waters to be regulated with permits under the National Pollutant Discharge Elimination System (NPDES). Indirect dischargers–facilities which send their wastewater to municipal sewage treatment plants–may be subject to pretreatment requirements.
NPDES permits require discharging facilities to limit or treat effluent to the levels that result from using the most effective treatment technologies available at a practical cost, to mitigate the effects of discharges on the receiving waters. EPA has published technology-based regulations, called "effluent guidelines", for 59 industrial categories. The agency reviews the standards annually, conducts research on various categories, and makes revisions as appropriate. Noncompliance with these standards and all other conditions in the permits is punishable by law. Each year, effluent guidelines regulations prevent billions of pounds of contaminants from being released into bodies of water. EPA regulations require effluent limitations to be expressed as mass-based limits (rather than concentration-based limits) in the permits, so that discharging facilities will not use dilution as a substitute for treatment. In cases where setting mass-based limits is infeasible, the permit authority must set conditions in the permit that prohibit dilution. Pollution control regulation: United States sewage treatment standards The U.S. "Secondary Treatment Regulation" is the national standard for municipal sewage treatment plants.
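The rationale for mass-based limits is simple arithmetic: the pollutant load is concentration times flow, so diluting a discharge lowers the concentration but leaves the load unchanged. A small sketch (the example figures are illustrative, not regulatory values; the constant 8.34 converts mg/L times million US gallons per day to pounds per day):

```python
def load_lbs_per_day(conc_mg_per_l, flow_mgd):
    """Pollutant load: concentration (mg/L) * flow (million gal/day) * 8.34."""
    return conc_mg_per_l * flow_mgd * 8.34

print(round(load_lbs_per_day(30.0, 2.0), 1))  # 30 mg/L at 2 MGD -> 500.4 lb/day
print(round(load_lbs_per_day(15.0, 4.0), 1))  # half the concentration at twice
                                              # the flow -> the same 500.4 lb/day
```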
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sodium cobalt oxide** Sodium cobalt oxide: Sodium cobalt oxide, also called sodium cobaltate, is any of a range of compounds of sodium, cobalt, and oxygen with the general formula NaxCoO2 for 0 < x ≤ 1. The name is also used for hydrated forms of those compounds, NaxCoO2·yH2O. Sodium cobalt oxide: The anhydrous compound was first synthesized in the 1970s. It conducts like a metal and has exceptional thermoelectric properties (for 0.5 ≤ x ≤ 0.75), combining a large Seebeck coefficient with low resistivity, as discovered in 1997 by Ichiro Terasaki's research group. A hydrate form was found to be superconducting below 5 K. The compound, and its manganese analog, could be a cheaper alternative to the analogous lithium compounds. Structure: Like other alkali-cobalt oxides, sodium cobaltate has a layer structure. Layers of monovalent sodium cations (Na+) alternate with two-dimensional anionic sheets of cobalt and oxygen atoms. Each cobalt atom is bound to six oxygen atoms forming an octahedron, with two faces parallel to the layer plane. The octahedra share edges, resulting in a layer of cobalt atoms sandwiched between two layers of oxygen atoms, all three forming regular, roughly planar triangular lattices. The structure is reminiscent of cuprate superconductors, except that the copper atom arrangement in the latter is a square lattice. The cobalt atoms have formal oxidation state 4−x. Namely, the fully reduced compound NaCoO2 can be interpreted as Na+·Co3+·(O2−)2. As the compound is oxidized, sodium cations exit the structure and the cobalt formally approaches the Co4+ state. Structure: For x above 0.5, the sodium ions adopt many different arrangements in which the Na ions occupy two inequivalent Wyckoff sites, 2b and 2d, of the space group P63/mmc. In galvanostatic experiments, the arrangements transition at specific values of x as the sodium content is electrolytically varied. The diffusion rate of the ions, plotted as a function of x, shows sharp dips (from about 10^−7 to 10^−10 cm²/s at ambient temperature) at values of x that correspond to particular regular arrangements, namely 1/3, 1/2, and 5/7. Smaller and broader dips are observed around some other simple ratios, like 5/9. For x = 0.8, at 100 K the vacancies in the sodium layer are arranged in clusters of three. The clusters are arranged in stripes, with a fixed offset between the clusters in adjacent stripes. In those conditions, the diffusion rate of the sodium atoms is minimal. At about 290 K, the structure becomes partially disordered, with the offset between adjacent stripes becoming random, creating channels that allow quasi-one-dimensional diffusion of the sodium ions. The sodium lattice "melts" at about 370 K, allowing two-dimensional diffusion. As x increases, the conductivity along the main crystal planes increases, until about x = 0.85, and is roughly independent of x thereafter. The temperature dependence at those higher concentrations has metallic character. The thermopower S increases with x up to 0.97, but drops for higher x. For each composition, as a function of temperature it increases rapidly until about 130 K, and then decreases gradually. The figure of merit Z = S²/(ρκ) (where ρ is the in-plane resistivity and κ is the thermal conductivity) is maximum for x about 0.89, at about 65 K.
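For readers unfamiliar with the quantity, here is a short hedged illustration of the thermoelectric figure of merit Z = S²/(ρκ) quoted above; the inputs are placeholder values, not measured data for NaxCoO2:

```python
def figure_of_merit(seebeck, resistivity, kappa):
    """Z = S**2 / (rho * kappa): S in V/K, rho in ohm*m, kappa in W/(m*K)."""
    return seebeck ** 2 / (resistivity * kappa)

# Placeholder inputs (NOT measured NaxCoO2 data):
# S = 100 microvolt/K, rho = 2e-6 ohm*m, kappa = 5 W/(m*K)
Z = figure_of_merit(100e-6, 2e-6, 5.0)
print(round(Z, 6), "per K; dimensionless ZT at 300 K:", round(Z * 300, 2))
```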
Preparation: The fully reduced compound NaCoO2 can be prepared by dissolving stoichiometric amounts of sodium acetate C2H3O2Na and cobalt tartrate C4H4O6Co in ethanol with a gelling agent, drying and calcining the resulting gel, and annealing it at 650 °C. The compound Na0.5CoO2 (or NaCo2O4) can be obtained in the form of platelets up to 6 mm wide from metallic cobalt powder, by treatment with molten sodium chloride and sodium hydroxide at 550 °C. The compound NaxCoO2 with x around 0.8 can be obtained by treating a mixture of sodium carbonate Na2CO3 and cobalt(II,III) oxide Co3O4 at 850–1050 °C. Single crystals of Na0.8CoO2 can be grown by the optical floating-zone technique. Higher values of x can be obtained by immersing thermally grown crystals of Na0.71CoO2 in a hot solution prepared from sodium metal and benzophenone in tetrahydrofuran for several days at 100 °C.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Outswinger** Outswinger: An outswinger is a type of delivery of the ball in the sport of cricket. In such a delivery the ball curves, or "swings", out and away from the batter's body and the wicket. By contrast, an inswinger swings in toward the batter and the wicket. Outswingers are bowled by swing bowlers. The term has also been adopted into football commentary, in which context it describes a cross of the ball kicked so that it curves outward from and across the face of the goal, rather than on goal. Method: An outswinger is bowled by holding the cricket ball with the seam at an angle and the first two fingers running along either side of the seam. The ball must be released at the 12 o'clock position. The hands should move slightly towards the left at follow-through and must push down for more back-spin. Once the ball has worn on one side, the shiny side should face the leg side and the seam should point towards first or second slip for swing. The difference in pressure caused by the movement of air over the rough and smooth surfaces tends to push the ball to the left. The result is that the ball curves, or swings, to the left. In the case of a new ball, point the seam in the direction of the intended swing. Because the angled seam separates the airflow unevenly, the ball moves away from the batter. Advantages: From a right-handed batter's point of view, the swing is away from his body towards his right, i.e. towards the off side. This swing away from the body is the source of the name outswinger. To a left-handed batter, the swing is in towards the body and towards the leg side, which from a technical point of view makes the outswinger effectively an inswinger. Advantages: Outswingers may be considered to be one of the more difficult fast deliveries for a right-handed batter to play. This is because the ball moves away from his body. This means that any miscalculation can result in an outside edge off the bat and a catch going to the wicket-keeper or slips fielders. Tactical use: To a right-handed batter, a fast bowler will generally concentrate on bowling repeated outswingers, aiming to tempt the batter to play away from his body and get him out in one of the ways described above. However, sometimes a fast bowler may attempt to deceive the batter by bowling an off cutter instead of a standard outswinger, and look to get a batter out either bowled or lbw. More commonly, variation is in the length of the ball, with yorkers and bouncers. An effective delivery length is one that places the ball approaching the top of the stumps; usually between two-thirds and three-quarters up the length of the pitch.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Computer-aided auscultation** Computer-aided auscultation: Computer-aided auscultation (CAA), or computerized assisted auscultation, is a digital form of auscultation. It includes the recording, visualization, storage, analysis and sharing of digital recordings of heart or lung sounds. The recordings are obtained using an electronic stethoscope or similarly suitable recording device. Computer-aided auscultation: Computer-aided auscultation is designed to assist health care professionals who perform auscultation as part of their diagnostic process. Commercial CAA products are usually classified as clinical decision support systems that support medical professionals in making a diagnosis. As such they are medical devices and require certification or approval from a competent authority (e.g. FDA approval, CE conformity issued by a notified body). Benefits of CAA: Compared to traditional auscultation, computer-aided auscultation (CAA) offers a range of improvements beneficial to multiple stakeholders: CAA can yield more accurate and objective results and is likely to outperform the auscultation skills and subjective interpretation of humans. With the use of CAA, auscultation is no longer a method reserved for specialists and physicians. For instance, nurses and paramedics can easily be instructed to use CAA systems correctly on their patients. CAA opens up new opportunities for telemedicine. Real-time tele-auscultation can help specialists located anywhere in the world to diagnose rare conditions observed in patients in developing countries or remote areas. CAA opens up new opportunities for health monitoring and health management. CAA allows analysis findings to be documented electronically. The results can be stored and retrieved as needed and possibly included in electronic patient records. Standardized auscultation data derived from CAA can help national payers and providers implement more efficient and cost-effective screening programs. CAA can be used for teaching and training purposes with medical and nursing students. Functional principle: In a CAA system, sounds are recorded through an electronic stethoscope. The audio data is transferred to an electronic device via Bluetooth or an audio cable connection. Special software on that device visualizes, stores and analyzes the data. With some of the more sophisticated CAA systems, the CAA analysis yields results that can be used to objectify diagnoses (decision support system). Components in a CAA system: The components of a CAA system depend on its complexity. Whereas some of the simpler systems provide only visualization or storage options, other systems combine visualization, storage, analysis and the ability to electronically manage said data. Components in a CAA system: Electronic stethoscope Electronic stethoscopes (also digital stethoscopes) convert acoustic sound waves into digital electrical signals. These signals are then amplified by means of transducers and currently reach levels up to 100 times higher than those of traditional acoustic stethoscopes. Additionally, electronic stethoscopes can be used to filter out background noise, a feature that can be safety-relevant and facilitate more accurate diagnoses. Whereas sound amplification and filtering are the main functions of an electronic stethoscope, the ability to access the sounds through external means via Bluetooth or audio cables makes them ideal sound-capturing devices for CAA systems.
Components in a CAA system: Device running Graphical User Interface Devices that can be used to connect to an electronic stethoscope and record the audio signal (e.g. heart or lung sounds) include PCs, laptops and mobile devices like smartphones or tablets. Generally, CAA systems include software that can visualize the incoming audio signal. More sophisticated CAA systems include live noise detection algorithms, designed to help the user achieve the best possible recording quality. Components in a CAA system: Analysis software A key feature of CAA systems is the automated analysis of the recorded audio signals by signal processing algorithms. Such algorithms can run directly on the device used for making the recording, or be hosted in a cloud connected to the device. The degree of autonomy of currently available analysis algorithms varies greatly. While some systems operate fully autonomously, early PC-based systems required significant user interaction and interpretation of results, and other analysis systems require some degree of assistance by the user like manual confirmation/correction of estimated heart rates. Components in a CAA system: Storage of auscultation based data Recorded sounds and associated analytical and patient data can be electronically stored, managed or archived. Patient-identifying information might be handled or stored in the process. If the stored data qualifies as PHI (protected health information), a system hosting such data must be compliant with country-specific data protection laws like HIPAA for the US or the Data Protection Directive for the EU. Storage options for current CAA systems range from the basic ability to retrieve a downloadable PDF report to a comprehensive cloud-based interface for electronic management of all auscultation-based data. Components in a CAA system: Cloud-based user interface The user can review all their patient records (including replaying the audio files) via a user interface, e.g. via a web-portal in the browser or stand-alone software on the electronic device. Other functionalities include sharing records with other users, exporting patient records and integration into EHR systems. CAA of the heart: Computer-aided auscultation aimed at detecting and characterizing heart murmurs is called computer-aided heart auscultation (also known as automatic heart sound analysis). Motivation Auscultation of the heart using a stethoscope is the standard examination method worldwide to screen for heart defects by identifying murmurs. It requires that an examining physician have acute hearing and extensive experience. An accurate diagnosis remains challenging for various reasons including noise, high heart rates, and the ability to distinguish innocent from pathological murmurs. Properly performed, the auscultatory examination of the heart is commonly regarded as an inexpensive, widely available tool in the detection and management of heart disease. The auscultation skills of physicians, however, have been reported to be declining. CAA of the heart: This leads to missed disease diagnoses and/or excessive costs for unnecessary and expensive diagnostic testing. A study suggests that more than one third of previously undiagnosed congenital heart defects in newborns are missed by their 6-week examination. More than 60% of referrals to medical specialists for costly echocardiography are due to a misdiagnosis of an innocent murmur.
CAA of the heart thus has the potential to become a cost-effective screening and diagnostic tool, provided that its underlying algorithms have been clinically tested in a stringent, blinded fashion for their ability to detect the difference between normal and abnormal heart sounds. CAA of the heart: Heart murmurs and CAA Heart murmurs (or cardiac murmurs) are audible noises through a stethoscope, generated by a turbulent flow of blood. Heart murmurs need to be distinguished from heart sounds, which are primarily generated by the beating heart and the heart valves snapping open and shut. Generally, heart murmurs are classified as innocent (also called physiological or functional) or pathological (abnormal). Innocent murmurs are usually harmless, often caused by physiological conditions outside the heart, and the result of certain benign structural defects. Pathological murmurs are most often associated with heart valve problems but may also be caused by a wide array of structural heart defects. Various characteristics constitute a qualitative description of heart murmurs, including timing (systolic murmur and diastolic murmur), shape, location, radiation, intensity, pitch and quality. CAA systems typically categorize heart sounds and murmurs as Class I and Class III according to the American Heart Association: Class I: pathological murmur Class III: innocent murmur or no murmur. More sophisticated CAA systems provide additional descriptive murmur information like murmur timing, grading, or the ability to identify the positions of the S1/S2 heart sounds. Heart sound analysis The detection of heart murmurs in CAA systems is based on the analysis of digitally recorded heart sounds. CAA of the heart: Most approaches use the following four stages: Heart rate detection: In the first stage, the heart rate is determined based on the audio signal of the heart. It is a crucial step for the following stages and high accuracy is required. Automated heart rate determination based on acoustic recordings is challenging because the heart rate can range from 40 to 200 bpm, noise and murmurs can camouflage the peaks of the heart sounds (S1 and S2), and irregular heartbeats can disturb the quasi-periodic nature of the heartbeat. CAA of the heart: Heart sound segmentation: After the heart rate has been detected, the two main phases of the heartbeat (systole and diastole) are identified. This differentiation is important since most murmurs occur in specific phases during the heartbeat. External noise from the environment or internal noise from the patient (e.g. breathing) makes heart sound segmentation challenging. Feature extraction: Having identified the phases of the heartbeat, information (features) from the heart sound is extracted that enters a further classification stage. Features can range from simple energy-based approaches to higher-order multi-dimensional quantities. Feature classification: During classification, the features extracted in the previous stage are used to classify the signal and assess the presence and type of a murmur. The main challenge is to differentiate no-murmur recordings from low-grade innocent murmurs, and innocent murmurs from pathological murmurs. Usually machine-learning approaches are applied to construct a classifier based on training data. Clinical evidence of CAA systems The most common types of performance measures for CAA systems are based on two approaches: retrospective (non-blinded) studies using existing data and prospective blinded clinical studies on new patients.
In retrospective CAA studies, a classifier is trained with machine learning algorithms using existing data. The performance of the classifier is then assessed using the same data. Different approaches are used to do this (e.g., k-fold cross-validation, leave-one-out cross-validation). CAA of the heart: The main shortcoming of judging the quality (sensitivity, specificity) of a CAA system based on retrospective performance data alone comes from the risk that the approaches used can overestimate the true performance of a given system. Using the same data for training and validation can itself lead to significant overfitting of the validation set, because most classifiers can be designed to analyse known data very well, but might not be general enough to correctly classify unknown data; i.e. the results look much better than they would if tested on new, unseen patients. “The true performance of a selected network (CAA system) should be confirmed by measuring its performance on a third independent set of data called a test set”. In summary, the reliability of retrospective, non-blinded studies is usually considered to be much lower than that of prospective clinical studies because they are prone to selection bias and retrospective bias. Published examples include Pretorius et al. CAA of the heart: Prospective clinical studies, on the other hand, are better suited to assess the true performance of a CAA system (provided that the study is blinded and well controlled). In a prospective clinical study to evaluate the performance of a CAA system, the output of the CAA system is compared to the gold standard diagnoses. In the case of heart murmurs, a suitable gold standard diagnosis would be auscultation-based expert physician diagnosis, stratified by an echocardiogram-based diagnosis. Published examples include Lai et al.
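The following is a deliberately simplified Python sketch of the four-stage analysis pipeline described above (heart rate detection, segmentation, feature extraction, classification). The autocorrelation rate estimate, equal-length segmentation, single energy feature and threshold classifier are stand-ins chosen for brevity; commercial CAA systems use far more robust, clinically validated algorithms.

```python
# Simplified sketch of a CAA heart-sound analysis pipeline (illustrative only).
import numpy as np

def detect_heart_rate(signal: np.ndarray, fs: int) -> float:
    """Stage 1: estimate heart rate (bpm) from the envelope autocorrelation."""
    env = np.abs(signal)
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # Search lags corresponding to 40-200 bpm, the range cited above.
    lo, hi = int(fs * 60 / 200), int(fs * 60 / 40)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fs / lag

def segment_beats(signal: np.ndarray, fs: int, bpm: float) -> list:
    """Stage 2: naive segmentation into beats of equal length."""
    beat_len = int(fs * 60 / bpm)
    return [signal[i:i + beat_len]
            for i in range(0, len(signal) - beat_len, beat_len)]

def extract_features(beat: np.ndarray) -> np.ndarray:
    """Stage 3: a single energy-based feature, the simplest case cited above."""
    return np.array([float(np.mean(beat ** 2))])

def classify(features: np.ndarray, threshold: float = 0.01) -> str:
    """Stage 4: toy threshold rule; real systems train ML classifiers."""
    return "Class I (pathological)" if features[0] > threshold else "Class III"
```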
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Computational psychometrics** Computational psychometrics: Computational Psychometrics is an interdisciplinary field fusing theory-based psychometrics, learning and cognitive sciences, and data-driven AI-based computational models as applied to large-scale/high-dimensional learning, assessment, biometric, or psychological data. Computational psychometrics is frequently concerned with providing actionable and meaningful feedback to individuals based on measurement and analysis of individual differences as they pertain to specific areas of enquiry. Computational psychometrics: The relatively recent availability of large-scale psychometric data in accessible formats, alongside the rapid increase in CPU processing power, widespread accessibility and application of cluster and cloud computing, and the development of increasingly sensitive instruments for collecting biometric information has allowed big-data analytical and computational methods to expand the scale and scope of traditional psychometric areas of enquiry and modeling. Pursuing a computational approach to psychometrics often involves scientists working in multidisciplinary teams with expertise in artificial intelligence, machine learning, deep learning and neural network modeling, natural language processing, mathematics and statistics, developmental and cognitive psychology, computer science, data science, learning sciences, virtual and augmented reality, and traditional psychometrics. Another important subfield of computational science and, specifically, AI is what has been called psychometric artificial intelligence (PAI). PAI involves the use of psychometrically developed evaluations, such as intelligence tests and thinking style tests, to be solved algorithmically by an artificial agent. The goal of PAI is to put to the test the design and processing mechanisms proposed by AI researchers in order to get knowledge from both artificial and natural cognitive systems. Application: Computational psychometrics incorporates both theoretical and applied components ranging from item response theory, classical test theory, and Bayesian approaches to modeling knowledge acquisition and discovery of network psychometric models. Computational psychometrics studies the computational basis of learning and measurement of traits, such as skills, knowledge, abilities, attitudes, and personality traits via mathematical modeling, intelligent learning and assessment virtual systems, and computer simulation of large-scale, complex data which traditional psychometric approaches are ill-equipped to handle. Recent investigations into these hard to measure constructs include work on collaborative problem solving, teamwork, and decision making, among others. Application: Computational psychometrics is also related to the study of social complexity. Concepts such as complex systems and emergence have been considered in the study of team assembly and performance. In psychological and medical research it is focused on computational models based on technology enhanced-experimental results. Active areas of enquiry include cognitive, emotional, behavioral, diagnostic, and mental health issues. A computational psychometrics approach in this capacity frequently makes use of emerging capabilities such as biometric and multimodal sensors, virtual and augmented reality, as well as affective and wearable computing technologies.
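As one concrete point of contact between traditional psychometrics and computation, the sketch below implements the two-parameter logistic (2PL) item response model, a standard item response theory building block; the item parameters are invented for illustration.

```python
# Minimal 2PL item response theory sketch: probability that a respondent
# with latent ability theta answers an item correctly. Parameters below
# are illustrative, not estimates from real data.
import math

def p_correct(theta: float, discrimination: float, difficulty: float) -> float:
    """2PL model: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# An item of medium difficulty (b = 0) and good discrimination (a = 1.5):
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(p_correct(theta, discrimination=1.5, difficulty=0.0), 3))
```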
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Line driver** Line driver: A line driver is an electronic amplifier circuit designed for driving a load such as a transmission line. The amplifier's output impedance may be matched to the characteristic impedance of the transmission line. Line drivers are commonly used within digital systems, e.g. to communicate digital signals across circuit-board traces and cables. In analog audio, a line driver is typically used to drive line-level analog signal outputs, for example to connect a CD player to an amplified speaker system.
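To make the impedance-matching point concrete, the sketch below computes the reflection coefficient Γ = (Z_L − Z_0)/(Z_L + Z_0) at the end of a transmission line; a matched load (Γ = 0) absorbs the driver's signal without reflections. The impedance values are illustrative.

```python
# Illustrative: why a line driver's output impedance is matched to the line.
# Reflection coefficient at a termination: gamma = (Z_load - Z0) / (Z_load + Z0).

def reflection_coefficient(z_load: float, z0: float) -> float:
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(50.0, 50.0))  # 0.0  -> matched, no reflection
print(reflection_coefficient(1e6, 50.0))   # ~1.0 -> open line, full reflection
```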
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Balloon clock** Balloon clock: A balloon clock is a bracket clock with a waisted or balloon-shaped case. It was popular in England from the late 18th to the early 19th century. The balloon clock is believed to derive from French styles; such clocks are usually of satinwood or mahogany with a convex or flat dial.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The Conspiracy Zone** The Conspiracy Zone: The Conspiracy Zone is an American discussion program about conspiracy theories with a group of panelists, a mix of experts and celebrities. It was a half hour in length and ran for 26 episodes, though there was also an unaired pilot episode. The show was hosted by former Saturday Night Live player and comedian Kevin Nealon and was shown on The New TNN, debuting January 2002. Celebrity panelists included Ann Coulter, Harlan Ellison, Kathy Griffin, Cathy Scott and French Stewart, among others.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Deadlock** Deadlock: In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization. In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process remains indefinitely unable to change its state because resources requested by it are being used by another process that itself is waiting, then the system is said to be in a deadlock. In a communications system, deadlocks occur mainly due to loss or corruption of signals rather than contention for resources. Individually necessary and jointly sufficient conditions for deadlock: A deadlock situation on a resource can arise only if all of the following conditions occur simultaneously in a system: Mutual exclusion: At least one resource must be held in a non-shareable mode; that is, only one process at a time can use the resource. Otherwise, the processes would not be prevented from using the resource when necessary. Only one process can use the resource at any given instant of time. Individually necessary and jointly sufficient conditions for deadlock: Hold and wait or resource holding: a process is currently holding at least one resource and requesting additional resources which are being held by other processes. No preemption: a resource can be released only voluntarily by the process holding it. Individually necessary and jointly sufficient conditions for deadlock: Circular wait: each process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, …, PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for a resource held by P1. These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr. While these conditions are sufficient to produce a deadlock on single-instance resource systems, they only indicate the possibility of deadlock on systems having multiple instances of resources. Deadlock handling: Most current operating systems cannot prevent deadlocks. When a deadlock occurs, different operating systems respond to it in different non-standard manners. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one. Major approaches are as follows. Ignoring deadlock In this approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm. This approach was initially used by MINIX and UNIX. This is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable. Ignoring deadlocks can be safely done if deadlocks are formally proven to never occur. An example is the RTIC framework. Deadlock handling: Detection Under deadlock detection, deadlocks are allowed to occur.
Then the state of the system is examined to detect that a deadlock has occurred, and subsequently it is corrected. An algorithm is employed that tracks resource allocation and process states, and rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible since the resources that each process has locked and/or currently requested are known to the resource scheduler of the operating system. After a deadlock is detected, it can be corrected by using one of the following methods: Process termination: one or more processes involved in the deadlock may be aborted. One could choose to abort all competing processes involved in the deadlock. This ensures that deadlock is resolved with certainty and speed. But the expense is high as partial computations will be lost. Or, one could choose to abort one process at a time until the deadlock is resolved. This approach has a high overhead because after each abort an algorithm must determine whether the system is still in deadlock. Several factors must be considered while choosing a candidate for termination, such as priority and age of the process. Deadlock handling: Resource preemption: resources allocated to various processes may be successively preempted and allocated to other processes until the deadlock is broken. Prevention Deadlock prevention works by preventing one of the four Coffman conditions from occurring. Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, the deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms. Deadlock handling: The hold and wait or resource holding conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). This advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to request resources only when they hold none: first, they must release all their currently held resources before requesting all the resources they will need from scratch. This too is often impractical. It is so because resources may be allocated and remain unused for long periods. Also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation. (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.) The no preemption condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, the inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control. If a process holding some resources requests another resource that cannot be immediately allocated to it, the hold and wait condition may be removed by requiring the process to release all the resources it currently holds. Deadlock handling: The final condition is the circular wait condition.
Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources. If no obvious hierarchy exists, even the memory addresses of resources have been used to determine ordering, and resources are requested in increasing order of the enumeration. Dijkstra's solution can also be used. Deadlock handling: Deadlock avoidance Similar to deadlock prevention, the deadlock avoidance approach ensures that deadlock will not occur in a system. The term "deadlock avoidance" appears to be very close to "deadlock prevention" in a linguistic context, but they are very much different in the context of deadlock handling. Deadlock avoidance does not impose any conditions as seen in prevention; instead, each resource request is carefully analyzed to see whether it could be safely fulfilled without causing deadlock. Deadlock handling: Deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. A deadlock avoidance algorithm analyzes every request, checking that there is no possibility of deadlock occurring in the future if the requested resource is allocated. The drawback of this approach is its requirement of information in advance about how resources are to be requested in the future. One of the most widely used deadlock avoidance algorithms is the Banker's algorithm. Livelock: A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing. Livelock: The term was coined by Edward A. Ashcroft in a 1975 paper in connection with an examination of airline booking systems. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing. Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen arbitrarily or by priority) takes action. Distributed deadlock: Distributed deadlocks can occur in distributed systems when distributed transactions or concurrency control is being used. Distributed deadlocks can be detected either by constructing a global wait-for graph from local wait-for graphs at a deadlock detector or by a distributed algorithm like edge chasing. Phantom deadlocks are deadlocks that are falsely detected in a distributed system due to system internal delays but do not actually exist. For example, if a process releases a resource R1 and issues a request for R2, and the first message is lost or delayed, a coordinator (detector of deadlocks) could falsely conclude a deadlock (if the request for R2 while having R1 would cause a deadlock).
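The sketch below illustrates, in Python, both the circular-wait hazard and its prevention by the resource-ordering approach described above; the lock names and the use of id() as a total order are illustrative choices, not a canonical implementation.

```python
# Sketch of the circular-wait condition and its prevention by resource
# ordering: acquiring locks in a fixed global order makes the "circular
# wait" Coffman condition impossible.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

# Deadlock-prone: two threads acquire the same locks in opposite orders.
# If each thread gets its first lock before the other's second request,
# both wait forever (mutual exclusion + hold and wait + no preemption
# + circular wait).
def worker_bad_1():
    with lock_a:
        with lock_b:
            pass

def worker_bad_2():
    with lock_b:        # opposite order: can form a wait cycle with worker_bad_1
        with lock_a:
            pass

# Prevention: impose a total order on locks (here, by id()) and always
# acquire in that order, so no cycle of waiting threads can form.
def acquire_in_order(*locks):
    for lock in sorted(locks, key=id):
        lock.acquire()
    return locks

def release_all(locks):
    for lock in locks:
        lock.release()

def worker_safe():
    locks = acquire_in_order(lock_a, lock_b)
    try:
        pass  # critical section
    finally:
        release_all(locks)
```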
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Indian Journal of Medical Microbiology** Indian Journal of Medical Microbiology: The Indian Journal of Medical Microbiology is a peer-reviewed open-access medical journal published by Medknow Publications on behalf of the Indian Association of Medical Microbiology. The journal publishes articles on medical microbiology including bacteriology, virology, phycology, mycology, parasitology, and protozoology. Abstracting and indexing: The journal is indexed in Abstracts on Hygiene and Communicable Diseases, Bioline International, CAB Abstracts, CINAHL, CSA databases, EBSCO, Excerpta Medica/EMBASE, Expanded Academic ASAP, Global Health, Health & Wellness Research Center, Health Reference Center Academic, IndMed, MedInd, MEDLINE/Index Medicus, Science Citation Index Expanded, Scopus, SIIC databases, Tropical Diseases Bulletin, and Ulrich's Periodicals Directory.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Statistical parsing** Statistical parsing: Statistical parsing is a group of parsing methods within natural language processing. The methods have in common that they associate grammar rules with a probability. Grammar rules are traditionally viewed in computational linguistics as defining the valid sentences in a language. Within this mindset, the idea of associating each rule with a probability then provides the relative frequency of any given grammar rule and, by deduction, the probability of a complete parse for a sentence. (The probability associated with a grammar rule may be induced, but the application of that grammar rule within a parse tree and the computation of the probability of the parse tree based on its component rules is a form of deduction.) Using this concept, statistical parsers search over the space of all candidate parses, computing each candidate's probability, to derive the most probable parse of a sentence. The Viterbi algorithm is one popular method of searching for the most probable parse. Statistical parsing: "Search" in this context is an application of search algorithms in artificial intelligence. Statistical parsing: As an example, think about the sentence "The can can hold water". A reader would instantly see that there is an object called "the can" and that this object is performing the action 'can' (i.e. is able to); and the thing the object is able to do is "hold"; and the thing the object is able to hold is "water". Using more linguistic terminology, "The can" is a noun phrase composed of a determiner followed by a noun, and "can hold water" is a verb phrase which is itself composed of a verb followed by a verb phrase. But is this the only interpretation of the sentence? Certainly "The can can" is a perfectly valid noun phrase referring to a type of dance, and "hold water" is also a valid verb phrase, although the coerced meaning of the combined sentence is non-obvious. This lack of meaning is not seen as a problem by most linguists (for a discussion on this point, see Colorless green ideas sleep furiously) but from a pragmatic point of view it is desirable to obtain the first interpretation rather than the second, and statistical parsers achieve this by ranking the interpretations based on their probability. Statistical parsing: (In this example various assumptions about the grammar have been made, such as a simple left-to-right derivation rather than head-driven, its use of noun phrases rather than the currently fashionable determiner phrases, and no type-check preventing a concrete noun being combined with an abstract verb phrase. None of these assumptions affects the thesis of the argument and a comparable argument can be made using any other grammatical formalism.) There are a number of methods that statistical parsing algorithms frequently use. While few algorithms will use all of these, they give a good overview of the general field. Most statistical parsing algorithms are based on a modified form of chart parsing. The modifications are necessary to support an extremely large number of grammatical rules and therefore search space, and essentially involve applying classical artificial intelligence algorithms to the traditionally exhaustive search. Examples of such optimisations include searching only a likely subset of the search space (stack search), optimising the search probability (Baum–Welch algorithm), and discarding parses that are too similar to be treated separately (Viterbi algorithm).
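To make the idea of ranking interpretations by probability concrete, here is a minimal probabilistic CKY sketch over a toy grammar for the example sentence. All rules, categories and probabilities are invented for illustration; a real statistical parser induces them from a treebank.

```python
# A minimal probabilistic CKY sketch (illustrative, not a real parser).
# The toy grammar deliberately licenses both readings of the example
# sentence; the rule probabilities make the ordinary reading win.
from collections import defaultdict

# Binary rules in Chomsky normal form: (left, right) -> [(head, prob), ...]
RULES = {
    ("NP", "VP"): [("S", 1.0)],
    ("Det", "N"): [("NP", 0.8)],
    ("NP", "N"): [("NP", 0.2)],   # "the can can" as a (dance) noun phrase
    ("M", "VP"): [("VP", 0.3)],   # modal + verb phrase: "can hold water"
    ("V", "NP"): [("VP", 0.7)],   # "hold water"
}
LEXICON = {
    "the": [("Det", 1.0)],
    "can": [("N", 0.4), ("M", 0.5), ("V", 0.1)],
    "hold": [("V", 1.0)],
    "water": [("N", 0.5), ("NP", 0.5)],
}

def cky(words):
    n = len(words)
    # chart[(i, j)] maps a nonterminal to its best probability over words[i:j]
    chart = defaultdict(dict)
    for i, w in enumerate(words):
        for cat, p in LEXICON[w]:
            chart[(i, i + 1)][cat] = max(chart[(i, i + 1)].get(cat, 0.0), p)
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lcat, lp in chart[(i, k)].items():
                    for rcat, rp in chart[(k, j)].items():
                        for head, pr in RULES.get((lcat, rcat), []):
                            p = lp * rp * pr
                            if p > chart[(i, j)].get(head, 0.0):
                                chart[(i, j)][head] = p
    return chart[(0, n)].get("S", 0.0)  # best-parse probability of the sentence

print(cky("the can can hold water".split()))
```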
Notable people in statistical parsing: Eugene Charniak, author of Statistical Techniques for Natural Language Parsing amongst many other contributions; Fred Jelinek, who applied and developed numerous techniques from information theory to build the field; David Magerman, a major contributor to turning the field from theoretical to practical by managing data; James Curran, known for applying the MaxEnt algorithm, work on word representation, and other contributions; Michael Collins (computational linguist), who built the first very high performance statistical parser; and Joshua Goodman, known for hypergraphs and other generalizations between different methods.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neutron magnetic imaging** Neutron magnetic imaging: Neutrons are spin 1/2 particles that interact with magnetic induction fields via the Zeeman interaction. This interaction is both rather large and simple to describe. Several neutron scattering techniques have been developed to use thermal neutrons to characterize magnetic micro and nanostructures. Polarized small-angle neutron scattering (SANS): Small-angle neutron scattering is a technique which is especially suited for the study of nanoparticles. It has for example been used extensively for the study of ferrofluids. More recently, polarized SANS has become more widely available and a wide range of studies have been performed. Polarized SANS allows one either to probe the internal structure of magnetic nanoparticles, via the measurement of the magnetic form factor, or the magnetic interactions between magnetic nanoparticles, via the structure factor. In a few cases, polarized grazing incidence SANS was performed on magnetic systems. A few polarized neutron SANS spectrometers are available across the world: D33 at the Institut Laue-Langevin (ILL) in Grenoble, France PA20 at CEA Laboratoire Léon Brillouin (LLB) in Saclay, France SANS-I and KWS-1 and KWS-2 at the Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) in Garching, Germany V4 at Helmholtz Zentrum Berlin Polarized neutron reflectometry: Polarized neutron reflectometry allows probing magnetic thin films and ultra-thin films. The polarized reflectivity measurements allow measuring the magnitude and directions of the magnetic induction in magnetic heterostructures with a depth resolution on the order of 2-3 nm for films with thicknesses ranging from 5 to 100 nm. A number of polarized neutron reflectometers are available across the world: Platypus at ANSTO in Sydney, Australia C5 spectrometer at NRC Canada Chalk River Labs in Chalk River, Canada. Polarized neutron reflectometry: D3 reflectometer at NRC Canada Chalk River Labs in Chalk River, Canada. Polarized neutron reflectometry: D17, SuperADAM at the Institut Laue-Langevin (ILL) in Grenoble, France PRISM (alternate) at CEA Laboratoire Léon Brillouin (LLB) in Saclay, France N-REX+, MIRA, TREFF@NoSpec and MARIA at the Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) in Garching, Germany REFLEX Archived 2008-04-04 at the Wayback Machine and REMUR at Joint Institute for Nuclear Research IBR-2 in Dubna, Russia AMOR Archived 2007-07-14 at the Wayback Machine at the Paul Scherrer Institute (PSI) in Villigen, Switzerland SURF, CRISP, INTER, Offspec and polREF at the ISIS neutron source (ISIS) in Oxfordshire, United Kingdom NG1, NG7 at the NIST Center for Neutron Research (NCNR) in Gaithersburg, Maryland, United States Magnetism at the Spallation Neutron Source (ORNL) in Oak Ridge, Tennessee, United States. A catalogue of neutron reflectometers is available at www.reflectometry.net. Polarized Neutron Radiography and Tomography: Precession techniques The neutron precession in an induction field is expressed as dM/dt = γ_n M × B(r), where M is the neutron magnetic moment, B is the local magnetic induction at the neutron position r(t) and γ_n = g_n μ_N/ħ is the neutron gyromagnetic ratio. For neutrons, the gyromagnetic ratio is 1.83×10^8 rad·s^−1·T^−1 (29.2 MHz·T^−1); note that for neutrons the g factor is negative and equal to −3.83. Polarized Neutron Radiography and Tomography: Bulk systems Neutron radiography can be used to map the distribution of an induction field B(r) in space.
In order to perform such experiments, the neutron beam is initially polarized, it interacts with the induction field of interest, and the neutron precession is measured with a neutron analyzer in front of the 2D detector. The beam can be polarized either with supermirrors or with polarized 3He gas. Thin film structures The neutron precession acquired in an induction field is rather small; thus, in the case of thin films (~1 µm thick), the neutron interaction is rather small. In order to obtain a measurable signal, it has been proposed that a grazing incidence geometry could be used. In such a geometry, the interaction is enhanced since the neutron travels a longer path inside the induction field. Such measurements however assume that the planar structure of the system is homogeneous and that the induction varies only through the depth of the magnetic film. The magnetisation depth profile was measured in thick CoZr films in which the magnetic anisotropy field was "engineered" during deposition. A very thorough description of the measurement process can be found in. Polarized Neutron Radiography and Tomography: Phase imaging Phase contrast (or dark field) imaging has recently been developed for neutron radiography and tomography. It has been applied to visualize magnetic domains in several types of systems: soft magnetic alloys, magnetic vortices in low Tc superconductors, and superconductors. Scanning magnetic neutron imaging Magnetic neutron radiography is currently limited in spatial resolution due to the need to analyze the neutron polarization, which results in losses in spatial resolution. It has been proposed that neutron scanning imaging could be performed by using microbeams. It is however only possible to produce one-dimensional microbeams due to the intrinsic limitation in neutron flux. Hence this technique can presently be applied only to one-dimensional problems.
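As a small numerical illustration of the precession equation above, the sketch below integrates dM/dt = γ_n M × B for a polarized neutron in a static field; the field strength and flight time are invented values, not parameters of any experiment cited here.

```python
# Minimal sketch: integrating the neutron spin precession dM/dt = gamma_n M x B
# with a simple explicit scheme. Field and time values are illustrative.
import numpy as np

GAMMA_N = -1.832e8  # neutron gyromagnetic ratio, rad s^-1 T^-1 (negative g factor)

def precess(M, B, t_total, dt=1e-9):
    """Evolve the polarization vector M in a static field B for time t_total."""
    for _ in range(int(t_total / dt)):
        M = M + dt * GAMMA_N * np.cross(M, B)
        M /= np.linalg.norm(M)  # renormalize: |M| is conserved by the exact dynamics
    return M

# A neutron polarized along x, precessing in a 1 mT field along z:
M0 = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1e-3])
print(precess(M0, B, t_total=5e-6))  # accumulated angle ~ |GAMMA_N| * B * t
```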
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ZIP codes in the Philippines** ZIP codes in the Philippines: In the Philippines, a ZIP code is used by the Philippine Postal Corporation (PhlPost) to simplify the distribution of mail. While its function is similar to that of the ZIP Codes used in the United States, its form and usage are quite different. Its use is not mandatory but is highly recommended by PhlPost. A ZIP code is composed of a four-digit number representing a locality. Usually, more than one code is issued for areas within Metro Manila, and a single code for each municipality and city in provincial areas, with some rare exceptions such as Dasmariñas in Cavite, which has three ZIP codes (4114, 4115, and 4126), Los Baños in Laguna, which has two ZIP codes (4030 and 4031 for the University of the Philippines Los Baños), and Angeles City, which has two ZIP codes (2009 and 2024 for Barangay Balibago).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mesoporous silica** Mesoporous silica: Mesoporous silica is a form of silica that is characterised by its mesoporous structure, that is, having pores that range from 2 nm to 50 nm in diameter. According to IUPAC's terminology, mesoporosity sits between microporous (<2 nm) and macroporous (>50 nm). Mesoporous silica is a relatively recent development in nanotechnology. The most common types of mesoporous nanoparticles are MCM-41 and SBA-15. Research continues on the particles, which have applications in catalysis, drug delivery and imaging. Mesoporous ordered silica films have also been obtained with different pore topologies. A compound producing mesoporous silica was patented around 1970. It went almost unnoticed and was reproduced in 1997. Mesoporous silica nanoparticles (MSNs) were independently synthesized in 1990 by researchers in Japan. They were later also produced at Mobil Corporation laboratories and named Mobil Composition of Matter (or Mobil Crystalline Materials, MCM). Six years later, silica nanoparticles with much larger (4.6 to 30 nanometer) pores were produced at the University of California, Santa Barbara. The material was named Santa Barbara Amorphous type material, or SBA-15. These particles also have a hexagonal array of pores. Mesoporous silica: The researchers who invented these types of particles planned to use them as molecular sieves. Today, mesoporous silica nanoparticles have many applications in medicine, biosensors, thermal energy storage, water/gas filtration and imaging. Synthesis: Mesoporous silica nanoparticles are synthesized by reacting tetraethyl orthosilicate (TEOS) with a template made of micellar rods. The result is a collection of nano-sized spheres or rods that are filled with a regular arrangement of pores. The template can then be removed by washing with a solvent adjusted to the proper pH. Mesoporous particles can also be synthesized using a simple sol-gel method such as the Stöber process, or a spray drying method. Tetraethyl orthosilicate is also used with an additional polymer monomer (as a template). Synthesis: However, TEOS is not the most effective precursor for synthesizing such particles; a better precursor is (3-Mercaptopropyl)trimethoxysilane, often abbreviated to MPTMS. Use of this precursor drastically reduces the chance of aggregation and ensures more uniform spheres. Drug delivery: The large surface area of the pores allows the particles to be filled with a drug or a cytotoxin. Like a Trojan Horse, the particles will be taken up by certain biological cells through endocytosis, depending on what chemicals are attached to the outside of the spheres. Some types of cancer cells will take up more of the particles than healthy cells will, giving researchers hope that MCM-41 will one day be used to treat certain types of cancer. Ordered mesoporous silica (e.g. SBA-15, TUD-1, HMM-33, and FSM-16) also shows potential to boost the in vitro and in vivo dissolution of poorly water-soluble drugs. Many drug candidates coming from drug discovery suffer from poor water solubility. An insufficient dissolution of these hydrophobic drugs in the gastrointestinal fluids strongly limits the oral bioavailability. One example is itraconazole, an antimycotic known for its poor aqueous solubility. Upon introduction of an itraconazole-on-SBA-15 formulation in simulated gastrointestinal fluids, a supersaturated solution is obtained, giving rise to enhanced transepithelial intestinal transport.
The efficient uptake of SBA-15-formulated itraconazole into the systemic circulation has also been demonstrated in vivo (rabbits and dogs). This approach based on SBA-15 yields stable formulations and can be used for a wide variety of poorly water-soluble compounds. Biosensors: The structure of these particles allows them to be filled with a fluorescent dye that would normally be unable to pass through cell walls. The MSN material is then capped off with a molecule that is compatible with the target cells. When the MSNs are added to a cell culture, they carry the dye across the cell membrane. These particles are optically transparent, so the dye can be seen through the silica walls. The dye in the particles does not have the same problem with self-quenching that a dye in solution has. The types of molecules that are grafted to the outside of the MSNs will control what kinds of biomolecules are allowed inside the particles to interact with the dye.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dülfersitz** Dülfersitz: The Dülfersitz (named after its inventor, mountaineer Hans Dülfer), also known as body rappel is a classical, or non-mechanical abseiling technique, used in rock climbing and mountaineering. It is not used frequently any more, since the introduction of belay devices. In the Dülfersitz, the rope is wound around the body, and the speed of descent is controlled using the friction of the rope against the body. Dülfersitz: The advantages of the Dülfersitz are that one can descend without a climbing harness or belay device, and because the rope is not kinked or subjected to concentrated forces, it does not experience as much wear. The major disadvantage of this method is that intense heat is generated by the friction on the shoulder, neck and thigh, which can be painful, and can damage clothing. Abseiling by means of the Dülfersitz: The doubled rope is passed between the legs The rope is passed behind one thigh Crossing the chest, the rope is taken to the opposite shoulder From the shoulder, the rope is passed diagonally across the back to the braking hand (the hand on the same side as the thigh around which the rope has been passed) The rope is placed under load The free hand is held forward, maintaining the balance The braking hand controls the movement of the rope: to allow the rope to move, the braking hand moves backwards; to arrest movement, it moves forwards.Although the Dülfersitz is an effective method of abseiling when practised correctly, it is less safe than some modern methods: if the braking hand releases the rope (due to panic, impact from a falling stone, or cramp), a fall is unavoidable if no additional means of security, such as prusik cords, is used.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mysteries of the Moonsea** Mysteries of the Moonsea: Mysteries of the Moonsea is a supplement to the 3.5 edition of the Dungeons & Dragons role-playing game. Contents: Mysteries of the Moonsea is an accessory for the Forgotten Realms which details the Moonsea region, a perilous frontier ruled by tyrants and threatened by monsters, with cities consumed by decadence and war and where conspiracies abound. This accessory contains 37 loosely connected adventures that can be run individually or linked to form the bases of a campaign in the Forgotten Realms. The book also presents maps and descriptions of the cities of Melvaunt, Hillsfar, Mulmaster, and Zhentil Keep, and statistics and descriptions for 15 important villains of the setting. Publication history: Mysteries of the Moonsea was written by Wil Upchurch, Sean K. Reynolds, Darrin Drader and Thomas M. Reid, and published in June 2006. Cover art was by William O'Connor, with interior art by Ron Lemen, William O'Connor, Francis Tsai, and Franz Vohwinkel.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Febrile non-hemolytic transfusion reaction** Febrile non-hemolytic transfusion reaction: Febrile non-hemolytic transfusion reaction (FNHTR) is the most common type of transfusion reaction. It is a benign occurrence with symptoms that include fever but is not directly related to hemolysis. It is caused by cytokine release from leukocytes within the donor product as a consequence of white blood cell breakdown. These inflammatory mediators accumulate during the storage of the donated blood, and so the frequency of this reaction increases with the storage length of donated blood. This is in contrast to transfusion-related acute lung injury, in which the donor plasma has antibodies directed against the recipient HLA antigens, mediating the characteristic lung damage. Definition: Symptoms must manifest within 4 hours of cessation of the transfusion, and should not be due to another cause such as an underlying infection, bacterial contamination of the blood component, or another type of transfusion reaction, e.g. acute hemolytic transfusion reaction. Fever must be at least 38 °C/100.4 °F oral and a change of at least 1 °C/1.8 °F from the pre-transfusion value, OR chills and/or rigors must be present. The UK hemovigilance system (SHOT) categorizes the severity of the reaction. Definition: Mild Fever of at least 38 °C/100.4 °F oral and a change of between 1 and 2 °C from pre-transfusion values but no other symptoms or signs. Moderate Fever of at least 39 °C, OR a rise in temperature of at least 2 °C from pre-transfusion values, AND/OR other symptoms or signs, including chills (rigors), painful muscles (myalgia), or nausea that are severe enough that the transfusion is stopped. Severe Fever of at least 39 °C, OR a rise in temperature of at least 2 °C from pre-transfusion values, AND/OR other symptoms or signs, including chills (rigors), painful muscles (myalgia), or nausea that are severe enough that the transfusion is stopped AND requires immediate medical treatment, admission to hospital, or lengthens the duration of hospital admission. Treatment: Paracetamol has been used in treatment, and leukoreduction of future transfusions is sometimes performed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Architecture framework** Architecture framework: The ISO/IEC/IEEE 42010 Conceptual Model of Architecture Description defines the term architecture framework within systems engineering and software development as: "An architecture framework establishes a common practice for creating, interpreting, analyzing and using architecture descriptions within a particular domain of application or stakeholder community. Examples of Architecture Frameworks: MODAF, TOGAF, Kruchten's 4+1 view model, RM-ODP." The domain within a company or other organization, in particular, is covered by enterprise architecture frameworks. Architecture framework: The Survey of Architecture Frameworks lists some of the available architecture frameworks.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Strident vowel** Strident vowel: Strident vowels (also called sphincteric vowels) are strongly pharyngealized vowels accompanied by an (ary)epiglottal trill, with the larynx being raised and the pharynx constricted. Either the epiglottis or the arytenoid cartilages thus vibrate instead of the vocal cords. That is, the epiglottal trill is the voice source for such sounds. Strident vowel: Strident vowels are fairly common in Khoisan languages, which contrast them with simple pharyngealized vowels. Stridency is used in onomatopoeia in Zulu and Lamba. Stridency may be a type of phonation called harsh voice. A similar phonation, without the trill, is called ventricular voice; both have been called pressed voice. Bai, of southern China, has a register system that has allophonic strident and pressed vowels. Strident vowel: There is no official symbol for stridency in the IPA, but a superscript ⟨ʢ⟩ (for a voiced epiglottal trill) is often used. In some literature, a subscript double tilde (≈) is sometimes used. It has been accepted into Unicode, at code point U+1DFD. Languages: These languages use phonemic strident vowels: Tuu languages Taa (See Taa vowels) ǃKwi (ǃUi) Nǁng (a dialect cluster; moribund) ǀXam (a dialect cluster, including Nǀuusaa) † Sources: Moisik, Scott; Czaykowska-Higgins, Ewa; Esling, John H. (Winter 2012). Loughran, Jenny; McKillen, Alanah (eds.). "The Epilaryngeal Articulator: A New Conceptual Tool for Understanding Lingual-Laryngeal Contrasts". McGill Working Papers in Linguistics. McGill University. 22 (1).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kynapse** Kynapse: Kynapse is an artificial intelligence middleware product, developed by Kynogon, which was bought by Autodesk in 2008; the product was renamed Autodesk Kynapse. In 2011, it was re-engineered and rebranded as Autodesk Navigation. Since the discontinuation of Autodesk Gameware, the product is obsolete. Features: A complete 3D pathfinding solution; an automatic AI data generation tool; optimizations for multicore/multiprocessing/Cell architectures; spatial reasoning; streaming mechanisms to handle very large terrains; and the management of dynamic and destructible terrains. Usage: Kynapse has been used in the development of more than 80 game titles including Mafia II, Crackdown, Alone in the Dark 5, Fable II, Medal of Honor: Airborne, Sacred 2: Fallen Angel, Watchmen: The End Is Nigh, Sonic the Hedgehog (2006), The Lord of the Rings Online: Shadows of Angmar and the Unreal Engine. Kynapse has also been used by companies such as EADS, BAE Systems and Électricité de France to develop military or industrial simulations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stable module category** Stable module category: In representation theory, the stable module category is a category in which projectives are "factored out." Definition: Let R be a ring. For two modules M and N over R, define the stable morphism set Hom(M,N) (often written with an underline) to be the set of R-linear maps from M to N modulo the relation that f ~ g if f − g factors through a projective module. The stable module category is defined by setting the objects to be the R-modules, and the morphisms from M to N to be the elements of Hom(M,N). Given a module M, let P be a projective module with a surjection p: P → M. Then set Ω(M) to be the kernel of p. Suppose we are given a morphism f: M → N and a surjection q: Q → N where Q is projective. Then one can lift f to a map P → Q which maps Ω(M) into Ω(N). This gives a well-defined functor Ω from the stable module category to itself. Definition: For certain rings, such as Frobenius algebras, Ω is an equivalence of categories. In this case, the inverse Ω^{−1} can be defined as follows. Given M, find an injective module I with an inclusion i: M → I. Then Ω^{−1}(M) is defined to be the cokernel of i. A case of particular interest is when the ring R is a group algebra. Definition: The functor Ω^{−1} can even be defined on the module category of a general ring (without factoring out projectives), as the cokernel of the injective envelope. It need not be true in this case that the functor Ω^{−1} is actually an inverse to Ω. One important property of the stable module category is that it allows defining the Ω functor for general rings. When R is perfect (or M is finitely generated and R is semiperfect), then Ω(M) can be defined as the kernel of the projective cover, giving a functor on the module category. However, in general projective covers need not exist, and so passing to the stable module category is necessary. Connections with cohomology: Now we suppose that R = kG is a group algebra for some field k and some group G. One can show that there exist isomorphisms Hom(Ω^n(M), N) ≅ Ext^n_{kG}(M,N) ≅ Hom(M, Ω^{−n}(N)) for every positive integer n, where Hom denotes the stable morphism set. The group cohomology of a representation M is given by H^n(G;M) = Ext^n_{kG}(k,M), where k has a trivial G-action, so in this way the stable module category gives a natural setting in which group cohomology lives. Furthermore, the above isomorphism suggests defining cohomology groups for negative values of n, and in this way one recovers Tate cohomology. Triangulated structure: An exact sequence 0 → X → E → Y → 0 in the usual module category defines an element of Ext^1_{kG}(Y,X), and hence an element of Hom(Y, Ω^{−1}(X)), so that we get a sequence X → E → Y → Ω^{−1}(X). Taking Ω^{−1} to be the translation functor and such sequences as above to be exact triangles, the stable module category becomes a triangulated category.
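As a worked example (added here for illustration, not part of the original exposition): take R = kG with G the cyclic group of order 2 and k a field of characteristic 2. Then the syzygy functor is periodic, which already exhibits the periodicity behind Tate cohomology.

```latex
% Worked example: R = kG, G = Z/2Z, char k = 2, so R = k[t]/(t^2) with t = g - 1.
% R is a Frobenius algebra, hence \Omega is an equivalence on the stable
% module category. The projective cover of the trivial module k is R itself,
% via the augmentation map:
\[
  0 \longrightarrow \Omega(k) \longrightarrow kG \longrightarrow k \longrightarrow 0,
  \qquad \Omega(k) = (t) \cong k .
\]
% Hence \Omega^n(k) \cong k for all n, and the isomorphism
% \underline{\mathrm{Hom}}(\Omega^n(k), k) \cong \mathrm{Ext}^n_{kG}(k,k)
% recovers the familiar fact that the cohomology of Z/2Z over k is
% one-dimensional in every degree:
\[
  H^n(G;k) \;=\; \operatorname{Ext}^n_{kG}(k,k) \;\cong\; k
  \qquad \text{for all } n \ge 0 .
\]
```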
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fishing bait** Fishing bait: Fishing bait is any substance used to attract and catch fish, e.g. on a fishing hook. Bait items are both selected from and placed within the environment in order to capture prey. Traditionally, fishing bait is natural fish food such as night-crawlers, insects, worms, and smaller bait fish. Fishermen also use lures such as processed food, plastic baits, and bionic lures to attract fish. Despite the importance of a fish's attraction to bait, the way fish react to different baits is quite poorly understood. The techniques and bait that a fisher chooses are dictated mainly by the target species and by its habitat. Bait can be separated into two main categories: artificial baits and natural baits. Artificial and live baits frequently demonstrate similar efficiency. The overall bait type and size affect the efficiency and results of catches when fishing. Both common ways of fishing also raise environmental concerns. Some bait fish are invasive and can spread disease. A recurring issue with artificial baits is their discarding and loss; lures disposed of in the environment can cause problems in the ecosystem. Artificial baits: Using lures is a popular method for catching predatory fish. Lures are artificial baits designed to mimic the action of different prey, usually small fish. These lures use movement, color, vibration, noise, and sometimes scent to attract fish into striking. A lure may require a specialized presentation to impart an enticing action, e.g. in fly fishing. Artificial lures are rigged with different types of hooks in order to increase catch rate. Unlike natural baits, artificial baits are manufactured to be durable and fished repeatedly. Companies continuously modify lures with new technology to better represent prey and attract the attention of fish. One study suggested that fish react to different lure colors because of their ability to see infrared rays reflected off the lures, and companies have taken such findings into account to make lures that maximize efficiency. Some common artificial baits include crankbaits, soft plastic baits, swimbaits, and artificial frogs. Artificial baits are most commonly acquired online, in-store at tackle shops, or made by hand. Artificial baits: Environmental effects Over time, the popularity of artificial baits has increased drastically, raising concerns about harm to the environment. One concern is the loss or disposal of used baits into the environment: discarded line and lures, lost baits, and line snapped while hooked to a fish can all harm the ecosystem. Another concern is the health of the fish; it is not uncommon to find lures and hooks lodged in the digestive tracts of fish when caught, and fish may also swallow or become tangled in discarded fishing line. Natural baits: The natural bait angler, with few exceptions, will use a common prey species of the target fish as an attractant. The natural bait used may be alive or dead. Common natural baits include worms, leeches (notably the bait-leech Nephelopsis obscura), minnows, frogs, salamanders, and insects. Natural baits are effective due to the lifelike texture, odor, and color of the bait presented.
Studies show that natural baits like croaker and shrimp are more readily recognized and accepted by fish. Using live bait to catch native species is a sustainable and desirable activity in social and economic terms. However, the availability and cost of live bait can inhibit the use of natural baits year round. Anglers can buy various live baits from tackle shops, subject to price and season; they can also catch their own bait using hook and line, fish traps, and casting nets. Natural baits: Spreading disease The capture, transportation, and culture of bait fish can spread damaging organisms between ecosystems, endangering them. In 2007, several American states enacted regulations designed to slow the spread of fish diseases, including viral hemorrhagic septicemia, by bait fish. Because of the risk of transmitting Myxobolus cerebralis (whirling disease), trout and salmon should not be used as bait. The Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990 addresses the effects of aquatic nuisance species; the introduction of such invasive species into various bodies of water has spread disease, killed fish, clogged water intakes, and covered beaches and boats. Anglers may increase the possibility of contamination by emptying bait buckets into fishing venues and by collecting or using bait improperly. Transporting fish from one location to another can violate the law and introduce fish alien to the ecosystem. Legislation has been passed in recent years in an attempt to protect both large and small fisheries.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gyroelongated pyramid** Gyroelongated pyramid: In geometry, the gyroelongated pyramids (also called augmented antiprisms) are an infinite set of polyhedra, constructed by adjoining an n-gonal pyramid to an n-gonal antiprism. Two gyroelongated pyramids are Johnson solids with all faces regular: the gyroelongated square pyramid (J10) and the gyroelongated pentagonal pyramid (J11). Triangular and hexagonal forms can be constructed, but only with coplanar faces. Others can be constructed by allowing isosceles triangles.
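A quick face–edge–vertex count, derived from the construction just described (not stated in the source): an n-gonal antiprism has 2n vertices, 4n edges, and 2n + 2 faces, and capping one n-gonal face with a pyramid adds a single apex, n new edges, and replaces that n-gon with n triangles.

```latex
% Counts for the n-gonal gyroelongated pyramid (antiprism + pyramid):
V = 2n + 1, \qquad
E = 4n + n = 5n, \qquad
F = (2n + 2) - 1 + n = 3n + 1.
% Sanity check with Euler's formula:
V - E + F = (2n + 1) - 5n + (3n + 1) = 2.
```

For the square case (n = 4, the Johnson solid J10) this gives 9 vertices, 20 edges, and 13 faces (12 triangles and 1 square).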
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Two-seam fastball** Two-seam fastball: A two-seam fastball is a pitch in baseball and softball. It is a variant of the straight fastball. The pitch has the speed of a fastball and can also include late-breaking action caused by varying the pressure of the index and middle fingers on the ball. Grip and action: Several grips are used for a two-seam fastball, the most common of which is to place the index and middle fingers along the seams where they are closest together (where the horseshoes point in towards each other), with the thumb placed directly below on the leather and the rear of the thumb just touching the bottom near seam. The arm action is identical to a four-seam fastball, although the hand action differs slightly. Typically, the two-seam has more movement if the pitcher applies index-fingertip pressure or holds the baseball deeper in the hand. Both techniques cause the ball to spin out of the hand off-center and away from the pitcher, similar to the spin of a changeup. The two-seam fastball is often perceived to be slower than the four-seam fastball, but the slight pronation of the hand and off-center spin on the ball carry the ball down and toward the pitcher's dominant side: down and to the right for right-handers, and down and to the left for left-handers. A two-seam fastball that has a high horizontal break and drops less is often referred to as a running fastball. It is often higher in average velocity than a traditional two-seamer. In either case, the pitch is thrown in a two-seam orientation and has a gyro angle far from 0 degrees, leading to seam-shifted wake effects that cause downward and lateral movement compared to a four-seam fastball. Effectiveness: The two-seam fastball appears to have more movement than a four-seam fastball, but can be more difficult to master and control. The amount of break on the pitch varies greatly from pitcher to pitcher depending on velocity, arm slot angle, and pressure points of the fingers. The two-seamer is a very natural pitch to throw and is often taught to pitchers at a very early age. Its use is widespread throughout all levels of baseball, and most pitchers at any level have a two-seamer in their repertoire. Many pitchers, especially those without exceptional velocity, prefer a two-seam fastball to the four-seam because of its movement at the plate. However, power pitchers such as Justin Verlander combine control, high velocity, and break to make the two-seamer one of the most effective pitches in baseball. Effectiveness: The velocity of this pitch also varies greatly from pitcher to pitcher. At the major collegiate level and higher, two-seam fastballs are typically thrown in the low 90s (mph), but with much variation. Pitchers such as Greg Maddux, Bob Stanley, Brandon McCarthy, David Price, Eddie Guardado and Marcus Stroman are notable for having success at the major-league level with two-seam fastballs in the mid 80s to low 90s.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jugal Kalita** Jugal Kalita: Jugal Kalita is a professor and department chair of computer science at the College of Engineering and Applied Science within the University of Colorado Colorado Springs (UCCS). Academics: Jugal Kalita is founder of the Language Information and Computation (LINC) Lab at UCCS. Authorship: On Perl: Perl for Students and Professionals, 2003, Universal Publishers; Network Anomaly Detection: A Machine Learning Perspective, with Dhruba K. Bhattacharyya, 2013, CRC Press; DDOS Attacks: Evolution, Detection, Prevention, Reaction and Tolerance, with Dhruba K. Bhattacharyya, 2016, CRC Press; Network Traffic Anomaly Detection and Prevention: Concepts, Techniques, and Tools, with Monowar H. Bhuyan and Dhruba K. Bhattacharyya, 2017, Springer Nature; Gene Expression Data Analysis: A Statistical and Machine Learning Perspective, with Pankaj Barah and Dhruba Kumar Bhattacharyya, 2021, CRC Press; Machine Learning: Theory and Practice, 2023, CRC Press
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**WBAdmin** WBAdmin: In computing, WBAdmin is a command-line utility built into the Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012, Windows 10, and Windows 11 operating systems. The command is used to perform backups and restores of operating systems, drive volumes, computer files, folders, and applications from a command-line interface. Features: WBAdmin is a disk-based backup system. It can create a "bare metal" backup used to restore the Windows operating system to similar or dissimilar hardware. The backup files created are primarily Microsoft Virtual Hard Disk (.VHD) files with some accompanying .xml configuration files. The backup .VHD file can be mounted in Windows Disk Manager to view its contents; however, the .VHD backup file is not a direct "disk clone". The utility replaces the previous Microsoft Windows backup command-line tool, NTBackup, which was built into Windows NT 4.0, Windows 2000, Windows XP, and Windows Server 2003. It is the command-line version of Backup and Restore. WBAdmin also has a graphical user interface option available to simplify creation of computer backups (and restores). Workstation editions such as Windows 7 use a backup wizard located in Control Panel. On server editions this is done through an (easily installed) Windows feature using the Windows Management Console snap-in WBAdmin.msc. The WBAdmin management console simplifies restoration, whether of a single file or of multiple folders. Using the command line or the graphical user interface, WBAdmin creates a backup which can be quickly restored using just the Windows installation DVD and the backup files located on a removable USB disk, without the need to re-install from scratch. WBAdmin uses a differencing engine to update the backup files: once the original backup file is created, the Volume Shadow Copy Service records changes, so subsequent full backups take a matter of moments rather than the many minutes taken to create the original backup file. Automatic backups can be scheduled on a regular basis using a wizard. Features: Two kinds of restore operations are supported: Bare-metal restore: using the Windows Recovery Environment, you can complete a full server restoration to either the same server or a server with dissimilar hardware (known as Hardware Independent Restore – HIR). Individual file and folder, and system state restore: files, folders, or the machine's system state can be restored from the command line using WBAdmin.
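For illustration, a representative backup invocation and version query are shown below. This is a sketch assuming a server edition with the Windows Server Backup feature installed; the target disk and volume letters are placeholders, and the available options vary by Windows version (run `wbadmin /?` on your system for the exact set):

```
rem Back up volume C: (plus all volumes needed for bare-metal recovery)
rem to the disk mounted as E:, without prompting for confirmation.
wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet

rem List the stored backup versions; the reported version identifiers
rem are what "wbadmin start recovery" expects on server editions.
wbadmin get versions
```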
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lck** Lck: Lck (or lymphocyte-specific protein tyrosine kinase) is a 56 kDa protein that is found inside specialized cells of the immune system called lymphocytes. Lck is a member of the Src kinase family (SFK); it is important for the activation of T-cell receptor signaling in both naive T cells and effector T cells. The role of Lck is less prominent in the activation or maintenance of memory CD8 T cells than of CD4 T cells. In addition, the role of Lck varies among memory T cell subsets: in mice, more than 50% of Lck in the effector memory T cell (TEM) population appears to be in a constitutively active conformation, whereas less than 20% of Lck is present in the active form in other subsets. These differences are due to differential regulation by SH2 domain-containing phosphatase-1 (Shp-1) and C-terminal Src kinase. Lck is responsible for initiating the TCR signaling cascade inside the cell by phosphorylating immunoreceptor tyrosine-based activation motifs (ITAMs) within the TCR-associated chains. Lck: Lck can be found in different forms in immune cells: free in the cytosol or bound to the plasma membrane (PM) through myristoylation and palmitoylation. Due to the presence of the conserved CxxC motif (C20 and C23) in the zinc clasp structure, Lck is able to bind the cell surface coreceptors CD8 and/or CD4. Bound and free Lck have different properties: free Lck has more pronounced kinase activity than bound Lck, and the free form produces stronger T cell activation. The reasons for these differences are not yet well understood. T cell signaling: Lck is most commonly found in T cells. It associates with the cytoplasmic tails of the CD4 and CD8 co-receptors on T helper cells and cytotoxic T cells, respectively, to assist signaling from the T cell receptor (TCR) complex. T cells respond to pathogens and cancer through the T-cell receptor; however, they can also react to self-antigens, causing the onset of autoimmune diseases. T cell maturation occurs in the thymus and is regulated by a threshold that defines the limit between positive and negative selection of thymocytes. To avoid the onset of autoimmune diseases, highly self-reactive T cells are removed during negative selection, whereas a pool of weakly self-reactive T cells is required to promote an efficient immune response, and these cells are selected for maturation during positive selection. The threshold for positive and negative selection of developing T cells is regulated by the binding between Lck and the co-receptors. There are two main pools of T cells which mediate adaptive immune responses: CD4+ T cells (or helper T cells) and CD8+ T cells (or cytotoxic T cells), which are MHC II- and MHC I-restricted, respectively. Although their roles in the immune system differ, their activation is similar. Cytotoxic T cells are directly involved in the identification and removal of infected cells, whereas helper T cells modulate other immune cells to support the response. The initiation of an immune response takes place when T cells encounter and recognize their cognate antigen. Antigen-presenting cells (APCs) display on their surface a fragment of the antigen, which is recognized by either CD8+ T cells or CD4+ T cells.
This binding leads to the activation of the TCR signaling cascade, in which the immunoreceptor tyrosine-based activation motifs (ITAMs) located in the CD3 zeta chains (ζ-chains) of the TCR complex are phosphorylated by Lck and, to a lesser extent, by Fyn. Both coreceptor-bound and free Lck can phosphorylate the CD3 chains upon TCR activation; evidence suggests that the free form of Lck can be recruited and trigger the TCR signal faster than coreceptor-bound Lck. Additionally, upon T cell activation, a fraction of kinase-active Lck translocates from outside lipid rafts (LR) to inside lipid rafts, where it interacts with and activates LR-resident Fyn, which is involved in further downstream signaling activation. Once the ITAMs are phosphorylated, the CD3 chains can be bound by another cytoplasmic tyrosine kinase called ZAP-70. In the case of CD8+ T cells, once ZAP-70 binds CD3, the coreceptor associated with Lck binds the MHC, stabilizing the TCR-MHC-peptide interaction. The phosphorylated form of ZAP-70 recruits another molecule in the signaling cascade called LAT (linker for activation of T cells), a transmembrane protein. LAT acts as a scaffold able to regulate the TCR proximal signals in a phosphorylation-dependent manner. The most important proteins recruited by phosphorylated LAT are Shc-Grb2-SOS, PI3K, and phospholipase C (PLC). The residue responsible for the recruitment of phospholipase C-γ1 (PLC-γ1) is Y132. This binding leads to Tec family kinase ITK-mediated PLC-γ1 phosphorylation and activation, which consequently produces calcium (Ca2+) ion mobilization and activation of important signaling cascades within the lymphocyte. These include the Ras-MEK-ERK pathway, which goes on to activate certain transcription factors such as NFAT, NF-κB, and AP-1. These transcription factors regulate the production of a plethora of gene products, most notably cytokines such as interleukin-2 that promote long-term proliferation and differentiation of the activated lymphocytes. In addition to the significance of Lck and Fyn in T cell receptor signaling, these two Src kinases have also been shown to be important in TLR-mediated signaling in T cells. The function of Lck has been studied using several biochemical methods, including gene knockout (knock-out mice), Jurkat cells deficient in Lck (JCaM1.6), and siRNA-mediated RNA interference. Lck activity regulation: The activity of Lck can be positively or negatively regulated by the presence of other proteins, such as the membrane protein CD146, the transmembrane tyrosine phosphatase CD45, and C-terminal Src kinase (Csk). In mice, CD146 directly interacts with the SH3 domain of coreceptor-free Lck via its cytoplasmic domain, promoting Lck autophosphorylation. The role of CD45 isoforms is poorly understood; it is known that they are cell type-specific and depend on the state of activation and differentiation of cells. In human naïve T cells, the CD45RA isoform is more frequent, whereas activated cells express the CD45R0 isoform at higher levels. Mice express low levels of high-molecular-weight isoforms (CD45RABC) in thymocytes or peripheral T cells. Low levels of CD45RB are typical of primed cells, while high levels of CD45RB are found in naïve cells. In general, CD45 acts to promote the active form of Lck by dephosphorylating the tyrosine (Y505) in its inhibitory C-terminal tail.
The consequent trans-autophosphorylation of the tyrosine in the Lck activation loop (Y394) stabilizes the active form by promoting the open conformation, which further enhances kinase activity and substrate binding. Dephosphorylation of the Y394 site can also be regulated by SH2 domain-containing phosphatase 1 (SHP-1), PEST-domain-enriched tyrosine phosphatase (PEP), and protein tyrosine phosphatase-PEST. In contrast, Csk plays a role opposite to that of CD45: it phosphorylates Y505 of Lck, promoting the closed conformation with inhibited kinase activity. When both Y394 and Y505 are unphosphorylated, Lck shows basal kinase activity; when both are phosphorylated, Lck shows activity similar to that of Lck phosphorylated on Y394 alone. Structure: Lck is a 56-kilodalton protein. The N-terminal tail of Lck is myristoylated and palmitoylated, which tethers the protein to the plasma membrane of the cell. The protein furthermore contains an SH3 domain, an SH2 domain and, in the C-terminal part, the tyrosine kinase domain. The two main phosphorylation sites on Lck are tyrosines 394 and 505. The former is an autophosphorylation site and is linked to activation of the protein. The latter is phosphorylated by Csk, which inhibits Lck because the protein folds up and binds its own SH2 domain. Lck thus serves as an instructive example that protein phosphorylation may result in both activation and inhibition. Lck and disease: Mutations in Lck are linked to a range of diseases, such as SCID (severe combined immunodeficiency) and other CIDs. In these pathologies, dysfunctional activation of Lck leads to T cell activation failure. Many pathologies are linked to the overexpression of Lck, such as cancer, asthma, type 1 diabetes, rheumatoid arthritis, psoriasis, systemic lupus erythematosus, inflammatory bowel diseases (Crohn's disease and ulcerative colitis), organ graft rejection, atherosclerosis, hypersensitivity reactions, polyarthritis, and dermatomyositis. An increase of Lck in colonic epithelial cells can lead to colorectal cancer. Lck also plays a role in thymoma, a thymic tumor associated with autoimmune disorders; here, tumorigenesis is enhanced by abnormal proliferation of immature thymocytes due to low levels of Lck. Lymphoid protein tyrosine phosphatase (Lyp) is one of the suppressors of Lck activity, and mutations in this protein are correlated with the onset of type 1 diabetes; increased Lck activity promotes the onset of type 1 diabetes. Lck and disease: Regarding respiratory diseases, asthma is associated with the activation of the Th2 type of T cell, whose differentiation is mediated by Lck. Moreover, mice with an unbalanced amount of Lck show altered lung function, which can consequently lead to the onset of asthma. Substrates: Lck tyrosine-phosphorylates a number of proteins, the most important of which are the CD3 receptor, CEACAM1, ZAP-70, SLP-76, the IL-2 receptor, protein kinase C, ITK, PLC, SHC, RasGAP, Cbl, Vav1, and PI3K. Inhibition: In resting T cells, Lck is constitutively inhibited by Csk phosphorylation on tyrosine 505. Lck is also inhibited by SHP-1 dephosphorylation on tyrosine 394.
Lck can also be inhibited by the Cbl ubiquitin ligase, which is part of the ubiquitin-mediated degradation pathway. Saracatinib, a specific inhibitor of Lck, impairs maintenance of human T-ALL cells in vitro as well as in vivo by targeting this tyrosine kinase in cells displaying a high level of lipid rafts. Masitinib also inhibits Lck, which may have some impact on its therapeutic effects in canine mastocytoma. The HSP90 inhibitor NVP-BEP800 has been described to affect the stability of the Lck kinase and the growth of T-cell acute lymphoblastic leukemias. Interactions: Lck has been shown to interact with:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**De Musset's sign** De Musset's sign: De Musset's sign is a condition in which there is rhythmic nodding or bobbing of the head in synchrony with the beating of the heart, generally as a result of aortic regurgitation, whereby blood from the aorta regurgitates into the left ventricle due to a defect in the aortic valve. The nodding indicates that the systolic pulse is being felt by the patient because of the increased pulse pressure resulting from the aortic insufficiency. The condition was named after the French poet Alfred de Musset. De Musset's sign is a type of head tremor.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Psychosocial genomics** Psychosocial genomics: Psychosocial genomics (PG) is a field of research first proposed by Ernest L. Rossi in 2002. PG examines the modulation of gene expression in response to psychological, social, and cultural experiences. Independent research shows that the experience of novelty, environmental enrichment, and exercise facilitates activity- and experience-dependent gene expression and brain plasticity, as well as stem cell healing processes. This is a top-down approach – from mind to body – that modulates gene expression and brain plasticity in the development of human consciousness, and it can be perceived as the completion, or dynamic complement, of the bottom-up approach – direct sensory and biological responses – as proposed by the ENCODE consortium. PG utilizes various methods and approaches derived from genomics, neuroscience, and culturomics. These include DNA microarrays and computational analysis with the GSEA database.
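Since the entry names GSEA as an analysis method, a minimal sketch of the core idea behind gene-set enrichment analysis follows. It is a generic illustration of the classic running-sum enrichment score, with invented gene names; it does not reproduce the actual GSEA software or its database API.

```python
# Minimal sketch of a GSEA-style running-sum enrichment score.
# Genes are assumed pre-ranked by correlation with the condition
# (e.g., differential expression after an enrichment intervention).
ranked_genes = ["g1", "g7", "g3", "g9", "g2", "g8", "g5", "g4", "g6"]
gene_set = {"g1", "g3", "g2"}  # hypothetical set of co-regulated genes

def enrichment_score(ranked, gene_set):
    """Max deviation of a running sum: +1/|hits| per hit, -1/|misses| per miss."""
    n_hits = len(gene_set)
    n_misses = len(ranked) - n_hits
    running, best = 0.0, 0.0
    for gene in ranked:
        running += 1.0 / n_hits if gene in gene_set else -1.0 / n_misses
        if abs(running) > abs(best):
            best = running
    return best

# A strongly positive score means the set clusters near the top of the ranking.
print(enrichment_score(ranked_genes, gene_set))
```

The real GSEA method additionally weights hits by each gene's correlation statistic and assesses significance by permutation; this sketch shows only the running-sum idea.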
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Convergent boundary** Convergent boundary: A convergent boundary (also known as a destructive boundary) is an area on Earth where two or more lithospheric plates collide. One plate eventually slides beneath the other, a process known as subduction. The subduction zone can be defined by a plane where many earthquakes occur, called the Wadati–Benioff zone. These collisions happen on scales of millions to tens of millions of years and can lead to volcanism, earthquakes, orogenesis, destruction of lithosphere, and deformation. Convergent boundaries occur between oceanic-oceanic lithosphere, oceanic-continental lithosphere, and continental-continental lithosphere. The geologic features related to convergent boundaries vary depending on crust types. Convergent boundary: Plate tectonics is driven by convection cells in the mantle. Convection cells are the result of heat generated by the radioactive decay of elements in the mantle escaping to the surface and the return of cool materials from the surface to the mantle. These convection cells bring hot mantle material to the surface along spreading centers, creating new crust. As this new crust is pushed away from the spreading center by the formation of newer crust, it cools, thins, and becomes denser. Subduction begins when this dense crust converges with less dense crust. The force of gravity helps drive the subducting slab into the mantle. As the relatively cool subducting slab sinks deeper into the mantle, it is heated, causing hydrous minerals to break down. This releases water into the hotter asthenosphere, which leads to partial melting of the asthenosphere and volcanism. Both dehydration and partial melting occur along the 1,000 °C (1,830 °F) isotherm, generally at depths of 65 to 130 km (40 to 81 mi). Some lithospheric plates consist of both continental and oceanic lithosphere. In some instances, initial convergence with another plate will destroy oceanic lithosphere, leading to convergence of two continental plates. Neither continental plate will subduct; instead, the plate is likely to break along the boundary of continental and oceanic crust. Seismic tomography reveals pieces of lithosphere that have broken off during convergence. Subduction zones: Subduction zones are areas where one lithospheric plate slides beneath another at a convergent boundary due to lithospheric density differences. Subducting plates dip at an average of 45°, though the angle varies. Subduction zones are often marked by an abundance of earthquakes, the result of internal deformation of the plate, convergence with the opposing plate, and bending at the oceanic trench. Earthquakes have been detected to a depth of 670 km (416 mi). The relatively cold and dense subducting plates are pulled into the mantle and help drive mantle convection. Oceanic – oceanic convergence: In collisions between two oceanic plates, the cooler, denser oceanic lithosphere sinks beneath the warmer, less dense oceanic lithosphere. As the slab sinks deeper into the mantle, it releases water from the dehydration of hydrous minerals in the oceanic crust. This water reduces the melting temperature of rocks in the asthenosphere and causes partial melting. Partial melt travels up through the asthenosphere, eventually reaching the surface and forming volcanic island arcs. Continental – oceanic convergence: When oceanic lithosphere and continental lithosphere collide, the dense oceanic lithosphere subducts beneath the less dense continental lithosphere.
An accretionary wedge forms on the continental crust as deep-sea sediments and oceanic crust are scraped from the oceanic plate. Volcanic arcs form on continental lithosphere as the result of partial melting due to dehydration of the hydrous minerals of the subducting slab. Continental – continental convergence: Some lithospheric plates consist of both continental and oceanic crust. Subduction initiates as oceanic lithosphere slides beneath continental crust. As the oceanic lithosphere subducts to greater depths, the attached continental crust is pulled closer to the subduction zone. Once the continental lithosphere reaches the subduction zone, subduction processes are altered, since continental lithosphere is more buoyant and resists subduction beneath other continental lithosphere. A small portion of the continental crust may be subducted until the slab breaks, allowing the oceanic lithosphere to continue subducting, hot asthenosphere to rise and fill the void, and the continental lithosphere to rebound. Evidence of this continental rebound includes ultrahigh-pressure metamorphic rocks, which form at depths of 90 to 125 km (56 to 78 mi) but are exposed at the surface. Seismic records have been used to map the torn slabs beneath the Caucasus continental–continental convergence zone, and seismic tomography has mapped detached slabs beneath the Tethyan suture zone (the Alps–Zagros–Himalaya mountain belt). Volcanism and volcanic arcs: The oceanic crust contains hydrated minerals such as the amphibole and mica groups. During subduction, oceanic lithosphere is heated and metamorphosed, causing breakdown of these hydrous minerals, which releases water into the asthenosphere. The release of water into the asthenosphere leads to partial melting. Partial melting allows the rise of more buoyant, hot material and can lead to volcanism at the surface and the emplacement of plutons in the subsurface. These magma-generating processes are not entirely understood. Where these magmas reach the surface, they create volcanic arcs. Volcanic arcs can form as island arc chains or as arcs on continental crust. Three magma series of volcanic rocks are found in association with arcs. The chemically reduced tholeiitic magma series is most characteristic of oceanic volcanic arcs, though it is also found in continental volcanic arcs above rapid subduction (>7 cm/year). This series is relatively low in potassium. The more oxidized calc-alkaline series, which is moderately enriched in potassium and incompatible elements, is characteristic of continental volcanic arcs. The alkaline magma series (highly enriched in potassium) is sometimes present in the deeper continental interior. The shoshonite series, which is extremely high in potassium, is rare but sometimes found in volcanic arcs. The andesite member of each series is typically the most abundant, and the transition from basaltic volcanism of the deep Pacific basin to andesitic volcanism in the surrounding volcanic arcs has been called the andesite line. Back-arc basins: Back-arc basins form behind a volcanic arc and are associated with extensional tectonics and high heat flow, often being home to seafloor spreading centers. These spreading centers are like mid-ocean ridges, though the magma composition of back-arc basins is generally more varied and contains a higher water content than mid-ocean ridge magmas. Back-arc basins are often characterized by thin, hot lithosphere.
The opening of back-arc basins may arise from the movement of hot asthenosphere into the lithosphere, causing extension. Oceanic trenches: Oceanic trenches are narrow topographic lows that mark convergent boundaries or subduction zones. Oceanic trenches average 50 to 100 km (31 to 62 mi) in width and can be several thousand kilometers long. Oceanic trenches form as a result of the bending of the subducting slab. The depth of an oceanic trench appears to be controlled by the age of the oceanic lithosphere being subducted. The sediment fill in oceanic trenches varies and generally depends on the abundance of sediment input from surrounding areas. One oceanic trench, the Mariana Trench, is the deepest point of the ocean, at a depth of approximately 11,000 m (36,089 ft). Earthquakes and tsunamis: Earthquakes are common along convergent boundaries. A region of high earthquake activity, the Wadati-Benioff zone, generally dips at 45° and marks the subducting plate. Earthquakes occur to a depth of 670 km (416 mi) along the Wadati-Benioff margin. Both compressional and extensional forces act along convergent boundaries. On the inner walls of trenches, compressional or reverse faulting occurs due to the relative motion of the two plates. Reverse faulting scrapes off ocean sediment and leads to the formation of an accretionary wedge. Reverse faulting can lead to megathrust earthquakes. Tensional or normal faulting occurs on the outer wall of the trench, likely due to bending of the downgoing slab. A megathrust earthquake can produce sudden vertical displacement of a large area of ocean floor, which in turn generates a tsunami. Some of the deadliest natural disasters have occurred due to convergent boundary processes. The 2004 Indian Ocean earthquake and tsunami was triggered by a megathrust earthquake along the convergent boundary of the Indian Plate and the Burma microplate and killed over 200,000 people. The 2011 tsunami off the coast of Japan, which caused 16,000 deaths and US$360 billion in damage, was caused by a magnitude 9 megathrust earthquake along the convergent boundary of the Eurasian Plate and the Pacific Plate. Accretionary wedge: Accretionary wedges (also called accretionary prisms) form as sediment is scraped from the subducting lithosphere and emplaced against the overriding lithosphere. These sediments include igneous crust, turbidite sediments, and pelagic sediments. Imbricate thrust faulting along a basal décollement surface occurs in accretionary wedges as forces continue to compress and fault these newly added sediments. The continued faulting of the accretionary wedge leads to overall thickening of the wedge. Seafloor topography plays some role in accretion, especially the emplacement of igneous crust. Examples: The collision between the Eurasian Plate and the Indian Plate that is forming the Himalayas. The collision between the Australian Plate and the Pacific Plate that formed the Southern Alps / Kā Tiritiri o te Moana in New Zealand. Subduction of the northern part of the Pacific Plate and the NW North American Plate that is forming the Aleutian Islands. Subduction of the Nazca Plate beneath the South American Plate to form the Andes. Subduction of the Pacific Plate beneath the Australian Plate and Tonga Plate, forming the complex New Zealand to New Guinea subduction/transform boundaries. Collision of the Eurasian Plate and the African Plate that formed the Pontic Mountains in Turkey. Subduction of the Pacific Plate beneath the Mariana Plate that formed the Mariana Trench.
Subduction of the Juan de Fuca Plate beneath the North American Plate to form the Cascade Range.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ECT2** ECT2: Protein ECT2 is a protein that in humans is encoded by the ECT2 gene. Function: The protein encoded by this gene is a transforming protein that is related to Rho-specific exchange factors and yeast cell cycle regulators. The expression of this gene is elevated with the onset of DNA synthesis and remains elevated during G2 and M phases. In situ hybridization analysis showed that expression is at a high level in cells undergoing mitosis in regenerating liver. Thus, this protein is expressed in a cell cycle-dependent manner during liver regeneration, and is thought to have an important role in the regulation of cytokinesis. Interactions: ECT2 has been shown to interact with PARD6A.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded