**Bropirimine**
Bropirimine:
Bropirimine is an experimental drug with anti-cancer and antiviral properties. It is an orally effective immunomodulator and is being tried in bladder cancers.
Synthesis:
For the first step, the dianion of the malonic acid half-ester is formed by treatment with butyllithium. Acylation of the anion with benzoyl chloride proceeds at the carbanion, which is more nucleophilic (because of the higher charge density). This tricarbonyl compound decarboxylates on acidification to give the β-ketoester. Condensation with guanidine leads to the pyrimidone. NBS-mediated bromination then gives bropirimine.
**Polyphagia**
Polyphagia:
Polyphagia or hyperphagia is an abnormally strong, incessant sensation of hunger or desire to eat often leading to overeating. In contrast to an increase in appetite following exercise, polyphagia does not subside after eating and often leads to rapid intake of excessive quantities of food. Polyphagia is not a disorder by itself; rather, it is a symptom indicating an underlying medical condition. It is frequently a result of abnormal blood glucose levels (both hyperglycemia and hypoglycemia), and, along with polydipsia and polyuria, it is one of the "3 Ps" commonly associated with uncontrolled diabetes mellitus.
Etymology and pronunciation:
The word polyphagia uses combining forms of poly- + -phagia, from the Greek words πολύς (polys), "very much" or "many", and φᾰ́γω (phago), "eating" or "devouring".
Underlying conditions and possible causes:
Polyphagia is one of the most common symptoms of diabetes mellitus. It is associated with hyperthyroidism and endocrine diseases, e.g., Graves' disease, and it has also been noted in Prader–Willi syndrome and other genetic conditions caused by chromosomal anomalies. It is only one of several diagnostic criteria for bulimia and is not by itself classified as an eating disorder. As a symptom of Kleine–Levin syndrome, it is sometimes termed megaphagia. Knocking out vagal nerve receptors has been shown to cause hyperphagia. Changes in hormones associated with the female menstrual cycle can lead to extreme hunger right before the period; spikes in estrogen and progesterone and decreased serotonin can lead to cravings for carbohydrates and fats. Polyphagia is found in the following conditions: chromosome 22q13 duplication syndrome; chromosome 2p25.3 deletion (MYT1L syndrome); chromosome Xq26.3 duplication syndrome; congenital generalized lipodystrophy types 1 and 2; diabetes mellitus type 1; familial renal glucosuria; frontotemporal dementia, including the ubiquitin-positive form; Graves' disease; hypotonia-cystinuria syndrome; Kleine–Levin syndrome; leptin deficiency or dysfunction; leptin receptor deficiency; Luscan–Lumish syndrome; macrosomia adiposa congenita; mental retardation, autosomal dominant 1; obesity, hyperphagia, and developmental delay (OBHD); Pick's disease; Prader–Willi syndrome; proopiomelanocortin deficiency; and Schaaf–Yang syndrome.
Polyphagia in diabetes:
Diabetes mellitus disrupts the body's ability to convert glucose from food into usable energy. Intake of food causes glucose levels to rise without a corresponding increase in available energy, which leads to a persistent sensation of hunger. Polyphagia usually occurs early in the course of diabetic ketoacidosis. However, once insulin deficiency becomes more severe and ketoacidosis develops, appetite is suppressed.
**Lumbosacral trunk**
Lumbosacral trunk:
The lumbosacral trunk is nervous tissue that connects the lumbar plexus with the sacral plexus. It is formed by the union of parts of the fourth and fifth lumbar nerves and descends to join the sacral plexus.
Anatomy:
The lumbosacral trunk is formed by the union of the entire anterior ramus of lumbar nerve L5 and a part of L4. L4 first issues its branches to the lumbar plexus, then emerges from the medial border of the psoas muscle to unite with the anterior ramus of L5 just superior to the pelvic brim to form the thick, cord-like trunk, which crosses the pelvic brim (medial to the obturator nerve) to descend upon the anterior surface of the ala of the sacrum before joining the sacral plexus. Like the sacral nerves, the lumbosacral trunk splits into an anterior division and a posterior division before recombining to form nerves for the flexor and extensor compartments of the lower limb.
Clinical significance:
The lumbosacral trunk may be compressed by the fetal head during the second stage of labour. This causes some muscle weakness in the legs. A full recovery is usually expected.
**Beta3-adrenergic agonist**
Beta3-adrenergic agonist:
β3 (beta-3) adrenergic receptor agonists, also known as β3-adrenoceptor agonists or β3-AR agonists, are a class of medicines that bind selectively to β3-adrenergic receptors.
Beta3-adrenergic agonist:
β3-AR agonists for the treatment of obesity and type 2 diabetes have been in development at many large pharmaceutical companies since the early 1990s, without an anti-obesity product successfully reaching the market. More recently, pharmaceutical companies have developed selective β3-AR agonists targeted at urinary incontinence, and in 2012 mirabegron (trade names Myrbetriq and Betmiga) became the first β3-AR agonist to be approved in the United States and Europe for the treatment of overactive bladder (OAB) syndrome.
Medical Uses:
As of 2018, only one β3-AR agonist was approved as a medicine by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA): mirabegron, which is used to treat OAB.
Medical Uses:
Urinary bladder: Mirabegron is a selective β3-AR agonist that acts on the detrusor muscle of the urinary bladder. Stimulation of β3-AR decreases contraction of the smooth muscle of the bladder, so the bladder can store a larger volume of urine at a given time. Mirabegron also decreases the frequency of non-voiding contractions.
Medical Uses:
As of 2018, two other β3-AR agonists were in clinical trials for OAB: vibegron, in phase 3 trials, and solabegron, in phase 2b trials in women and phase 1 trials in men.
Obesity and diabetes: The β3-AR has been linked to thermogenesis in human skeletal muscle, with studies showing it to be responsible for over 40% of ephedrine-induced thermogenesis.
Medical Uses:
Cardiovascular: In March 2016 a study funded by the European Commission began, assessing the efficacy of mirabegron in the prevention of heart failure. As of 2018 the study was ongoing and was expected to conclude in 2020. A selective β1-AR antagonist with additional β3-AR agonist activity, called nebivolol, is one of a few selective β1-AR antagonists known to also cause vasodilation. This peripheral vasodilation is mediated by endothelial nitric oxide release following β3-AR agonism, not by adrenergic receptor blockade. This means nebivolol exerts vasodilatory, cardioprotective effects without the additional side effects of adrenergic blockade seen with non-selective beta-blockers that concomitantly lower blood pressure. Nebivolol is therefore approved for hypertension therapy in the United States; however, beta-blockers are still not generally the first line of treatment for primary hypertension.
Mechanism of action:
β3-ARs are coupled with G proteins, both the Gs protein and the Gi protein. Gs protein coupling leads to increased activity of the enzyme adenylyl cyclase, and increased activity of adenylyl cyclase leads to increased formation of cyclic adenosine monophosphate (cAMP). β3-ARs can also couple with Gi proteins; when they do, intracellular cAMP decreases. Coupling of β3-AR with the Gi protein has been proposed as a mechanism of action in the heart: when coupled with the Gi protein, β3-ARs can act as a brake on β1- and β2-adrenergic receptors to prevent over-activation, by opposing the classical inotropic effect of the β1- and β2-adrenergic receptors. The smooth muscle cells in the urinary bladder express β3-AR; activation of β3-AR relaxes the detrusor muscle, and the relaxed detrusor muscle improves the filling capacity of the bladder and eases the urge to pass urine.
Mechanism of action:
Nitric oxide: Another mechanism by which β3-AR agonists exert their relaxant effects on the vasculature is by promoting endothelial nitric oxide synthase (eNOS) activity and NO bioavailability. This is believed to be the mechanism by which nebivolol, a selective β1-AR antagonist with additional β3-AR agonist activity, exerts its cardio-protective effects.
Structure Activity Relationships (SAR):
Basic structure: β3-AR agonists have the basic structure 2-amino-1-phenylethan-1-ol, but carry variations that affect the selectivity of the agonist.
Binding to the β3-adrenergic receptor: Visual inspection of selective β3-AR agonists revealed that they bind deep in the binding pocket of the receptor and exhibit hydrogen bonds and/or hydrophobic interactions with the receptor.
Activity of β3-adrenergic receptor agonists: The R-group in the accompanying structure determines α- versus β-adrenergic receptor selectivity; the larger the R-group, the greater the β-receptor selectivity.
Structure Activity Relationships (SAR):
The β-AR subtypes exhibit sequence similarity greater than 70%, suggesting that the three-dimensional structures of these subtypes are similar. While the overall sequences are about 70% identical, the residues of the ligand-binding pocket have an even higher similarity (75%–85%), making the development of highly selective ligands difficult. Unlike the other β-adrenergic receptors, the β3-AR has a higher affinity for ligands with a pyrimidine or m-chlorobenzyl ring than for catecholamines, which the other β-receptor subtypes prefer. Notable ligand binding sites include the hydrophobic interaction of the aromatic ring, attached to the β-hydroxyl chiral carbon on the left-hand side, with hydrophobic microdomains on TM3 and TM6 deep in the binding pocket. Additional hydrophobic side groups attached to this aromatic ring have been shown to increase hydrophobic contact in this region. The central hydroxyl group and the central protonated amine form strong hydrogen bonds with the TM7 and TM3 subunits. The hydrogen bonding of the central protonated amine to Y336 on TM7 of the β3-AR serves as an important binding site for the ligand, aligning it properly for the deeper hydrophobic interaction between the left-hand-side aromatic ring and TM3 and TM6. This interaction is consistent between the ligands. Most of the selective agonists have an aromatic ring or another hydrophobic region, around 2–3 carbons from the central protonated amine group, which interacts with the superficial extracellular (ECL2) domain on the receptor. The stereochemistry of this aromatic group and its interactions with the ECL2 affect the ability of the ligand to align properly in the deep binding pocket and are an important factor in the total affinity of the ligand. The addition of a proton-donating group (e.g. acid, amide) on the right-hand-side terminus contributes a strong bifurcated hydrogen bond to R315 of TM6. Ligands that do not have an acidic group have some other strong hydrogen-bonding group that interacts with R315, such as a thiazole. This binding site differs between the different β-receptor subtypes and contributes to β3 selectivity.
History of development:
In 1984 the β3 receptor was described in adipose tissue as a third group of beta receptors. This led to the development of agonists targeted at obesity and diabetes.
History of development:
In 1999 the function of β3 receptors in the detrusor muscle was defined, which opened the way for the development of β3-AR agonists for OAB. In 2001 mirabegron entered phase 1 clinical development; the indications were type 2 diabetes, lower urinary tract symptoms, OAB and bladder outlet obstruction. From 2004 to 2008 phase 2 clinical trials were performed; however, the development of mirabegron for type 2 diabetes was interrupted. In 2007 GlaxoSmithKline entered phase 1 clinical trials with solabegron with the indication of OAB, as well as a trial with the indication of irritable bowel syndrome (IBS). From 2009 to 2011 Astellas Pharma concluded phase 3 clinical trials of mirabegron for the treatment of OAB. In July 2012 mirabegron became the first β3-AR agonist to be approved by the FDA, and in October of the same year it was approved by the EMA. In 2011 Merck & Co. entered clinical trials with vibegron with the indication of OAB, and phase 3 clinical trials began in 2018. In 2018 solabegron, which had been acquired by Velicept Therapeutics, Inc., started phase 1 and phase 2 clinical trials in men and women, respectively, for the indication of OAB.
**Prime omega function**
Prime omega function:
In number theory, the prime omega functions ω(n) and Ω(n) count the number of prime factors of a natural number n.
Prime omega function:
Thereby ω(n) (little omega) counts each distinct prime factor, whereas the related function Ω(n) (big omega) counts the total number of prime factors of n, counted with multiplicity (see arithmetic function). That is, if n has a prime factorization of the form $n = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_k^{\alpha_k}$ for distinct primes $p_i$ ($1 \le i \le k$), then the respective prime omega functions are given by $\omega(n) = k$ and $\Omega(n) = \alpha_1 + \alpha_2 + \cdots + \alpha_k$. These prime-factor counting functions have many important number-theoretic relations.
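To make the definitions concrete, here is a minimal Python sketch (an illustration, not part of the article) that computes ω(n) and Ω(n) from a trial-division factorization, checks them on 12 = 2²·3, and checks the additivity properties discussed in the next section.

```python
from collections import Counter

def factorize(n: int) -> Counter:
    """Return the prime factorization of n as a Counter {prime: exponent}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def little_omega(n: int) -> int:
    """omega(n): the number of distinct prime factors of n."""
    return len(factorize(n))

def big_omega(n: int) -> int:
    """Omega(n): the number of prime factors of n counted with multiplicity."""
    return sum(factorize(n).values())

# 12 = 2^2 * 3, so omega(12) = 2 and Omega(12) = 3.
assert little_omega(12) == 2 and big_omega(12) == 3

# omega is additive and Omega is completely additive; check on the coprime pair 8, 15.
assert little_omega(8 * 15) == little_omega(8) + little_omega(15)
assert big_omega(8 * 15) == big_omega(8) + big_omega(15)
```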
Properties and relations:
The function ω(n) is additive and Ω(n) is completely additive.
$\omega(n) = \sum_{p \mid n} 1$. If p divides n at least once, it is counted only once, e.g. $\omega(12) = \omega(2^2 \cdot 3) = 2$.
$\Omega(n) = \sum_{p^{\alpha} \mid n} 1 = \sum_{p^{\alpha} \parallel n} \alpha$. If p divides n exactly $\alpha \ge 1$ times, then the exponent is counted, e.g. $\Omega(12) = \Omega(2^2 \cdot 3^1) = 3$. As usual, $p^{\alpha} \parallel n$ means that $\alpha$ is the exact power of p dividing n.
$\Omega(n) \ge \omega(n)$, with equality exactly when n is squarefree; in that case the functions are related to the Möbius function by $\mu(n) = (-1)^{\omega(n)} = (-1)^{\Omega(n)}$. If $\Omega(n) = 1$ then n is a prime number.
Properties and relations:
It is known that the divisor function satisfies $2^{\omega(n)} \le d(n) \le 2^{\Omega(n)}$. Like many arithmetic functions, neither $\Omega(n)$ nor $\omega(n)$ has an explicit closed formula, but there are approximations.
An asymptotic series for the average order of ω(n) is given by $\frac{1}{n} \sum_{k \le n} \omega(k) \sim \log\log n + B_1 + \sum_{k \ge 1} \frac{c_k}{(\log n)^k}$, where $B_1 \approx 0.26149721$ is the Mertens constant and the coefficients $c_k$ are expressible in terms of the Stieltjes constants $\gamma_j$.
The function ω(n) is related to divisor sums over the Möbius function and the divisor function, including the following sums.
$\sum_{d \mid n} |\mu(d)| = 2^{\omega(n)}$
$\sum_{d \mid n} |\mu(d)|\, k^{\omega(d)} = (k+1)^{\omega(n)}$
$\sum_{r \mid n} 2^{\omega(r)} = d(n^2)$
$\sum_{r \mid n} 2^{\omega(r)}\, d\!\left(\tfrac{n}{r}\right) = d(n)^2$
$\sum_{d \mid n} (-1)^{\omega(d)} = \prod_{p^{\alpha} \parallel n} (1 - \alpha)$
$\sum_{\substack{1 \le k \le n \\ \gcd(k,m)=1}} 1 = n\,\frac{\varphi(m)}{m} + O\!\left(2^{\omega(m)}\right)$
The characteristic function of the primes can be expressed by a convolution with the Möbius function: $\chi_{\mathbb{P}}(n) = (\mu * \omega)(n) = \sum_{d \mid n} \omega(d)\,\mu(n/d)$.
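Two of these divisor-sum identities are easy to check numerically. The following Python sketch (illustrative, not from the article) verifies $\sum_{d \mid n} |\mu(d)| = 2^{\omega(n)}$ and $\sum_{r \mid n} 2^{\omega(r)} = d(n^2)$ for all n up to 200.

```python
def factorize(n):
    """Prime factorization of n as a dict {prime: exponent}, by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def omega(n):
    """omega(n): number of distinct prime factors."""
    return len(factorize(n))

def mobius(n):
    """Moebius function: 0 if n is not squarefree, otherwise (-1)^omega(n)."""
    f = factorize(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for n in range(1, 201):
    assert sum(abs(mobius(d)) for d in divisors(n)) == 2 ** omega(n)
    assert sum(2 ** omega(r) for r in divisors(n)) == len(divisors(n * n))
print("identities verified for n <= 200")
```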
Properties and relations:
A partition-related exact identity for ω(n) is given by $\omega(n) = \log_2\!\left[\sum_{k=1}^{n} \sum_{j=1}^{k} \left(\sum_{d \mid k} \sum_{i=1}^{d} p(d - ji)\right) s_{n,k} \cdot |\mu(j)|\right]$, where p(n) is the partition function, μ(n) is the Möbius function, and the triangular sequence $s_{n,k}$ is expanded by $s_{n,k} = [q^n]\,(q; q)_{\infty}\, \frac{q^k}{1 - q^k} = s_o(n,k) - s_e(n,k)$, in terms of the infinite q-Pochhammer symbol and the restricted partition functions $s_{o/e}(n,k)$, which respectively denote the number of k's in all partitions of n into an odd (respectively even) number of distinct parts.
Continuation to the complex plane:
A continuation of ω(n) to the complex plane has been found, though it is not analytic everywhere. It uses the normalized sinc function, $\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$, applied, inside a base-2 logarithm, to products of the form $\prod_{y=1}^{\lceil \operatorname{Re}(z) \rceil + 1} \left(x^2 + x - yz\right)$.
Average order and summatory functions:
An average order of both ω(n) and Ω(n) is $\log\log n$. When n is prime, the minimum value of the function is attained: $\omega(n) = 1$. Similarly, if n is a primorial then the function is as large as $\omega(n) \sim \frac{\log n}{\log\log n}$. When n is a power of 2, $\Omega(n) = \frac{\log n}{\log 2}$.
Asymptotics for the summatory functions over ω(n), Ω(n), and ω(n)² are computed in Hardy and Wright as
$\sum_{n \le x} \omega(n) = x \log\log x + B_1 x + o(x)$,
$\sum_{n \le x} \Omega(n) = x \log\log x + B_2 x + o(x)$,
$\sum_{n \le x} \omega(n)^2 = x (\log\log x)^2 + O(x \log\log x)$,
and more generally $\sum_{n \le x} \omega(n)^k = x (\log\log x)^k + O\!\left(x (\log\log x)^{k-1}\right)$ for $k \in \mathbb{Z}^{+}$, where $B_1 \approx 0.2614972128$ is the Mertens constant and the constant $B_2$ is defined by $B_2 = B_1 + \sum_{p\ \mathrm{prime}} \frac{1}{p(p-1)} \approx 1.0345061758$.
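A quick numerical sanity check of the first of these estimates (an illustrative sketch, not from Hardy and Wright; the Mertens constant value is the one quoted above) compares the summatory function of ω against $x \log\log x + B_1 x$:

```python
import math

def omega_sieve(x):
    """omega(n) for every n <= x, via a sieve: add 1 to each multiple of each prime."""
    w = [0] * (x + 1)
    for p in range(2, x + 1):
        if w[p] == 0:               # p has no smaller prime factor, so p is prime
            for m in range(p, x + 1, p):
                w[m] += 1
    return w

B1 = 0.2614972128                   # Mertens constant, as quoted above
x = 10**6
w = omega_sieve(x)
summatory = sum(w)                  # sum of omega(n) for n <= x
approx = x * math.log(math.log(x)) + B1 * x
print(summatory, round(approx))     # the two values should be close
```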
Average order and summatory functions:
Other sums relating the two variants of the prime omega functions include $\sum_{n \le x} \{\Omega(n) - \omega(n)\} = O(x)$, together with a companion estimate comparing $\Omega(n) - \omega(n)$ against $(\log\log x)^{1/2}$.
Average order and summatory functions:
Example I: A modified summatory function. In this example we consider a variant of the summatory function $S_\omega(x) := \sum_{n \le x} \omega(n)$ estimated in the results above, for sufficiently large x, and derive an asymptotic formula for the growth of this modified summatory function from the asymptotic estimate of $S_\omega(x)$ provided in the formulas in the main subsection of this article above. To be completely precise, let the odd-indexed summatory function be defined as $S_{\omega,\mathrm{odd}}(x) := \sum_{n \le x} \omega(n)\,[n\ \mathrm{odd}]$, where $[\cdot]$ denotes the Iverson bracket. Then the Hardy–Wright estimate yields $S_{\omega,\mathrm{odd}}(x) = \tfrac{x}{2}\log\log x + O(x)$.
Average order and summatory functions:
The proof of this result follows by first observing that $\omega(2n) = \omega(n) + 1$ if n is odd and $\omega(2n) = \omega(n)$ if n is even, and then applying the asymptotic result from Hardy and Wright for the summatory function over ω(n), denoted by $S_\omega(x) := \sum_{n \le x} \omega(n)$, in the following form: $S_\omega(x) = S_{\omega,\mathrm{odd}}(x) + S_\omega\!\left(\lfloor x/2 \rfloor\right) + \lfloor x/4 \rfloor$.
Example II: Summatory functions for so-termed factorial moments of ω(n). The computations expanded in Chapter 22.11 of Hardy and Wright provide asymptotic estimates for the summatory function of $\omega(n)\{\omega(n) - 1\}$, by estimating this product of the two component omega functions as a sum over pairs of distinct primes dividing n, $\omega(n)\{\omega(n)-1\} = \sum_{\substack{p \ne q \\ p \mid n,\ q \mid n}} 1$.
We can similarly calculate asymptotic formulas more generally for the related summatory functions over so-termed factorial moments of the function ω(n).
Dirichlet series:
A known Dirichlet series involving ω(n) and the Riemann zeta function is given by $\sum_{n \ge 1} \frac{2^{\omega(n)}}{n^s} = \frac{\zeta(s)^2}{\zeta(2s)}$, for $\Re(s) > 1$.
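A quick partial-sum check of this identity at s = 2 (a sketch, for illustration only): since $\zeta(2) = \pi^2/6$ and $\zeta(4) = \pi^4/90$, the right-hand side equals exactly 5/2.

```python
import math

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

s, N = 2, 20000
lhs = sum(2 ** omega(n) / n ** s for n in range(1, N + 1))   # truncated Dirichlet series
rhs = (math.pi ** 2 / 6) ** 2 / (math.pi ** 4 / 90)          # zeta(2)^2 / zeta(4) = 2.5
print(lhs, rhs)   # the partial sum approaches 2.5 from below
```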
Dirichlet series:
We can also see that
$\sum_{n \ge 1} \frac{z^{\omega(n)}}{n^s} = \prod_p \left(1 + \frac{z}{p^s - 1}\right)$, for $|z| < 2$, $\Re(s) > 1$,
$\sum_{n \ge 1} \frac{z^{\Omega(n)}}{n^s} = \prod_p \left(1 - \frac{z}{p^s}\right)^{-1}$, for $|z| < 2$, $\Re(s) > 1$.
The function Ω(n) is completely additive, whereas ω(n) is strongly additive (additive). Now we can prove a short lemma in the following form, which implies exact formulas for the expansions of the Dirichlet series over both ω(n) and Ω(n):
Lemma. Suppose that f is a strongly additive arithmetic function defined such that its values at prime powers are given by $f(p^{\alpha}) := f_0(p, \alpha)$, i.e., $f(p_1^{\alpha_1} \cdots p_k^{\alpha_k}) = f_0(p_1, \alpha_1) + \cdots + f_0(p_k, \alpha_k)$ for distinct primes $p_i$ and exponents $\alpha_i \ge 1$. The Dirichlet series of f is expanded by $\sum_{n \ge 1} \frac{f(n)}{n^s} = \zeta(s) \times \sum_{p\ \mathrm{prime}} (1 - p^{-s}) \sum_{n \ge 1} f_0(p, n)\, p^{-ns}$, for $\Re(s) > \min(1, \sigma_f)$.
Dirichlet series:
Proof. We can see that $\sum_{n \ge 1} \frac{u^{f(n)}}{n^s} = \prod_{p\ \mathrm{prime}} \left(1 + \sum_{n \ge 1} u^{f_0(p,n)}\, p^{-ns}\right)$.
This implies that
$\sum_{n \ge 1} \frac{f(n)}{n^s} = \frac{d}{du}\left[\prod_{p\ \mathrm{prime}} \left(1 + \sum_{n \ge 1} u^{f_0(p,n)}\, p^{-ns}\right)\right]\Bigg|_{u=1} = \prod_{p} \left(1 + \sum_{n \ge 1} p^{-ns}\right) \times \sum_{p} \frac{\sum_{n \ge 1} f_0(p,n)\, p^{-ns}}{1 + \sum_{n \ge 1} p^{-ns}} = \zeta(s) \times \sum_{p\ \mathrm{prime}} (1 - p^{-s}) \cdot \sum_{n \ge 1} f_0(p,n)\, p^{-ns}$,
wherever the corresponding series and products are convergent. In the last equation, we have used the Euler product representation of the Riemann zeta function. ⊡
The lemma implies that for $\Re(s) > 1$,
$D_\omega(s) := \sum_{n \ge 1} \frac{\omega(n)}{n^s} = \zeta(s)\, P(s)$ and $D_\Omega(s) := \sum_{n \ge 1} \frac{\Omega(n)}{n^s} = \zeta(s) \sum_{k \ge 1} P(ks)$,
where P(s) is the prime zeta function and $\lambda(n) = (-1)^{\Omega(n)}$ is the Liouville lambda function, whose Dirichlet series $\sum_{n \ge 1} \lambda(n)\, n^{-s} = \zeta(2s)/\zeta(s)$ follows from the second product above at $z = -1$.
The distribution of the difference of prime omega functions:
The distribution of the distinct integer values of the differences Ω(n) − ω(n) is regular in comparison with the semi-random properties of the component functions. For $k \ge 0$, define $N_k(x) := \#\bigl(\{n \in \mathbb{Z}^{+} : \Omega(n) - \omega(n) = k\} \cap [1, x]\bigr)$.
These cardinalities have a corresponding sequence of limiting densities $d_k$ such that, for $x \ge 2$, $N_k(x) = d_k \cdot x$ up to an error term of strictly smaller order involving a power of $\log x$.
These densities are generated by the prime products $\sum_{k \ge 0} d_k \cdot z^k = \prod_p \left(1 - \frac{1}{p}\right)\left(1 + \frac{1}{p - z}\right)$.
With the absolute constant $\hat{c} := \frac{1}{4} \times \prod_{p > 2} \left(1 - \frac{1}{(p-1)^2}\right)^{-1}$, the densities $d_k$ satisfy $d_k = \hat{c} \cdot 2^{-k} + O(5^{-k})$.
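Setting z = 0 in the generating product above gives $d_0 = \prod_p (1 - 1/p^2) = 6/\pi^2$, the density of squarefree numbers (Ω(n) = ω(n) exactly when n is squarefree). A short empirical sketch, for illustration only:

```python
import math
from collections import Counter

def excess(n):
    """Omega(n) - omega(n): prime factors with multiplicity minus distinct ones."""
    total, distinct, d = 0, 0, 2
    while d * d <= n:
        if n % d == 0:
            distinct += 1
            while n % d == 0:
                total += 1
                n //= d
        d += 1
    if n > 1:
        total += 1
        distinct += 1
    return total - distinct

x = 10**5
counts = Counter(excess(n) for n in range(1, x + 1))
print(counts[0] / x, 6 / math.pi**2)   # empirical vs. exact density for k = 0
for k in range(5):                      # the densities fall off roughly like 2^-k
    print(k, counts[k] / x)
```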
Compare with the similar prime products that arise in relation to the Erdős–Kac theorem.
**Quantum tunnelling composite**
Quantum tunnelling composite:
Quantum tunnelling composites (QTCs) are composite materials of metals and a non-conducting elastomeric binder, used as pressure sensors. They use quantum tunnelling: without pressure, the conductive elements are too far apart to conduct electricity; when pressure is applied, they move closer together and electrons can tunnel through the insulator. The effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling varies exponentially with decreasing distance, allowing the resistance to change by a factor of up to 10¹² between pressured and unpressured states. Quantum tunneling composites hold multiple designations in the specialized literature, such as conductive/semi-conductive polymer composite, piezo-resistive sensor and force-sensing resistor (FSR). However, in some cases force-sensing resistors may operate predominantly under a percolation regime; this implies that the composite resistance grows with incremental applied stress or force.
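As a purely illustrative numerical sketch of that contrast (the decay constant and gap values below are arbitrary, not measured material data), compare an exponentially varying tunnelling-type resistance with a linearly varying classical one as the gap between conductive particles closes:

```python
import math

beta = 20.0                           # assumed tunnelling decay constant, per nm (illustrative)
gaps_nm = [2.0, 1.5, 1.0, 0.5, 0.1]   # particle separation shrinking as pressure is applied

for gap in gaps_nm:
    tunnelling = math.exp(beta * gap)   # relative resistance, exponential in the gap
    classical = gap                     # relative resistance, linear in the gap
    print(f"gap {gap:4.1f} nm   tunnelling ~ {tunnelling:12.3e}   classical ~ {classical:4.1f}")
```

Over this range the exponential model changes by many orders of magnitude while the linear one changes only twenty-fold, which is the qualitative point made above.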
Introduction:
QTCs were discovered in 1996 by technician David Lussey while he was searching for a way to develop an electrically conductive adhesive. Lussey founded Peratech Ltd, a company devoted to research on and applications of QTCs. Peratech Ltd. and other companies are working on developing quantum tunneling composites to improve touch technology. Currently, use of QTC is restricted due to its high cost, but eventually this technology is expected to become available to the general user. Quantum tunneling composites are combinations of polymer composites with elastic, rubber-like properties (elastomers) and metal particles (nickel). Because there is no air gap in the sensor, contamination of or interference between the contact points is impossible. There is also little to no chance of arcing (electrical sparks) between contact points. In the QTC's inactive state, the conductive elements are too far from one another to pass electron charges; thus, current does not flow when there is no pressure on the quantum-tunneling composite. A characteristic of a QTC is its spiky, silicone-covered surface. The spikes do not actually touch, but when a force is applied to the QTC, the spikes move closer to each other and a quantum effect occurs as a high concentration of electrons flows from one spike tip to the next. The electric current stops when the force is taken away.
Types:
QTCs come in different forms, and each form is used differently but shows a similar resistance change when deformed. QTC pills are the most commonly used type of QTC. Pills are pressure-sensitive variable resistors; the amount of electric current passed varies exponentially with the amount of pressure applied. QTC pills can be used as input sensors which respond to an applied force. These pills can also be used in devices to control higher currents than QTC sheets. QTC sheets are composed of three layers: a thin layer of QTC material, a conductive material and a plastic insulator. QTC sheets allow a quick switch from high to low resistance and vice versa.
Applications:
In February 2008 the newly formed company QIO Systems Inc gained, in a deal with Peratech, the worldwide exclusive license to the intellectual property and design rights for electronics and textile touchpads based on QTC technology, and for the manufacture and sale of ElekTex (QTC-based) textile touchpads for use in both consumer and commercial applications. QTCs were used to provide fingertip sensitivity in NASA's Robonaut in 2012. Robonaut was able to survive and send detailed feedback from space; the sensors on the human-like robot were able to tell how hard and where it was gripping something. Quantum tunneling composites are relatively new and are still being researched and developed. QTC has been implemented within clothing to make "smart", touchable membrane control panels to control electronic devices within clothing, e.g. mp3 players or mobile phones. This allows equipment to be operated without removing clothing layers or opening fastenings and makes standard equipment usable in extreme weather or environmental conditions such as Arctic/Antarctic exploration or spacesuits.
Applications:
The following are possible uses of QTCs: Sporting materials such as training dummies or fencing jackets can be covered in QTC material. Sensors on the material can relay information on the force of an impact.
Mirror and window operation such as gesture, stroke, or swipe can be used in automotive applications. Depending on the amount of pressure applied from the gesture, the car parts will adjust to the desired setting at either a fast speed or a slow speed. The more pressure is applied, the faster the operation will be.
Blood pressure cuffs: QTCs in blood pressure cuffs reduce inaccurate readings from improper cuff attachment. The sensors tell how much tension is needed to read a person's blood pressure.
**BAM15**
BAM15:
BAM15 is a novel mitochondrial protonophore uncoupler capable of protecting mammals from acute renal ischemic-reperfusion injury and cold-induced microtubule damage.
**Interrogative word**
Interrogative word:
An interrogative word or question word is a function word used to ask a question, such as what, which, when, where, who, whom, whose, why, whether and how. They are sometimes called wh-words, because in English most of them start with wh- (compare Five Ws). They may be used in both direct questions (Where is he going?) and in indirect questions (I wonder where he is going). In English and various other languages the same forms are also used as relative pronouns in certain relative clauses (The country where he was born) and certain adverb clauses (I go where he goes). It can also be used as a modal, since question words are more likely to appear in modal sentences, like (Why was he walking?) A particular type of interrogative word is the interrogative particle, which serves to convert a statement into a yes–no question, without having any other meaning. Examples include est-ce que in French, ли li in Russian, czy in Polish, чи chy in Ukrainian, ĉu in Esperanto, āyā آیا in Persian, কি ki in Bengali, 嗎/吗 ma in Mandarin Chinese, mı/mi in Turkish, pa in Ladin, か ka in Japanese, 까 kka in Korean, ko/kö in Finnish and (да) ли (da) li in Serbo-Croatian. "Is it true that..." and "... right?" would be a similar construct in English. Such particles contrast with other interrogative words, which form what are called wh-questions rather than yes–no questions.
Interrogative word:
For more information about the grammatical rules for using formed questions in various languages, see Interrogative.
In English:
Interrogative words in English can serve as interrogative determiners, interrogative pronouns, or interrogative adverbs. Certain pronominal adverbs may also be used as interrogative words, such as whereby or wherefore.
In English:
Interrogative determiner: The interrogative words which, what, and whose are interrogative determiners when used to prompt the specification of a presented noun or noun phrase, such as in the question Which farm is the largest?, where the interrogative determiner which prompts specification of the noun farm. In the question Whose gorgeous, pink painting is that?, whose is the interrogative, personal, possessive determiner prompting a specification of the possessor of the noun phrase gorgeous pink painting.
In English:
Interrogative pronoun: The interrogative words who, whom, whose, what, and which are interrogative pronouns when used in the place of a noun or noun phrase. In the question Who is the leader?, the interrogative word who is an interrogative pronoun because it stands in the place of the noun or noun phrase the question prompts (e.g. the king or the woman with the crown). Similarly, in the question Which leads to the city center?, the interrogative word which is an interrogative pronoun because it stands in the place of a noun or noun phrase (e.g. the road to the north or the river to your east). Note that which here is an interrogative pronoun, not an interrogative determiner, because there is no noun or noun phrase present for it to determine. Consequently, in the question Which leads to the city center? the word which is an interrogative pronoun, whereas in the question Which road leads to the city center? the word which is an interrogative determiner for the noun road.
In English:
Interrogative adverb: The interrogative words where, when, how, why, whether, whatsoever, and the more archaic whither and whence are interrogative adverbs when they modify a verb. In the question How did you announce the deal?, the interrogative word how is an interrogative adverb because it modifies the verb did (past tense of to do). In the question Why should I read that book?, the interrogative word why is an interrogative adverb because it modifies the verb should.
In English:
Note, interrogative adverbs always describe auxiliary verbs such as did, do, should, will, must, or might.
In English:
Yes–no questions: Yes–no questions can begin with an interrogative particle, such as: a conjugation of be (e.g. "Are you hungry?"); a conjugation of do (e.g. "Do you want fries?") – see Do-support § In questions; or a conjugation of another auxiliary verb, including contractions (e.g. "Can't you move any faster?"). English questions can also be formed without an interrogative word as the first word, by changing the intonation or punctuation of a statement. For example: "You're done eating?"
Etymology: Ultimately, the English interrogative pronouns (those beginning with wh in addition to the word how) derive from the Proto-Indo-European root kwo- or kwi, the former of which was reflected in Proto-Germanic as χwa- or khwa-, due to Grimm's law.
In English:
These underwent further sound changes and spelling changes, notably wh-cluster reductions, resulting in the initial sound being either /w/ (in most dialects) or /h/ (how, who) and the initial spelling being either wh or h (how). This was the result of two sound changes – /hw/ > /h/ before /uː/ (how, who) and /hw/ > /w/ otherwise – and the spelling change from hw to wh in Middle English. The unusual pronunciation versus spelling of who is because the vowel was formerly /aː/, and thus it did not undergo the sound change in Old English, but in Middle English (following spelling change) the vowel changed to /uː/ and it followed the same sound change as how before it, but with the Middle English spelling unchanged.
In English:
In how (Old English hū, from Proto-Germanic χwō), the w merged into the rest of the word, as it did in Old Frisian hū, hō (Dutch hoe "how"), but it can still be seen in Old Saxon hwō and Old High German hwuo (German wie "how"). In English, the gradual change of voiceless stops into voiceless fricatives (phase 1 of Grimm's law) during the development of the Germanic languages is responsible for the "wh-" of interrogatives. Although some varieties of American English and various Scottish dialects still preserve the original sound (i.e. [ʍ] rather than [w]), most have only the [w].
In English:
The words who, whom, whose, what and why, can all be considered to come from a single Old English word hwā, reflecting its masculine and feminine nominative (hwā), dative (hwām), genitive (hwæs), neuter nominative and accusative (hwæt), and instrumental (masculine and neuter singular) (hwȳ, later hwī) respectively. Other interrogative words, such as which, how, where, whence, or whither, derive either from compounds (which coming from a compound of hwā [what, who] and līc [like]), or other words from the same root (how deriving from hū).
In English:
The Proto-Indo-European root also directly originated the Latin and Romance form qu- in words such as Latin quī ("which") and quando ("when"); it has also undergone sound and spelling changes, as in French qui "which", with initial /k/, and Spanish cuando, with initial /kw/.
In English:
Forms with -ever: Most English interrogative words can take the suffix -ever, to form words such as whatever and wherever. (Older forms of the suffix are -so and -soever, as in whoso and whomsoever.) These words have the following main meanings: as more emphatic interrogative words, often expressing disbelief or puzzlement in mainly rhetorical questions (Whoever could have done such a thing? Wherever has he gone?); and to form free relative clauses, as in I'll do whatever you do, Whoever challenges us shall be punished, Go to wherever they go. In this use, the nominal -ever words (who(m)ever, whatever, whichever) can be regarded as indefinite pronouns or as relative pronouns.
In English:
They are also used to form adverbial clauses with the meaning "no matter where/who/etc.": Wherever they hide, I will find them. Some of these words have also developed independent meanings, such as however as an adverb meaning "nonetheless"; whatsoever as an emphatic adverb used with no, none, any, nothing, etc. (I did nothing wrong whatsoever); and whatever in its slang usage.
Other languages:
A frequent class of interrogative words in several other languages is the interrogative verb, found for example in Korean and Mongolian.
Australian Aboriginal languages: Interrogative pronouns in Australian Aboriginal languages are a diverse set of lexical items with functions extending far beyond simply the formation of questions (though this is one of their uses). These pronominal stems are sometimes called ignoratives or epistememes because their broader function is to convey differing degrees of perceptual or epistemic certainty. Often, a single ignorative stem may serve a variety of interrogative functions that would be expressed by different lexical items in, say, English, through contextual variation and interaction with other morphology such as case-marking. In Jingulu, for example, the single stem nyamba may come to mean 'what', 'where', 'why', or 'how' through combination with locative, dative, ablative, and instrumental case suffixes (examples adapted from Pensalfini). Other closely related languages, however, have less interrelated ways of forming wh-questions, with separate lexemes for each of these wh-pronouns. This includes Wardaman, which has a collection of entirely unrelated interrogative stems: yinggiya 'who', ngamanda 'what', guda 'where', nyangurlang 'when', gun.garr-ma 'how many/what kind'. Mushin (1995) and Verstraete (2018) provide detailed overviews of the broader functions of ignoratives in an array of languages. The latter focuses on the lexeme ngaani in many Paman languages, which can have a wh-like interrogative function but can also have a sense of epistemic indefiniteness or uncertainty like 'some' or 'perhaps'; Verstraete (2018) gives examples from Umpithamu illustrating its wh-question, adnominal/determiner, and adverbial uses.
**Tetramethylammonium pentafluoroxenate**
Tetramethylammonium pentafluoroxenate:
Tetramethylammonium pentafluoroxenate is the chemical compound with the formula N(CH3)4XeF5. The XeF5− ion it contains was the first example of a species with pentagonal planar AX5E2 molecular geometry. It was prepared by the reaction of N(CH3)4F with xenon tetrafluoride, N(CH3)4F being chosen because it can be prepared in anhydrous form and is readily soluble in organic solvents. The anion is planar, with the fluorine atoms in a slightly distorted pentagonal coordination (Xe–F bond lengths 197.9–203.4 pm, and F–Xe–F bond angles 71.5°–72.3°). Other salts have been prepared with sodium, cesium and rubidium, and vibrational spectra show that these contain the same planar ion. The isolated anion has D5h point-group symmetry.
**Felicity effect**
Felicity effect:
The Felicity effect, in physics, is an effect observed during acoustic emission from a structure undergoing repeated mechanical loading. It negates the emission silence that, according to the related Kaiser effect, is otherwise often observed in the structure at high loads. A material demonstrating the Felicity effect gives off acoustic emission at a lower load than one previously reached in an increasing-load cycle regime.
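A minimal sketch of this criterion (the load values below are hypothetical, for illustration only): acoustic emission that resumes below the previously reached maximum load indicates the Felicity effect.

```python
def shows_felicity_effect(previous_max_load: float, emission_onset_load: float) -> bool:
    """True if acoustic emission resumes at a lower load than the previous maximum."""
    return emission_onset_load < previous_max_load

previous_max = 120.0       # hypothetical maximum load reached in the last cycle (kN)
onset_on_reload = 95.0     # hypothetical load at which emission restarts on reloading (kN)
print(shows_felicity_effect(previous_max, onset_on_reload))   # True -> Felicity effect
```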
**Do3D**
Do3D:
Do3D was one of the first virtual reality software packages released for Microsoft Windows, and the first consumer product released by Superscape, in 1998.
The purpose of the program was to let users create their own virtual worlds, walk around them, and embed them in webpages with special plug-ins. It used low-polygon, simple 3D graphics, with the possibility of adding one's own textures and colors to pre-made objects, or to construction blocks for custom buildings.
Its dedicated website, Do3D.com, is now offline. The "robots.txt" file present on the website's server prevented the pages from being archived by the Wayback Machine.
**SwitchUp**
SwitchUp:
SwitchUp is an online platform for researching coding and computing programs. Students use the website to research online and offline programming courses by reading alumni reviews, connecting with mentors in the forum, taking an online quiz, and reading industry studies. SwitchUp only accepts reviews from verified alumni and has a verification process.
History:
SwitchUp was started after Jonathan Lau, an MIT alum, attended a coding bootcamp in Boston. He launched SwitchUp with a small team, and left the first code bootcamp review on the site.
SwitchUp aims to add transparency to the technology education industry. The website was launched in August 2014.
Product:
As of October 2020, the site had over 20,000 reviews of 1,000 different programming bootcamps and courses across 30 different countries. SwitchUp guides students on a career path, recommends bootcamps, and aggregates alumni reviews.
Research Publications and Rankings:
SwitchUp regularly publishes industry research and bootcamp rankings. They also put out data science, cyber security, and web design rankings.
Research Publications and Rankings:
They also offer scholarship information and listings for bootcamps that accept the GI Bill. In a job outcomes study conducted by researchers and published on Dec 1, 2016, the following trends were found: 63% of graduates reported an increase in salary; 80% of graduates reported they were 'satisfied' or 'very satisfied'; the average class size was 30 students with a 1-to-3.8 student-instructor ratio; and a one-tailed paired-difference test showed that the increase in salary was statistically significant at the 95% level. SwitchUp also published the 2018 best coding bootcamp rankings on December 31, 2018.
**Bartter syndrome**
Bartter syndrome:
Bartter syndrome (BS) is a rare inherited disease characterised by a defect in the thick ascending limb of the loop of Henle, which results in low potassium levels (hypokalemia), increased blood pH (alkalosis), and normal to low blood pressure. There are two types of Bartter syndrome: neonatal and classic. A closely associated disorder, Gitelman syndrome, is milder than both subtypes of Bartter syndrome.
Signs and symptoms:
In 90% of cases, neonatal Bartter syndrome is seen between 24 and 30 weeks of gestation with excess amniotic fluid (polyhydramnios). After birth, the infant is seen to urinate and drink excessively (polyuria and polydipsia, respectively). Life-threatening dehydration may result if the infant does not receive adequate fluids. About 85% of infants dispose of excess amounts of calcium in the urine (hypercalciuria) and kidneys (nephrocalcinosis), which may lead to kidney stones. On rare occasions, the infant may progress to kidney failure. Patients with classic Bartter syndrome may have symptoms in the first two years of life, but they are usually diagnosed at school age or later. Like infants with the neonatal subtype, patients with classic Bartter syndrome also have polyuria, polydipsia, and a tendency to dehydration, but normal or just slightly increased urinary calcium excretion without the tendency to develop kidney stones. These patients also have vomiting and growth retardation. Kidney function is also normal if the disease is treated, but occasionally patients proceed to end-stage kidney failure.
Signs and symptoms:
Bartter syndrome consists of low levels of potassium in the blood, alkalosis, normal to low blood pressures, and elevated plasma renin and aldosterone. Numerous causes of this syndrome probably exist. Diagnostic pointers include high urinary potassium and chloride despite low serum values, increased plasma renin, hyperplasia of the juxtaglomerular apparatus on kidney biopsy, and careful exclusion of diuretic abuse. Excess production of prostaglandins by the kidneys is often found. Magnesium wasting may also occur. Homozygous patients experience severe hypercalciuria and nephrocalcinosis.
Pathophysiology:
Bartter syndrome is caused by mutations of genes encoding proteins that transport ions across renal cells in the thick ascending limb of the nephron, also called the ascending loop of Henle. Specifically, mutations directly or indirectly involving the Na-K-2Cl cotransporter are key. The Na-K-2Cl cotransporter is involved in electroneutral transport of one sodium, one potassium, and two chloride ions across the apical membrane of the tubule. The basolateral calcium-sensing receptor has the ability to downregulate the activity of this transporter upon activation. Once transported into the tubule cells, sodium ions are actively transported across the basolateral membrane by Na+/K+-ATPases, and chloride ions pass by facilitated diffusion through basolateral chloride channels. Potassium, however, is able to diffuse back into the tubule lumen through apical potassium channels, returning a net positive charge to the lumen and establishing a positive voltage between the lumen and the interstitial space. This charge gradient is obligatory for the paracellular reabsorption of both calcium and magnesium ions. Proper function of all of these transporters is necessary for normal ion reabsorption along the thick ascending limb, and loss of any component can result in functional inactivation of the system as a whole and lead to the presentation of Bartter syndrome. Loss of function of this reabsorption system results in decreased sodium, potassium, and chloride reabsorption in the thick ascending limb, as well as abolishment of the lumen-positive voltage, resulting in decreased calcium and magnesium reabsorption. Loss of reabsorption of sodium here also has the undesired effect of abolishing the hypertonicity of the renal medulla, severely impairing the ability to reabsorb water later in the distal nephron and collecting duct system, leading to significant diuresis and the potential for volume depletion. Finally, the increased sodium load to the distal nephron elicits compensatory reabsorption mechanisms, albeit at the expense of potassium, which is excreted by principal cells, resulting in hypokalemia. This increased potassium excretion is partially compensated by α-intercalated cells at the expense of hydrogen ions, leading to metabolic alkalosis. Bartter and Gitelman syndromes can be divided into different subtypes based on the genes involved.
Diagnosis:
People with Bartter syndrome present symptoms that are identical to those of patients who are on loop diuretics like furosemide, given that the loop diuretics target the exact transport protein that is defective in the syndrome (at least for type 1 Bartter syndrome). The other subtypes of the syndrome involve mutations in other transporters that result in functional loss of the target transporter. Patients often admit to a personal preference for salty foods. The clinical findings characteristic of Bartter syndrome are hypokalemia, metabolic alkalosis, and normal to low blood pressure. These findings may also be caused by other conditions, which may cause confusion. When diagnosing Bartter syndrome, the following conditions must be ruled out as possible causes of the symptomatology: Chronic vomiting: these patients will have low urine chloride levels, whereas Bartter syndrome patients have relatively high urine chloride levels.
Diagnosis:
Abuse of diuretic medications (water pills): The physician must screen urine for multiple diuretics before a diagnosis is made.
Magnesium deficiency and calcium deficiency: these patients will also have low serum and urine magnesium and calcium. Patients with Bartter syndrome may also have elevated renin and aldosterone levels. Prenatal Bartter syndrome can be associated with polyhydramnios.
Diagnosis:
Related conditions: Bartter and Gitelman syndromes are both characterized by low levels of potassium and magnesium in the blood, normal to low blood pressure, and hypochloremic metabolic alkalosis. However, Bartter syndrome is also characterized by high renin, high aldosterone, hypercalciuria, and an abnormal Na+-K+-2Cl− transporter in the thick ascending limb of the loop of Henle, whereas Gitelman syndrome causes hypocalciuria and is due to an abnormal thiazide-sensitive transporter in the distal segment.
Diagnosis:
Pseudo-Bartter's syndrome is a syndrome of similar presentation as Bartter syndrome but without any of its characteristic genetic defects. Pseudo-Bartter's syndrome has been seen in cystic fibrosis, as well as in excessive use of laxatives.
Treatment:
Medically supervised sodium, chloride and potassium supplementation is necessary, and spironolactone can also be used to reduce potassium loss. Free and unrestricted access to water is necessary to prevent dehydration, as patients maintain an appropriate thirst response. In severe cases where supplementation alone cannot maintain biochemical homeostasis, nonsteroidal anti-inflammatory drugs (NSAIDs) can be used to reduce glomerular filtration and can be very useful, although they may cause gastric irritation and should be administered alongside stomach acid suppression therapies. Angiotensin-converting enzyme (ACE) inhibitors can also be used to reduce the glomerular filtration rate. In young babies and children, a low threshold to check serum electrolytes during periods of illness compromising fluid intake is necessary. Surveillance renal ultrasound should be employed to monitor for the development of nephrocalcinosis, a common complication which further augments urinary concentrating difficulty.
Prognosis:
The limited prognostic information available suggests that early diagnosis and appropriate treatment of infants and young children with classic Bartter Syndrome may improve growth and perhaps intellectual development. On the other hand, sustained hypokalemia and hyperreninemia can cause progressive tubulointerstitial nephritis, resulting in end-stage kidney disease (kidney failure). With early treatment of the electrolyte imbalances, the prognosis for patients with classic Bartter Syndrome is good.
History:
The condition is named after Dr. Frederic Bartter, who, along with Dr. Pacita Pronove, first described it in 1960 and in more patients in 1962.
**Václav Chvátal**
Václav Chvátal:
Václav (Vašek) Chvátal (Czech: [ˈvaːtslaf ˈxvaːtal]) is a Professor Emeritus in the Department of Computer Science and Software Engineering at Concordia University in Montreal, Quebec, Canada, and a visiting professor at Charles University in Prague. He has published extensively on topics in graph theory, combinatorics, and combinatorial optimization.
Biography:
Chvátal was born in 1946 in Prague and educated in mathematics at Charles University in Prague, where he studied under the supervision of Zdeněk Hedrlín. He fled Czechoslovakia in 1968, three days after the Soviet invasion, and completed his Ph.D. in Mathematics at the University of Waterloo, under the supervision of Crispin St. J. A. Nash-Williams, in the fall of 1970. Subsequently, he took positions at McGill University (1971 and 1978–1986), Stanford University (1972 and 1974–1977), the Université de Montréal (1972–1974 and 1977–1978), and Rutgers University (1986-2004) before returning to Montreal for the Canada Research Chair in Combinatorial Optimization at Concordia (2004-2011) and the Canada Research Chair in Discrete Mathematics (2011-2014) till his retirement.
Research:
Chvátal first learned of graph theory in 1964, on finding a book by Claude Berge in a Pilsen bookstore, and much of his research involves graph theory. His first mathematical publication, at the age of 19, concerned directed graphs that cannot be mapped to themselves by any nontrivial graph homomorphism. Another graph-theoretic result of Chvátal was the 1970 construction of the smallest possible triangle-free graph that is both 4-chromatic and 4-regular, now known as the Chvátal graph.
Research:
A 1972 paper relating Hamiltonian cycles to the connectivity and maximum independent set size of a graph earned Chvátal his Erdős number of 1. Specifically, if there exists an s such that a given graph is s-vertex-connected and has no (s + 1)-vertex independent set, the graph must be Hamiltonian. Avis et al. tell the story of Chvátal and Erdős working out this result over the course of a long road trip, and later thanking Louise Guy "for her steady driving." In a 1973 paper, Chvátal introduced the concept of graph toughness, a measure of graph connectivity that is closely connected to the existence of Hamiltonian cycles. A graph is t-tough if, for every k greater than 1, the removal of fewer than tk vertices leaves fewer than k connected components in the remaining subgraph. For instance, in a graph with a Hamiltonian cycle, the removal of any nonempty set of vertices partitions the cycle into at most as many pieces as the number of removed vertices, so Hamiltonian graphs are 1-tough. Chvátal conjectured that 3/2-tough graphs, and later that 2-tough graphs, are always Hamiltonian; despite later researchers finding counterexamples to these conjectures, it still remains open whether some constant bound on the graph toughness is enough to guarantee Hamiltonicity. Some of Chvátal's work concerns families of sets, or equivalently hypergraphs, a subject already occurring in his Ph.D. thesis, where he also studied Ramsey theory.
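A small brute-force illustration of the Chvátal–Erdős condition described above (a sketch, not Chvátal's own code; it uses networkx for vertex connectivity and exhaustive search for the independence number and for a Hamiltonian cycle, so it is practical only for small graphs such as K3,3):

```python
from itertools import combinations, permutations
import networkx as nx

def independence_number(G):
    """Size of the largest independent set, by exhaustive search (small graphs only)."""
    nodes = list(G.nodes)
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if all(not G.has_edge(u, v) for u, v in combinations(subset, 2)):
                return size
    return 0

def is_hamiltonian(G):
    """Look for a Hamiltonian cycle by trying every vertex ordering (small graphs only)."""
    first, *rest = list(G.nodes)
    for perm in permutations(rest):
        cycle = [first, *perm]
        if all(G.has_edge(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))):
            return True
    return False

G = nx.complete_bipartite_graph(3, 3)   # K_{3,3}
kappa = nx.node_connectivity(G)          # vertex connectivity: 3
alpha = independence_number(G)           # independence number: 3
# kappa >= alpha, so the Chvatal-Erdos condition holds and the graph must be Hamiltonian.
print(kappa, alpha, is_hamiltonian(G))
```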
Research:
In a 1972 conjecture that Erdős called "surprising" and "beautiful", and that remains open (with a $10 prize offered by Chvátal for its solution) he suggested that, in any family of sets closed under the operation of taking subsets, the largest pairwise-intersecting subfamily may always be found by choosing an element of one of the sets and keeping all sets containing that element.
Research:
In 1979, he studied a weighted version of the set cover problem, and proved that a greedy algorithm provides good approximations to the optimal solution, generalizing previous unweighted results by David S. Johnson (J. Comp. Sys. Sci. 1974) and László Lovász (Discrete Math. 1975). Chvátal first became interested in linear programming through the influence of Jack Edmonds while Chvátal was a student at Waterloo. He quickly recognized the importance of cutting planes for attacking combinatorial optimization problems such as computing maximum independent sets and, in particular, introduced the notion of a cutting-plane proof. At Stanford in the 1970s, he began writing his popular textbook, Linear Programming, which was published in 1983. Cutting planes lie at the heart of the branch and cut method used by efficient solvers for the traveling salesman problem. Between 1988 and 2005, the team of David L. Applegate, Robert E. Bixby, Vašek Chvátal, and William J. Cook developed one such solver, Concorde. The team was awarded the Beale–Orchard-Hays Prize for Excellence in Computational Mathematical Programming in 2000 for their ten-page paper enumerating some of Concorde's refinements of the branch and cut method that led to the solution of a 13,509-city instance, and it was awarded the Frederick W. Lanchester Prize in 2007 for their book, The Traveling Salesman Problem: A Computational Study.
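A minimal sketch of the greedy rule analysed in that 1979 paper, in its commonly stated form: repeatedly choose the set with the smallest cost per newly covered element until everything is covered. The instance data below is made up for illustration.

```python
def greedy_set_cover(universe, sets, costs):
    """Greedy weighted set cover: repeatedly pick the set minimising cost per
    newly covered element; assumes the union of the sets covers the universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (name for name, members in sets.items() if members & uncovered),
            key=lambda name: costs[name] / len(sets[name] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Hypothetical instance, for illustration only.
universe = range(1, 8)
sets = {"A": {1, 2, 3, 4}, "B": {4, 5, 6}, "C": {6, 7}, "D": {1, 5, 7}}
costs = {"A": 4.0, "B": 2.0, "C": 1.5, "D": 2.0}
print(greedy_set_cover(universe, sets, costs))   # e.g. ['B', 'D', 'A']
```

Chvátal's analysis shows that this rule produces a cover whose cost is within a harmonic-number factor (depending on the largest set size) of the optimum.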
Research:
Chvátal is also known for proving the art gallery theorem, for researching a self-describing digital sequence, for his work with David Sankoff on the Chvátal–Sankoff constants controlling the behavior of the longest common subsequence problem on random inputs, and for his work with Endre Szemerédi on hard instances for resolution theorem proving.
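The Chvátal–Sankoff constants describe the expected length of the longest common subsequence (LCS) of two independent random strings of length n, which grows proportionally to n; the constants' exact values are unknown. A short Monte Carlo sketch (illustrative only) estimating the ratio for a binary alphabet with the standard dynamic program:

```python
import random

def lcs_length(a, b):
    """Classic O(len(a) * len(b)) dynamic program for the longest common subsequence."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, start=1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

random.seed(0)
n, trials = 400, 20
ratios = []
for _ in range(trials):
    a = [random.randint(0, 1) for _ in range(n)]
    b = [random.randint(0, 1) for _ in range(n)]
    ratios.append(lcs_length(a, b) / n)
print(sum(ratios) / trials)   # rough finite-n estimate of the binary Chvatal-Sankoff constant
```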
Books:
Vašek Chvátal (1983). Linear Programming. W.H. Freeman. ISBN 978-0-7167-1587-0.. Japanese translation published by Keigaku Shuppan, Tokyo, 1986.
C. Berge and V. Chvátal (eds.) (1984). Topics on Perfect Graphs. Elsevier. ISBN 978-0-444-86587-8.
David L. Applegate; Robert E. Bixby; Vašek Chvátal; William J. Cook (2007). The Traveling Salesman Problem: A Computational Study. Princeton University Press. ISBN 978-0-691-12993-8.
Vašek Chvátal, ed. (2011). Combinatorial Optimization: Methods and Applications. IOS Press. ISBN 978-1-60750-717-8.
Vašek Chvátal (2021). Discrete Mathematical Charms of Paul Erdős. A Simple Introduction. Cambridge University Press. ISBN 978-1-108-92740-6.
**Cloud tree**
Cloud tree:
A cloud tree is a tree shaped using topiary techniques. The leaves are pruned into a ball or cloud shape, leaving the stems thin and exposed. The shape of the tree as a whole resembles a set of clouds.
Cloud trees differ from bonsai trees because they are not miniature. Typically, cloud trees are planted in plain soil, rather than in pots.
Similarly to bonsai, the practice of shaping cloud trees comes from Japan, deriving from a Japanese style of gardening known as Niwaki.
**Calendar (Windows)**
Calendar (Windows):
Calendar is a personal calendar application made by Microsoft for Microsoft Windows. It offers synchronization of calendars using Microsoft Exchange Server, Outlook.com, Apple's iCloud calendar service, and Google Calendar. It supports the popular iCalendar 2.0 format.
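The iCalendar 2.0 format mentioned above is a plain-text format. The following Python sketch (illustrative only; the field values are placeholders, and a real exporter would also handle time zones, recurrence, line folding and text escaping) builds a minimal single-event .ics document:

```python
from datetime import datetime, timedelta, timezone

def make_ics(summary: str, start: datetime, duration: timedelta, uid: str) -> str:
    """Build a minimal iCalendar 2.0 document containing a single event."""
    fmt = "%Y%m%dT%H%M%SZ"              # UTC timestamps in the iCalendar basic format
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//calendar-sketch//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{(start + duration).strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"   # iCalendar lines are CRLF-terminated

print(make_ics("Team meeting", datetime(2024, 5, 1, 9, 0), timedelta(hours=1), "example-uid-1@example.org"))
```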
History:
Microsoft first included a Calendar application (shortened to app) in Windows 1.0, which was included through Windows 3.1, and was replaced by Schedule+ in Windows for Workgroups and Windows NT 3.1. Schedule+ was later moved from Windows to the Microsoft Office suite, and Windows did not include another Calendar application until Windows Calendar in Windows Vista. Calendar had been created by Beta 2 of Windows Vista.
History:
Windows Vista: This version supports sharing, subscribing, and publishing of calendars on WebDAV-enabled web servers and network shares. It has always supported .ics files, and the subscription feature enables syncing with Google Calendar. Its interface matches Windows Vista Mail's, but the two apps are not connected in this operating system. The default calendar can be renamed.
History:
On the calendar taskbar applet, there is a date grid on the left side and a skeuomorphic analogue clock with a digital clock underneath on the right side. Additional clocks displaying different time zones can be added to the view. On the date grid (left side), dates far in the past or future can be looked up by zooming out, by clicking on the date range indicator above the calendar grid, until each tile is a decade (e.g. 2010-2019, 2020-2029). This version was later carried over to Windows 7.
History:
Windows 8: A new version of Calendar with a text-heavy interface was added to Windows 8 as one of many apps written to run full-screen or snapped as part of Microsoft's Metro design language philosophy. It is one of three apps on Windows that originate from Microsoft Outlook, the other two being the Mail and People apps. Structurally, the three apps are one and are installed and uninstalled as such, but each has its own user interface. Calendar in Windows 8 originally supported Outlook.com, Exchange, Google Calendar, and Facebook calendars. Because of API changes, Facebook and Google calendars can no longer be directly synced on Windows 8. Like many Microsoft apps introduced for Windows 8, many of the features are hidden in the charms or a menu at the bottom of the screen that is triggered by right-clicking. Different calendars can be labeled with different colors. When a user with a Microsoft account adds a calendar account on one computer with Windows 8 Calendar, the account will be automatically added to all other Windows 8 computers the user is logged into. .ics files are not supported in this version.
History:
Windows 10: Calendar has preset server configurations for Outlook.com, Exchange, Google Calendar, and iCloud Calendar. Users can set it to use the system theme or choose a custom accent color, background image, and light/dark preference. Windows 10 Calendar has multi-window support for viewing and editing events. Different calendars can be labeled with different colors, and events can be rearranged by dragging and dropping. The default interface is Month View, but users can also use Day, Week, and Year views and print these views. The Windows 10 app also uses a flyout settings panel and a mini Ribbon interface in the viewing pane. The day of the year and calendar events show on the live tile. Like the Vista version, the important controls are readily visible and use icons to match the system's. Accounts can be grouped and relabeled, but folders cannot be edited from within the app. .ics support was added to this version in time for the Windows 10 Anniversary Update.
**Doral (cigarette)**
Doral (cigarette):
Doral is an American brand of cigarettes, currently owned and manufactured by the R.J. Reynolds Tobacco Company.
History:
Doral was introduced in 1969 and is available nationwide in the United States. Originally a premium brand, the cigarettes were re-branded in 1984 as a savings brand, officially making Doral the first branded cigarette in the value-savings market. In 1984, The New York Times tested various "low tar" and "low nicotine" brands and concluded that Doral King Size and Doral King Size menthol delivered 5 mg of tar, 0.4 mg of nicotine and 3 mg of carbon monoxide. In 1999, it was reported that, owing to R.J. Reynolds' various sports sponsorships, the brand was the third-largest tobacco brand after Marlboro and Newport. In October 2014, it was reported that R.J. Reynolds might add another brand that had to be sold off to Imperial Tobacco (along with the Kool, Salem, Winston and Maverick brands) to secure the approval of the Federal Trade Commission to purchase the Lorillard Tobacco Company. This did not happen, however, and R.J. Reynolds still sells the brand today. Doral currently receives limited support from R.J. Reynolds, as Pall Mall has taken over as the company's primary discount brand.
Advertisement:
R.J. Reynolds made various poster adverts to promote the Doral brand. Magazine advertisements were done in comic-strip format, such as one featuring a lion tamer worried about what his Doral pack sang. Doral's current slogan is "Premium Taste, Guaranteed". An early slogan was "Taste me!", delivered by female voices in broadcast commercials; this was lampooned by George Carlin in his 1972 stand-up bit "Sex in Commercials". In 1972 and 1973, Doral was a sponsor of the NASTAR skiing competition, which it promoted with a "Doral-NASTAR Award". R.J. Reynolds also printed cigarette cards for Doral during the years 2000-2001. As of 2019, Doral held a market share of less than 1%.
**MathWorks**
MathWorks:
MathWorks is an American privately held corporation that specializes in mathematical computing software. Its major products include MATLAB and Simulink, which support data analysis and simulation.
History:
The company's key product, MATLAB, was created in the 1970s by Cleve Moler, who was chairman of the computer science department at the University of New Mexico at the time. It was a free tool for academics. Jack Little, who would eventually set up the company, came across the tool while he was a graduate student in electrical engineering at Stanford University. Little and Steve Bangert rewrote the code for MATLAB in C while they were colleagues at an engineering firm. They founded MathWorks along with Moler in 1984, with Little running it out of his house in Portola Valley, California. Little would mail diskettes in baggies (food storage bags) to the first customers. The company sold its first order, 10 copies of MATLAB, for $500 to the Massachusetts Institute of Technology (MIT) in February 1985. A few years later, Little and the company moved to Massachusetts. There, Little hired Jeanne O'Keefe, an experienced computer executive, to help formalize the business. By 1997, MathWorks was profitable, claiming revenue of around $50 million, and had around 380 employees.
History:
In 1999, MathWorks relocated to the Apple Hill office complex in Natick, Massachusetts, purchasing additional buildings in the complex in 2008 and 2009, ultimately occupying the entire campus. MathWorks expanded further in 2013 by buying Boston Scientific's old headquarters campus, which is near MathWorks' headquarters in Natick. By 2018, the company had around 3,000 employees in Natick and said it had revenues of around $900 million.
Products:
The company's two lead products are MATLAB, which provides an environment for scientists, engineers and programmers to analyze and visualize data and develop algorithms, and Simulink, a graphical and simulation environment for model-based design of dynamic systems. MATLAB and Simulink are used in aerospace, automotive, software and other fields. The company's other products include Polyspace, SimEvents, Stateflow, and ThingSpeak.
Corporate affairs:
Intellectual property and competition In 1999, the U.S. Department of Justice filed a lawsuit against MathWorks and Wind River Systems alleging that an agreement between them violated antitrust laws. The agreement in question stipulated that the two companies would stop competing in the field of dynamic control system design software, with MathWorks alone selling Wind River's MATRIXx software and Wind River stopping all research, development and sales in that field. Both companies eventually settled with the Department of Justice and agreed to sell the MATRIXx software to a third party. MathWorks had total sales of $200 million in 2001, with dynamic control system design software accounting for half of those sales. MathWorks's Simulink software was found in 2003 to have infringed three patents from National Instruments related to data flow diagrams, a decision which was confirmed by a court of appeal in 2004. In 2011, MathWorks sued AccelerEyes for copyright infringement in one court, and for patent and trademark infringement in another. AccelerEyes accepted consent decrees in both cases before the trials began. In 2012, the European Commission opened an antitrust investigation into MathWorks after competitors alleged that it refused to grant licenses to its intellectual property that would allow others to create software with interoperability with its products. The case was closed in 2014.
Corporate affairs:
Logo The logo represents the first vibrational mode of a thin L-shaped membrane, clamped at the edges, and governed by the wave equation, which was the subject of Moler's thesis.
Corporate affairs:
Community The company annually sponsors a number of student engineering competitions, including EcoCAR, an advanced vehicle technology competition created by the United States Department of Energy (DOE) and General Motors (GM). MathWorks sponsored the mathematics exhibit at London's Science Museum. In the coding community, MathWorks hosts MATLAB Central, an online exchange where users ask and answer questions and share code. MATLAB Central currently houses around 145,000 questions in its MATLAB Answers database. The company actively supports numerous academic institutions to advance STEM education (primarily through the use of MathWorks products), including giving funding to MIT OpenCourseWare and MITx.
**Crossover (genetic algorithm)**
Crossover (genetic algorithm):
In genetic algorithms and evolutionary computation, crossover, also called recombination, is a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and is analogous to the crossover that happens during sexual reproduction in biology. Solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions may be mutated before being added to the population.
Crossover (genetic algorithm):
Different algorithms in evolutionary computation may use different data structures to store genetic information, and each genetic representation can be recombined with different crossover operators. Typical data structures that can be recombined with crossover are bit arrays, vectors of real numbers, or trees.
The list of operators presented below is by no means complete and serves mainly as an exemplary illustration of this dyadic genetic operator type. More operators and more details can be found in the literature.
Crossover for binary arrays:
Traditional genetic algorithms store genetic information in a chromosome represented by a bit array. Crossover methods for bit arrays are popular and an illustrative example of genetic recombination.
One-point crossover A point on both parents' chromosomes is picked randomly, and designated a 'crossover point'. Bits to the right of that point are swapped between the two parent chromosomes. This results in two offspring, each carrying some genetic information from both parents.
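As an illustration, a minimal Python sketch of one-point crossover on bit lists might look as follows (the function name and the list-based representation are illustrative choices, not taken from the source):

```python
import random

def one_point_crossover(parent1, parent2):
    """Swap the tails of two equal-length chromosomes at a random cut point."""
    assert len(parent1) == len(parent2) and len(parent1) >= 2
    point = random.randint(1, len(parent1) - 1)   # cut point strictly inside the chromosome
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# Example with two 8-bit parents
print(one_point_crossover([1, 1, 1, 1, 1, 1, 1, 1],
                          [0, 0, 0, 0, 0, 0, 0, 0]))
```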
Crossover for binary arrays:
Two-point and k-point crossover In two-point crossover, two crossover points are picked randomly from the parent chromosomes. The bits in between the two points are swapped between the parent organisms. Two-point crossover is equivalent to performing two single-point crossovers with different crossover points. This strategy can be generalized to k-point crossover for any positive integer k, picking k crossover points.
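A corresponding k-point sketch, under the same illustrative assumptions, could look like this (k = 1 reduces to one-point and k = 2 to two-point crossover):

```python
import random

def k_point_crossover(parent1, parent2, k):
    """Pick k distinct interior cut points and swap every other segment."""
    n = len(parent1)
    points = sorted(random.sample(range(1, n), k))   # k distinct cut points
    child1, child2 = list(parent1), list(parent2)
    swap = False
    start = 0
    for point in points + [n]:
        if swap:   # exchange alternating segments between the two children
            child1[start:point], child2[start:point] = \
                child2[start:point], child1[start:point]
        swap = not swap
        start = point
    return child1, child2

print(k_point_crossover([1] * 8, [0] * 8, k=3))
```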
Crossover for binary arrays:
Uniform crossover In uniform crossover, typically, each bit is chosen from either parent with equal probability. Other mixing ratios are sometimes used, resulting in offspring which inherit more genetic information from one parent than the other.
In uniform crossover, the chromosome is not divided into segments; instead, each gene is treated separately. In effect, a coin is flipped for each gene to decide from which parent the offspring inherits that gene.
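A short sketch of uniform crossover with a configurable mixing ratio p might look as follows (names are illustrative assumptions):

```python
import random

def uniform_crossover(parent1, parent2, p=0.5):
    """For each gene position, child1 inherits from parent1 with probability p
    and from parent2 otherwise; child2 receives the complementary gene."""
    child1, child2 = [], []
    for gene1, gene2 in zip(parent1, parent2):
        if random.random() < p:
            child1.append(gene1)
            child2.append(gene2)
        else:
            child1.append(gene2)
            child2.append(gene1)
    return child1, child2

print(uniform_crossover([1] * 8, [0] * 8))
```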
Crossover for integer or real-valued genomes:
For the crossover operators presented above and for most other crossover operators for bit strings, it holds that they can also be applied accordingly to integer or real-valued genomes whose genes each consist of an integer or real-valued number. Instead of individual bits, integer or real-valued numbers are then simply copied into the child genome. The offspring lie on the remaining corners of the hyperbody spanned by the two parents P1 = (1.5, 6, 8) and P2 = (7, 2, 1), as exemplified in the accompanying image for the three-dimensional case.
Crossover for integer or real-valued genomes:
Discrete recombination If the rules of the uniform crossover for bit strings are applied during the generation of the offspring, this is also called discrete recombination.
Crossover for integer or real-valued genomes:
Intermediate recombination In this recombination operator, the allele values αi of the child genome are generated by mixing the alleles αi,P1 and αi,P2 of the two parent genomes: αi = αi,P1 · βi + αi,P2 · (1 − βi), with βi ∈ [−d, 1 + d] drawn uniformly at random for each gene i. The choice of the interval [−d, 1 + d] means that, besides the interior of the hyperbody spanned by the allele values of the parent genes, a certain surrounding region is also available for the values of the offspring. A value of 0.25 is recommended for d to counteract the tendency to reduce the allele values that otherwise exists at d = 0. The adjacent figure shows, for the two-dimensional case, the range of possible new alleles of the two exemplary parents P1 = (2, 6) and P2 = (9, 2) in intermediate recombination. The offspring of discrete recombination, C1 and C2, are also plotted. Intermediate recombination satisfies the arithmetic calculation of the allele values of the child genome required by virtual alphabet theory. Discrete and intermediate recombination are used as a standard in evolution strategies.
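The blend formula above can be written as a short sketch; the default d = 0.25 follows the recommendation in the text, while the function name is an illustrative assumption:

```python
import random

def intermediate_recombination(parent1, parent2, d=0.25):
    """Blend each pair of parental alleles with its own mixing ratio beta,
    drawn uniformly from [-d, 1 + d]."""
    child = []
    for allele1, allele2 in zip(parent1, parent2):
        beta = random.uniform(-d, 1 + d)
        child.append(allele1 * beta + allele2 * (1 - beta))
    return child

# The two-dimensional example parents from the text
print(intermediate_recombination([2.0, 6.0], [9.0, 2.0]))
```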
Crossover for permutations:
For combinatorial tasks, crossover operators are usually used that are specifically designed for genomes that are themselves permutations of a set. The underlying set is usually a subset of ℕ or ℕ₀. If 1- or n-point or uniform crossover for integer genomes is applied to such genomes, a child genome may contain some values twice while others are missing. This can be remedied by genetic repair, e.g. by replacing the redundant genes, preserving their positions, with missing ones from the other child genome. In order to avoid the generation of invalid offspring, special crossover operators for permutations have been developed which fulfill the basic requirements of such operators for permutations, namely that all elements of the initial permutation are also present in the new one and only the order is changed. A distinction can be made between combinatorial tasks where all sequences are admissible and those where there are constraints in the form of inadmissible partial sequences. A well-known representative of the first task type is the traveling salesman problem (TSP), where the goal is to visit a set of cities exactly once on the shortest tour. An example of the constrained task type is the scheduling of multiple workflows. Workflows involve sequence constraints on some of the individual work steps: for example, a thread cannot be cut until the corresponding hole has been drilled in a workpiece. Such problems are also called order-based permutations.
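As a simplified illustration of the genetic-repair idea, the sketch below substitutes missing values for duplicates in order; note that the text describes a position-preserving variant that takes the values from the other child genome, which this minimal example does not attempt to reproduce:

```python
def repair_permutation(child, valid_values):
    """Replace duplicate genes in `child` with the values that are missing,
    taken in the order in which they occur in `valid_values`."""
    missing = [g for g in valid_values if g not in child]
    seen, repaired = set(), []
    for gene in child:
        if gene in seen:
            repaired.append(missing.pop(0))   # substitute the next missing value
        else:
            repaired.append(gene)
            seen.add(gene)
    return repaired

# A child produced by naive one-point crossover of two permutations of 0..5
print(repair_permutation([0, 1, 2, 2, 4, 0], [0, 1, 2, 3, 4, 5]))  # -> [0, 1, 2, 3, 4, 5]
```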
Crossover for permutations:
In the following, two crossover operators are presented as examples, the partially mapped crossover (PMX) motivated by the TSP and the order crossover (OX1) designed for order-based permutations. A second offspring can be produced in each case by exchanging the parent chromosomes.
Crossover for permutations:
Partially mapped crossover (PMX) The PMX operator was designed as a recombination operator for TSP-like problems. The explanation of the procedure is illustrated by an example. Order crossover (OX1) The order crossover goes back to Davis in its original form and is presented here in a slightly generalized version with more than two crossover points. It transfers information about the relative order from the second parent to the offspring. First, the number and position of the crossover points are determined randomly; the resulting gene sequences are then processed as described below. Among other things, order crossover is well suited for scheduling multiple workflows when used in conjunction with 1- and n-point crossover.
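For concreteness, the following sketch implements the classic two-cut-point form of order crossover (OX1); the generalized multi-point version mentioned above follows the same pattern. Function names and the example permutations are illustrative assumptions:

```python
import random

def order_crossover(parent1, parent2):
    """Two-cut-point order crossover (OX1): keep a slice of parent1 in place and
    fill the remaining positions with parent2's genes in their relative order,
    starting after the second cut point and wrapping around."""
    n = len(parent1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]                 # segment copied from parent 1
    kept = set(parent1[i:j + 1])
    fill = [g for g in parent2[j + 1:] + parent2[:j + 1] if g not in kept]
    positions = list(range(j + 1, n)) + list(range(0, i))
    for pos, gene in zip(positions, fill):
        child[pos] = gene
    return child

# Hypothetical tours over 8 cities, encoded as permutations of 0..7
print(order_crossover([0, 1, 2, 3, 4, 5, 6, 7],
                      [7, 3, 0, 6, 1, 5, 2, 4]))
```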
Crossover for permutations:
Further crossover operators for permutations Over time, a large number of crossover operators for permutations have been proposed, so the following list is only a small selection. For more information, the reader is referred to the literature.
Crossover for permutations:
cycle crossover (CX), order-based crossover (OX2), position-based crossover (POS), edge recombination, voting recombination (VR), alternating-positions crossover (AP), maximal preservative crossover (MPX), merge crossover (MX), and the sequential constructive crossover operator (SCX). The usual approach to solving TSP-like problems by genetic or, more generally, evolutionary algorithms, presented earlier, is either to repair illegal descendants or to adjust the operators appropriately so that illegal offspring do not arise in the first place. Alternatively, Riazi suggests the use of a double chromosome representation, which avoids illegal offspring.
**Isochronous signal**
Isochronous signal:
In telecommunication, an isochronous signal is a signal in which the time interval separating any two significant instants is equal to the unit interval or a multiple of the unit interval. Variations in the time intervals are constrained within specified limits.
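As a rough illustration of this definition, the sketch below checks whether a list of significant instants is isochronous with respect to a given unit interval and tolerance; the function name and the tolerance handling are assumptions made for this example, not part of any standard:

```python
def is_isochronous(instants, unit_interval, tolerance):
    """Return True if every interval between consecutive significant instants is,
    within the tolerance, a positive integer multiple of the unit interval."""
    for earlier, later in zip(instants, instants[1:]):
        gap = later - earlier
        multiple = round(gap / unit_interval)
        if multiple < 1 or abs(gap - multiple * unit_interval) > tolerance:
            return False
    return True

# Instants separated by one or two unit intervals of 125 microseconds
print(is_isochronous([0.0, 125e-6, 375e-6, 500e-6], 125e-6, 1e-6))  # True
```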
"Isochronous" is a characteristic of one signal, while "synchronous" indicates a relationship between two or more signals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Transmission electron microscopy**
Transmission electron microscopy:
Transmission electron microscopy (TEM) is a microscopy technique in which a beam of electrons is transmitted through a specimen to form an image. The specimen is most often an ultrathin section less than 100 nm thick or a suspension on a grid. An image is formed from the interaction of the electrons with the sample as the beam is transmitted through the specimen. The image is then magnified and focused onto an imaging device, such as a fluorescent screen, a layer of photographic film, or a sensor such as a scintillator attached to a charge-coupled device.
Transmission electron microscopy:
Transmission electron microscopes are capable of imaging at a significantly higher resolution than light microscopes, owing to the smaller de Broglie wavelength of electrons. This enables the instrument to capture fine detail—even as small as a single column of atoms, which is thousands of times smaller than a resolvable object seen in a light microscope. Transmission electron microscopy is a major analytical method in the physical, chemical and biological sciences. TEMs find application in cancer research, virology, and materials science as well as pollution, nanotechnology and semiconductor research, but also in other fields such as paleontology and palynology.
Transmission electron microscopy:
TEM instruments have multiple operating modes including conventional imaging, scanning TEM imaging (STEM), diffraction, spectroscopy, and combinations of these. Even within conventional imaging, there are many fundamentally different ways that contrast is produced, called "image contrast mechanisms". Contrast can arise from position-to-position differences in the thickness or density ("mass-thickness contrast"), atomic number ("Z contrast", referring to the common abbreviation Z for atomic number), crystal structure or orientation ("crystallographic contrast" or "diffraction contrast"), the slight quantum-mechanical phase shifts that individual atoms produce in electrons that pass through them ("phase contrast"), the energy lost by electrons on passing through the sample ("spectrum imaging") and more. Each mechanism tells the user a different kind of information, depending not only on the contrast mechanism but on how the microscope is used—the settings of lenses, apertures, and detectors. What this means is that a TEM is capable of returning an extraordinary variety of nanometer- and atomic-resolution information, in ideal cases revealing not only where all the atoms are but what kinds of atoms they are and how they are bonded to each other. For this reason TEM is regarded as an essential tool for nanoscience in both biological and materials fields.
Transmission electron microscopy:
The first TEM was demonstrated by Max Knoll and Ernst Ruska in 1931, with this group developing the first TEM with resolution greater than that of light in 1933 and the first commercial TEM in 1939. In 1986, Ruska was awarded the Nobel Prize in physics for the development of transmission electron microscopy.
History:
Initial development In 1873, Ernst Abbe proposed that the ability to resolve detail in an object was limited approximately by the wavelength of the light used in imaging, or a few hundred nanometers for visible light microscopes. Developments in ultraviolet (UV) microscopes, led by Köhler and Rohr, increased resolving power by a factor of two. However, this required expensive quartz optics, due to the absorption of UV by glass. It was believed that obtaining an image with sub-micrometer information was not possible due to this wavelength constraint. In 1858, Plücker observed the deflection of "cathode rays" (electrons) by magnetic fields. This effect was used by Ferdinand Braun in 1897 to build simple cathode-ray oscilloscope (CRO) measuring devices. In 1891, Riecke noticed that the cathode rays could be focused by magnetic fields, allowing for simple electromagnetic lens designs. In 1926, Hans Busch published work extending this theory and showed that the lens maker's equation could, with appropriate assumptions, be applied to electrons. In 1928, at the Technical University of Berlin, Adolf Matthias, Professor of High Voltage Technology and Electrical Installations, appointed Max Knoll to lead a team of researchers to advance the CRO design. The team consisted of several PhD students including Ernst Ruska and Bodo von Borries. The research team worked on lens design and CRO column placement to optimize parameters to construct better CROs, and to make electron optical components to generate low magnification (nearly 1:1) images. In 1931, the group successfully generated magnified images of mesh grids placed over the anode aperture. The device used two magnetic lenses to achieve higher magnifications, arguably creating the first electron microscope. In that same year, Reinhold Rudenberg, the scientific director of the Siemens company, patented an electrostatic lens electron microscope.
History:
Improving resolution At the time, electrons were understood to be charged particles of matter; the wave nature of electrons was not fully realized until the PhD thesis of Louis de Broglie in 1924. Knoll's research group was unaware of this publication until 1932, when they realized that the de Broglie wavelength of electrons was many orders of magnitude smaller than that for light, theoretically allowing for imaging at atomic scales. (Even for electrons with a kinetic energy of just 1 electronvolt the wavelength is already as short as 1.23 nm.) In April 1932, Ruska suggested the construction of a new electron microscope for direct imaging of specimens inserted into the microscope, rather than simple mesh grids or images of apertures. With this device, successful diffraction and normal imaging of an aluminium sheet was achieved. However, the magnification achievable was lower than with light microscopy. Magnifications higher than those available with a light microscope were achieved in September 1933 with images of cotton fibers quickly acquired before being damaged by the electron beam. At this time, interest in the electron microscope had increased, with other groups, such as that of Paul Anderson and Kenneth Fitzsimmons of Washington State University and that of Albert Prebus and James Hillier at the University of Toronto, who constructed the first TEMs in North America in 1935 and 1938, respectively, continually advancing TEM design.
History:
Research continued on the electron microscope at Siemens in 1936, where the aim of the research was the development and improvement of TEM imaging properties, particularly with regard to biological specimens. At this time, electron microscopes were being fabricated for specific groups, such as the "EM1" device used at the UK National Physical Laboratory. In 1939, the first commercial electron microscope, pictured, was installed in the Physics department of IG Farben-Werke. Further work on the electron microscope was hampered by the destruction of a new laboratory constructed at Siemens by an air raid, as well as the death of two of the researchers, Heinz Müller and Friedrich Krause, during World War II.
History:
Further research After World War II, Ruska resumed work at Siemens, where he continued to develop the electron microscope, producing the first microscope with 100k magnification. The fundamental structure of this microscope design, with multi-stage beam preparation optics, is still used in modern microscopes. The worldwide electron microscopy community advanced with electron microscopes being manufactured in Manchester UK, the USA (RCA), Germany (Siemens) and Japan (JEOL). The first international conference in electron microscopy was in Delft in 1949, with more than one hundred attendees. Later conferences included the "First" international conference in Paris, 1950 and then in London in 1954.
History:
With the development of TEM, the associated technique of scanning transmission electron microscopy (STEM) was re-investigated and remained undeveloped until the 1970s, with Albert Crewe at the University of Chicago developing the field emission gun and adding a high quality objective lens to create the modern STEM. Using this design, Crewe demonstrated the ability to image atoms using annular dark-field imaging. Crewe and coworkers at the University of Chicago developed the cold field electron emission source and built a STEM able to visualize single heavy atoms on thin carbon substrates.
Background:
Electrons Theoretically, the maximum resolution, d, that one can obtain with a light microscope is limited by the wavelength of the photons (λ) and the numerical aperture NA of the system.
Background:
d = λ / (2 n sin α) = λ / (2 NA), where n is the index of refraction of the medium in which the lens is working and α is the maximum half-angle of the cone of light that can enter the lens (see numerical aperture). Early twentieth century scientists theorized ways of getting around the limitations of the relatively large wavelength of visible light (wavelengths of 400–700 nanometers) by using electrons. Like all matter, electrons have both wave and particle properties (matter wave), and their wave-like properties mean that a beam of electrons can be focused and diffracted much like light can. The wavelength of electrons is related to their kinetic energy via the de Broglie equation, which says that the wavelength is inversely proportional to the momentum. Taking into account relativistic effects (as in a TEM an electron's velocity is a substantial fraction of the speed of light, c) the wavelength is λe = h / √(2m0E(1 + E / (2m0c²))), where h is Planck's constant, m0 is the rest mass of an electron and E is the kinetic energy of the accelerated electron.
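Using the relativistic formula above, a short sketch can compute the electron wavelength at typical TEM accelerating voltages; the constants are standard SI values and the function name is an illustrative choice:

```python
import math

h = 6.62607015e-34      # Planck constant, J*s
m0 = 9.1093837015e-31   # electron rest mass, kg
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # elementary charge, C

def electron_wavelength(acceleration_voltage):
    """Relativistic de Broglie wavelength (in metres) of an electron
    accelerated through the given voltage, per the formula above."""
    E = e * acceleration_voltage   # kinetic energy in joules
    return h / math.sqrt(2 * m0 * E * (1 + E / (2 * m0 * c ** 2)))

for kV in (100, 200, 300):
    print(f"{kV} kV: {electron_wavelength(kV * 1e3) * 1e12:.2f} pm")
```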
Background:
Electron source From the top down, the TEM consists of an emission source or cathode, which may be a tungsten filament, a lanthanum hexaboride (LaB6) single crystal or a field emission gun. The gun is connected to a high voltage source (typically ~100–300 kV) and emits electrons either by thermionic or field electron emission into the vacuum. In the case of a thermionic source, the electron source is mounted in a Wehnelt cylinder to provide preliminary focus of the emitted electrons into a beam while also stabilizing the current using a passive feedback circuit. A field emission source instead uses electrostatic electrodes called an extractor, a suppressor, and a gun lens, with different voltages on each, to control the electric field shape and intensity near the sharp tip. The combination of the cathode and these first electrostatic lens elements is collectively called the "electron gun". After it leaves the gun, the beam is typically accelerated until it reaches its final voltage and enters the next part of the microscope: the condenser lens system. These upper lenses of the TEM then further focus the electron beam to the desired size and location on the sample. Manipulation of the electron beam is performed using two physical effects. The interaction of electrons with a magnetic field will cause electrons to move according to the left hand rule, thus allowing electromagnets to manipulate the electron beam. Additionally, electrostatic fields can cause the electrons to be deflected through a constant angle. Coupling of two deflections in opposing directions with a small intermediate gap allows for the formation of a shift in the beam path, allowing for beam shifting.
Background:
Optics The lenses of a TEM are what gives it its flexibility of operating modes and ability to focus beams down to the atomic scale and magnify them to get an image. A lens is usually made of a solenoid coil nearly surrounded by ferromagnetic materials designed to concentrate the coil's magnetic field into a precise, confined shape. When an electron enters and leaves this magnetic field, it spirals around the curved magnetic field lines in a way that acts very much as an ordinary glass lens does for light—it is a converging lens. But, unlike a glass lens, a magnetic lens can very easily change its focusing power by adjusting the current passing through the coils.
Background:
Equally important to the lenses are the apertures. These are circular holes in thin strips of heavy metal. Some are fixed in size and position and play important roles in limiting x-ray generation and improving the vacuum performance. Others can be freely switched among several different sizes and have their positions adjusted. Variable apertures after the sample allow the user to select the range of spatial positions or electron scattering angles to be used in the formation of an image or a diffraction pattern.
Background:
The electron-optical system also includes deflectors and stigmators, usually made of small electromagnets. The deflectors allow the position and angle of the beam at the sample position to be independently controlled and also ensure that the beams remain near the low-aberration centers of every lens in the lens stacks. The stigmators compensate for slight imperfections and aberrations that cause astigmatism—a lens having a different focal strength in different directions.
Background:
Typically a TEM consists of three stages of lensing. The stages are the condenser lenses, the objective lenses, and the projector lenses. The condenser lenses are responsible for primary beam formation, while the objective lenses focus the beam that comes through the sample itself (in STEM scanning mode, there are also objective lenses above the sample to make the incident electron beam convergent). The projector lenses are used to expand the beam onto the phosphor screen or other imaging device, such as film. The magnification of the TEM is due to the ratio of the distances between the specimen and the objective lens' image plane. TEM optical configurations differ significantly with implementation, with manufacturers using custom lens configurations, such as in spherical aberration corrected instruments, or TEMs using energy filtering to correct electron chromatic aberration.
Background:
Reciprocity The optical reciprocity theorem, or principle of Helmholtz reciprocity, generally holds true for elastically scattered electrons, as is often the case under standard TEM operating conditions. The theorem states that the wave amplitude at some point B as a result of electron point source A would be the same as the amplitude at A due to an equivalent point source placed at B. Simply stated, the wave function for electrons focused through any series of optical components that includes only scalar (i.e. not magnetic) fields will be exactly equivalent if the electron source and observation point are reversed. Reciprocity is used to understand scanning transmission electron microscopy (STEM) in the familiar context of TEM, and to obtain and interpret images using STEM.
Background:
Display and detectors The key factors when considering electron detection include detective quantum efficiency (DQE), point spread function (PSF), modulation transfer function (MTF), pixel size and array size, noise, data readout speed, and radiation hardness. Imaging systems in a TEM consist of a phosphor screen, which may be made of fine (10–100 μm) particulate zinc sulfide, for direct observation by the operator, and an image recording system such as photographic film, doped YAG screen coupled CCDs, or other digital detector. Typically these devices can be removed or inserted into the beam path as required. (Photographic film is no longer used.) The first report of using a Charge-Coupled Device (CCD) detector for TEM was in 1982, but the technology didn't find widespread use until the late 1990s/early 2000s. Monolithic active-pixel sensors (MAPSs) were also used in TEM. CMOS detectors, which are faster and more resistant to radiation damage than CCDs, have been used for TEM since 2005. In the early 2010s, further development of CMOS technology allowed for the detection of single electron counts ("counting mode"). These Direct Electron Detectors are available from Gatan, FEI, Quantum Detectors and Direct Electron.
Components:
A TEM is composed of several components, which include a vacuum system in which the electrons travel, an electron emission source for generation of the electron stream, a series of electromagnetic lenses, as well as electrostatic plates. The latter two allow the operator to guide and manipulate the beam as required. Also required is a device to allow the insertion into, motion within, and removal of specimens from the beam path. Imaging devices are subsequently used to create an image from the electrons that exit the system.
Components:
Vacuum system To increase the mean free path of the electron gas interaction, a standard TEM is evacuated to low pressures, typically on the order of 10−4 Pa. The need for this is twofold: first the allowance for the voltage difference between the cathode and the ground without generating an arc, and secondly to reduce the collision frequency of electrons with gas atoms to negligible levels—this effect is characterized by the mean free path. TEM components such as specimen holders and film cartridges must be routinely inserted or replaced requiring a system with the ability to re-evacuate on a regular basis. As such, TEMs are equipped with multiple pumping systems and airlocks and are not permanently vacuum sealed.
Components:
The vacuum system for evacuating a TEM to an operating pressure level consists of several stages. Initially, a low or roughing vacuum is achieved with either a rotary vane pump or diaphragm pumps, which set a sufficiently low pressure to allow the operation of a turbo-molecular or diffusion pump that establishes the high vacuum level necessary for operation. So that the low-vacuum pump does not need to run continuously while the turbo-molecular pumps are operating, the vacuum side of the low-pressure pump may be connected to chambers which accommodate the exhaust gases from the turbo-molecular pump. Sections of the TEM may be isolated by the use of pressure-limiting apertures to allow for different vacuum levels in specific areas, such as a higher vacuum of 10−4 to 10−7 Pa or higher in the electron gun in high-resolution or field-emission TEMs.
Components:
High-voltage TEMs require ultra-high vacuums on the range of 10−7 to 10−9 Pa to prevent the generation of an electrical arc, particularly at the TEM cathode. As such for higher voltage TEMs a third vacuum system may operate, with the gun isolated from the main chamber either by gate valves or a differential pumping aperture – a small hole that prevents the diffusion of gas molecules into the higher vacuum gun area faster than they can be pumped out. For these very low pressures, either an ion pump or a getter material is used.
Components:
Poor vacuum in a TEM can cause several problems ranging from the deposition of gas inside the TEM onto the specimen while viewed in a process known as electron beam induced deposition to more severe cathode damages caused by electrical discharge. The use of a cold trap to adsorb sublimated gases in the vicinity of the specimen largely eliminates vacuum problems that are caused by specimen sublimation.
Components:
Specimen stage TEM specimen stage designs include airlocks to allow for insertion of the specimen holder into the vacuum with minimal loss of vacuum in other areas of the microscope. The specimen holders hold a standard size of sample grid or self-supporting specimen. Standard TEM grid sizes are 3.05 mm diameter, with a thickness and mesh size ranging from a few to 100 μm. The sample is placed onto the meshed area having a diameter of approximately 2.5 mm. Usual grid materials are copper, molybdenum, gold or platinum. This grid is placed into the sample holder, which is paired with the specimen stage. A wide variety of designs of stages and holders exist, depending upon the type of experiment being performed. In addition to 3.05 mm grids, 2.3 mm grids are sometimes, if rarely, used. These grids were particularly used in the mineral sciences where a large degree of tilt can be required and where specimen material may be extremely rare. Electron transparent specimens have a thickness usually less than 100 nm, but this value depends on the accelerating voltage.
Components:
Once inserted into a TEM, the sample has to be manipulated to locate the region of interest to the beam, such as in single grain diffraction, in a specific orientation. To accommodate this, the TEM stage allows movement of the sample in the XY plane, Z height adjustment, and commonly a single tilt direction parallel to the axis of side entry holders. Sample rotation may be available on specialized diffraction holders and stages. Some modern TEMs provide the ability for two orthogonal tilt angles of movement with specialized holder designs called double-tilt sample holders. Some stage designs, such as top-entry or vertical insertion stages once common for high resolution TEM studies, may simply only have X-Y translation available. The design criteria of TEM stages are complex, owing to the simultaneous requirements of mechanical and electron-optical constraints and specialized models are available for different methods.
Components:
A TEM stage is required to have the ability to hold a specimen and be manipulated to bring the region of interest into the path of the electron beam. As the TEM can operate over a wide range of magnifications, the stage must simultaneously be highly resistant to mechanical drift, with drift requirements as low as a few nm/minute while being able to move several μm/minute, with repositioning accuracy on the order of nanometers. Earlier designs of TEM accomplished this with a complex set of mechanical downgearing devices, allowing the operator to finely control the motion of the stage by several rotating rods. Modern devices may use electrical stage designs, using screw gearing in concert with stepper motors, providing the operator with a computer-based stage input, such as a joystick or trackball.
Components:
Two main designs for stages in a TEM exist, the side-entry and top entry version. Each design must accommodate the matching holder to allow for specimen insertion without either damaging delicate TEM optics or allowing gas into TEM systems under vacuum.
Components:
The most common is the side entry holder, where the specimen is placed near the tip of a long metal (brass or stainless steel) rod, with the specimen placed flat in a small bore. Along the rod are several polymer vacuum rings to allow for the formation of a vacuum seal of sufficient quality, when inserted into the stage. The stage is thus designed to accommodate the rod, placing the sample either in between or near the objective lens, dependent upon the objective design. When inserted into the stage, the side entry holder has its tip contained within the TEM vacuum, and the base is presented to atmosphere, the airlock formed by the vacuum rings.
Components:
Insertion procedures for side-entry TEM holders typically involve the rotation of the sample to trigger micro switches that initiate evacuation of the airlock before the sample is inserted into the TEM column.
Components:
The second design, the top-entry holder, consists of a cartridge that is several centimeters long with a bore drilled down the cartridge axis. The specimen is loaded into the bore, possibly using a small screw ring to hold the sample in place. This cartridge is inserted into an airlock with the bore perpendicular to the TEM optic axis. When sealed, the airlock is manipulated to push the cartridge such that the cartridge falls into place, where the bore hole becomes aligned with the beam axis, such that the beam travels down the cartridge bore and into the specimen. Such designs are typically unable to be tilted without blocking the beam path or interfering with the objective lens.
Components:
Electron gun The electron gun is formed from several components: the filament, a biasing circuit, a Wehnelt cap, and an extraction anode. By connecting the filament to the negative component power supply, electrons can be "pumped" from the electron gun to the anode plate and the TEM column, thus completing the circuit. The gun is designed to create a beam of electrons exiting from the assembly at some given angle, known as the gun divergence semi-angle, α. By constructing the Wehnelt cylinder such that it has a higher negative charge than the filament itself, electrons that exit the filament in a diverging manner are, under proper operation, forced into a converging pattern the minimum size of which is the gun crossover diameter.
Components:
The thermionic emission current density, J, can be related to the work function of the emitting material via Richardson's law: J = AT² exp(−Φ / kT), where A is Richardson's constant, Φ is the work function, k is the Boltzmann constant and T is the temperature of the material. This equation shows that in order to achieve sufficient current density it is necessary to heat the emitter, taking care not to cause damage by application of excessive heat. For this reason materials with either a high melting point, such as tungsten, or those with a low work function (LaB6) are required for the gun filament. Furthermore, both lanthanum hexaboride and tungsten thermionic sources must be heated in order to achieve thermionic emission; this can be achieved by the use of a small resistive strip. To prevent thermal shock, there is often a delay enforced in the application of current to the tip, to prevent thermal gradients from damaging the filament; the delay is usually a few seconds for LaB6, and significantly lower for tungsten.
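As a rough numerical illustration of Richardson's law, the sketch below evaluates the emission current density for assumed, ballpark emitter parameters (the quoted work functions and operating temperatures are approximations chosen for this example, not values from the source):

```python
import math

A = 1.20173e6       # theoretical Richardson constant, A m^-2 K^-2
k = 8.617333e-5     # Boltzmann constant, eV/K

def thermionic_current_density(work_function_eV, temperature_K):
    """Richardson's law J = A * T**2 * exp(-phi / (k * T)), in A/m^2."""
    return A * temperature_K ** 2 * math.exp(-work_function_eV / (k * temperature_K))

# Ballpark emitter parameters (assumed for illustration)
print(thermionic_current_density(4.5, 2700))   # tungsten filament near 2700 K
print(thermionic_current_density(2.7, 1800))   # LaB6 crystal near 1800 K
```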
Components:
Electron lens Electron lenses are designed to act in a manner emulating that of an optical lens, by focusing parallel electrons at some constant focal distance. Electron lenses may operate electrostatically or magnetically. The majority of electron lenses for TEM use electromagnetic coils to generate a convex lens. The field produced for the lens must be radially symmetrical, as deviation from the radial symmetry of the magnetic lens causes aberrations such as astigmatism, and worsens spherical and chromatic aberration. Electron lenses are manufactured from iron, iron-cobalt or nickel cobalt alloys, such as permalloy. These are selected for their magnetic properties, such as magnetic saturation, hysteresis and permeability.
Components:
The components include the yoke, the magnetic coil, the poles, the polepiece, and the external control circuitry. The pole piece must be manufactured in a very symmetrical manner, as this provides the boundary conditions for the magnetic field that forms the lens. Imperfections in the manufacture of the pole piece can induce severe distortions in the magnetic field symmetry, which induce distortions that will ultimately limit the lenses' ability to reproduce the object plane. The exact dimensions of the gap, pole piece internal diameter and taper, as well as the overall design of the lens, are often determined by finite element analysis of the magnetic field, whilst considering the thermal and electrical constraints of the design. The coils which produce the magnetic field are located within the lens yoke. The coils can contain a variable current, but typically use high voltages, and therefore require significant insulation in order to prevent short-circuiting the lens components. Thermal distributors are placed to ensure the extraction of the heat generated by the energy lost to resistance of the coil windings. The windings may be water-cooled, using a chilled water supply in order to facilitate the removal of the high thermal duty.
Components:
Apertures Apertures are annular metallic plates, through which electrons that are further than a fixed distance from the optic axis may be excluded. These consist of a small metallic disc that is sufficiently thick to prevent electrons from passing through the disc, whilst permitting axial electrons. This permission of central electrons in a TEM causes two effects simultaneously: firstly, apertures decrease the beam intensity as electrons are filtered from the beam, which may be desired in the case of beam sensitive samples. Secondly, this filtering removes electrons that are scattered to high angles, which may be due to unwanted processes such as spherical or chromatic aberration, or due to diffraction from interaction within the sample. Apertures are either a fixed aperture within the column, such as at the condenser lens, or are a movable aperture, which can be inserted or withdrawn from the beam path, or moved in the plane perpendicular to the beam path. Aperture assemblies are mechanical devices which allow for the selection of different aperture sizes, which may be used by the operator to trade off intensity and the filtering effect of the aperture. Aperture assemblies are often equipped with micrometers to move the aperture, required during optical calibration.
Imaging methods:
Imaging methods in TEM use the information contained in the electron waves exiting from the sample to form an image. The projector lenses allow for the correct positioning of this electron wave distribution onto the viewing system. The observed intensity, I, of the image, assuming sufficiently high quality of imaging device, can be approximated as proportional to the time-averaged squared absolute value of the amplitude of the electron wavefunctions, where the wave that forms the exit beam is denoted by Ψ.
Imaging methods:
I(x) = (k / (t₁ − t₀)) ∫ ΨΨ* dt, integrated from t₀ to t₁. Different imaging methods therefore attempt to modify the electron waves exiting the sample in a way that provides information about the sample, or the beam itself. From the previous equation, it can be deduced that the observed image depends not only on the amplitude of the beam, but also on the phase of the electrons, although phase effects may often be ignored at lower magnifications. Higher resolution imaging requires thinner samples and higher energies of incident electrons, which means that the sample can no longer be considered to be absorbing electrons (i.e., via a Beer's law effect). Instead, the sample can be modeled as an object that does not change the amplitude of the incoming electron wave function, but instead modifies the phase of the incoming wave; in this model, the sample is known as a pure phase object. For sufficiently thin specimens, phase effects dominate the image, complicating analysis of the observed intensities. To improve the contrast in the image, the TEM may be operated at a slight defocus to enhance contrast, owing to convolution by the contrast transfer function of the TEM, which would normally decrease contrast if the sample was not a weak phase object.
Imaging methods:
The figure on the right shows the two basic operation modes of TEM – imaging and diffraction modes. In both cases the specimen is illuminated with the parallel beam, formed by electron beam shaping with the system of Condenser lenses and Condenser aperture. After interaction with the sample, on the exit surface of the specimen two types of electrons exist – unscattered (which will correspond to the bright central beam on the diffraction pattern) and scattered electrons (which change their trajectories due to interaction with the material).
Imaging methods:
In imaging mode, the objective aperture is inserted in the back focal plane (BFP) of the objective lens (where diffraction spots are formed). If the objective aperture is used to select only the central beam, the transmitted electrons are passed through the aperture while all others are blocked, and a bright-field image (BF image) is obtained. If the signal from a diffracted beam is selected instead, a dark-field image (DF image) is obtained. The selected signal is magnified and projected on a screen (or on a camera) with the help of intermediate and projector lenses. An image of the sample is thus obtained.
Imaging methods:
In diffraction mode, a selected-area aperture may be used to determine more precisely the specimen area from which the signal will be displayed. By changing the strength of the current to the intermediate lens, the diffraction pattern is projected on a screen. Diffraction is a very powerful tool for unit-cell reconstruction and crystal orientation determination.
Imaging methods:
Contrast formation The contrast between two adjacent areas in a TEM image can be defined as the difference in the electron densities in the image plane. Due to the scattering of the incident beam by the sample, the amplitude and phase of the electron wave change, which results in amplitude contrast and phase contrast, respectively. Most images have both contrast components.
Imaging methods:
Amplitude contrast is obtained due to the removal of some electrons before the image plane. During their interaction with the specimen, some electrons are lost due to absorption, to scattering at very high angles beyond the physical limitation of the microscope, or to blocking by the objective aperture. While the first two losses are due to the specimen and microscope construction, the objective aperture can be used by the operator to enhance the contrast.
Imaging methods:
The figure on the right shows a TEM image (a) and the corresponding diffraction pattern (b) of a polycrystalline Pt film taken without an objective aperture. In order to enhance the contrast in the TEM image, the number of scattered beams visible in the diffraction pattern should be reduced. This can be done by selecting a certain area in the back focal plane, such as only the central beam or a specific diffracted beam (angle), or combinations of such beams. By intentionally selecting an objective aperture which only permits the non-diffracted beam to pass beyond the back focal plane (and onto the image plane), one creates a bright-field (BF) image (c), whereas if the central, non-diffracted beam is blocked, one may obtain dark-field (DF) images such as those shown in (d-e). The DF images (d-e) were obtained by selecting the diffracted beams indicated with circles in the diffraction pattern (b), using an aperture at the back focal plane. Grains from which electrons are scattered into these diffraction spots appear brighter. More details about diffraction contrast formation are given further below.
Imaging methods:
There are two types of amplitude contrast – mass–thickness and diffraction contrast. First, let's consider mass–thickness contrast. When the beam illuminates two neighbouring areas with low mass (or thickness) and high mass (or thickness), the heavier region scatters electrons at bigger angles. These strongly scattered electrons are blocked in BF TEM mode by objective aperture. As a result, heavier regions appear darker in BF images (have low intensity). Mass–thickness contrast is most important for non–crystalline, amorphous materials.
Imaging methods:
Diffraction contrast occurs due to a specific crystallographic orientation of a grain. In such a case the crystal is oriented in a way that there is a high probability of diffraction. Diffraction contrast provides information on the orientation of the crystals in a polycrystalline sample, as well as other information such as defects. Note that in case diffraction contrast exists, the contrast cannot be interpreted as due to mass or thickness variations.
Imaging methods:
Diffraction contrast Samples can exhibit diffraction contrast, whereby the electron beam undergoes diffraction which in the case of a crystalline sample, disperses electrons into discrete locations in the back focal plane. By the placement of apertures in the back focal plane, i.e. the objective aperture, the desired reciprocal lattice vectors can be selected (or excluded), thus only parts of the sample that are causing the electrons to scatter to the selected reflections will end up projected onto the imaging apparatus.
Imaging methods:
If the reflections that are selected do not include the unscattered beam (which will appear at the focal point of the lens), then the image will appear dark wherever no sample scattering into the selected reflection is present; for example, a region without a specimen will appear dark. This is known as a dark-field image.
Modern TEMs are often equipped with specimen holders that allow the user to tilt the specimen to a range of angles in order to obtain specific diffraction conditions, and apertures placed above the specimen allow the user to select electrons that would otherwise be diffracted in a particular direction from entering the specimen.
Imaging methods:
Applications for this method include the identification of lattice defects in crystals. By carefully selecting the orientation of the sample, it is possible not just to determine the position of defects but also to determine the type of defect present. If the sample is oriented so that one particular plane is only slightly tilted away from the strongest diffracting angle (known as the Bragg Angle), any distortion of the crystal plane that locally tilts the plane to the Bragg angle will produce particularly strong contrast variations. However, defects that produce only displacement of atoms that do not tilt the crystal towards the Bragg angle (i. e. displacements parallel to the crystal plane) will produce weaker contrast.
Imaging methods:
Phase contrast Crystal structure can also be investigated by high-resolution transmission electron microscopy (HRTEM), also known as phase contrast. When using a field emission source and a specimen of uniform thickness, the images are formed due to differences in phase of electron waves, which is caused by specimen interaction. Image formation is given by the complex modulus of the incoming electron beams. As such, the image is not only dependent on the number of electrons hitting the screen, making direct interpretation of phase contrast images slightly more complex. However this effect can be used to an advantage, as it can be manipulated to provide more information about the sample, such as in complex phase retrieval techniques.
Imaging methods:
Diffraction As previously stated, by adjusting the magnetic lenses such that the back focal plane of the lens rather than the imaging plane is placed on the imaging apparatus a diffraction pattern can be generated. For thin crystalline samples, this produces an image that consists of a pattern of dots in the case of a single crystal, or a series of rings in the case of a polycrystalline or amorphous solid material. For the single crystal case the diffraction pattern is dependent upon the orientation of the specimen and the structure of the sample illuminated by the electron beam. This image provides the investigator with information about the space group symmetries in the crystal and the crystal's orientation to the beam path. This is typically done without using any information but the position at which the diffraction spots appear and the observed image symmetries.
Imaging methods:
Diffraction patterns can have a large dynamic range, and for crystalline samples, may have intensities greater than those recordable by CCD. As such, TEMs may still be equipped with film cartridges for the purpose of obtaining these images, as the film is a single use detector.
Imaging methods:
Analysis of diffraction patterns beyond point-position can be complex, as the image is sensitive to a number of factors such as specimen thickness and orientation, objective lens defocus, and spherical and chromatic aberration. Although quantitative interpretation of the contrast shown in lattice images is possible, it is inherently complicated and can require extensive computer simulation and analysis, such as electron multislice analysis. More complex behavior in the diffraction plane is also possible, with phenomena such as Kikuchi lines arising from multiple diffraction within the crystalline lattice. In convergent beam electron diffraction (CBED) where a non-parallel, i.e. converging, electron wavefront is produced by concentrating the electron beam into a fine probe at the sample surface, the interaction of the convergent beam can provide information beyond structural data such as sample thickness.
Imaging methods:
Electron energy loss spectroscopy (EELS) Using the advanced technique of electron energy loss spectroscopy (EELS), for TEMs appropriately equipped, electrons can be separated into a spectrum based upon their velocity (which is closely related to their kinetic energy, and thus energy loss from the beam energy), using magnetic sector based devices known as EEL spectrometers. These devices allow for the selection of particular energy values, which can be associated with the way the electron has interacted with the sample. For example, different elements in a sample result in different electron energies in the beam after the sample. This normally results in chromatic aberration – however this effect can, for example, be used to generate an image which provides information on elemental composition, based upon the atomic transition during electron-electron interaction. EELS spectrometers can often be operated in both spectroscopic and imaging modes, allowing for isolation or rejection of elastically scattered beams. As for many images inelastic scattering will include information that may not be of interest to the investigator thus reducing observable signals of interest, EELS imaging can be used to enhance contrast in observed images, including both bright field and diffraction, by rejecting unwanted components.
Imaging methods:
Three-dimensional imaging As TEM specimen holders typically allow for the rotation of a sample by a desired angle, multiple views of the same specimen can be obtained by rotating the angle of the sample along an axis perpendicular to the beam. By taking multiple images of a single TEM sample at differing angles, typically in 1° increments, a set of images known as a "tilt series" can be collected. This methodology was proposed in the 1970s by Walter Hoppe. Under purely absorption contrast conditions, this set of images can be used to construct a three-dimensional representation of the sample. The reconstruction is accomplished by a two-step process: first, images are aligned to account for errors in the positioning of a sample; such errors can occur due to vibration or mechanical drift. Alignment methods use image registration algorithms, such as autocorrelation methods, to correct these errors. Secondly, using a reconstruction algorithm, such as filtered back projection, the aligned image slices can be transformed from a set of two-dimensional images, Ij(x, y), to a single three-dimensional image, I'j(x, y, z). This three-dimensional image is of particular interest when morphological information is required; further study can be undertaken using computer algorithms, such as isosurfaces and data slicing, to analyse the data.
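A highly simplified sketch of this two-step reconstruction (translational alignment followed by filtered back projection) is given below, using phase cross-correlation from scikit-image as a stand-in for the alignment step and iradon for the back projection; real tilt-series software additionally handles tilt-axis geometry, fiducial tracking and the missing wedge, which are omitted here:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation
from skimage.transform import iradon

def align_tilt_series(images):
    """Translationally register each projection to the first one using
    phase cross-correlation (a simple stand-in for the alignment step)."""
    reference = images[0]
    aligned = [reference]
    for image in images[1:]:
        offset, _, _ = phase_cross_correlation(reference, image)
        aligned.append(nd_shift(image, offset))
    return np.stack(aligned)

def reconstruct_slice(aligned, tilt_angles_deg, row):
    """Filtered back projection of one row of the aligned tilt series;
    the sinogram passed to iradon has one column per tilt angle."""
    sinogram = aligned[:, row, :].T
    return iradon(sinogram, theta=tilt_angles_deg)
```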
Imaging methods:
As TEM samples cannot typically be viewed at a full 180° rotation, the observed images typically suffer from a "missing wedge" of data, which, when using Fourier-based back projection methods, decreases the range of resolvable frequencies in the three-dimensional reconstruction. Mechanical refinements, such as multi-axis tilting (two tilt series of the same specimen made at orthogonal directions) and conical tomography (where the specimen is first tilted to a given fixed angle and then imaged at equal angular rotational increments through one complete rotation in the plane of the specimen grid), can be used to limit the impact of the missing data on the observed specimen morphology. Using focused ion beam milling, a new technique has been proposed which uses pillar-shaped specimens and a dedicated on-axis tomography holder to perform a 180° rotation of the sample inside the pole piece of the objective lens in the TEM. Using such arrangements, quantitative electron tomography without the missing wedge is possible. In addition, numerical techniques exist which can improve the collected data.
Imaging methods:
All the above-mentioned methods involve recording tilt series of a given specimen field. This inevitably results in the summation of a high dose of reactive electrons through the sample and the accompanying destruction of fine detail during recording. The technique of low-dose (minimal-dose) imaging is therefore regularly applied to mitigate this effect. Low-dose imaging is performed by deflecting illumination and imaging regions simultaneously away from the optical axis to image an adjacent region to the area to be recorded (the high-dose region). This area is maintained centered during tilting and refocused before recording. During recording the deflections are removed so that the area of interest is exposed to the electron beam only for the duration required for imaging. An improvement of this technique (for objects resting on a sloping substrate film) is to have two symmetrical off-axis regions for focusing followed by setting focus to the average of the two high-dose focus values before recording the low-dose area of interest.
Imaging methods:
Non-tomographic variants on this method, referred to as single particle analysis, use images of multiple (hopefully) identical objects at different orientations to produce the image data required for three-dimensional reconstruction. If the objects do not have significant preferred orientations, this method does not suffer from the missing data wedge (or cone) that accompanies tomographic methods, nor does it incur an excessive radiation dose; however, it assumes that the different objects imaged can be treated as if the 3D data generated from them arose from a single stable object.
Sample preparation:
Sample preparation in TEM can be a complex procedure. TEM specimens should be less than 100 nanometers thick for a conventional TEM. Unlike neutron or X-ray radiation, the electrons in the beam interact readily with the sample, an effect that increases roughly with the square of the atomic number (Z²). High quality samples will have a thickness that is comparable to the mean free path of the electrons that travel through the samples, which may be only a few tens of nanometers. Preparation of TEM specimens is specific to the material under analysis and the type of information to be obtained from the specimen.
Sample preparation:
Materials that have dimensions small enough to be electron transparent, such as powdered substances, small organisms, viruses, or nanotubes, can be quickly prepared by the deposition of a dilute sample containing the specimen onto films on support grids. Biological specimens may be embedded in resin to withstand the high vacuum in the sample chamber and to enable cutting tissue into electron transparent thin sections. The biological sample can be stained using either a negative staining material such as uranyl acetate for bacteria and viruses, or, in the case of embedded sections, the specimen may be stained with heavy metals, including osmium tetroxide. Alternatively, samples may be held at liquid nitrogen temperatures after embedding in vitreous ice. In material science and metallurgy the specimens can usually withstand the high vacuum, but still must be prepared as a thin foil, or etched so some portion of the specimen is thin enough for the beam to penetrate. Constraints on the thickness of the material may be set by the scattering cross-section of the atoms of which the material is composed.
Sample preparation:
Tissue sectioning Biological tissue is often embedded in a resin block then thinned to less than 100 nm on an ultramicrotome. The resin block is fractured as it passes over a glass or diamond knife edge. This method is used to obtain thin, minimally deformed samples that allow for the observation of tissue ultrastructure. Inorganic samples, such as aluminium, may also be embedded in resins and ultrathin sectioned in this way, using either coated glass, sapphire or larger angle diamond knives. To prevent charge build-up at the sample surface when viewing in the TEM, tissue samples need to be coated with a thin layer of conducting material, such as carbon.
Sample preparation:
Sample staining TEM samples of biological tissues need high atomic number stains to enhance contrast. The stain absorbs the beam electrons or scatters part of the electron beam which otherwise is projected onto the imaging system. Compounds of heavy metals such as osmium, lead, uranium or gold (in immunogold labelling) may be used prior to TEM observation to selectively deposit electron dense atoms in or on the sample in desired cellular or protein regions. This process requires an understanding of how heavy metals bind to specific biological tissues and cellular structures.
Sample preparation:
Mechanical milling Mechanical polishing is also used to prepare samples for imaging on the TEM. Polishing needs to be done to a high quality, to ensure constant sample thickness across the region of interest. A diamond, or cubic boron nitride polishing compound may be used in the final stages of polishing to remove any scratches that may cause contrast fluctuations due to varying sample thickness. Even after careful mechanical milling, additional fine methods such as ion etching may be required to perform final stage thinning.
Sample preparation:
Chemical etching Certain samples may be prepared by chemical etching, particularly metallic specimens. These samples are thinned using a chemical etchant, such as an acid, to prepare the sample for TEM observation. Devices to control the thinning process may allow the operator to control either the voltage or current passing through the specimen, and may include systems to detect when the sample has been thinned to a sufficient level of optical transparency.
Sample preparation:
Ion etching Ion etching is a sputtering process that can remove very fine quantities of material. This is used to perform a finishing polish of specimens polished by other means. Ion etching uses an inert gas passed through an electric field to generate a plasma stream that is directed to the sample surface. Acceleration energies for gases such as argon are typically a few kilovolts. The sample may be rotated to promote even polishing of the sample surface. The sputtering rate of such methods is on the order of tens of micrometers per hour, limiting the method to only extremely fine polishing.
Sample preparation:
Ion etching with argon gas has recently been shown to be capable of thinning magnetic tunnel junction (MTJ) stack structures down to a specific layer, which has then been atomically resolved. TEM images taken in plan view rather than cross-section reveal that the MgO layer within MTJs contains a large number of grain boundaries that may be diminishing the properties of devices.
Sample preparation:
Ion milling (FIB) More recently focused ion beam methods have been used to prepare samples. FIB is a relatively new technique to prepare thin samples for TEM examination from larger specimens. Because FIB can be used to micro-machine samples very precisely, it is possible to mill very thin membranes from a specific area of interest in a sample, such as a semiconductor or metal. Unlike inert gas ion sputtering, FIB makes use of significantly more energetic gallium ions and may alter the composition or structure of the material through gallium implantation.
Sample preparation:
Nanowire assisted transfer For a minimal introduction of stress and bending to transmission electron microscopy (TEM) samples (lamellae, thin films, and other mechanically and beam sensitive samples), when transferring inside a focused ion beam (FIB), flexible metallic nanowires can be attached to a typically rigid micromanipulator.
The main advantages of this method include a significant reduction of sample preparation time (quick welding and cutting of nanowire at low beam current), and minimization of stress-induced bending, Pt contamination, and ion beam damage.
This technique is particularly suitable for in situ electron microscopy sample preparation.
Sample preparation:
Replication Samples may also be replicated using cellulose acetate film, the film subsequently coated with a heavy metal such as platinum, the original film dissolved away, and the replica imaged on the TEM. Variations of the replica technique are used for both materials and biological samples. In materials science a common use is for examining the fresh fracture surface of metal alloys.
Modifications:
The capabilities of the TEM can be further extended by additional stages and detectors, sometimes incorporated on the same microscope.
Modifications:
Scanning TEM A TEM can be modified into a scanning transmission electron microscope (STEM) by the addition of a system that rasters a convergent beam across the sample to form the image, when combined with suitable detectors. Scanning coils are used to deflect the beam, such as by an electrostatic shift of the beam, where the beam is then collected using a current detector such as a Faraday cup, which acts as a direct electron counter. By correlating the electron count to the position of the scanning beam (known as the "probe"), the transmitted component of the beam may be measured. The non-transmitted components may be obtained either by beam tilting or by the use of annular dark field detectors.
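To make the probe-position bookkeeping described above concrete, here is a minimal, purely synthetic sketch of how a STEM image is assembled: the beam is stepped over a grid of positions and the detector count recorded at each probe position becomes one pixel. The transmission map and the complementary "dark-field" signal are simplified stand-ins, not a model of a real detector geometry.

```python
# Hedged toy sketch: build bright-field and dark-field STEM images pixel by pixel
# from per-probe-position electron counts. The "specimen" is a synthetic
# transmission map; real annular dark-field geometry is not modelled.
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 128, 128
transmission = np.clip(rng.normal(0.7, 0.1, size=(ny, nx)), 0.0, 1.0)

electrons_per_dwell = 1000                 # incident electrons per probe position
bright_field = np.zeros((ny, nx))          # on-axis (transmitted) signal
dark_field = np.zeros((ny, nx))            # everything not transmitted (simplified)

for iy in range(ny):                       # raster scan, row by row
    for ix in range(nx):
        transmitted = rng.binomial(electrons_per_dwell, transmission[iy, ix])
        bright_field[iy, ix] = transmitted
        dark_field[iy, ix] = electrons_per_dwell - transmitted
```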
Modifications:
Fundamentally, TEM and STEM are linked via Helmholtz reciprocity. A STEM is a TEM in which the electron source and observation point have been switched relative to the direction of travel of the electron beam. The STEM instrument effectively relies on the same optical set-up as a TEM, but operates by flipping the direction of travel of the electrons (or reversing time) during operation of a TEM. Rather than using an aperture to control detected electrons, as in TEM, a STEM uses various detectors with collection angles that may be adjusted depending on which electrons the user wants to capture.
Modifications:
Low-voltage electron microscope A low-voltage electron microscope (LVEM) is operated at a relatively low electron accelerating voltage, between 5 and 25 kV. Some of these can be a combination of SEM, TEM and STEM in a single compact instrument. Low voltage increases image contrast, which is especially important for biological specimens. This increase in contrast significantly reduces, or even eliminates, the need to stain. Resolutions of a few nm are possible in TEM, SEM and STEM modes. The low energy of the electron beam means that permanent magnets can be used as lenses, and thus a miniature column that does not require cooling can be used.
Modifications:
Cryo-TEM Cryogenic transmission electron microscopy (Cryo-TEM) uses a TEM with a specimen holder capable of maintaining the specimen at liquid nitrogen or liquid helium temperatures. This allows imaging specimens prepared in vitreous ice, the preferred preparation technique for imaging individual molecules or macromolecular assemblies, imaging of vitrified solid-electrolyte interfaces, and imaging of materials that are volatile in high vacuum at room temperature, such as sulfur.
Modifications:
Environmental/In-situ TEM In-situ experiments may also be conducted in TEM using differentially pumped sample chambers, or specialized holders. Types of in-situ experiments include studying nanomaterials, biological specimens, chemical reactions of molecules, liquid-phase electron microscopy, and material deformation testing.
Modifications:
High Temperature In-Situ TEM Many phase transformations occur during heating. Additionally, coarsening and grain growth, along with other diffusion-related processes occur more rapidly at elevated temperatures, where kinetics are improved, allowing for the observation of related phenomena under transmission electron microscopy within reasonable time scales. This also allows for the observation of phenomena that occur at elevated temperatures and disappear or are not uniformly preserved in ex-situ samples.
Modifications:
High temperature TEM introduces various additional challenges which must be addressed in the mechanics of high temperature holders, including but not limited to drift correction, temperature measurement, and the decreased spatial resolution that accompanies more complex holders. Sample drift in the TEM is linearly proportional to the temperature differential between the room and the holder. With temperatures as high as 1500 °C in modern holders, samples may experience significant drift and vertical displacement (bulging), requiring continuous focus or stage adjustments and inducing resolution loss. Individual labs and manufacturers have developed software coupled with advanced cooling systems to correct for thermal drift based on the predicted temperature in the sample chamber. These systems often take from 30 minutes to many hours for sample shifts to stabilize. While significant progress has been made, no universal TEM attachment has been made to account for drift at elevated temperatures. An additional challenge of many of these specialized holders is knowing the local sample temperature. Many high temperature holders use a tungsten filament to locally heat the sample. Ambiguity in the temperature of furnace-type heaters (tungsten wire) measured with thermocouples arises from the variable thermal contact between the furnace and the TEM grid, and is complicated by temperature gradients along the sample caused by the differing thermal conductivities of different sample and grid materials. With different holders, both commercial and lab-made, different methods for temperature calibration are available. Manufacturers such as Gatan use IR pyrometry to measure temperature gradients over the entire sample. An even better calibration method is Raman spectroscopy, which measures the local temperature of Si powder on electron transparent windows and quantitatively calibrates the IR pyrometry; these measurements have demonstrated accuracy to within 5%. Research laboratories have also performed their own calibrations on commercial holders. Researchers at NIST used Raman spectroscopy to map the temperature profile of a sample on a TEM grid and achieve very precise measurements to enhance their research. Similarly, a research group in Germany used X-ray diffraction to measure slight shifts in lattice spacing caused by changes in temperature and so back-calculate the exact temperature in the holder; this process required careful calibration and exact TEM optics. Other examples include the use of EELS to measure local temperature via the change in gas density, and via resistivity changes. Optimal resolution in a TEM is achieved when spherical aberrations are corrected at the objective lens. However, due to the geometry of most TEMs, inserting large in-situ holders requires the user to compromise the objective lens and tolerate spherical aberrations; there is therefore a trade-off between the width of the pole-piece gap and spatial resolution below 0.1 nm. Research groups at various institutions have tried to overcome spherical aberrations through the use of monochromators to achieve 0.05 nm resolution with a 5 mm pole-piece gap.
Modifications:
In-Situ Mechanical TEM The high resolution of TEM allows for monitoring the sample in question on a length scale ranging from hundreds of nanometers to several angstroms. This allows for the visualization of both elastic and plastic deformation via strain fields, as well as the motion of crystallographic defects such as lattice distortions and dislocation motion. By simultaneously observing deformation phenomena and measuring mechanical response in situ, it is possible to connect nano-mechanical testing information to models that describe both the subtlety and complexity of how materials respond to stress and strain. The material properties and data accuracy obtained from such nano-mechanical tests are largely determined by the mechanical straining holder being used. Current straining holders have the ability to perform tensile tests, nano-indentation, compression tests, shear tests and bending tests on materials.
Modifications:
Classical Mechanical Holders One of the pioneers of classical holders was Dr. Heinz G.F. Wilsdorf, who conducted a tensile test inside a TEM in 1958. In a typical experiment, electron transparent TEM samples are cut to shape and glued to a deformable grid. Advances in micromanipulators have also enabled the tensile testing of nanowires and thin films. The deformable grid attaches to the classical tensile holder, which stretches the sample using a long rigid shaft attached to a worm gear box actuated by an electric motor located in a housing outside the TEM. Typical displacement (straining) rates range from 10 nm/s to 10 μm/s. Custom-made holders that extend this simple straining actuation have enabled bending tests using a bending holder and shear tests using a shear sample holder. The sample properties typically measured in these experiments are yield strength, elastic modulus, shear modulus, tensile strength, bending strength, and shear strength. In order to study the temperature-dependent mechanical properties of TEM samples, the holder can be cooled through a cold finger connected to a liquid nitrogen reservoir. For high temperature experiments, the TEM sample can also be heated through a miniaturized furnace or a laser that can typically reach 1000 °C.
Modifications:
Nano-Indentation Holders Nano-indentation holders perform a hardness test on the material in question by pressing a hard tip into a polished flat surface and measuring the applied force and the resulting displacement on the TEM sample through a change in capacitance between a reference plate and a movable electrostatic plate attached to the tip. The typical measured sample properties are hardness and elastic modulus. Although nano-indentation has been possible since the early 1980s, its investigation using a TEM was first reported in 2001, where an aluminum sample deposited on a silicon wedge was investigated. For nanoindentation experiments, TEM samples are typically shaped as wedges using a tripod polisher, as an H-bar window, or as a micro- or nanopillar using a focused ion beam, to create enough space for a tip to be pressed at the desired electron transparent location. The indenter tips are typically flat punch-type, pyramidal, or wedge shaped, elongated in the z-direction. Pyramidal tips offer high precision on the order of 10 nm but suffer from sample slip, while wedge indenters have greater contact area to prevent slipping but require finite element analysis to model the transmitted stress, since the high contact area with the TEM sample makes this almost a compression test.
Modifications:
Micro Electro-Mechanical Systems (MEMS) Micro electro-mechanical systems (MEMS) based holders provide a cheap and customizable platform for conducting mechanical tests on samples that were previously difficult to work with, such as micropillars, nanowires, and thin films. Passive MEMS devices are used as simple push-to-pull devices for in-situ mechanical tests. Typically, a nano-indentation holder is used to apply a pushing force at the indentation site. Using a geometry of arms, this pushing force translates into a pulling force on a pair of tensile pads to which the sample is attached. Thus, a compression applied on the outside of the MEMS device translates into a tension in the central gap where the TEM sample is located. It is worth noting that the resulting force-displacement curve needs to be corrected by performing the same test on an empty MEMS device without the TEM sample, to account for the stiffness of the empty device. The dimensions and stiffness of the MEMS device can be modified to perform tensile tests on different sized samples with different loads. To smoothen the actuation process, active MEMS devices have been developed with built-in actuators and sensors. These devices work by applying a stress using electrical power and measuring strain through capacitance variations. Electrostatically actuated MEMS devices have also been developed to accommodate very low applied forces in the 1-100 nN range. Much of current research focuses on developing sample holders that can perform mechanical tests while creating an environmental stimulus such as temperature change, variable strain rates, and different gas environments. In addition, the emergence of high resolution detectors is allowing researchers to monitor dislocation motion and interactions with other defects and to push the limits of sub-nanometer strain measurements. In-situ mechanical TEM measurements are routinely coupled with other standard TEM measurements such as EELS and XEDS to reach a comprehensive understanding of the sample structure and properties.
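The empty-device correction mentioned above can be illustrated with a short sketch: the force measured on an unloaded MEMS frame is interpolated at the displacements of the loaded test and subtracted, leaving only the sample's contribution. All numbers below are hypothetical placeholders.

```python
# Hedged sketch of the empty-MEMS stiffness correction described above.
# All data are synthetic placeholders; units are illustrative (nm, uN).
import numpy as np

# Test with the sample mounted: displacement (nm) and total measured force (uN)
disp = np.linspace(0.0, 100.0, 11)
force_total = 0.9 * disp + 0.002 * disp**2       # frame + sample response

# Reference test on an empty device (no sample mounted)
disp_empty = np.linspace(0.0, 100.0, 21)
force_empty = 0.9 * disp_empty                   # frame stiffness only

# Subtract the frame contribution, interpolated at the same displacements
force_sample = force_total - np.interp(disp, disp_empty, force_empty)

# Effective sample stiffness (uN/nm) from a linear fit to the corrected curve
sample_stiffness = np.polyfit(disp, force_sample, 1)[0]
print(f"sample stiffness ~ {sample_stiffness:.4f} uN/nm")
```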
Modifications:
Aberration corrected TEM Modern research TEMs may include aberration correctors, to reduce the amount of distortion in the image. Incident beam monochromators may also be used which reduce the energy spread of the incident electron beam to less than 0.15 eV. Major aberration corrected TEM manufacturers include JEOL, Hitachi High-technologies, FEI Company, and NION.
Modifications:
Ultrafast and dynamic TEM It is possible to reach temporal resolution far beyond that of the readout rate of electron detectors with the use of pulsed electrons. Pulses can be produced by either modifying the electron source to enable laser-triggered photoemission or by installation of an ultrafast beam blanker. This approach is termed ultrafast transmission electron microscopy when stroboscopic pump-probe illumination is used: an image is formed by the accumulation of many ultrashort electron pulses (typically of hundreds of femtoseconds) with a fixed time delay between the arrival of the electron pulse and the sample excitation. On the other hand, the use of single or a short sequence of electron pulses with a sufficient number of electrons to form an image from each pulse is called dynamic transmission electron microscopy. Temporal resolution down to hundreds of femtoseconds and spatial resolution comparable to that available with a Schottky field emission source is possible in ultrafast TEM, but the technique can only image reversible processes that can be reproducibly triggered millions of times. Dynamic TEM can resolve irreversible processes down to tens of nanoseconds and tens of nanometers. The technique was pioneered in the early 2000s in laboratories in Germany (Technical University Berlin) and in the USA (Caltech and Lawrence Livermore National Laboratory). Ultrafast TEM and dynamic TEM have made possible the real-time investigation of numerous physical and chemical phenomena at the nanoscale.
Modifications:
An interesting variant of the ultrafast transmission electron microscopy technique is photon-induced near-field electron microscopy (PINEM), which is based on the inelastic coupling between electrons and photons in the presence of a surface or a nanostructure. This method allows one to investigate time-varying nanoscale electromagnetic fields in an electron microscope, as well as to dynamically shape the wave properties of the electron beam.
Limitations:
There are a number of drawbacks to the TEM technique. Many materials require extensive sample preparation to produce a sample thin enough to be electron transparent, which makes TEM analysis a relatively time-consuming process with a low throughput of samples. The structure of the sample may also be changed during the preparation process. Also the field of view is relatively small, raising the possibility that the region analyzed may not be characteristic of the whole sample. There is potential that the sample may be damaged by the electron beam, particularly in the case of biological materials.
Limitations:
Resolution limits The limit of resolution obtainable in a TEM may be described in several ways, and is typically referred to as the information limit of the microscope. One commonly used value is a cut-off value of the contrast transfer function, a function that is usually quoted in the frequency domain to define the reproduction of spatial frequencies of objects in the object plane by the microscope optics. A cut-off frequency, qmax, for the transfer function may be approximated with the following equation, where Cs is the spherical aberration coefficient and λ is the electron wavelength: qmax = [0.67 (Cs λ³)^(1/4)]⁻¹, i.e. the smallest resolvable spacing is 1/qmax ≈ 0.67 (Cs λ³)^(1/4).
Limitations:
For a 200 kV microscope, with partly corrected spherical aberrations ("to the third order") and a Cs value of 1 µm, a theoretical cut-off value might be 1/qmax = 42 pm. The same microscope without a corrector would have Cs = 0.5 mm and thus a 200-pm cut-off. The spherical aberrations are suppressed to the third or fifth order in the "aberration-corrected" microscopes. Their resolution is however limited by electron source geometry and brightness and chromatic aberrations in the objective lens system. The frequency domain representation of the contrast transfer function may often have an oscillatory nature, which can be tuned by adjusting the focal value of the objective lens. This oscillatory nature implies that some spatial frequencies are faithfully imaged by the microscope, whilst others are suppressed. By combining multiple images with different spatial frequencies, techniques such as focal series reconstruction can be used to improve the resolution of the TEM in a limited manner. The contrast transfer function can, to some extent, be experimentally approximated through techniques such as Fourier transforming images of amorphous material, such as amorphous carbon.
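The cut-off values quoted above (42 pm and 200 pm at 200 kV) can be checked with a short calculation, assuming the relativistic electron wavelength together with the cut-off expression given earlier; the helper names below are illustrative, not from the source.

```python
# Hedged numerical check of the cut-off values quoted above.
# Uses the relativistic electron de Broglie wavelength and
# 1/q_max = 0.67 * (Cs * lambda^3)^(1/4).
from scipy.constants import h, m_e, e, c

def electron_wavelength(voltage):
    """Relativistic electron wavelength (m) for an accelerating voltage (V)."""
    return h / (2 * m_e * e * voltage * (1 + e * voltage / (2 * m_e * c**2))) ** 0.5

def information_limit(Cs, voltage):
    """Smallest resolvable spacing 1/q_max (m) for spherical aberration Cs (m)."""
    lam = electron_wavelength(voltage)
    return 0.67 * (Cs * lam**3) ** 0.25

print(electron_wavelength(200e3) * 1e12)        # ~2.5 pm at 200 kV
print(information_limit(1e-6, 200e3) * 1e12)    # ~42 pm  (corrected, Cs = 1 um)
print(information_limit(0.5e-3, 200e3) * 1e12)  # ~200 pm (uncorrected, Cs = 0.5 mm)
```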
Limitations:
More recently, advances in aberration corrector design have been able to reduce spherical aberrations and to achieve resolution below 0.5 Ångströms (50 pm) at magnifications above 50 million times. Improved resolution allows for the imaging of lighter atoms that scatter electrons less efficiently, such as lithium atoms in lithium battery materials. The ability to determine the position of atoms within materials has made the HRTEM an indispensable tool for nanotechnology research and development in many fields, including heterogeneous catalysis and the development of semiconductor devices for electronics and photonics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Slug test**
Slug test:
In hydrogeology, a slug test is a particular type of aquifer test where water is quickly added or removed from a groundwater well, and the change in hydraulic head is monitored through time, to determine the near-well aquifer characteristics. It is a method used by hydrogeologists and civil engineers to determine the transmissivity/hydraulic conductivity and storativity of the material the well is completed in.
Method:
The "slug" of water can either be added to or removed from the well — the only requirement is that it be done as quickly as possible (the interpretation typically assumes instantaneously), then the water level or pressure is monitored. Depending on the properties of the aquifer and the size of the slug, the water level may return to pre-test levels very quickly (thus complicating accurate collection of water level data). A slug can be added by either quickly adding a measured amount of water to the well or something which displaces a measured volume (e.g., a long heavy pipe with the ends capped off). An alternative object is a solid polyvinyl chloride (PVC) rod, with sufficient weight to sink into the groundwater. The objective here is to displace water, not merely be "heavy". A slug of water can be removed using a bailer or pump, but this is more difficult to do since it must be done very quickly and the equipment for removing the water (pump or bailer) will likely be in the way of getting water level measurements.
Performance:
A slug test is in contrast to standard aquifer tests, which typically involve pumping a well at a constant flowrate and monitoring the response of the aquifer in nearby monitoring wells. Slug tests are often performed instead of a constant-rate test because of time constraints (quick results, or results for a large number of wells, are needed); because the well does not or cannot have a pump installed on it (slug tests do not require pumping); because the transmissivity of the material the well is cased in is too low to realistically perform a proper pumping test (common for aquitards or some bedrock monitoring wells); or because the general size (order of magnitude) of the aquifer parameters is all the accuracy that is required. The size of the slug required is determined by the aquifer properties, the size of the well and the amount of time which is available for the test. For very permeable aquifers, the pulse will dissipate very quickly. If the well has a large diameter, a large volume of water must be added to increase the level in the well a measurable amount.
Interpretation:
Because the flow rate into or out of the well is not constant, as is the case in a typical aquifer test, the standard Theis solution does not work. Mathematically, the Theis equation is the solution of the groundwater flow equation for a step increase in discharge rate at the pumping well; a slug test is instead an instantaneous pulse at the pumping well. This means that a superposition (or more precisely a convolution) of an infinite number of sequential slug tests through time would effectively be a "standard" Theis aquifer test.
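The superposition argument can be written compactly. As a hedged sketch (notation assumed, not from the source), let g(r, t) denote the drawdown per unit volume injected instantaneously (the slug-test impulse response) and Q the constant pumping rate of the "standard" test:

```latex
% Hedged sketch of the convolution relation described above (notation assumed).
% g(r,t): drawdown per unit volume from an instantaneous slug at t = 0
% Q: constant pumping rate of the standard (Theis) test
\[
  s_{\mathrm{pump}}(r,t) \;=\; Q \int_{0}^{t} g(r,\, t - \tau)\,\mathrm{d}\tau ,
\]
% i.e. the constant-rate (Theis) drawdown is the running integral, a convolution
% with a step forcing, of the instantaneous slug response.
```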
Interpretation:
There are several known solutions to the slug test problem; a common engineering approximation is the Hvorslev method, which approximates the more rigorous solution to transient aquifer flow with a simple decaying exponential function.
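A minimal sketch of a Hvorslev-style interpretation is given below: fit the exponential head decay to obtain the basic time lag, then convert it to hydraulic conductivity. The record and well geometry are hypothetical, and the formula shown is the commonly cited Hvorslev expression, which assumes a screen length much greater than the well radius (roughly Le/R > 8).

```python
# Hedged sketch of a Hvorslev-type slug-test interpretation.
# Data and well geometry below are hypothetical; K = r_c^2 ln(L_e/R) / (2 L_e T0)
# is used under the assumption L_e/R > 8.
import numpy as np

# time (s) and normalized head displacement h(t)/h0 from a slug test (hypothetical)
t = np.array([0, 10, 20, 40, 60, 90, 120, 180], dtype=float)
h_ratio = np.array([1.00, 0.78, 0.61, 0.37, 0.23, 0.11, 0.055, 0.013])

# h/h0 = exp(-t/T0): the slope of ln(h/h0) versus t gives -1/T0
slope = np.polyfit(t, np.log(h_ratio), 1)[0]
T0 = -1.0 / slope          # basic time lag = time for h/h0 to fall to 0.37

r_c = 0.05                 # casing radius (m)
R = 0.05                   # screen / borehole radius (m)
L_e = 3.0                  # screen length (m)

K = r_c**2 * np.log(L_e / R) / (2 * L_e * T0)
print(f"T0 ~ {T0:.0f} s, K ~ {K:.1e} m/s")
```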
The aquifer parameters obtained from a slug test are typically less representative of the aquifer surrounding the well than an aquifer test which involves pumping in one well and monitoring in another. Complications arise from near-well effects (i.e., well skin and wellbore storage), which may make it difficult to get accurate results from slug test interpretation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TMPGEnc**
TMPGEnc:
TMPGEnc or TSUNAMI MPEG Encoder is a video transcoder software application, primarily for encoding video files to VCD- and SVCD-compliant MPEG video formats, developed by Hiroyuki Hori and Pegasys Inc. TMPGEnc can also refer to the family of software video encoders created after the success of the original TMPGEnc encoder. These include: TMPGEnc Plus, TMPGEnc Free Version, TMPGEnc Video Mastering Works, TMPGEnc Authoring Works, TMPGEnc MovieStyle and TMPGEnc MPEG Editor. TMPGEnc products run on Microsoft Windows.
TMPGEnc:
The free trial version of TMPGEnc Video Mastering Works has a 14-day time limit. The TMPGEnc Free Version has a 30-day time limit for MPEG-2 encoding; MPEG-1 encoding is without limit, but it can be used only for non-commercial, personal or demonstration purposes.
History:
The first beta versions of the TMPGEnc encoder were freely available in 2000 and 2001 and were known as Tsunami MPEG Encoder. The first "stable" version was TMPGEnc 2.00, released on 2001-11-01. In December 2001, sales of "TMPGEnc Plus" started in Japan. In January 2002, the "TMPGEnc Plus - English version" was released. In August 2002, TMPGEnc DVD Source Creator was released and bundled with Sony "Vaio" PCs in Japan. In April 2003, "TMPGEnc DVD Author - English version" was released. In March 2005, Tsunami MPEG Video Encoder XPress was released. In August 2005, "TSUNAMI" and "TMPGEnc" were combined into one brand. TMPGEnc Plus/TMPGEnc Free Version was often rated as one of the best-quality MPEG-1/MPEG-2 encoders, alongside Canopus ProCoder and Cinema Craft Encoder. The popularity of TMPGEnc encoders has spawned various other products, and "TMPGEnc" is now used as a general brand name for products such as TMPGEnc Authoring Works (a consumer-grade Blu-ray Disc, DVD, and DivX authoring tool), TMPGEnc MovieStyle (a video converter primarily for portable and set-top devices), and TMPGEnc MPEG Editor (an MPEG editing program). TMPGEnc Plus is currently still sold by Pegasys Inc., alongside TMPGEnc Video Mastering Works, TMPGEnc Authoring Works, TMPGEnc MovieStyle, TMPGEnc MPEG Editor, TMPGEnc Instant Show Presenter, and TMPGEnc KARMA..Plus. The TMPGEnc Free Version was updated in 2008 for compatibility with Windows Vista (SP1 included).
Technical details:
TMPGEnc Plus in its first releases provided advanced MPEG-1 and MPEG-2 video encoding with various technical options, MPEG-1 Layer II and Layer I audio encoding, support for external audio encoders (such as toolame, l3enc, mp3enc, LAME), internal video filters (such as deinterlacing), support for various input formats (AVI, MPEG, WAV, sequence JPEG, TGA files, etc.) depending on installed DirectShow filters, VFAPI frameserver support, support for AVI, WAV, BMP, TGA output, and other features. TMPGEnc encoders can read most video formats, as long as the appropriate DirectShow filters are installed in the system. TMPGEnc Plus and TMPGEnc Free Version include a tool named "MPEG Tools", which is a simple multiplexer and demultiplexer for MPEG containers (MPEG program stream).
Technical details:
TMPGEnc Video Mastering Works also provides HD MPEG-4 AVC/H.264 output support, Blu-ray Disc output support, AVCHD input support, DVD-Video and DVD-VR input support, MKV input and output support, FLV input, etc. It is the first TMPGEnc product to incorporate the x264 encoding engine for MPEG-4 AVC/H.264 output and is the first software product to commercially license the x264 encoder.
New to TMPGEnc Video Mastering Works 6 over previous versions is H.265/HEVC encoding support (4K and 8K), H.264/AVC 10-bit format (4:2:2 and 4:4:4) output support, and more. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Secretagogue**
Secretagogue:
A secretagogue is a substance that causes another substance to be secreted. The word contains the suffix -agogue, which refers to something that leads to something else; a secretagogue thus leads to secretion. One example is gastrin, which stimulates the H/K ATPase in the parietal cells (increased gastric acid production by the stomach). Pentagastrin, a synthetic gastrin, histamine, and acetylcholine are also gastric secretagogues.
Secretagogue:
Insulin secretagogues, such as sulfonylureas, trigger insulin release by direct action on the KATP channel of the pancreatic beta cells. Blockage of this channel leads to membrane depolarization and the exocytosis of insulin-containing vesicles.
Angiotensin II is a secretagogue for aldosterone from the adrenal gland. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glycolipid**
Glycolipid:
Glycolipids are lipids with a carbohydrate attached by a glycosidic (covalent) bond. Their role is to maintain the stability of the cell membrane and to facilitate cellular recognition, which is crucial to the immune response and in the connections that allow cells to connect to one another to form tissues. Glycolipids are found on the surface of all eukaryotic cell membranes, where they extend from the phospholipid bilayer into the extracellular environment.
Structure:
The essential feature of a glycolipid is the presence of a monosaccharide or oligosaccharide bound to a lipid moiety. The most common lipids in cellular membranes are glycerolipids and sphingolipids, which have glycerol or sphingosine backbones, respectively. Fatty acids are connected to this backbone, so that the lipid as a whole has a polar head and a non-polar tail. The lipid bilayer of the cell membrane consists of two layers of lipids, with the inner and outer surfaces of the membrane made up of the polar head groups, and the inner part of the membrane made up of the non-polar fatty acid tails.
Structure:
The saccharides that are attached to the polar head groups on the outside of the cell are the ligand components of glycolipids, and are likewise polar, allowing them to be soluble in the aqueous environment surrounding the cell. The lipid and the saccharide form a glycoconjugate through a glycosidic bond, which is a covalent bond. The anomeric carbon of the sugar binds to a free hydroxyl group on the lipid backbone. The structure of these saccharides varies depending on the structure of the molecules to which they bind.
Metabolism:
Glycosyltransferases Enzymes called glycosyltransferases link the saccharide to the lipid molecule, and also play a role in assembling the correct oligosaccharide so that the right receptor can be activated on the cell which responds to the presence of the glycolipid on the surface of the cell. The glycolipid is assembled in the Golgi apparatus and embedded in the surface of a vesicle which is then transported to the cell membrane. The vesicle merges with the cell membrane so that the glycolipid can be presented on the cell's outside surface.
Metabolism:
Glycoside hydrolases Glycoside hydrolases catalyze the breakage of glycosidic bonds. They are used to modify the oligosaccharide structure of the glycan after it has been added onto the lipid. They can also remove glycans from glycolipids to turn them back into unmodified lipids.
Metabolism:
Defects in metabolism Sphingolipidoses are a group of diseases that are associated with the accumulation of sphingolipids which have not been degraded correctly, normally due to a defect in a glycoside hydrolase enzyme. Sphingolipidoses are typically inherited, and their effects depend on which enzyme is affected, and the degree of impairment. One notable example is Niemann–Pick disease which can cause pain and damage to neural networks.
Function:
Cell–cell interactions The main function of glycolipids in the body is to serve as recognition sites for cell–cell interactions. The saccharide of the glycolipid will bind to a specific complementary carbohydrate, or to a lectin (carbohydrate-binding protein), of a neighboring cell. The interaction of these cell surface markers is the basis of cell recognition, and it initiates cellular responses that contribute to activities such as regulation, growth, and apoptosis.
Function:
Immune responses An example of how glycolipids function within the body is the interaction between leukocytes and endothelial cells during inflammation. Selectins, a class of lectins found on the surface of leukocytes and endothelial cells bind to the carbohydrates attached to glycolipids to initiate the immune response. This binding causes leukocytes to leave circulation and congregate near the site of inflammation. This is the initial binding mechanism, which is followed by the expression of integrins which form stronger bonds and allow leukocytes to migrate toward the site of inflammation. Glycolipids are also responsible for other responses, notably the recognition of host cells by viruses.
Function:
Blood types Blood types are an example of how glycolipids on cell membranes mediate cell interactions with the surrounding environment. The four main human blood types (A, B, AB, O) are determined by the oligosaccharide attached to a specific glycolipid on the surface of red blood cells, which acts as an antigen. The unmodified antigen, called the H antigen, is the characteristic of type O, and is present on red blood cells of all blood types. Blood type A has an N-acetylgalactosamine added as the main determining structure, type B has a galactose, and type AB has all three of these antigens. Antigens which are not present in an individual's blood will cause antibodies to be produced, which will bind to the foreign glycolipids. For this reason, people with blood type AB can receive transfusions from all blood types (the universal acceptor), and people with blood type O can act as donors to all blood types (the universal donor).
Types of glycolipids:
Glycoglycerolipids: a sub-group of glycolipids characterized by an acetylated or non-acetylated glycerol with at least one fatty acid as the lipid complex. Glyceroglycolipids are often associated with photosynthetic membranes and their functions. The subcategories of glyceroglycolipids depend on the carbohydrate attached. Galactolipids: defined by a galactose sugar attached to a glycerol lipid molecule. They are found in chloroplast membranes and are associated with photosynthetic properties.
Types of glycolipids:
Sulfolipids: have a sulfur-containing functional group in the sugar moiety attached to a lipid. An important group is the sulfoquinovosyl diacylglycerols which are associated with the sulfur cycle in plants.
Glycosphingolipids: a sub-group of glycolipids based on sphingolipids. Glycosphingolipids are mostly located in nervous tissue and are responsible for cell signaling. Cerebrosides: a group of glycosphingolipids involved in nerve cell membranes. Galactocerebrosides: a type of cerebroside with galactose as the saccharide moiety. Glucocerebrosides: a type of cerebroside with glucose as the saccharide moiety; often found in non-neural tissue.
Sulfatides: a class of glycolipids containing a sulfate group in the carbohydrate with a ceramide lipid backbone. They are involved in numerous biological functions ranging from immune response to nervous system signaling.
Gangliosides: the most complex animal glycolipids. They contain negatively charged oligosaccharides with one or more sialic acid residues; more than 200 different gangliosides have been identified. They are most abundant in nerve cells.
Globosides: glycosphingolipids with more than one sugar as part of the carbohydrate complex. They have a variety of functions; failure to degrade these molecules leads to Fabry disease.
Glycophosphosphingolipids: complex glycophospholipids from fungi, yeasts, and plants, where they were originally called "phytoglycolipids". They may be as complicated a set of compounds as the negatively charged gangliosides in animals.
Glycophosphatidylinositols: a sub-group of glycolipids defined by a phosphatidylinositol lipid moiety bound to a carbohydrate complex. They can be bound to the C-terminus of a protein and have various functions associated with the different proteins they can be bound to.
Saccharolipids | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Virstatin**
Virstatin:
Virstatin is a small molecule that inhibits the activity of the cholera protein ToxT. Its activity in cholera was first published in 2005 in a paper that described the screening of a chemical library in a phenotypic screen and subsequent testing of one of the hits in infected mice.
The compound is an isoquinoline alkaloid and can be synthesized by a simple two-step synthesis | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nonpoint source pollution**
Nonpoint source pollution:
Nonpoint source (NPS) pollution refers to diffuse contamination (or pollution) of water or air that does not originate from a single discrete source. This type of pollution is often the cumulative effect of small amounts of contaminants gathered from a large area. It is in contrast to point source pollution which results from a single source. Nonpoint source pollution generally results from land runoff, precipitation, atmospheric deposition, drainage, seepage, or hydrological modification (rainfall and snowmelt) where tracing pollution back to a single source is difficult. Nonpoint source water pollution affects a water body from sources such as polluted runoff from agricultural areas draining into a river, or wind-borne debris blowing out to sea. Nonpoint source air pollution affects air quality, from sources such as smokestacks or car tailpipes. Although these pollutants have originated from a point source, the long-range transport ability and multiple sources of the pollutant make it a nonpoint source of pollution; if the discharges were to occur to a body of water or into the atmosphere at a single location, the pollution would be single-point.
Nonpoint source pollution:
Nonpoint source water pollution may derive from many different sources with no specific solutions or changes to rectify the problem, making it difficult to regulate. Nonpoint source water pollution is difficult to control because it comes from the everyday activities of many different people, such as lawn fertilization, applying pesticides, road construction or building construction. Controlling nonpoint source pollution requires improving the management of urban and suburban areas, agricultural operations, forestry operations and marinas.
Nonpoint source pollution:
Types of nonpoint source water pollution include sediment, nutrients, toxic contaminants and chemicals and pathogens. Principal sources of nonpoint source water pollution include: urban and suburban areas, agricultural operations, atmospheric inputs, highway runoff, forestry and mining operations, marinas and boating activities. In urban areas, contaminated storm water washed off of parking lots, roads and highways, called urban runoff, is usually included under the category of non-point sources (it can become a point source if it is channeled into storm drain systems and discharged through pipes to local surface waters). In agriculture, the leaching out of nitrogen compounds from fertilized agricultural lands is a nonpoint source water pollution. Nutrient runoff in storm water from "sheet flow" over an agricultural field or a forest are also examples of non-point source pollution.
Principal types (for water pollution):
Sediment Sediment (loose soil) includes silt (fine particles) and suspended solids (larger particles). Sediment may enter surface waters from eroding stream banks, and from surface runoff due to improper plant cover on urban and rural land. Sediment creates turbidity (cloudiness) in water bodies, reducing the amount of light reaching lower depths, which can inhibit growth of submerged aquatic plants and consequently affect species which are dependent on them, such as fish and shellfish. High turbidity levels also inhibit drinking water purification systems.
Principal types (for water pollution):
Sediment can also be discharged from multiple different sources. Sources include construction sites (although these are point sources, which can be managed with erosion controls and sediment controls), agricultural fields, stream banks, and highly disturbed areas.
Principal types (for water pollution):
Nutrients Nutrients mainly refers to inorganic matter from runoff, landfills, livestock operations and crop lands. The two primary nutrients of concern are phosphorus and nitrogen. Phosphorus is a nutrient that occurs in many forms that are bioavailable. It is notoriously over-abundant in human sewage sludge. It is a main ingredient in many fertilizers used for agriculture as well as on residential and commercial properties, and may become a limiting nutrient in freshwater systems and some estuaries. Phosphorus is most often transported to water bodies via soil erosion because many forms of phosphorus tend to be adsorbed on to soil particles. Excess amounts of phosphorus in aquatic systems (particularly freshwater lakes, reservoirs, and ponds) leads to proliferation of microscopic algae called phytoplankton. The increase of organic matter supply due to the excessive growth of the phytoplankton is called eutrophication. A common symptom of eutrophication is algae blooms that can produce unsightly surface scums, shade out beneficial types of plants, produce taste-and-odor-causing compounds, and poison the water due to toxins produced by the algae. These toxins are a particular problem in systems used for drinking water because some toxins can cause human illness and removal of the toxins is difficult and expensive. Bacterial decomposition of algal blooms consumes dissolved oxygen in the water, generating hypoxia with detrimental consequences for fish and aquatic invertebrates.
Principal types (for water pollution):
Nitrogen is the other key ingredient in fertilizers, and it generally becomes a pollutant in saltwater or brackish estuarine systems where nitrogen is a limiting nutrient. Similar to phosphorus in fresh-waters, excess amounts of bioavailable nitrogen in marine systems lead to eutrophication and algae blooms. Hypoxia is an increasingly common result of eutrophication in marine systems and can impact large areas of estuaries, bays, and near shore coastal waters. Each summer, hypoxic conditions form in bottom waters where the Mississippi River enters the Gulf of Mexico. During recent summers, the aerial extent of this "dead zone" is comparable to the area of New Jersey and has major detrimental consequences for fisheries in the region.
Principal types (for water pollution):
Nitrogen is most often transported by water as nitrate (NO3). The nitrogen is usually added to a watershed as organic-N or ammonia (NH3), so nitrogen stays attached to the soil until oxidation converts it into nitrate. Since the nitrate is generally already incorporated into the soil, the water traveling through the soil (i.e., interflow and tile drainage) is the most likely to transport it, rather than surface runoff.
Principal types (for water pollution):
Toxic contaminants and chemicals Compounds including heavy metals like lead, mercury, zinc, and cadmium, organics like polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs), fire retardants, and other substances are resistant to breakdown. These contaminants can come from a variety of sources including human sewage sludge, mining operations, vehicle emissions, fossil fuel combustion, urban runoff, industrial operations and landfills. Toxic chemicals mainly include organic compounds and inorganic compounds. These compounds include pesticides like DDT, acids, and salts that have severe effects on ecosystems and water bodies. These compounds can threaten the health of both humans and aquatic species while being resistant to environmental breakdown, thus allowing them to persist in the environment. These toxic chemicals could come from croplands, nurseries, orchards, building sites, gardens, lawns and landfills. Acids and salts are mainly inorganic pollutants from irrigated lands, mining operations, urban runoff, industrial sites and landfills.
Principal types (for water pollution):
Pathogens Pathogens are bacteria and viruses that can be found in water and cause diseases in humans. Typically, pathogens cause disease when they are present in public drinking water supplies. Pathogens found in contaminated runoff may include: Cryptosporidium parvum, Giardia lamblia, Salmonella, norovirus and other viruses, and parasitic worms (helminths). Coliform bacteria and fecal matter may also be detected in runoff. These bacteria are a commonly used indicator of water pollution, but not an actual cause of disease. Pathogens may contaminate runoff due to poorly managed livestock operations, faulty septic systems, improper handling of pet waste, the over-application of human sewage sludge, contaminated storm sewers, and sanitary sewer overflows.
Principal sources (for water pollution):
Urban and suburban areas Urban and suburban areas are a major source of nonpoint source pollution because of the large amount of runoff produced by their extensive paved surfaces. Paved surfaces, such as asphalt and concrete, are impervious to water penetrating them. Any water that comes in contact with these surfaces will run off and be absorbed by the surrounding environment. These surfaces make it easier for stormwater to carry pollutants into the surrounding soil. Construction sites tend to have disturbed soil that is easily eroded by precipitation such as rain, snow, and hail. Additionally, discarded debris on the site can be carried away by runoff waters and enter the aquatic environment. Contaminated stormwater washed off parking lots, roads and highways, and lawns (often containing fertilizers and pesticides) is called urban runoff. This runoff is often classified as a type of NPS pollution. Some people may also consider it a point source, because many times it is channeled into municipal storm drain systems and discharged through pipes to nearby surface waters. However, not all urban runoff flows through storm drain systems before entering water bodies. Some may flow directly into water bodies, especially in developing and suburban areas. Also, unlike other types of point sources, such as industrial discharges, sewage treatment plants and other operations, pollution in urban runoff cannot be attributed to one activity or even a group of activities. Therefore, because it is not caused by an easily identified and regulated activity, urban runoff pollution sources are also often treated as true nonpoint sources as municipalities work to abate them. Typically, in suburban areas, chemicals are used for lawn care. These chemicals can end up in runoff and enter the surrounding environment via storm drains in the city. Since the water in storm drains is not treated before flowing into surrounding water bodies, the chemicals enter the water directly.
Principal sources (for water pollution):
Other significant sources of runoff include habitat modification and silviculture (forestry).
Principal sources (for water pollution):
Agricultural operations Nutrients (nitrogen and phosphorus) are typically applied to farmland as commercial fertilizer, animal manure, or spraying of municipal or industrial wastewater (effluent) or sludge. Nutrients may also enter runoff from crop residues, irrigation water, wildlife, and atmospheric deposition.: p. 2–9 Sediment (loose soil) washed off fields is a form of agricultural pollution. Farms with large livestock and poultry operations, such as factory farms, are often point source dischargers. These facilities are called "concentrated animal feeding operations" or "feedlots" in the US and are subject to increasing government regulation. Agricultural operations account for a large percentage of all nonpoint source pollution in the United States. When large tracts of land are plowed to grow crops, the soil that was once buried is exposed and loosened. This makes the exposed soil more vulnerable to erosion during rainstorms. It also can increase the amount of fertilizer and pesticides carried into nearby bodies of water.
Principal sources (for water pollution):
Atmospheric inputs Atmospheric deposition is a source of inorganic and organic constituents because these constituents are transported from sources of air pollution to receptors on the ground. Typically, industrial facilities, like factories, emit air pollution via a smokestack. Although this is a point source, due to the distributional nature, long-range transport, and multiple sources of the pollution, it can be considered a nonpoint source in the depositional area. Atmospheric inputs that affect runoff quality may come from dry deposition between storm events and wet deposition during storm events. The effects of vehicular traffic on the wet and dry deposition that occurs on or near highways, roadways, and parking areas create uncertainties in the magnitudes of various atmospheric sources in runoff. Existing networks that use protocols sufficient to quantify these concentrations and loads do not measure many of the constituents of interest, and these networks are too sparse to provide good deposition estimates at a local scale. Highway runoff Highway runoff accounts for a small but widespread percentage of all nonpoint source pollution. Harned (1988) estimated that runoff loads were composed of atmospheric fallout (9%), vehicle deposition (25%) and highway maintenance materials (67%); he also estimated that about 9 percent of these loads were re-entrained in the atmosphere.
Principal sources (for water pollution):
Forestry and mining operations Forestry and mining operations can have significant inputs to nonpoint source pollution.
Forestry Forestry operations reduce the number of trees in a given area, thus reducing the oxygen levels in that area as well. This action, coupled with the heavy machinery (harvesters, etc.) rolling over the soil increases the risk of erosion.
Principal sources (for water pollution):
Mining Active mining operations are considered point sources; however, runoff from abandoned mining operations contributes to nonpoint source pollution. In strip mining operations, the top of the mountain is removed to expose the desired ore. If this area is not properly reclaimed once the mining has finished, soil erosion can occur. Additionally, chemical reactions between the air and newly exposed rock can create acidic runoff. Water that seeps out of abandoned subsurface mines can also be highly acidic. This can seep into the nearest body of water and change the pH of the aquatic environment.
Principal sources (for water pollution):
Marinas and boating activities Chemicals used for boat maintenance, like paint, solvents, and oils, find their way into water through runoff. Additionally, fuel spilled or leaked directly into the water from boats contributes to nonpoint source pollution. Nutrient and bacteria levels are increased by poorly maintained onboard sanitary waste receptacles and pump-out stations.
Control (for water pollution):
Urban and suburban areas To control nonpoint source pollution, many different approaches can be undertaken in both urban and suburban areas. Buffer strips provide a barrier of grass between impervious paving material, like parking lots and roads, and the closest body of water. This allows the soil to absorb any pollution before it enters the local aquatic system. Retention ponds can be built in drainage areas to create an aquatic buffer between runoff pollution and the aquatic environment. Runoff and storm water drain into the retention pond, allowing the contaminants to settle out and become trapped in the pond. The use of porous pavement allows rain and storm water to drain into the ground beneath the pavement, reducing the amount of runoff that drains directly into the water body. Restoration methods such as constructing wetlands are also used to slow runoff as well as absorb contamination.
Control (for water pollution):
Construction sites typically implement simple measures to reduce pollution and runoff. Firstly, sediment or silt fences are erected around construction sites to reduce the amount of sediment and large material draining into the nearby water body. Secondly, laying grass or straw along the border of construction sites also works to reduce nonpoint source pollution. In areas served by single-home septic systems, local government regulations can force septic system maintenance to ensure compliance with water quality standards. In Washington (state), a novel approach was developed through the creation of a "shellfish protection district" when either a commercial or recreational shellfish bed is downgraded because of ongoing nonpoint source pollution. The shellfish protection district is a geographic area designated by a county to protect water quality and tideland resources, and provides a mechanism to generate local funds for water quality services to control nonpoint sources of pollution. At least two shellfish protection districts in south Puget Sound have instituted septic system operation and maintenance requirements with program fees tied directly to property taxes.
Control (for water pollution):
Agricultural operations To control sediment and runoff, farmers may utilize erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include contour plowing, crop mulching, crop rotation, planting perennial crops or installing riparian buffers.: pp. 4-95–4-96 Conservation tillage is a concept used to reduce runoff while planting a new crop. The farmer leaves some crop residue from the previous planting in the ground to help prevent runoff during the planting process. Nutrients are typically applied to farmland as commercial fertilizer; animal manure; or spraying of municipal or industrial wastewater (effluent) or sludge. Nutrients may also enter runoff from crop residues, irrigation water, wildlife, and atmospheric deposition.: p. 2–9 Farmers can develop and implement nutrient management plans to reduce excess application of nutrients.: pp. 4-37–4-38 To minimize pesticide impacts, farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality.
Control (for water pollution):
Forestry operations Well-planned placement of logging trails, also called skid trails, can reduce the amount of sediment generated. Locating the trails as far away from the logging activity as possible, and contouring them with the land, reduces the amount of loose sediment in the runoff. Additionally, replanting trees on the land after logging provides a structure for the soil to regain stability and replaces the logged environment.
Control (for water pollution):
Marinas Installing shut-off valves on fuel pumps at a marina dock can help reduce the amount of spillover into the water. Additionally, pump-out stations that are easily accessible to boaters in a marina provide a clean place in which to dispose of sanitary waste without dumping it directly into the water. Finally, something as simple as having trash containers around a marina can prevent larger objects from entering the water.
Country examples:
United States Nonpoint source pollution is the leading cause of water pollution in the United States today, with polluted runoff from agriculture and hydromodification as the primary sources.: 15 Regulation of Nonpoint Source Pollution in the United States The definition of a nonpoint source is addressed under the U.S. Clean Water Act as interpreted by the U.S. Environmental Protection Agency (EPA). The law does not provide for direct federal regulation of nonpoint sources, but state and local governments may do so pursuant to state laws. For example, many states have taken steps to implement their own management programs for places such as their coastlines, all of which have to be approved by the National Oceanic and Atmospheric Administration and the EPA. The goals of these and similar programs are to create foundations that encourage statewide pollution reduction by growing and improving systems that already exist. Programs within these state and local governments look to best management practices (BMPs) in order to accomplish their goal of finding the least costly method to reduce the greatest amount of pollution. BMPs can be implemented for both agricultural and urban runoff, and can be either structural or nonstructural methods. Federal agencies, including EPA and the Natural Resources Conservation Service, have approved and provided a list of commonly used BMPs for the many different categories of nonpoint source pollution.
Country examples:
U.S. Clean Water Act provisions for states Congress authorized the CWA section 319 grant program in 1987. Grants are provided to states, territories, and tribes in order to encourage implementation and further development of policy. The law requires all states to operate NPS management programs. EPA requires regular program updates in order to effectively manage the ever-changing nature of their waters, and to ensure effective use of the 319 grant funds and resources. The Coastal Zone Act Reauthorization Amendments (CZARA) of 1990 created a program under the Coastal Zone Management Act that mandates development of nonpoint source pollution management measures in states with coastal waters. CZARA requires states with coastlines to implement management measures to remediate water pollution, and to ensure that the product of these measures is implementation as opposed to adoption. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Identical ancestors point**
Identical ancestors point:
In genetic genealogy, the identical ancestors point (IAP), or all common ancestors (ACA) point, or genetic isopoint, is the most recent point in a given population's past such that each individual alive at this point either has no living descendants, or is the ancestor of every individual alive in the present. This point lies further in the past than the population's most recent common ancestor (MRCA).
Calculation:
A set of full siblings has an IAP one generation back: their parents. Similarly, double first cousins have an IAP two generations back: the four grandparents. Considering all humans alive today and moving back in time, we eventually arrive at the MRCA to all humans. The MRCA had many contemporary companions. Many of these contemporaries had descendant lines to some people living today, but not to all people living today. Others did not have any children, or had descendants, but all descendant lines are now fully extinct.
Calculation:
Going further back, all the ancestors of the MRCA are also common ancestors to all humans, just not the most recent. As we move further back in time, other common ancestors will be found on other lines, resulting in more and more of the ancient population being common ancestors. Eventually the point is reached where all people in the past population fall into one of two categories: they are common ancestors, with at least one line of descent to everyone living today, or they are the ancestors of no one alive today, because their lines of descent are completely extinct on every branch. This point in time is termed the 'identical ancestors point'. Joseph T. Chang has proposed that in a large, well-mixed population of size N, we only have to go about 1.77 log2(N) generations into the past to find the time when everyone in the population (who left descendants) is an ancestor to the entire population. For example, a population of 4,000 individuals would, on average, have a most recent common ancestor about 13 generations earlier and an IAP about 24 or 25 generations earlier. (This model assumes random mate choice, which is unrealistic for the human population, where geographic obstacles have greatly reduced mixing across the entire population.)
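To make the scaling concrete, the sketch below (TypeScript) evaluates the asymptotic estimates from the random-mating model quoted above: roughly log2(N) generations back to the MRCA and roughly 1.77 log2(N) generations back to the IAP. These are idealized formulas only; the simulation figures cited in the text, which account for more detail, differ somewhat from the raw asymptotics.

```typescript
// Asymptotic estimates from Chang's random-mating model (a simplification;
// real populations are not randomly mixed, so treat these as rough guides).
function mrcaGenerations(populationSize: number): number {
  return Math.log2(populationSize); // ~log2(N) generations back to the MRCA
}

function iapGenerations(populationSize: number): number {
  return 1.77 * Math.log2(populationSize); // ~1.77 * log2(N) generations back to the IAP
}

for (const n of [4_000, 1_000_000, 8_000_000_000]) {
  console.log(
    `N=${n}: MRCA ≈ ${mrcaGenerations(n).toFixed(1)} generations, ` +
      `IAP ≈ ${iapGenerations(n).toFixed(1)} generations`
  );
}
// For N = 4,000 this prints roughly 12 and 21 generations; the simulation
// results quoted in the text (about 13 and 24-25) are of the same order.
```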
Of Homo sapiens:
The identical ancestors point for Homo sapiens has been the subject of debate. In 2003 Rohde estimated it to be between 5,000 and 15,000 years ago. In 2004, Rohde, Olson and Chang showed through simulations that, under the unrealistic assumption of random mate choice without geographic barriers, the identical ancestors point for all humans would be surprisingly recent, on the order of 5,000-15,000 years ago. Ralph and Coop (2013), considering the European population and working from genetics, came to similar conclusions for the recent common ancestry of Europeans. All living people share exactly the same set of ancestors before the identical ancestors point, all the way back to the very first single-celled organism. However, people vary widely in how much ancestry and how many genes they inherit from each ancestor, which causes them to have very different genotypes and phenotypes.
Of Homo sapiens:
This is illustrated in the 2003 simulation as follows: considering the ancestral populations alive at 5000 BC, close to the ACA point, a modern-day Japanese person will get 88.4% of their ancestry from Japan, and most of the remainder from China or Korea, with only 0.00049% traced to Norway; conversely, a modern-day Norwegian will get over 92% of their ancestry from Norway (or over 96% from Scandinavia) and only 0.00044% from Japan. Thus, even though the Norwegian and Japanese person share the same set of ancestors, these ancestors appear in their family trees in dramatically different proportions. A Japanese person in 5000 BC with present-day descendants will likely appear trillions of times in a modern-day Japanese person's family tree, but might appear only once in a Norwegian person's family tree. A 5000 BC Norwegian person will similarly appear far more times in a typical Norwegian person's family tree than in a Japanese person's family tree. Note that a person in the population today does not necessarily inherit any genetic material from a given ancestor at the identical ancestors point. For example, a Japanese person may not inherit any genetic material from their Norwegian ancestors. In that case, they are genealogical ancestors but not genetic ancestors. The same goes even for the most recent common ancestor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Classical ballet**
Classical ballet:
Classical ballet is any of the traditional, formal styles of ballet that exclusively employ classical ballet technique. It is known for its aesthetics and rigorous technique (such as pointe work, turnout of the legs, and high extensions), its flowing, precise movements, and its ethereal qualities.
Classical ballet:
There are stylistic variations related to an area or origin, which are denoted by classifications such as Russian ballet, French ballet, British ballet and Italian ballet. For example, Russian ballet features high extensions and dynamic turns, whereas Italian ballet tends to be more grounded, with a focus on fast, intricate footwork. Many of the stylistic variations are associated with specific training methods that have been named after their originators. Despite these variations, the performance and vocabulary of classical ballet are largely consistent throughout the world.
History:
Ballet originated in the Italian Renaissance courts and was brought to France by Catherine de' Medici in the 16th century. During ballet's infancy, court ballets were performed by aristocratic amateurs rather than professional dancers. Most of ballet's early movements evolved from social court dances and prominently featured stage patterns rather than formal ballet technique.
History:
In the 17th century, as ballet's popularity in France increased, ballet began to gradually transform into a professional art. It was no longer performed by amateurs; instead, ballet performances started to incorporate challenging acrobatic movements that could only be performed by highly skilled street entertainers. In response, the world's first ballet school, the Académie Royale de Danse, was established by King Louis XIV in 1661. The Académie's purpose was to improve the quality of dance training in France and to invent a technique or curriculum that could be used to transform ballet into a formal discipline. Shortly after the Académie was formed, in 1672, King Louis XIV established a performing company called the Académie Royale de Musique (today known as the Paris Opera), and named Pierre Beauchamp its head dancing-master. While at the Académie Royale, Beauchamp revolutionized ballet technique by inventing the five positions (first, second, third, fourth and fifth) of ballet, which to this day remain the foundation of all formal classical ballet technique.
History:
Famous dancers in history Anna Pavlova: 12 February 1881 – 23 January 1931 Dame Margot Fonteyn: 18 May 1919 – 21 February 1991 Rudolf Nureyev: 17 March 1938 – 6 January 1993
Development:
Before classical ballet developed, ballet was in a period referred to as the Romantic era. Romantic ballet was known for its storytelling and often held a softer aesthetic. Classical ballet came to be when the ballet master Marius Petipa (considered one of the greatest choreographers of all time) took Romantic ballet and combined it with aspects of Russian ballet technique, as Petipa was once a choreographer and ballet master at the Mariinsky Ballet. Elements drawn from these sources include the storytelling found in Romantic ballet and the athleticism of Russian technique. Thus a new era of ballet began, which later became known as the classical era. Even though he was responsible for ushering in the classical ballet era, Petipa also choreographed well-known romantic ballets such as Giselle.
Development:
During the classical era, Marius Petipa was largely responsible for creating choreographic structures that are still used in ballets today. For one, Petipa was the first to use the grand pas de deux in his choreography. Additionally, he cemented the usage of the corps de ballet as a standard part of a ballet. Despite his ushering in of the classical era, these elements can be seen in his romantic ballets as well.
Development:
Famous classical ballets Don Quixote: choreographed by Marius Petipa Swan Lake: choreographed by Marius Petipa and Lev Ivanov The Nutcracker: choreographed by Marius Petipa and Lev Ivanov
Technique:
Ballet technique is the foundational principles of body movement and form used in ballet. A distinctive feature of ballet technique is turnout, which is the outward rotation of the legs and feet emanating from the hip. This was first introduced into ballet by King Louis XIV because he loved to show off the shiny buckles on his shoes when he performed his own dances. There are five fundamental positions of the feet in ballet, all performed with turnout and named numerically as first through fifth positions. When performing jumps and leaps, classical ballet dancers strive to exhibit ballon, the appearance of briefly floating in the air. Pointe technique is the part of ballet technique that concerns pointe work, in which a ballet dancer supports all body weight on the tips of fully extended feet on specially designed and handcrafted pointe shoes. In professional companies, the shoes are made to fit the dancers' feet perfectly.
Training:
Students typically learn ballet terminology and the pronunciation, meaning, and precise body form and movement associated with each term. Emphasis is placed on developing flexibility and strengthening the legs, feet, and body core (the center, or abdominals), as a strong core is essential for turns and many other ballet movements. Dancers also learn to spot, which teaches them to focus on a fixed point while turning so as not to become dizzy and lose their balance.
Training:
After learning basic ballet technique and developing sufficient strength and flexibility, female dancers begin to learn pointe technique and male and female dancers begin to learn partnering and more advanced jumps and turns. Depending on the teacher and training system, students may progress through various stages or levels of training as their skills advance.
Training:
Ballet class attire Female attire typically includes pink or flesh-colored tights, a leotard, and sometimes a short wrap-skirt or a skirted leotard. Males typically wear black or dark tights, a form-fitting white or black shirt or leotard worn under the tights, and a dance belt beneath the outer dancewear to provide support. In some cases, students may wear a unitard (a one-piece garment that combines tights and a leotard) to enhance the visibility of artistic lines.
Training:
All dancers wear soft ballet shoes (sometimes called flats). Typically, female dancers wear pink or beige shoes and men wear black or white shoes. Leg warmers are sometimes worn during the early part of a class to protect leg muscles until they become warm. Females are usually required to restrain their hair in a bun or some other hairstyle that exposes the neck and is not a ponytail. The customary attire and hair style are intended to promote freedom of movement and to reveal body form so that the teacher can evaluate dancers' alignment and technique. After warming up, advanced female students may wear pointe shoes, whereas advanced male students continue to wear soft shoes. Pointe shoes are worn only after the student is deemed strong enough in the ankles and able to execute the routine to a high standard, usually around or after the age of 12, or after the dancer's feet have stopped developing, so as to protect the feet from injuries common with premature wearing.
Training:
Methods There are several standardized, widespread, classical ballet training systems, each designed to produce a unique aesthetic quality from its students. Some systems are named after their creators; these are typically called methods or schools. For example, two prevailing systems from Russia are the Vaganova method (created by Agrippina Vaganova) and the Legat Method (by Nikolai Legat). The Cecchetti method is named after Italian dancer Enrico Cecchetti. Another training system was developed by and named after August Bournonville; this is taught primarily in Denmark. The Royal Academy of Dance (RAD) method was not created by an individual, but by a group of notable ballet professionals. Despite their associations with geographically named ballet styles, many of these training methods are used worldwide. For example, the RAD teaching method is used in more than 70 countries.
Training:
American-style ballet (Balanchine) is not taught by means of a standardized, widespread training system. Similarly, French ballet has no standard training system; each of the major French-style ballet schools, such as the Paris Opera Ballet School, Conservatoire National Supérieur de Musique et de Danse, and Académie de Danse Classique Princesse Grace (Monaco) employs a unique training system.
Stage reference points Some classical ballet training systems employ standardized layouts to define reference locations at the corners and edges of stages and dance studio rooms. In the latter case, there is no audience and a mirror typically spans the downstage wall of the room (e.g., points 1-2 of the Cecchetti layout). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cat behaviorist**
Cat behaviorist:
Cat behaviorists are individuals who specialize in working closely not only with cats but also with their owners, and in managing the behavior of the cat. A cat behaviorist can be certified or certificated after years of academic study and practical case experience. However, it is also possible for a behaviorist to work locally without completing extensive training.
Education:
In the US, an ‘Associate Certified Applied Animal Behaviorist’ holds a degree from a college or university. Animal behavior study is essential, with a focus on biological or behavioral science. The degree includes a research-intensive thesis. The coursework requires several credits in ethology, animal behavior, comparative psychology, animal learning, conditioning and animal psychology (experimental psychology). Associate certification also requires experience along with the education. The Animal Behavior Society requires two or more years of experience with applied animal behavior and interaction with particular species. At least three letters of recommendation are necessary to prove experience and education. A ‘Certified Applied Animal Behaviorist’, by contrast, holds a doctoral degree with a focus on animal behavior and possesses five years of experience, or holds a doctorate in veterinary medicine with two years of residency in animal behavior and three years of experience in applied animal behavior. The coursework and endorsements are identical to those of the Associate Certified Applied Animal Behaviorist. However, a Certified Applied Animal Behaviorist must also obtain the skills necessary for working closely with a species as a researcher, serve as an intern or research assistant, and show original creations or interpretations of animal behavior.
Duties:
It is a common goal for the cat behaviorist to sort out problem behaviors and to create strong communication between owner and pet. A cat behaviorist will work with both the cat and the owner to achieve understanding in the relationship. They also concentrate on both unique changes in the pet's behavior and its normal behavior so they can identify any irregular activities. They can even work alongside veterinarians to determine the right medications for the animal. As part of their duties, it is common for a behaviorist to work in a close environment with the cat, inspecting every detail necessary. A cat behaviorist must use their training in animal behavior to study responses and issues to lessen anxiety or fears rooted in the environment or elsewhere. They may question the owner's habits, house structure and living spaces of the pet to pinpoint certain concerns. It may be possible that in order to stop any unwanted behaviors, the owner of the pet will have to change their behavior first. Understanding each other is one of many steps to peace in a warring household, and a cat behaviorist exists to provide just that: a link of communication between cat and owner. They aid in preventing or stopping psychological, health, and physical problems in the cat, such as scratching, biting, fighting, obesity, urine marking and more. A behaviorist will also inform or educate the owners about developmental stages in the cat in order to create an understanding. Once the owner understands the physical and social needs of a cat, behavior issues and other problems will decline. A cat behaviorist will encourage socialization between the guardian and the cat to aid the process and build a beneficial relationship. Social learning is extremely important for a cat, and a cat behaviorist recognizes this and will incorporate these factors into the cat's daily life. They promote healthy learning and stimulation with play and interaction. A cat behaviorist will also describe what is normal behavior and what is not, so that the owner can continue making that distinction and continue to help the cat.
Certification:
An applied animal behaviorist or clinical animal behaviourist can specialize not just in cats, but also in dogs, horses, and even parrots. Often a certified behaviorist will have undergone graduate training in courses such as zoology, biology and animal behavior at certain universities.
Certification:
In the US, Certified Applied Animal Behaviorists (CAABs) are behaviorists with a doctoral degree and Associate Certified Applied Behaviorists (ACAABs) are those who studied at the master's level. There are various organizations and associations that provide certification. The American College of Veterinary Behaviorists (ACVB) has a list of requirements before an individual can become board-certified, including an internship, examinations, creating a scientific journal and more. The Animal Behaviour and Training Council, located in the United Kingdom, aims to regulate courses and organisations through which to become accredited as a feline behaviourist.
Employment:
Once qualified, a cat behaviorist can find a place of work in different fields. The need for animal specialty care and service is expected to increase, so jobs are in high demand.
Employment:
Specifically, many cat behaviorists have started their own lines of work as independent cat trainers and behavior modifiers, including Jackson Galaxy and Sophia Yin. Jackson Galaxy has partnered with Animal Planet on a show called My Cat from Hell, which identifies behavioral issues in cats. Sophia Yin created her own website to help individuals with problem cats. Mieshelle Nagelschneider is another example, author of the book The Cat Whisperer (Random House Publishing).
Employment:
Other cat behaviorists have developed interest in veterinary jobs, animal control, animal shelters, kennels, and other animal-related work. In the UK, feline behaviourists are also known as clinical animal behaviourists, and work in a variety of sectors.
Salary:
Since the employment opportunities for cat behaviorists differ, there is no set working salary. Those working for non-profit companies or researchers, such as zoos, typically earn less than those working for private companies. It also depends on the role of the job and where the behaviorist works. According to Michael Hutchins from the American Zoological Association, "Most animal behaviorists earn from $35,000 to $90,000 and more". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Canvas fingerprinting**
Canvas fingerprinting:
Canvas fingerprinting is one of a number of browser fingerprinting techniques for tracking online users that allow websites to identify and track visitors using the HTML5 canvas element instead of browser cookies or other similar means. The technique received wide media coverage in 2014 after researchers from Princeton University and KU Leuven University described it in their paper The Web never forgets.
Description:
Canvas fingerprinting works by exploiting the HTML5 canvas element. As described by Acar et al.: When a user visits a page, the fingerprinting script first draws text with the font and size of its choice and adds background colors (1). Next, the script calls Canvas API’s ToDataURL method to get the canvas pixel data in dataURL format (2), which is basically a Base64 encoded representation of the binary pixel data. Finally, the script takes the hash of the text-encoded pixel data (3), which serves as the fingerprint ...
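A minimal browser-side sketch of the three steps quoted above is shown below (TypeScript, assuming a DOM environment). The drawn text, the colors, and the simple FNV-1a hash are illustrative choices for demonstration, not the exact script studied by Acar et al.

```typescript
// Minimal illustration of the canvas-fingerprinting steps described above.
// The text, colors, and hash function are arbitrary choices for demonstration.
function canvasFingerprint(): string {
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "no-canvas"; // canvas unsupported or blocked

  // (1) Draw text with a chosen font and size, plus background colors.
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = "#069";
  ctx.font = "16px Arial";
  ctx.fillText("How quickly daft jumping zebras vex. 😃", 2, 20);

  // (2) Read the pixel data back as a Base64-encoded data URL.
  const dataUrl = canvas.toDataURL();

  // (3) Hash the text-encoded pixel data; a 32-bit FNV-1a hash is used here
  // purely for illustration (real scripts may use other hash functions).
  let hash = 0x811c9dc5;
  for (let i = 0; i < dataUrl.length; i++) {
    hash ^= dataUrl.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0).toString(16);
}
```

Because the rendered pixels depend on the GPU, graphics driver, installed fonts, and anti-aliasing behavior, two machines running this same code can return different hashes even in the same browser version.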
Description:
Variations in the installed graphics processing unit (GPU) or graphics driver may cause the fingerprint to vary. The fingerprint can be stored and shared with advertising partners to identify users when they visit affiliated websites. A profile can be created from the user's browsing activity, allowing advertisers to target advertising to the user's inferred demographics and preferences. By January 2022, the concept was extended to fingerprinting performance characteristics of the graphics hardware, called DrawnApart by the researchers.
Description:
Uniqueness Since the fingerprint is primarily based on the browser, operating system, and installed graphics hardware, it does not uniquely identify users. In a small-scale study with 294 participants from Amazon's Mechanical Turk, an experimental entropy of 5.7 bits was observed. The authors of the study suggest more entropy could likely be observed in the wild and with more patterns used in the fingerprint. While not sufficient to identify individual users by itself, this fingerprint could be combined with other entropy sources to provide a unique identifier. It is claimed that because the technique is effectively fingerprinting the GPU, the entropy is "orthogonal" to the entropy of previous browser fingerprint techniques such as screen resolution and browser JavaScript capabilities. Much more precise identification becomes possible with DrawnApart, published in 2022, which was shown to boost tracking duration of individual fingerprints by 67% when used to enhance other methods.
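The 5.7-bit figure is a Shannon entropy computed from how often each distinct fingerprint value occurred among the participants. A small sketch of that calculation follows (TypeScript); the sample counts are invented for illustration and are not the study's data.

```typescript
// Shannon entropy of an observed fingerprint distribution, in bits.
// Higher entropy means the fingerprint splits users into more, smaller groups.
function entropyBits(countsPerFingerprint: number[]): number {
  const total = countsPerFingerprint.reduce((a, b) => a + b, 0);
  let h = 0;
  for (const count of countsPerFingerprint) {
    if (count === 0) continue;
    const p = count / total;
    h -= p * Math.log2(p);
  }
  return h;
}

// Hypothetical example: 294 participants spread unevenly over distinct
// fingerprint values (these counts are made up, not the study's data).
const exampleCounts = [60, 45, 40, 30, 25, 20, 15, 12, 10, 8, 7, 6, 5, 4, 3, 2, 1, 1];
console.log(entropyBits(exampleCounts).toFixed(2), "bits");
```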
History:
In May 2012, Keaton Mowery and Hovav Shacham, researchers at the University of California, San Diego, wrote a paper, Pixel Perfect: Fingerprinting Canvas in HTML5, describing how the HTML5 canvas could be used to create digital fingerprints of web users. Social bookmarking technology company AddThis began experimenting with canvas fingerprinting early in 2014 as a potential replacement for cookies. 5% of the top 100,000 websites used canvas fingerprinting while it was deployed. According to AddThis CEO Richard Harris, the company has only used data collected from these tests to conduct internal research. Users are able to install an opt-out cookie on any computer to prevent being tracked by AddThis with canvas fingerprinting. A software developer writing in Forbes stated that device fingerprinting had been utilized for the purpose of preventing unauthorized access to systems long before it was used for tracking users without their consent. As of 2014 the technique is widespread on many websites, used by at least a dozen high-profile web ads and user tracking suppliers. In 2022, the capabilities of canvas fingerprinting were deepened considerably by taking into account minute differences between nominally identical units of the same GPU model. Those differences are rooted in the manufacturing process, so each unit behaves consistently over time while differing measurably from other copies of the same model.
Mitigation:
Tor Project reference documentation states, "After plugins and plugin-provided information, we believe that the HTML5 Canvas is the single largest fingerprinting threat browsers face today." Tor Browser notifies the user of canvas read attempts and provides the option to return blank image data to prevent fingerprinting. However, Tor Browser is currently unable to distinguish between legitimate uses of the canvas element and fingerprinting efforts, so its warning cannot be taken as proof of a website's intent to identify and track its visitors. Browser add-ons like Privacy Badger, DoNotTrackMe, or Adblock Plus manually enhanced with the EasyPrivacy list are able to block third-party ad network trackers and can be configured to block canvas fingerprinting, provided that the tracker is served by a third-party server (as opposed to being implemented by the visited website itself). Canvas Defender, a browser add-on, spoofs canvas fingerprints. The LibreWolf browser project includes technology to block access to the HTML5 canvas by default, only allowing it in specific instances green-lit by the user. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bun (hairstyle)**
Bun (hairstyle):
A bun is a type of hairstyle in which the hair is pulled back from the face, twisted or plaited, and wrapped in a circular coil around itself, typically on top or back of the head or just above the neck. A bun can be secured with a hair tie, barrette, bobby pins, one or more hair sticks, a hairnet, or a pen or pencil. Hair may also be wrapped around a piece called a "rat". Alternatively, hair bun inserts, or sometimes rolled up socks, may also be used to create donut-shaped buns. Buns may be tightly gathered, or loose and more informal.
Double bun:
Double or pigtail buns are often called odango (お団子), which is also a type of Japanese dumpling (usually called dango; the o- is honorific).
The term odango in Japanese can refer to any variety of bun hairstyle.
Double bun:
In China, the hairstyle is called niújiǎotóu (牛角头). It was a commonly used hairstyle up until the early 20th century, and can still be seen today when traditional attire is used. This hairstyle differs from the odango slightly in that it is gender neutral; Chinese paintings of children have frequently depicted girls as having matching ox horns, while boys have a single bun in the back.
Double bun:
In the United States they are called Side Buns, also known as "Space Buns", and were a popular festival hair trend in the 1990s. Today they have become mainstream. Instead of using wild color dyes, glitter, and braids, bobby pins, hair ties, or one's own hair are used for a softer, everyday feel.
Triple bun:
Star Wars: The Force Awakens had Rey debut a "triple bun" hairstyle.
Bun or top knot hairstyle in men:
Men in ancient China wore their hair in a topknot bun (Touji 頭髻); visual depictions of this can be seen on the terracotta soldiers. They were worn until the end of the Ming Dynasty in AD 1644, after which the Qing Dynasty government forced men to adopt the Manchu queue hairstyle (queue order).
Bun or top knot hairstyle in men:
Men of the Joseon Era of Korea wore the sangtu as a symbol of marriage. 16th century Japanese men wore the chonmage, the style of samurai warriors and sumo wrestlers. In the west, topknots were frequently worn by "barbarian" peoples in the eyes of the Romans, such as the Goths, Vandals, and the Lombards. Later, the hairstyle survived in the pagan Scandinavian north (some believe the topknot hairstyle contains elements of Odinic cult worship) and with eastern nomadic tribes such as the Bulgars, Cumans and Cossacks. Historical examples of men with long hair using this style include: Rishi knot The rishi (sage) knot is a topknot worn by Sikh boys and men as a religious practice, in which the hair is formed into a bun. In the Sikh tradition, a turban is then worn atop the bun. This hairstyle is also known as joora, and has been traditionally worn by Hindu mendicants.
Bun or top knot hairstyle in men:
Man bun The man-bun is a topknot worn by long-haired men in the Western world. In London, the modern man-bun style may have begun around 2010, although David Beckham sported one earlier. The first Google Trends examples started to appear in 2013, and searches showed a steep increase through 2015. Some of the first celebrities to wear the style were George Harrison, Jared Leto, Joakim Noah, Chris Hemsworth, Leonardo DiCaprio, Scot Pollard, and Orlando Bloom. The hairstyle is also associated with Brooklyn hipsters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sinclair Scientific**
Sinclair Scientific:
The Sinclair Scientific calculator was a 12-function, pocket-sized scientific calculator introduced in 1974, dramatically undercutting in price other calculators available at the time. The Sinclair Scientific Programmable, released a year later, was advertised as the first budget programmable calculator.
Sinclair Scientific:
Significant modifications to the algorithms used meant that a chipset intended for a four-function calculator was able to process scientific functions, but at the cost of reduced speed and accuracy. Compared to contemporary scientific calculators, some functions were slow to execute, and others had limited accuracy or gave the wrong answer, but the cost of the Sinclair was a fraction of the cost of competing calculators.
History:
In 1972, Hewlett-Packard launched the HP-35, the world's first handheld scientific calculator. Despite market research suggesting that it was too expensive for there to be any real demand, production went ahead. It cost US$395 (about £165), but despite the price, over 300,000 were sold in the three and a half years for which it was produced. From 1971 Texas Instruments had been making available the building block for a simple calculator on a single chip, and the TMS0803 chipset appeared in a number of Sinclair calculators. Clive Sinclair wanted to design a calculator to compete with the HP-35 using this series of chips. Despite scepticism about the feasibility of the project from Texas Instruments engineers, Nigel Searle was able to design algorithms that sacrificed some speed and accuracy in order to implement scientific functions on the TMS0805 variant. The Sinclair Scientific first appeared in a case derived from that of the Sinclair Cambridge, but it was not part of the same range. The initial retail price was £49.95 in the UK (equivalent to £478 in 2016), and US$99.95 in the US as a kit or US$139.95 fully assembled. By July 1976, however, it was possible to purchase one for £7 (equivalent to £46 in 2016).
History:
The Sinclair Scientific Programmable was introduced in August 1975, and was larger than the Sinclair Scientific, at 73 by 155 by 34 millimetres (2.9 in × 6.1 in × 1.3 in). It was advertised as "the first ... calculator to offer a ... programming facility ... at a price within the reach of the general public," but was limited by having only 24 program steps. Both the Sinclair Scientific and the Sinclair Scientific Programmable were manufactured in England, like all other Sinclair calculators except the Sinclair President.
Design:
Sinclair Scientific The HP-35 used five chips and had been developed by twenty engineers at a cost of a million dollars, leading the Texas Instruments engineers to think that Sinclair's aim to build a scientific calculator around the TMS0805 chip, which could barely handle four-function arithmetic, was impossible. However, by sacrificing some speed and accuracy, Sinclair used clever algorithms to run scientific operations on a chip with room for just 320 instructions. Constants, rather than being stored in the calculator, were printed on the case below the screen. It displays only in scientific notation, with a five-digit mantissa and a two-digit exponent, although a sixth digit of the mantissa was stored internally. Because of the way the processor was designed, it uses Reverse Polish notation (RPN) to input calculations. RPN meant that the difficult implementation of brackets, and the associated recursive logic, did not need to be implemented in the hardware; the effort was instead offloaded to the user. Instead of an "Equals" button, the + or - keys are used to enter the initial value of a calculation, followed by subsequent operand(s), each followed by their appropriate operator(s).
Design:
To fit the program into the 320 words available on the chip, significant modifications were made. By not using regular floating point numbers, which require many instructions to keep the decimal point in the right place, some space was freed up. Trigonometric functions were implemented in about 40 instructions, and inverse trigonometric functions took almost 30 more. Logarithms took about 40 instructions, with anti-log taking about 20 more. The code to normalize and display the computed values is roughly the same in both the TI and Sinclair programs. The design of the algorithms meant that some calculations, such as arccos 0.2, could take up to 15 seconds, whereas the HP-35 was designed to complete calculations in under a second. Accuracy in scientific functions was also limited to around three digits at most, and there were a number of bugs and limitations. Ken Shirriff, an employee of Google, reverse engineered a Sinclair Scientific and built a simulator using the original algorithms.
Design:
Assembly kit The assembly kit consisted of eight groups of components, plus a carry case. The build time was advertised as being around three hours, and required a soldering iron and a pair of cutters. In January 1975, the kit was available for US$49.95, half the price at the time of introduction a year earlier, and in December 1975 it was available for £9.95, less than a quarter of the introductory price.
Design:
Giant Scientific A version of the Scientific, with all the same functionality, was made to be 30 by 68 centimetres (12 in × 27 in), and was known as the Giant Scientific. It was powered by 240 V AC, and used discrete LEDs for its display.
Design:
Sinclair Scientific Programmable The Sinclair Scientific Programmable was introduced in 1975, with the same case as the Sinclair Oxford. It was larger than the Scientific, at 73 by 155 by 34 millimetres (2.9 in × 6.1 in × 1.3 in), and used a larger PP3 battery, but could also be powered by mains electricity. It had 24-step programming abilities, which meant it was highly limited for many purposes. It also lacked functions for the natural logarithm and exponential function. Constants used in programs were required to be integers, and the programming was wasteful, with start and end quotes needed to use a constant in a program. However, included with the calculator was a library of over 120 programs that performed common operations in mathematics, geometry, statistics, finance, physics, electronics, engineering, as well as fluid mechanics and materials science. The full library of standard programs contained over 400 programs in the Sinclair Program Library.
Design:
Calculations using the Sinclair Scientific The Sinclair used a slightly different Reverse Polish Notation method; lacking an enter key, the operation keys enter a number into the appropriate register and the calculation is performed. For example, "(1+2) * 3" could be calculated as: C 1 + 2 + 3 × to give the result of 9.0000 00 (9.0000×10⁰, or 9). The "C" key performs a clear; pressing it sets the calculator to a state with zero in the internal registers. Pressing "C" followed by number keys then "+" effectively adds the number entered to the zero and stores it internally to be worked on in subsequent calculations. If the "-" key is pressed instead, the number is subtracted from zero, effectively entering a negative number. All numbers are entered in scientific notation. After entering the mantissa part of the number, the "E" exponent key is pressed prior to entering the integer exponent of the number. The order of operations is the user's responsibility, and there are no bracket keys. The display shows only five digits, but six digits can be entered. As an example, 12.3*(-123.4+123.456) could be entered as C 1 2 3 4 E 2 - 1 2 3 4 5 6 E 2 + 1 2 3 E 1 × for a displayed result of 6.8880 -01 (representing 6.8880×10⁻¹, or 0.68880).
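The keystroke sequence above is easier to follow alongside a generic postfix (RPN) evaluator. The sketch below (TypeScript) evaluates the token stream 1 2 + 3 ×, the same calculation as the first example; it is a plain stack-based evaluator, not a model of the Sinclair's internal register handling or keyboard logic.

```typescript
// Generic stack-based evaluation of a postfix (RPN) expression.
// This mirrors the order of the key presses in the example above, but is not
// a simulation of the Sinclair's actual firmware.
function evalRpn(tokens: string[]): number {
  const stack: number[] = [];
  for (const token of tokens) {
    if (token === "+" || token === "-" || token === "×" || token === "÷") {
      const b = stack.pop();
      const a = stack.pop();
      if (a === undefined || b === undefined) throw new Error("malformed expression");
      switch (token) {
        case "+": stack.push(a + b); break;
        case "-": stack.push(a - b); break;
        case "×": stack.push(a * b); break;
        case "÷": stack.push(a / b); break;
      }
    } else {
      stack.push(Number(token)); // operand: push onto the stack
    }
  }
  if (stack.length !== 1) throw new Error("malformed expression");
  return stack[0];
}

console.log(evalRpn(["1", "2", "+", "3", "×"])); // 9, i.e. (1 + 2) * 3
```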
Design:
Four constants are printed on the calculator case for easy reference. For converting to and from base 10 logarithms and natural logarithms, the natural logarithm of 10 (2.30259) and e (2.71828) are printed on the case. Pi (3.14159) and 57.2958 (180 / Pi) are also on the case for trigonometry calculations. There was not enough internal memory to store these constants internally. Angles are computed in radians; degree values must be converted to radians by dividing by 57.2958. As an example, to calculate 25 sin (600*0.05°) one would enter C 6 E 2 + 0 0 5 × 5 7 2 9 5 8 E 1 ÷ ▲ + 2 5 E 1 × to get a result of 1.2500 01 (representing 12.5, which is equal to 25 sin(30°)). Sine is selected with the combination of the "▲" key followed by the "+" key. The "▼" (down) and "▲" (up) arrow keys are function select keys. The four operation keys ("-, +, ÷ and ×") each have two other functions, activated by using one of the arrow keys. The functions available are Sine, Arcsine, Cosine, Arccosine, Tangent, Arctangent, Logarithm and Antilogarithm. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Powerhouse (comics)**
Powerhouse (comics):
Powerhouse is a name used by several different fictional characters appearing in American comic books published by Marvel Comics.
Powerhouse (Rieg Davan):
Publication history Powerhouse first appeared in Nova #2 (October 1976), and was created by Marv Wolfman, John Buscema, and Joe Sinnott. The character subsequently appears in Nova #6-8 (February–April 1977), #10-11 (June–July 1977), #24-25 (March & May 1979), Fantastic Four #206 (May 1979), #208-209 (July–August 1979), and ROM #24 (November 1981).
Powerhouse appeared as part of the "Champions of Xandar" entry in the Official Handbook of the Marvel Universe Deluxe Edition #16.
Powerhouse (Rieg Davan):
Fictional character biography Powerhouse is a member of the Xandarian alien race, with superhuman strength and the ability to absorb energy from any source to temporarily enhance his physical strength fiftyfold, absorb the energy from a weapon used against him and redirect it against an assailant, and even create a psionic link with an opponent with whom he is in physical contact so as to control his opponent's use of his or her own powers.
Powerhouse (Rieg Davan):
Davan was a member of the Syfon division of the Nova Corps, Xandar's superhuman military. He was sent into space to perform surveillance on the starship of Zorr, the interstellar warlord who went on to shatter Xandar. Before Davan could report on Zorr's activities, his ship was hit by a meteor shower and careened off course, eventually crossing intergalactic space and landing in a body of water on Earth. Davan's ship was found by the Avian criminal Condor, who brainwashed Davan into serving him. Eventually Davan regained his memory and returned to Xandar, serving in the Champions of Xandar. He was killed fighting the forces of Nebula.
Powerhouse (Rieg Davan):
Powers and abilities Powerhouse had superhuman strength, and the ability to siphon the energies of any power-source, including living beings.
Powerhouse: the 1990s mutant:
Her first appearance was in Spider-Man #15 (1991).
At some point, under uncertain circumstances, Powerhouse developed mutant powers and a strong hatred for humanity.
Powerhouse: the 1990s mutant:
Beast was giving a lecture at ESU on genetics, and Powerhouse decided to show up and send her anti-human message there. Simultaneously, a mutant hater called Masterblaster decided to make his presence and opinions felt at the same lecture. Though it is unclear whether one of them influenced the other's actions at the lecture, Masterblaster came to attack Beast while Powerhouse attacked the humans there; Beast came to their aid and fought Powerhouse. The two fought back and forth, with Powerhouse having the upper hand until Spider-Man showed up. While Spider-Man kept Powerhouse busy, Beast knocked out Masterblaster, and the two proceeded to team up on Powerhouse and defeated her, knocking her out. She was presumably taken into police custody. Wolverine and Warbird saw a news report of Powerhouse attacking the area around the U.N. building, where an international debate on human-mutant relations was to happen. She was attacking everything in sight as a symbol of humans' attitude towards mutants. Before long, Wolverine and Warbird joined the fight, and Wolverine was flown into the air and landed on top of Powerhouse so they could fight. However, Powerhouse started draining Wolverine's energy, and Warbird was doing more harm than good because she was drunk and being very reckless with her energy blasts. Eventually, Powerhouse knocked out Warbird. Without the drunk Warbird to get in his way, Wolverine managed to defeat Powerhouse. Powerhouse joined with Masterblaster in robbing a local bank, but they were interrupted by Spider-Man, who defeated the duo. One of the men present at the bank at the time of the hold-up became infatuated with Powerhouse, and subsequently visited her in prison, which he intended to continue to do every two weeks for the next seven years as she served her sentence.
Powerhouse: the 1990s mutant:
Powers and abilities Powerhouse has the ability to drain the energy of other living things to increase her own power. She can absorb energy through contact with a victim, even brief contact like a punch. She has demonstrated flight, destructive energy blasts, and superhuman strength to an unknown degree. However, it is unknown whether she has any of these powers without absorbing sufficient amounts of energy or whether they are independent within her. It is also unknown if she requires this energy to survive and, if so, how often she needs to feed.
Powerhouse (Alex Power):
An alias used by Power Pack leader Alex Power as a member of the New Warriors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Peroxisomal targeting signal**
Peroxisomal targeting signal:
In biochemical protein targeting, a peroxisomal targeting signal (PTS) is a region of the peroxisomal protein that receptors recognize and bind to. It is responsible for specifying that proteins containing this motif are localised to the peroxisome.
Overview:
All peroxisomal proteins are synthesized in the cytoplasm and must be directed to the peroxisome. The first step in this process is the binding of the protein to a receptor. The receptor then directs the complex to the peroxisome. Receptors recognize and bind to a region of the peroxisomal protein called a peroxisomal targeting signal, or PTS.
Overview:
Peroxisomes consist of a matrix surrounded by a specific membrane. Most peroxisomal matrix proteins contain a short sequence, usually three amino acids at the extreme carboxy tail of the protein, that serves as the PTS. The prototypic sequence (many variations exist) is serine-lysine-leucine (-SKL in the one letter amino acid code). This motif, and its variations, is known as the PTS1, and the receptor is termed the PTS1 receptor.
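As a simple illustration of how a C-terminal PTS1 could be screened for computationally, the sketch below (TypeScript) checks whether a protein sequence ends in the prototypic -SKL tripeptide or in a small set of commonly cited variant residues. The relaxed pattern used here is a simplified assumption for demonstration, not an exhaustive or authoritative PTS1 definition.

```typescript
// Check the last three residues of a protein sequence for a PTS1-like motif.
// The prototypic signal is -SKL; the relaxed pattern below ([SAC][KRH][LM])
// is a simplified, commonly cited consensus used here only for illustration.
const PTS1_EXACT = /SKL$/;
const PTS1_RELAXED = /[SAC][KRH][LM]$/;

function classifyPts1(sequence: string): "prototypic" | "possible variant" | "none" {
  const seq = sequence.toUpperCase();
  if (PTS1_EXACT.test(seq)) return "prototypic";
  if (PTS1_RELAXED.test(seq)) return "possible variant";
  return "none";
}

// Hypothetical example sequences (not real proteins).
console.log(classifyPts1("MSTAVLENPGLGRKLSKL")); // "prototypic"
console.log(classifyPts1("MSTAVLENPGLGRKLAKL")); // "possible variant"
console.log(classifyPts1("MSTAVLENPGLGRKLQQQ")); // "none"
```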
Overview:
It was found that the PTS1 receptor is encoded by the PEX5 gene. PEX5 imports folded proteins into the peroxisome, shuttling between the peroxisome and cytosol. PEX5 interacts with a large number of other proteins, including Pex8p, 10p, 12p, 13p, 14p.
A few peroxisomal matrix proteins have a different, and less conserved sequence, at their amino termini. This PTS2 signal is recognized by the PTS2 receptor, encoded by the PEX7 gene.
"PEX" refers to a group of genes that were identified as being important for peroxisomal synthesis. The numerical attributions, such as PEX5, generally refer to the order in which they were first discovered.
Overview:
A distinct motif, called the "mPTS", is used for proteins destined for the peroxisomal membrane; it is more poorly defined and may consist of discontinuous subdomains. One of these usually is a cluster of basic amino acids (arginines and lysines) within a loop of protein (i.e., between membrane spans) that will face the matrix. The mPTS receptor is the product of PEX19. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**McLaren Report**
McLaren Report:
The McLaren Report (Russian: Доклад Макларена) is the name given to an independent report released in two parts by professor Richard McLaren into allegations and evidence of state-sponsored doping in Russia. It was commissioned by the World Anti-Doping Agency (WADA) in May 2016.
In July 2016, McLaren presented Part 1 of the report, indicating systematic state-sponsored subversion of the drug testing processes by the government of Russia during and subsequent to the 2014 Winter Olympics in Sochi, Russia. In December 2016, he published the second part of the report on doping in Russia.
July 2016 report (part 1):
On 18 July 2016, Richard McLaren, a Canadian attorney retained by WADA to investigate Grigory Rodchenkov's allegations, published a 97-page report covering significant state-sponsored doping in Russia. Although limited by a 57-day time frame, the investigation found corroborating evidence after conducting witness interviews, reviewing thousands of documents, cyber analysis of hard drives, forensic analysis of urine sample collection bottles, and laboratory analysis of individual athlete samples, with "more evidence becoming available by the day." The report concluded that it was shown "beyond a reasonable doubt" that Russia's Ministry of Sport, the Centre of Sports Preparation of the National Teams of Russia, the Federal Security Service (FSB), and the WADA-accredited laboratory in Moscow had "operated for the protection of doped Russian athletes" within a "state-directed failsafe system" using "the disappearing positive [test] methodology." McLaren stated that urine samples were opened in Sochi in order to swap them "without any evidence to the untrained eye". At the Olympics, urine samples are stored in security bottles named the "BEREG-KIT", which must be broken open after being closed; the investigation, however, found that the bottles could be opened using a specific tool, and found scratch marks on the inside that are normally invisible to the naked eye. The official producer of BEREG-KIT security bottles used for anti-doping tests, Berlinger Group, stated, "We have no knowledge of the specifications, the methods or the procedures involved in the tests and experiments conducted by the McLaren Commission." According to the McLaren report, the Disappearing Positive Methodology (DPM) operated from "at least late 2011 to August 2015." It was used on 643 positive samples, a number that the authors consider "only a minimum" due to limited access to Russian records.
July 2016 report (part 1):
Part 1 of the Report is available publicly on WADA's website: https://www.wada-ama.org/sites/default/files/resources/files/20160718_ip_report_newfinal.pdf
December 2016 report (part 2):
On 9 December 2016, McLaren published the second part of his independent report. The investigation found that from 2011 to 2015, more than 1,000 Russian competitors in various sports (including summer, winter, and Paralympic sports) benefited from the cover-up. Emails indicate that they included five blind powerlifters, who may have been given drugs without their knowledge, and a fifteen-year-old. Part 2 of the Report is available publicly on WADA's website: https://www.wada-ama.org/sites/default/files/resources/files/mclaren_report_part_ii_2.pdf
Reaction:
July 2016 report Russia was suspended from all international athletic competitions by the International Association of Athletics Federations, including the 2016 Summer Olympics. Russian weightlifters were also banned from the Rio Olympics for numerous anti-doping violations.
Reaction:
On 24 July, the IOC rejected WADA's recommendation to ban Russia from the Summer Olympics and announced that a decision would be made by each sport federation, with each positive decision having to be approved by a CAS arbitrator. WADA's president Craig Reedie said, "WADA is disappointed that the IOC did not heed WADA's Executive Committee recommendations that were based on the outcomes of the McLaren Investigation and would have ensured a straight-forward, strong and harmonized approach." On the IOC's decision to exclude Yuliya Stepanova, WADA director general Olivier Niggli stated that his agency was "very concerned by the message that this sends whistleblowers for the future." Originally Russia submitted a list of 389 athletes for the Rio Olympics competition. On 7 August 2016, the IOC cleared 278 athletes, while 111 were removed because of the scandal (including 67 athletes removed by IAAF before the IOC's decision).
Reaction:
The IPC unanimously voted to ban Russian athletes from the 2016 Summer Paralympics in response to the discovery of a state-sponsored doping program.
Reaction:
Although the IOC stated in July 2016 that it would ask sports federations to seek alternative hosts, Russia has retained hosting rights for some major international sports events, including the 2017 FIFA Confederations Cup, 2018 FIFA World Cup, and 2019 Winter Universiade. In September 2016, Russia was awarded hosting rights for the 2021 World Biathlon Championships because the IOC's recommendation did not apply to events that had already been awarded or planned bids from the country.
Reaction:
December 2016 report In December 2016, the International Biathlon Union (IBU) provisionally suspended two Russian biathletes, Olga Vilukhina and Yana Romanova, for doping violations during the 2014 Winter Olympics.
In December 2016, the International Ski Federation (FIS) provisionally suspended six Russian cross-country skiers linked to doping violations during the 2014 Winter Olympics. The list includes Alexander Legkov, Maxim Vylegzhanin, Evgeniy Belov, Alexei Petukhov, Yevgeniya Shapovalova and Julia Ivanova.
Reaction:
Some international winter sports events were reallocated from Russia, including the 2017 FIBT World Championships in Sochi, the 2017 Biathlon Junior World Championships in Ostrov, the 2016–17 Biathlon World Cup stage in Tyumen, the 2016–17 FIS Cross-Country World Cup final stage in Tyumen, the 2016–17 ISU Speed Skating World Cup stage in Chelyabinsk, and the 2021 World Biathlon Championships in Tyumen. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reference Verification Methodology**
Reference Verification Methodology:
The Reference Verification Methodology (RVM) is a complete set of metrics and methods for performing functional verification of complex designs such as application-specific integrated circuits (ASICs) or other semiconductor devices. It was published by Synopsys in 2003. RVM is implemented in OpenVera.
The SystemVerilog implementation of the RVM is known as the VMM (Verification Methodology Manual). It contains a small library of base classes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Street marketing**
Street marketing:
Street marketing is a form of guerrilla marketing that uses nontraditional or unconventional methods to promote a product or service. Many businesses use fliers, coupons, posters and art displays as a cost-effective alternative to traditional marketing methods such as television, print and social media. Based on the shifting characteristics of modern-day consumers – such as increased product knowledge and expectations of transparency – the goal of street marketing is to use direct communication to enhance brand recognition. This style of marketing grew in popularity after Jay Conrad Levinson published his 1984 book Guerrilla Marketing, which paved the way for unconventional and unusual brand campaigns. Street marketing is often confused with ambient marketing, a strategy of placing ads on unusual objects or in unusual places where an advertisement would not normally be expected. Unlike typical public marketing campaigns that use billboards, street marketing involves the application of multiple techniques and practices in order to establish direct contact with customers. The goals of this interaction include provoking an emotional reaction in potential customers and getting people to remember brands in a different way.
Origin:
By definition, unconventional marketing exists in complete opposition to commercial marketing, which stems from the introduction of McCarthy's 4 Ps in 1960. Over the last five decades, street marketing has become an evolving topic of discussion, especially among SMEs (small and medium-sized enterprises) that have little or no advertising budget. In the 1960s and 1970s, street marketing was a massive success, since many consumers did not necessarily recognize guerrilla activities as advertisements at the time because of their uncommon nature. The concept of "street marketing" was first mentioned and analyzed by Jay Conrad Levinson in his 1984 book Guerrilla Marketing. Levinson came up with the idea of this new approach to brand promotion when a student of his asked about a book for marketers without big budgets. After discovering that there was no such book, Levinson decided to write it himself. Thus, a new strategy for SMEs was born: "small budget, big results," which was credited with helping many businesses survive the 1980s and 1990s through these innovative advertising activities. Early on, the distribution of leaflets, coupons, posters or fliers made up the earliest form of street marketing and could be used strategically by smaller businesses to advertise to consumers. The ease of using this kind of advertising strategy triggered a large increase in the number of small businesses being opened. Two combined factors carried street marketing to success: the first was that consumers had grown cynical and began to feel overwhelmed by the over-saturation of advertisements; the second was a shifting economic environment that forced businesses to create cost-effective ways to market their products. When the 2008 financial crisis hit, many large businesses were forced to cut their communication budgets drastically. In 2012, advertising revenue suffered a sharp drop, with television revenue falling 4.2% and newspaper and press revenue falling 8.1%. These budget cuts pushed larger businesses and corporations to adopt a new, unconventional way of advertising and promotion in the form of street marketing.
Comparison with guerrilla marketing:
Street marketing is a subset of guerrilla marketing, which is about investing time, energy, and imagination into a business campaign. Guerrilla marketing is popular among large and small businesses alike, as it uses low-cost unconventional communications which can provide a higher impact for a given investment. The use of viral marketing and engagement marketing helps to heighten this impact. Guerrilla marketing exploits services which already exist, such as social networking sites, to create brand awareness. This can be spread by word of mouth or by exploiting social media. Viral messages appeal to individuals who already make high use of social networking, and because the messages do not look like traditional advertising, the target audience is less likely to ignore them. Guerrilla marketing targets those who are more likely to share the message with others. Street marketing has the characteristic of being non-conventional. However, unlike other forms of guerrilla marketing, it is limited to the streets or public places and does not make use of other media or processes to establish communication with customers. One popular technique of street marketing is to place advertisements such as billboards and static ads in unexpected or random locations, such as down alleys or behind large buildings. Although the ad itself is conventional, the unexpected placement is intriguing, and people may take an extra moment to ponder the ad.[8] Street marketing may also use brand ambassadors (typically ones that appeal to the target demographic) who give away samples and coupons to customers who stop and take time to answer questions. Street marketing can be used as a general term encompassing six principal types of activities: Distribution of flyers or products – this activity is more traditional and the most common form of street marketing employed by brands.
Comparison with guerrilla marketing:
Product animations – the redressing of a high-traffic space using brand imagery. The idea is to create a micro-universe in order to promote a new product or service.
Human animations – creating a space in which the brand's message is communicated through human activity.
Roadshows – a mobile presentation, often using atypical transportation such as a taxibike, Segway, etc.
Uncovered actions – the customization of street elements.
Comparison with guerrilla marketing:
Event actions – spectacles, such as flash mobs or contests. The idea is to promote a product, service, or brand value through the organization of a public event. Before implementing a street-marketing plan, companies and their marketing firms should understand how they are perceived in the marketplace, how their products differ from those of competitors, what their most appealing features are, and what markets they want to target. After identifying their target customers and where these people gather, specific goals for a street-marketing campaign can be established.
Campaign development:
A successful street marketing campaign strives to meet any combination of the following objectives: 1. Communicate with consumers in their natural, day-to-day environment.
2. Generate "buzz" or word of mouth around a product, brand, cause, or institution.
Campaign development:
3. Create brand awareness and loyalty through real-life participation in memorable experiences. Over the years, street marketing has developed to include campaigns that use the street as a platform for experiences lived by the consumers through interaction with products/brands and the actors or props mobilized for that purpose. Public places for the campaign should be identified, such as beaches, cultural events, places close to schools, sporting events and recreation centers for children. Companies then develop a plan to attract different media and the target market. Street marketing events involve unusual activities and technology, in order to gain the attention of potential consumers. Plans should take into account global communication; the campaign interacts directly with the customers and media at the scene, and through them has the potential to reach a much wider audience. They may also be developed to identify opportunities and collect information about products, markets and competitors. To retain customers, strategies are implemented to prevent losing market position, and the street marketing campaign may be augmented with supplemental advertisement through other mediums, such as radio and television.
Campaign development:
Legal concerns Despite the fact that street marketing campaigns can be highly cost-effective and successful in creating brand loyalty, legal concerns can arise. By definition, street marketing campaigns require the use of public space, and that use must be authorized by government authorities to be legal. This includes seemingly simple operations like distributing flyers and handing out coupons. Because of the nature of street marketing, other legal concerns can involve trespassing on private property, defacing private or public property, and failing to get direct permission from the property owners.
Campaign development:
Ethical problems Certain street marketing campaigns that are not executed properly can lead to ethical issues, such as the 2007 Turner Broadcasting bomb scare in Boston, where the company placed LED placards in the shape of a character from an upcoming film at random locations throughout Boston. When these placards lit up, they resembled explosive devices, and the company ended up paying 2 million dollars in fines. Of course, a provocative campaign that creates awareness and attention is the primary objective of street marketing. However, advertising that becomes too persistent or intrusive might also evoke negative emotions such as disappointment, sadness, anger, and fury. Certain campaigns that receive excess attention while also having a negative image could create an impact on the downstream criteria of the chain of effects (e.g., image, purchase intention, loyalty).
Examples:
The majority of street marketing campaigns have been from small companies, but large companies have also been involved. Most of the examples put into action include costumed persons, the distribution of tickets, and people providing samples.
Examples:
Distribution of fliers can create awareness among consumers. One example of this took place in Montpelier, Vermont, where the New England Culinary Institute (NECI) sent a group of students to a movie theater to hand out 400 fliers. Those fliers contained coupons with which NECI invited people to its monthly theme dinners. Another company, Boston's Kung-Fu Tai Chi Club, chose the option of disseminating fliers to promote its self-defense classes for women. Other businesses apply the technique of sending disguised people to promote things on the streets. For example, a dating website organized a street marketing activity at the "Feria del Libro" ("Book Fair") in Madrid. It consisted of a man dressed like a prince who walked among the crowd looking for his "true love", and got some women to try on a glass slipper. A woman followed him distributing bookmarks with messages such as "Times have changed; the way to find love, too" with the website's address. In Madrid and Barcelona, a campaign called "Avestruz" ("Ostrich") used a group of life-sized ostrich puppets to interact with young people to promote mobile phones. There are enterprises that disseminate passes or tickets to concerts and other events sponsored by a company. A more unusual example is a French fashion retailer which promoted a new store by distributing denim in the neighborhood. An Italian campaign for a video game plastered the streets with Post-it Notes shaped like game characters. Some street marketing may incite the ire of local authorities, such as when an agency attached a styrofoam replica of a car to the side of a downtown building in Houston, Texas. For the cost of a small city-issued fine, the company received front-page advertising in the Houston Chronicle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Volta (microarchitecture)**
Volta (microarchitecture):
Volta is the codename, but not the trademark, for a GPU microarchitecture developed by Nvidia, succeeding Pascal. It was first announced on a roadmap in March 2013, although the first product was not announced until May 2017. The architecture is named after 18th–19th century Italian chemist and physicist Alessandro Volta. It was NVIDIA's first chip to feature Tensor Cores, specially designed cores that have superior deep learning performance over regular CUDA cores. The architecture is produced with TSMC's 12 nm FinFET process. The Ampere microarchitecture is the successor to Volta.
Volta (microarchitecture):
The first graphics card to use it was the datacenter Tesla V100, e.g. as part of the Nvidia DGX-1 system. It has also been used in the Quadro GV100 and Titan V. There were no mainstream GeForce graphics cards based on Volta.
After two USPTO proceedings, NVIDIA lost the Volta trademark application in the field of artificial intelligence on July 3, 2023. The Volta trademark remains owned by Volta Robots, a company specialized in AI and vision algorithms for robots and unmanned vehicles.
Details:
Architectural improvements of the Volta architecture include the following:
CUDA Compute Capability 7.0
Concurrent execution of integer and floating point operations
TSMC's 12 nm FinFET process, allowing 21.1 billion transistors
Details:
High Bandwidth Memory 2 (HBM2)
NVLink 2.0: a high-bandwidth bus between the CPU and GPU, and between multiple GPUs. It allows much higher transfer speeds than those achievable with PCI Express and is estimated to provide 25 Gbit/s per lane. (Disabled for the Titan V)
Tensor cores: a tensor core is a unit that multiplies two 4×4 FP16 matrices and then adds a third FP16 or FP32 matrix to the result using fused multiply–add operations, obtaining an FP32 result that can optionally be demoted to an FP16 result. Tensor cores are intended to speed up the training of neural networks. Volta's tensor cores are first generation, while Ampere has third-generation tensor cores.
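To make the tensor core operation concrete, the following is a minimal NumPy sketch (an illustration of the description above, not Nvidia code and not the actual CUDA API) of the fused multiply–add D = A × B + C on 4×4 tiles, with FP16 inputs, FP32 accumulation, and optional demotion of the result back to FP16. The function name tensor_core_fma is a hypothetical choice for illustration.

```python
import numpy as np

def tensor_core_fma(a, b, c, demote_to_fp16=False):
    """Sketch of the operation one Volta tensor core performs: D = A x B + C.

    a and b are 4x4 FP16 matrices; c may be FP16 or FP32. The multiply and
    accumulate are carried out in FP32, and the FP32 result can optionally
    be demoted back to FP16, mirroring the description above.
    """
    a32 = a.astype(np.float32)   # promote FP16 operands for FP32 accumulation
    b32 = b.astype(np.float32)
    c32 = c.astype(np.float32)
    d = a32 @ b32 + c32          # fused multiply-add on the 4x4 tile
    return d.astype(np.float16) if demote_to_fp16 else d

# Example with random 4x4 FP16 tiles and an FP32 accumulator
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)).astype(np.float16)
b = rng.standard_normal((4, 4)).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)
print(tensor_core_fma(a, b, c).dtype)  # float32
```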
Details:
PureVideo Feature Set I hardware video decoding
Products:
Volta has been announced as the GPU microarchitecture within the Xavier generation of Tegra SoC, focusing on self-driving cars. At Nvidia's annual GPU Technology Conference keynote on May 10, 2017, Nvidia officially announced the Volta microarchitecture along with the Tesla V100. The Volta GV100 GPU is built on a 12 nm process size using HBM2 memory with 900 GB/s of bandwidth. Nvidia officially announced the NVIDIA TITAN V on December 7, 2017. Nvidia officially announced the Quadro GV100 on March 27, 2018.
Application:
Volta is also reported to be included in the Summit and Sierra supercomputers, used for GPGPU compute. The Volta GPUs will connect to the POWER9 CPUs via NVLink 2.0, which is expected to support cache coherency and therefore improve GPGPU performance. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Restrictor plate**
Restrictor plate:
A restrictor plate or air restrictor is a device installed at the intake of an engine to limit its power. This kind of system is occasionally used in road vehicles (e.g., motorcycles) for insurance purposes, but it is mainly used in automobile racing to limit top speed, provide an equal level of competition, and lower costs; insurance considerations have also been a factor in motorsport.
Racing series:
A few top classes like Formula One limit only the displacement and air intake mouth dimension. However, in 2006 air restrictors (as well as rev limiters) were used by Scuderia Toro Rosso to facilitate the transition to a new engine formula.
Many other racing series use additional air restrictors.
Formula 3, 2000cc, 215 hp
Formula SAE, 710cc, 20 mm restrictor.
Racing series:
Deutsche Tourenwagen Masters, 4000cc, 470 hp
FIA GT Championship (now FIA GT1 World Championship) and other series using FIA GT regulations
Le Mans Prototypes used in American Le Mans Series and Le Mans Series have restrictors based on precalculated tables depending on the type and size of the engine and fuel
The ALMS in the 2010 season combines both LMP1 and LMP2 into a single LMP class on all races except 12 Hours of Sebring and Petit Le Mans; LMP1-categorized cars use a 5% smaller air restrictor compared to LMP2-categorized cars to balance performance in the races
Rallying:
After Group B cars were outlawed from rallying because they were too powerful (rumored to have reached 600 hp), too fast and too dangerous, the FISA decided that rally cars should not have more than 300 hp (220 kW). For a while no special restrictions were needed for that (e.g. the Group A Lancia Delta HF 4WD had about 250 hp in 1987). But with development in the 1990s, Group A cars were rumored to have reached 405 hp or more. So the FIA mandated restrictors for supercharged and turbocharged engines in all categories (World Rally Car, Group A and Group N).
Rallying:
This means that the rally version of a car like the Mitsubishi Lancer Evolution may have less power than the street version (the "280" hp Evo VII was believed to have more than 300 hp, and in some markets the FQ-320, FQ-340, FQ-360, FQ-400 versions were sold, with the number representing the total horsepower).
It also means that the torque and power curves of the engine are unusual. The engine produces peak torque and almost maximum power at a relatively low RPM, and from there to the rev limiter the torque drops and the power does not increase much.
In 1995, Toyota Team Europe used an illegal device to bypass the restrictor (allowing an estimated extra 50 hp).
As a result, the team was stripped of its results from the 1995 season and was banned from rallying until the end of 1996.
NASCAR:
The NASCAR Cup Series and Xfinity Series have mandated the use of restrictor plates at Daytona International Speedway and Talladega Superspeedway since 1988; for the Cup Series, the plates were used only until the 2019 Daytona 500. The plates were put into use in 1988 as a result of a wreck in the 1987 Winston 500 at Talladega, in which the car of Bobby Allison crashed into the frontstretch catch fence at a speed high enough to destroy almost 100 feet of the fence and put the race under a red flag for two hours. The following race at Talladega that year was run with a smaller carburetor; however, NASCAR mandated the use of the restrictor plate at the end of the season.
NASCAR:
The restrictions are in the interest of driver and fan safety because speeds higher than the 190 mph range used for Daytona and Talladega risk cars turning over through sheer aerodynamic forces alone. The severity of crashes at higher speeds is also much greater, shown by telemetry readings of wrecks such as Elliott Sadler's at Pocono Raceway and Michael McDowell's at Texas Motor Speedway that were far higher than those registered on restrictor plate tracks. Drivers such as Rusty Wallace have cited data showing that the roof flaps used on the cars cannot keep them on the ground above 204 mph. The drawback to the use of the restrictor plates has been the increased size of packs of cars caused by the decreased power coupled with the drag the vehicles naturally produce. At Daytona and Talladega, most races are marred by at least one wreck, usually referred to as "the Big One", as cars rarely become separated. Talladega has been considered the more likely track for these instances to occur as the track is incredibly wide, enough to have three to four distinct lines of cars running side by side. With the new pavement at Daytona, three-wide racing became far easier, and multi-car wrecks became more common. The 2011 Daytona 500 saw a record number of cautions including an early 17-car pile-up. These wrecks tend to be singled out for criticism despite multicar crashes at other tracks and the generally greater severity of impact on non-restricted tracks. In addition, the packs were far smaller in 1988 through 1990 until more teams mastered the nuances of this kind of racing and improved their cars (and drivers) accordingly.
NASCAR:
The 2011 Sprint Cup season was the last complete Cup season with carbureted engines; at the end of the 2011 season, NASCAR announced that it would change to an electronic fuel injection system for the 2012 racing season. The injection system used by NASCAR is a different system from that used in IndyCar Racing and other motorsports series; the EFI system that NASCAR put into use was compatible with the old restrictor plates, allowing NASCAR to continue to use them to keep the speeds lower at the superspeedways and save costs for race teams. The restrictor plates were bolted beneath a throttle body that sits in the same place as the former carburetors. The last race with the original restrictor plates was the 2019 Daytona 500; after that race, the cars moved to a variable-sized tapered spacer already used at all other tracks, with the exception that the spacer would have smaller holes than the ones used at the smaller tracks, to ensure speeds stay under 200 mph. The shape of the spacer helps a car funnel more air smoothly into the manifold, increasing fuel performance, while ensuring airflow is still restricted. With that change, NASCAR also mandated the use of larger rear spoilers, larger front splitters, and specially-placed front end aero ducts. The combination of those features increased drag on the cars, counteracting the increased horsepower, keeping the cars close to the speeds they were running prior to the switch to the tapered spacer. While the racing quality noticeably improved, and passing was made easier with larger horsepower and bigger runs, speeds also noticeably increased past 200 mph, and even into 205 mph ranges. Starting in 2022, restrictor plate rules will be used for Atlanta Motor Speedway because of concerns over speed after the circuit was repaved and reconfigured to 28 degree banking.
NASCAR:
Reason for restrictor plates There have been four eras in which NASCAR has used restrictor plates.
NASCAR:
The first use came in 1970 as part of a transition from the seven-litre era (430 cubic inch) to the six-litre era (366 cubic inch) engine. Following testing and input from drivers such as David Pearson, Bobby Isaac, and Bobby Allison, NASCAR mandated the use of a restrictor plate for the big block seven-litre engines. Small block engines, in the 366 cubic inch range, were exempt from the plates; the first driver to race with a small block engine was Dick Brooks at the 1971 Daytona 500, where he ran a 1969 Dodge Daytona with a 305 CID engine. The transition period lasted until 1975, when the current 358 cubic inch (5870cc) limit was imposed. As the early 1970s use of restrictor plates was considered a transitional process, and as not every car used restrictor plates, this is not what most fans call "restrictor plate racing". The second use came following the crash of Bobby Allison at the 1987 Winston 500 at Talladega Superspeedway. Allison's Buick LeSabre blew a tire going into the tri-oval at 200 mph (320 km/h), spun around and became airborne, flying tail-first into the catch fencing. While the car did not enter the grandstands, it tore down nearly 100 feet of fencing, and flying debris injured several spectators. After the two subsequent superspeedway races that summer, run with smaller carburetors (390 cubic feet per minute (cfm) instead of 830 cfm), proved inadequate to sufficiently slow the cars, NASCAR imposed restrictor plates again, this time at the two fastest circuits, both superspeedways: Daytona for all NASCAR-sanctioned races and Talladega for Cup races. The Automobile Racing Club of America also enforced restrictor plates at their events at the two tracks. In 1992, when the Busch Grand National series began racing at Talladega, the plates were implemented, in keeping with their use at Daytona. NASCAR's concerns with speeds arising from power-to-weight ratios have also resulted in restrictor plates at other tracks. The Goody's Dash Series (known now as the ISCARS series with its new ownership) used restrictor plates at Bristol during at least the last years of the series' existence, when the cars were using six-cylinder engines (compared to the traditional four-cylinder engines), in addition to their Daytona races. However, restrictor plates were not initially used for Camping World Truck Series trucks. Rather, aerodynamic air intake reduction through the use of a 390 cfm carburetor, and eventually a tapered carburetor spacer, was implemented for those races. Combined with the aerodynamic disadvantage of the trucks, this allowed NASCAR to avoid the use of such equipment for the trucks until 2008. In 2008, the Nationwide Series (now known as the Xfinity Series) and Truck Series began implementation of tapered spacers in the engines to restrict power compared to Sprint Cup cars at all 35 (NNS) and 25 (NCTS) races. Both these NASCAR series now use a restrictor plate and tapered spacer at the two tracks.
NASCAR:
The third use came in 2000. Following fatal crashes of Adam Petty and Kenny Irwin Jr. at the New Hampshire International Speedway during the May Busch Series and July Winston Cup Series races, respectively, NASCAR adopted a one-inch (2.54 cm) restrictor plate to slow the cars headed towards the tight turns as part of a series of reforms to alleviate stuck throttle problems which were alleged to have caused both fatal crashes. For the Winston Cup race, it was used just once at the 2000 Dura Lube 300. Jeff Burton led all 300 laps in the ensuing race, despite a 23-car two-abreast battle in the first ten laps, a dramatic charge past 22 cars in 100 laps by John Andretti (who finished seventh), and two charges by Bobby Labonte in the final 50 laps where he took the lead but Burton beat him back to the stripe. The use of restrictor plates, intended as an emergency measure pending a more permanent replacement in any event, was discontinued at New Hampshire for the following race for Cup only. However, the Modifieds still use a restrictor plate because the speeds are too great for that class of racecar without them. The track has since been changed with SAFER Barriers to improve racing safety. Restrictor plates remain a permanent fixture on the Modifieds and the racing has often broken 20 official lead changes for 100–125 laps of competition.
NASCAR:
Rusty Wallace tested a car at Talladega Superspeedway without a restrictor plate in 2004, reaching a top speed of 228 mph (367 km/h) on the backstretch and a one-lap average of 221 mph (356 km/h). While admitting excitement at the achievement, Wallace also conceded, "There's no way we could be out there racing at those speeds... it would be insane to think we could have a pack of cars out there doing that." In 2016, following a series of uncompetitive races at Indianapolis Motor Speedway, NASCAR began a series of tests for the Xfinity Series using a smaller restrictor plate than used at Daytona and Talladega, along with aerodynamic aids. After the tests were successful, the rules package was imposed for the 2017 race at Indianapolis. For 2018, the package is being used at Indianapolis, Michigan, and Pocono for the Xfinity Series and in the All-Star Race in the Cup Series.
NASCAR:
The competitive quality of restrictor plate racing A frequent criticism of restrictor plates is the enormous size of the packs in the racing, with "Big One" wrecks as noted above singled out for condemnation despite the greater violence of "smaller" crashes on unrestricted tracks. In restrictor plate racing the packs have brought about an often-enormous increase in positional passing; at Talladega Superspeedway the Sprint Cup cars have surpassed 40 official lead changes sixteen times from 1988 onward, including both 2010 Sprint Cup races at Talladega, which had 87 official lead changes in the regulation 188 laps. (The 2010 Aaron's 499 had 88 lead changes, but the 88th – the race-winning pass by Kevin Harvick – was on the last lap of the third attempt at a green-white-checkered finish.) Daytona International Speedway has generally been less competitive because the age of the asphalt (the track was repaved in 1978 and again in 2010) has reduced grip for the cars, and thus handling has impeded passing ability to a significant extent. The 2000 New Hampshire race was condemned because Jeff Burton led wire to wire; the plates were singled out as impeding the ability to pass, a criticism contradicted by the use of restrictor plates in a Busch North support race the day before, where the lead changed seven times in 100 laps, and by the highly competitive nature of restrictor-plated Modified races; as noted above, the 300 also saw a 23-car battle for third in the first ten laps and a charge past 22 cars by John Andretti.
NASCAR:
The criticism stems from the reduction in throttle response brought about by the restriction. The reduction in throttle response, however, has never been shown to have impeded the ability to pass; the criticism was shot down in the first "modern" plate race, the 1988 Daytona 500, as the lead changed 25 times officially and saw several bursts where the lead changed several times a lap and also several bursts of sustained side-by-side racing, notably in the final 50 laps between Bobby Allison, Darrell Waltrip, Neil Bonnett, and Buddy Baker.
NASCAR:
Said Waltrip before the race, "I feel, as a driver, now I can do more than I could before (the plates). Now, instead of a car just blasting by me with a burst of speed and a lot of horsepower, he's got to think his way, he got to drive his way around me." In the transitional years (1971–76) where the seven-litre engines (430 cu in) had restrictor plates, Daytona and Talladega broke 40 official lead changes six times, while Michigan International Speedway broke 35 official lead changes in both of its 1971 races. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nurse call button**
Nurse call button:
A nurse call button is a button or cord found in hospitals and nursing homes, at places where patients are at their most vulnerable, such as beside their bed and in the bathroom. It allows patients in health care settings to alert a nurse or other health care staff member remotely of their need for help. When the button is pressed, a signal alerts staff at the nurse's station, and usually, a nurse or nurse assistant responds to such a call. Some systems also allow the patient to speak directly to the staffer; others simply beep or buzz at the station, requiring a staffer to actually visit the patient's room to determine the patient's needs.
Nurse call button:
The call button provides the following benefits to patients:
Enables a patient who is confined to bed and has no other way of communicating with staff to alert a nurse of the need for any type of assistance
Enables a patient who is able to get out of bed, but for whom this may be hazardous, exhausting, or otherwise difficult, to alert a nurse of the need for any type of assistance
Provides the patient with an increased sense of security
The call button can also be used by a health care staff member already with the patient to call for another when such assistance is needed, or by visitors to call for help on behalf of the patient.
Laws and regulations:
Laws in most places require that a call button must be in reach of the patient at all times; for example, in the patient's bed or on an easily reachable surface (such as an end table or nightstand). Call buttons are essential to patients in emergencies. There are also laws that determine the amount of time in which staff must respond to a call, though mandatory response times vary by location. It is the responsibility of nursing staff to explain to the patients that they have a call button and to teach them how to use it.
Overuse:
Some patients develop the habit of overusing call buttons. This can lead staff to develop frustration and alarm fatigue, which can result in them ignoring patient calls or otherwise not taking them seriously. "Alarm fatigue" occurs when staff are exposed to many frequent alarms, some of which may be false alarms, and consequently become desensitized to them. Alarm fatigue is particularly prevalent in nurses who must answer many patient calls per day. Staff cannot legally ignore such calls, as doing so violates the law in most places. Sometimes, mental health professionals will work with such patients in order to curtail their use of the button to serious need.
System types:
Basic The most basic system has a single button near the patient's bedside, which they can push to call for assistance. When the button is pressed, nursing staff is alerted by a light and/or an audible sound at the nurse's station. This can only be turned off from the patient's bedside, thereby compelling staff to respond to the patient.
System types:
Wireless nurse call Like hardwired systems, wireless call buttons have the ability to alert nursing staff using sounds or lights at the nurse's station. In addition, many wireless call systems can display messages on a terminal. An advantage of wireless call buttons is that installing them requires less wiring, reducing their cost. However, dome lights in hallways still usually require wiring for power. Disadvantages of wireless systems include the heightened risk of signal interference with other systems in the facility, the necessity of installing batteries in each patient station and changing them as needed, and a limited selection among UL 1069 approved wireless systems. A variety of radio frequencies are used across the globe; for example, the majority of Europe uses 433/868 MHz.
System types:
Intercoms In some facilities, often in hospitals, a more advanced system is used, which allows staff from the nurse's station to communicate directly with patients via intercom. This has the advantage of allowing nursing staff to immediately assess the severity of the situation and determine if immediate assistance is needed or if the patient can wait. With the intercom system, the alert can be turned off from the nurse's station, allowing staff to avoid entering the patient's room if it is determined that the patient's need can be met without doing so.
System types:
Cell phone alerts Newer technology allows call buttons to reach cell phone-like devices carried around by nursing staff. Staffers can then answer the calls from wherever they are located within the facility, thereby improving the speed and efficiency of the response. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Éléments de géométrie algébrique**
Éléments de géométrie algébrique:
The Éléments de géométrie algébrique ("Elements of Algebraic Geometry") by Alexander Grothendieck (assisted by Jean Dieudonné), or EGA for short, is a rigorous treatise, in French, on algebraic geometry that was published (in eight parts or fascicles) from 1960 through 1967 by the Institut des Hautes Études Scientifiques. In it, Grothendieck established systematic foundations of algebraic geometry, building upon the concept of schemes, which he defined. The work is now considered the foundation stone and basic reference of modern algebraic geometry.
Editions:
Initially thirteen chapters were planned, but only the first four (making a total of approximately 1500 pages) were published. Much of the material which would have been found in the following chapters can be found, in a less polished form, in the Séminaire de géométrie algébrique (known as SGA). Indeed, as explained by Grothendieck in the preface of the published version of SGA, by 1970 it had become clear that incorporating all of the planned material in EGA would require significant changes in the earlier chapters already published, and that therefore the prospects of completing EGA in the near term were limited. An obvious example is provided by derived categories, which became an indispensable tool in the later SGA volumes, but was not yet used in EGA III as the theory was not yet developed at the time. Considerable effort was therefore spent to bring the published SGA volumes to a high degree of completeness and rigour. Before work on the treatise was abandoned, there were plans in 1966–67 to expand the group of authors to include Grothendieck's students Pierre Deligne and Michel Raynaud, as evidenced by published correspondence between Grothendieck and David Mumford. Grothendieck's letter of 4 November 1966 to Mumford also indicates that the second-edition revised structure was in place by that time, with Chapter VIII already intended to cover the Picard scheme. In that letter he estimated that at the pace of writing up to that point, the following four chapters (V to VIII) would have taken eight years to complete, indicating an intended length comparable to the first four chapters, which had been in preparation for about eight years at the time.
Editions:
Grothendieck nevertheless wrote a revised version of EGA I which was published by Springer-Verlag. It updates the terminology, replacing "prescheme" by "scheme" and "scheme" by "separated scheme", and heavily emphasizes the use of representable functors. The new preface of the second edition also includes a slightly revised plan of the complete treatise, now divided into twelve chapters.
Grothendieck's EGA V which deals with Bertini type theorems is to some extent available from the Grothendieck Circle website. Monografie Matematyczne in Poland has accepted this volume for publication, but the editing process is quite slow (as of 2010).
James Milne has preserved some of the original Grothendieck notes and a translation of them into English. They may be available from his websites connected with the University of Michigan in Ann Arbor.
Chapters:
The following table lays out the original and revised plan of the treatise and indicates where (in SGA or elsewhere) the topics intended for the later, unpublished chapters were treated by Grothendieck and his collaborators.
In addition to the actual chapters, an extensive "Chapter 0" on various preliminaries was divided between the volumes in which the treatise appeared. Topics treated range from category theory, sheaf theory and general topology to commutative algebra and homological algebra. The longest part of Chapter 0, attached to Chapter IV, is more than 200 pages.
Grothendieck never gave permission for the 2nd edition of EGA I to be republished, so copies are rare but found in many libraries. The work on EGA was finally disrupted by Grothendieck's departure first from IHÉS in 1970 and soon afterwards from the mathematical establishment altogether. Grothendieck's incomplete notes on EGA V can be found at Grothendieck Circle.
Chapters:
In historical terms, the development of the EGA approach set the seal on the application of sheaf theory to algebraic geometry, set in motion by Serre's basic paper FAC. It also contained the first complete exposition of the algebraic approach to differential calculus, via principal parts. The foundational unification it proposed (see for example unifying theories in mathematics) has stood the test of time.
Chapters:
EGA has been scanned by NUMDAM and is available at their website under "Publications mathématiques de l'IHÉS", volumes 4 (EGAI), 8 (EGAII), 11 (EGAIII.1re), 17 (EGAIII.2e), 20 (EGAIV.1re), 24 (EGAIV.2e), 28 (EGAIV.3e) and 32 (EGAIV.4e).
Bibliographic information:
Grothendieck, Alexandre; Dieudonné, Jean (1971). Éléments de géométrie algébrique: I. Le langage des schémas. Grundlehren der Mathematischen Wissenschaften (in French). Vol. 166 (2nd ed.). Berlin; New York: Springer-Verlag. ISBN 978-3-540-05113-8.
Grothendieck, Alexandre; Dieudonné, Jean (1960). "Éléments de géométrie algébrique: I. Le langage des schémas". Publications Mathématiques de l'IHÉS. 4: 5–228. doi:10.1007/bf02684778. MR 0217083.
Grothendieck, Alexandre; Dieudonné, Jean (1961). "Éléments de géométrie algébrique: II. Étude globale élémentaire de quelques classes de morphismes". Publications Mathématiques de l'IHÉS. 8: 5–222. doi:10.1007/bf02699291. MR 0217084.
Grothendieck, Alexandre; Dieudonné, Jean (1961). "Éléments de géométrie algébrique: III. Étude cohomologique des faisceaux cohérents, Première partie". Publications Mathématiques de l'IHÉS. 11: 5–167. doi:10.1007/bf02684274. MR 0217085.
Grothendieck, Alexandre; Dieudonné, Jean (1963). "Éléments de géométrie algébrique: III. Étude cohomologique des faisceaux cohérents, Seconde partie". Publications Mathématiques de l'IHÉS. 17: 5–91. doi:10.1007/bf02684890. MR 0163911.
Grothendieck, Alexandre; Dieudonné, Jean (1964). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Première partie". Publications Mathématiques de l'IHÉS. 20: 5–259. doi:10.1007/bf02684747. MR 0173675.
Grothendieck, Alexandre; Dieudonné, Jean (1965). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Seconde partie". Publications Mathématiques de l'IHÉS. 24: 5–231. doi:10.1007/bf02684322. MR 0199181.
Grothendieck, Alexandre; Dieudonné, Jean (1966). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Troisième partie". Publications Mathématiques de l'IHÉS. 28: 5–255. doi:10.1007/bf02684343. MR 0217086.
Grothendieck, Alexandre; Dieudonné, Jean (1967). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie". Publications Mathématiques de l'IHÉS. 32: 5–361. doi:10.1007/bf02732123. MR 0238860. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Intelligent automation**
Intelligent automation:
Intelligent automation, or alternately intelligent process automation, is a software term that refers to a combination of artificial intelligence (AI) and robotic process automation (RPA). Companies use intelligent automation to cut costs by using artificial-intelligence-powered robotic software to replace workers who handle repetitive tasks. The term is similar to hyperautomation, a concept identified by research group Gartner as being one of the top technology trends of 2020.
Technology:
Intelligent automation applies the assembly line concept of breaking tasks into repetitive steps to digital business processes. Rather than having humans do each step, intelligent automation replaces each step with an intelligent software robot or bot, improving efficiency.
Applications:
The technology is used to process unstructured content. Common applications include self-driving cars, self-checkouts at grocery stores, smart home assistants, and appliances. Businesses can apply data and machine learning to build predictive analytics that react to consumer behavior changes, or to implement RPA to improve manufacturing floor operations. The technology has also been used to automate the workflow behind distributing Covid-19 vaccines. Data provided by hospital systems’ electronic health records can be processed to identify and educate patients, and schedule vaccinations. Intelligent automation can provide real-time insights on profitability and efficiency. However, in an April 2022 survey by Alchemmy, despite three quarters of businesses acknowledging the importance of artificial intelligence to their future development, just a quarter of business leaders (25%) considered intelligent automation a “game changer” in understanding current performance. 42% of CTOs see “shortage of talent” as the main obstacle to implementing intelligent automation in their business, while 36% of CEOs see “upskilling and professional development of existing workforce” as the most significant adoption barrier. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HLA-B52**
HLA-B52:
HLA-B52 (B52) is an HLA-B serotype. The serotype identifies the more common HLA-B*52 gene products. B52 is a split antigen of the broad antigen B5, and is a sister type of B51. B*5201 likely formed as a result of a gene conversion event between another HLA-B allele and HLA-B*5101.
There are a number of alleles within the B*52 allele group.
Alleles:
There are 18 alleles, with 14 amino acid sequence variants in B52. Of these only 9 are frequent enough to have been reliably serotyped. B*5201 is the most common, but others have a large regional abundance.
Disease:
In ulcerative colitis HLA-B52 appears to have the strongest linkage to the disease in Japan. This form of the disease is frequently found with Takayasu's arteritis.
In Takayasu's arteritis Takayasu's arteritis appears to have an independent link to B52-associated disease. The association with B*5201 increases the risk of pulmonary infarction, ischemic heart disease, aortic regurgitation, systemic hypertension, renal artery stenosis, cerebrovascular disease, and visual disturbance. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IBM 5151**
IBM 5151:
The IBM 5151 is a 12" transistor–transistor logic (TTL) monochrome monitor, shipped with the original IBM Personal Computer for use with the IBM Monochrome Display Adapter. A few other cards were designed to work with it, such as the Hercules Graphics Card.
IBM 5151:
The monitor has an 11.5-inch wide CRT (measured diagonally) with 90 degree deflection, etched to reduce glare, with a resolution of 350 horizontal lines and a 50 Hz refresh rate. It uses TTL digital inputs through a 9-pin D-shell connector and can display at least three brightness levels, according to the different pin 6 and 7 signals. It also plugs into the female AC port on the IBM PC power supply, and thus does not have a power switch of its own.
IBM 5151:
The IBM 5151 uses the P39 phosphor type, producing a bright green monochrome image intended for displaying high-resolution text. This phosphor has high persistence, which decreases display flicker but causes smearing when the image changes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cortical visual impairment**
Cortical visual impairment:
Cortical visual impairment (CVI) is a form of visual impairment that is caused by a brain problem rather than an eye problem. (The latter is sometimes termed "ocular visual impairment" when discussed in contrast to cortical visual impairment.) Some people have both CVI and a form of ocular visual impairment.
Cortical visual impairment:
CVI is also sometimes known as cortical blindness, although most people with CVI are not totally blind. The term neurological visual impairment (NVI) covers both CVI and total cortical blindness. Delayed visual maturation, another form of NVI, is similar to CVI, except the child's visual difficulties resolve in a few months. Though the vision of a person with CVI may change, it rarely if ever becomes totally normal.
Cortical visual impairment:
The major causes of CVI are as follows: asphyxia, hypoxia (a lack of sufficient oxygen in the body's blood cells), or ischemia (not enough blood supply to the brain), all of which may occur during the birth process; developmental brain defects; head injury; hydrocephalus (when the cerebrospinal fluid does not circulate properly around the brain, and collects in the head, putting pressure on the brain); a stroke involving the occipital lobe; and infections of the central nervous system, such as meningitis and encephalitis.
Visual and Behavioural Characteristics:
Visual and Behavioural Characteristics of CVI are individual and may include several (but not necessarily all) of the following: The person with CVI may exhibit what at first appears to be frequent variable changes in vision. However, it is the environment that impacts their visual function rather than their neurology or capacity for vision at that moment. A person's change in visual function when processing visual information in a complex environment, or trying to process a certain complex or unfamiliar visual target, likely accounts for what looks like their visual ability changing from one day to the next, or from minute to minute. Feeling tired or unwell may also compound this challenge. For some people with CVI, the complexity of the environment, including variables like the complexity and familiarity of the visual input, or processing various forms of secondary sensory input (for instance, sound or touch), may cause them to have difficulty using their vision to their full potential. The amount of sensory input an individual can tolerate without impacting function significantly may change throughout development and from person to person. Managing fatigue may reduce fluctuations but does not eliminate them; however, utilizing breaks and being well rested may help to increase resilience during these times. Addressing these changes may mean adapting strategies that work for the individual person. For example, when undertaking critical activities, people with CVI should be prepared for their vision to fluctuate, by taking precautions such as always carrying a white cane, if they use one, even if they don't always use it to the fullest. Another example is having very large print available, just in case it's needed (for example, consider the consequences of losing vision while giving a public speech). Other potential adaptations to improve visual function in higher environmental complexity may include strategies like clearing away excess clutter from a work space, providing a plain background to view objects against, reducing noise levels, or other methods of monitoring and adapting for environmental complexity.
Visual and Behavioural Characteristics:
One eye may perform significantly worse than the other, and depth perception can be very limited (although not necessarily zero).
Visual and Behavioural Characteristics:
The field of view may be severely limited. The best vision might be in the centre (like tunnel vision) but more often it is at some other point, and it is difficult to tell what the person is really looking at. Note that if the person also has a common ocular visual impairment such as nystagmus then this can also affect which part(s) of the visual field are best. (Sometimes there exists a certain gaze direction which minimises the nystagmus, called a "null point.") Even though the field of view may be very narrow indeed, it is often possible for the person to detect and track movement. Movement is handled by the 'V5' part of the visual cortex, which may have escaped the damage. Sometimes a moving object can be seen better than a stationary one; at other times the person can sense movement but cannot identify what is moving. (This can be annoying if the movement is prolonged, and to escape the annoyance the person may have to either gaze right at the movement or else obscure it.) Sometimes it is possible for a person with CVI to see things while moving their gaze around that they didn't detect when stationary. However, movement that is too fast can be hard to track; some people find that fast-moving objects "disappear." Materials with reflective properties, which can simulate movement, may be easier for a person with CVI to see. However, too many reflections can be confusing (see cognitive overload).
Visual and Behavioural Characteristics:
Some objects may be easier to see than others. For example, the person may have difficulty recognising faces or facial expressions but have fewer problems with written materials. This is presumably due to the different way that the brain processes different things.
Visual and Behavioural Characteristics:
Colour and contrast are important. The brain's colour processing is distributed in such a way that it is more difficult to damage, so people with CVI usually retain full perception of colour. This can be used to advantage by colour-coding objects that might be hard to identify otherwise. Sometimes yellow and red objects are easier to see, as long as this does not result in poor contrast between the object and the background.
Visual and Behavioural Characteristics:
People with CVI strongly prefer a simplified view. When dealing with text, for example, the person might prefer to see only a small amount of it at once. People with CVI frequently hold text close to their eyes, both to make the text appear larger and to minimise the amount they must look at. This also ensures that important things such as letters are not completely hidden behind any scotomas (small defects in parts of the functioning visual field), and reduces the chances of getting lost in the text. However, the simplification of the view should not be done in such a way that it requires too rapid a movement to navigate around a large document, since too much motion can cause other problems (see above).
Visual and Behavioural Characteristics:
In viewing an array of objects, people with CVI can more easily see them if they only have to look at one or two at a time. People with CVI also see familiar objects more easily than new ones. Placing objects against a plain background also makes them easier for the person with CVI to see.
For the same reason (simplified view), the person may also dislike crowded rooms and other situations where their functioning is dependent on making sense of a lot of visual 'clutter'.
Visual and Behavioural Characteristics:
Visual processing can take a lot of effort. Often the person has to make a conscious choice about how to divide mental effort between making sense of visual data and performing other tasks. For some people, maintaining eye contact is difficult, which can create problems in Western culture (for example, bonding can be difficult for some parents who have an infant with CVI, and lack of contact in an older child can cause others to regard him or her with suspicion).
Visual and Behavioural Characteristics:
It can also be difficult for some people with CVI to look at an object and reach for it at the same time. Looking and reaching are sometimes accomplished as two separate acts: look, then look away and reach.
Visual and Behavioural Characteristics:
People with CVI can sometimes benefit from a form of blindsight, which manifests itself as a kind of awareness of one's surroundings that cannot consciously be explained (for example, the person correctly guesses what they should do in order to avoid an obstacle but does not actually see that obstacle). However, this cannot be relied on to work all the time. In contrast, some people with CVI exhibit spatial difficulties and may have trouble moving about in their environment.
Visual and Behavioural Characteristics:
Approximately one third of people with CVI have some photophobia. It can take longer than usual to adjust to large changes in light level, and flash photography can be painful. On the other hand, CVI can also in some cases cause a desire to gaze compulsively at light sources, including such things as candle flames and fluorescent overhead lights. The use of good task lighting (especially low-temperature lamps which can be placed at very close range) is often beneficial.
Visual and Behavioural Characteristics:
Although people (with or without CVI) generally assume that they see things as they really are, in reality the brain may be doing a certain amount of guessing and "filling in", which is why people sometimes think they see things that turn out on closer inspection not to be what they seemed. This can occur more frequently when a person has CVI. Hence, a person with CVI can look at an optical illusion or abstract picture and perceive something that is significantly different from what a person without CVI will perceive. The presence of CVI does not necessarily mean that the person's brain is damaged in any other way, but it can often be accompanied by other neurological problems, the most common being epilepsy.
Diagnosis:
Diagnosing CVI is difficult. A diagnosis is usually made when visual performance is poor but it is not possible to explain this from an eye examination. Before CVI was widely known among professionals, some would conclude that the patient was faking their problems or had for some reason engaged in self-deception. However, there are now testing techniques that do not depend on the patient's words and actions, such as fMRI scanning, or the use of electrodes to detect responses to stimuli in both the retina and the brain. These can be used to verify that the problem is indeed due to a malfunction of the visual cortex and/or the posterior visual pathway. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**20 Leonis Minoris**
20 Leonis Minoris:
20 Leonis Minoris is a binary star system in the northern constellation of Leo Minor. It is faintly visible to the naked eye, having an apparent visual magnitude of +5.4. Based upon an annual parallax shift of 66.46 mas, it is located 49 light years from the Sun. The star has a relatively high proper motion and is moving away from the Sun with a radial velocity of +56 km/s. The system made its closest approach about 150,000 years ago when it came within 32.2 ly (9.86 pc). The primary member of this system is a G-type main-sequence star with a stellar classification of G3 Va Hδ1. It has 12% more mass and a 25% larger radius than the Sun. The star is about seven billion years old and is spinning with a rotation period of 10.6 days. The small companion is an active red dwarf star that has a relatively high metallicity. The two stars are currently separated by 14.5 arc seconds, corresponding to a projected separation of 2016 AU. In 2020, a candidate exoplanet was detected orbiting 20 Leonis Minoris (HD 86728). With a minimum mass of 0.032 MJ (10.2 MEarth) and an orbital period of 31 days, this would most likely be a hot Neptune. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
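The quoted distance follows from the standard parallax relation (distance in parsecs = 1000 / parallax in milliarcseconds, with 1 pc ≈ 3.26 ly). The short check below is illustrative only and is not part of the source text.

```python
# Quick check of the parallax-to-distance arithmetic quoted above.
# Standard relation: distance [pc] = 1000 / parallax [mas]; 1 pc ≈ 3.2616 ly.
LY_PER_PC = 3.2616

parallax_mas = 66.46
distance_pc = 1000.0 / parallax_mas        # ~15.0 pc
distance_ly = distance_pc * LY_PER_PC      # ~49 ly, matching the quoted figure

print(f"{distance_pc:.2f} pc = {distance_ly:.1f} ly")
```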
**First-pass yield**
First-pass yield:
First-pass yield (FPY), also known as throughput yield (TPY), is defined as the number of units coming out of a process divided by the number of units going into that process over a specified period of time. Only good units that require no rework are counted as coming out of an individual process.
Example:
Consider the following: You have a process that is divided into four sub-processes: A, B, C and D. Assume that you have 100 units entering process A. To calculate first time yield (FTY) you would: Calculate the yield (number out of step/number into step) of each step.
Example:
Multiply these together. For example: (# units leaving the process as good parts) / (# units put into the process) = FTY. 100 units enter A and 90 leave as good parts. The FTY for process A is 90/100 = 0.9000. 90 units go into B and 80 leave as good parts. The FTY for process B is 80/90 = 0.8889. 80 units go into C and 75 leave as good parts. The FTY for C is 75/80 = 0.9375. 75 units go into D and 70 leave as good parts. The FTY for D is 70/75 = 0.9333. The total first time yield is equal to FTY of A * FTY of B * FTY of C * FTY of D, or 0.9000 * 0.8889 * 0.9375 * 0.9333 = 0.7000.
Example:
You can also get the total process yield for the entire process by simply dividing the number of good units produced by the number going into the start of the process. In this case, 70/100 = 0.70 or 70% yield.
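A minimal sketch of the first time yield arithmetic above; the dictionary layout and helper names are illustrative, but the unit counts are taken directly from the worked example.

```python
# First time yield (FTY) per step and for the whole chain, using the unit
# counts from the worked example above.
from math import prod

steps = {  # step -> (units in, good units out)
    "A": (100, 90),
    "B": (90, 80),
    "C": (80, 75),
    "D": (75, 70),
}

step_fty = {name: good / units_in for name, (units_in, good) in steps.items()}
total_fty = prod(step_fty.values())   # 0.9000 * 0.8889 * 0.9375 * 0.9333 ≈ 0.7000
overall_yield = 70 / 100              # good units out of the entire process

print({k: round(v, 4) for k, v in step_fty.items()}, round(total_fty, 4), overall_yield)
```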
Example:
The same example using first pass yield (FPY) would take into account rework: (# units leaving process A as good parts with no rework) / (# units put into the process). 100 units enter process A, 5 were reworked, and 90 leave as good parts. The FPY for process A is (90-5)/100 = 85/100 = 0.8500. 90 units go into process B, 0 are reworked, and 80 leave as good parts. The FPY for process B is (80-0)/90 = 80/90 = 0.8889. 80 units go into process C, 10 are reworked, and 75 leave as good parts. The FPY for process C is (75-10)/80 = 65/80 = 0.8125. 75 units go into process D, 8 are reworked, and 70 leave as good parts. The FPY for process D is (70-8)/75 = 62/75 = 0.8267. First pass yield is only used for an individual sub-process. Multiplying the set of processes gives the rolled throughput yield (RTY). RTY is equal to FPY of A * FPY of B * FPY of C * FPY of D = 0.8500 * 0.8889 * 0.8125 * 0.8267 = 0.5075. Notice that the number of units going into each next process does not change from the original example, as that number of good units did, indeed, enter the next process. Yet the FPY of each process counts only those units that made it through as good parts needing no rework. The calculation of RTY, rolled throughput yield, shows how good the overall set of processes is at producing good overall output without having to rework units. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
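A matching sketch for first pass yield and rolled throughput yield, again using the counts from the example above; only units that were good without rework are credited to each step.

```python
# First pass yield (FPY) per step and rolled throughput yield (RTY), using the
# unit and rework counts from the example above.
from math import prod

steps = {  # step -> (units in, good units out, units reworked)
    "A": (100, 90, 5),
    "B": (90, 80, 0),
    "C": (80, 75, 10),
    "D": (75, 70, 8),
}

# FPY credits only units that were good on the first pass (no rework).
fpy = {name: (good - reworked) / units_in
       for name, (units_in, good, reworked) in steps.items()}
rty = prod(fpy.values())   # 0.8500 * 0.8889 * 0.8125 * 0.8267 ≈ 0.5075

print({k: round(v, 4) for k, v in fpy.items()}, round(rty, 4))
```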
**Vomit fraud**
Vomit fraud:
Vomit fraud is a type of fraud in which a driver of a vehicle for hire falsely claims that an "incident requiring cleanup" occurred while a passenger was riding in the driver's vehicle. The company then charges the passenger a "cleanup fee" to reimburse the driver for having to clean the vehicle.
History:
The Miami Herald first reported on the issue in July 2018. Passengers may face a fee of up to US$150 for causing incidents requiring significant cleanups of drivers' vehicles. By filing false reports of these incidents, drivers will receive the cleanup fees from the customers even though no incident occurred.
Criminality:
Due to company-friendly terms of service typically agreed to by passengers, police departments have been reluctant to press criminal charges against individuals who engage in the fraud, instead treating these cases as civil matters. However, in late October 2018, a Harwood, North Dakota, man who had driven for both Uber and Lyft was charged with two counts of attempted theft of property for two separate instances of false cleanup claims. In one instance, the man was caught on surveillance video purchasing food, throwing it on the inside and outside of his vehicle, taking photos of the alleged damage, then running the vehicle through a car wash, all after he had already dropped his passenger off at his destination. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gas leak**
Gas leak:
A gas leak refers to a leak of natural gas or another gaseous product from a pipeline or other containment into any area where the gas should not be present. Gas leaks can be hazardous to health as well as the environment. Even a small leak into a building or other confined space may gradually build up an explosive or lethal concentration of gas. Leaks of natural gas and refrigerant gas into the atmosphere are especially harmful due to their global warming potential and ozone depletion potential. Leaks of gases associated with industrial operations and equipment are also generally known as fugitive emissions. Natural gas leaks from fossil fuel extraction and use are known as fugitive gas emissions. Such unintended leaks should not be confused with similar intentional types of gas release, such as gas venting emissions, which are controlled releases often practised as a part of routine operations, or "emergency pressure releases", which are intended to prevent equipment damage and safeguard life. Gas leaks should also not be confused with "gas seepage" from the earth or oceans - either natural or due to human activity.
Fire and explosion safety:
Pure natural gas is colorless and odorless, and is composed primarily of methane. Unpleasant scents in the form of traces of mercaptans are usually added, to assist in identifying leaks. This odor may be perceived as rotting eggs, or a faintly unpleasant skunk smell. Persons detecting the odor must evacuate the area and abstain from using open flames or operating electrical equipment, to reduce the risk of fire and explosion.
Fire and explosion safety:
As a result of the Pipeline Safety Improvement Act of 2002 passed in the United States, federal safety standards require companies providing natural gas to conduct safety inspections for gas leaks in homes and other buildings receiving natural gas. The gas company is required to inspect gas meters and inside gas piping from the point of entry into the building to the outlet side of the gas meter for gas leaks. This may require entry into private homes by the natural gas companies to check for hazardous conditions.
Harm to vegetation:
Gas leaks can damage or kill plants. In addition to leaks from natural gas pipes, methane and other gases migrating from landfill garbage disposal sites can also cause chlorosis and necrosis in grass, weeds, or trees. In some cases, leaking gas may migrate as far as 100 feet (30 m) from the source of the leak to an affected tree.
Harm to animals:
Methane is an asphyxiant gas which can reduce the normal oxygen concentration in breathing air. Small animals and birds are also more sensitive to toxic gases like carbon monoxide that are sometimes present with natural gas. The expression "canary in a coal mine" derives from the historical practice of using a canary as an animal sentinel to detect dangerously high concentrations of naturally occurring coal gas.
Greenhouse gas emissions:
Methane, the primary constituent of natural gas, is up to 120 times as potent a greenhouse gas as carbon dioxide. Thus, the release of unburned natural gas produces much stronger effects than the carbon dioxide that would have been released if the gas had been burned as intended.
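A small illustration of the comparison above: leaking one kilogram of methane versus burning it. The 2.75 kg of CO2 produced per kilogram of methane burned follows from the molar masses (44/16); the factor of 120 is simply the potency figure quoted in the paragraph, treated here as a per-kilogram CO2 equivalence for the sake of illustration.

```python
# Climate impact of leaking 1 kg of methane versus burning it (illustrative only).
# 44/16 = 2.75 kg CO2 per kg CH4 burned comes from the molar masses; 120x is the
# potency figure quoted in the text, treated here as a per-kilogram equivalence.
POTENCY_VS_CO2 = 120                   # "up to 120 times as potent" (from the text)
CO2_PER_KG_CH4_BURNED = 44.0 / 16.0    # combustion stoichiometry: CH4 + 2 O2 -> CO2 + 2 H2O

leaked_impact = 1.0 * POTENCY_VS_CO2          # kg CO2-equivalent if 1 kg CH4 escapes
burned_impact = 1.0 * CO2_PER_KG_CH4_BURNED   # kg CO2 emitted if the same 1 kg is burned

print(f"leaked: {leaked_impact:.0f} kg CO2e vs burned: {burned_impact:.2f} kg CO2")
```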
Leak grades:
In the United States, most state and federal agencies have adopted the Gas Piping and Technology Committee (GPTC) standards for grading natural gas leaks.
A Grade 1 leak is a leak that represents an existing or probable hazard to persons or property, and requires immediate repair or continuous action until the conditions are no longer hazardous.
Examples of a Grade 1 leak are: Any leak which, in the judgment of operating personnel at the scene, is regarded as an immediate hazard.
Escaping gas that has ignited.
Any indication of gas which has migrated into or under a building, or into a foreign sub-structure.
Any reading at the outside wall of a building, or where gas would likely migrate to an outside wall of a building.
Any reading of 80% LEL, or greater, in a confined space.
Any reading of 80% LEL, or greater, in small substructures (other than gas associated substructures) from which gas would likely migrate to the outside wall of a building.
Any leak that can be seen, heard, or felt, and which is in a location that may endanger the general public or property.
A Grade 2 leak is a leak that is recognized as being non-hazardous at the time of detection, but justifies scheduled repair based on probable future hazard.
Examples of a Grade 2 leak are: Leaks requiring action ahead of ground freezing or other adverse changes in venting conditions: any leak which, under frozen or other adverse soil conditions, would likely migrate to the outside wall of a building.
Leaks requiring action within six months: any reading of 40% LEL, or greater, under a sidewalk in a wall-to-wall paved area that does not qualify as a Grade 1 leak.
Any reading of 100% LEL, or greater, under a street in a wall-to-wall paved area that has significant gas migration and does not qualify as a Grade 1 leak.
Any reading less than 80% LEL in small substructures (other than gas associated substructures) from which gas would likely migrate, creating a probable future hazard.
Any reading between 20% LEL and 80% LEL in a confined space.
Any reading on a pipeline operating at 30 percent specified minimum yield strength (SMYS) or greater, in a class 3 or 4 location, which does not qualify as a Grade 1 leak.
Any reading of 80% LEL, or greater, in gas associated sub-structures.
Any leak which, in the judgment of operating personnel at the scene, is of sufficient magnitude to justify scheduled repair.
A Grade 3 leak is non-hazardous at the time of detection and can be reasonably expected to remain non-hazardous. A simplified illustration of these grading thresholds follows the examples below.
Examples of a Grade 3 Leak are: Any reading of less than 80% LEL in small gas associated substructures.
Any reading under a street in areas without wall-to-wall paving where it is unlikely the gas could migrate to the out-side wall of a building.
Any reading of less than 20% LEL in a confined space.
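The sketch below condenses only the confined-space thresholds from the examples above (80% LEL and 20% LEL) into a toy classifier. It is purely illustrative: actual GPTC grading also weighs location, migration paths, paving, and the judgment of operating personnel, and cannot be reduced to a single reading.

```python
# Toy classifier for a single confined-space reading, using only the %LEL
# thresholds named in the examples above (>= 80% -> Grade 1, 20-80% -> Grade 2,
# < 20% -> Grade 3). Real GPTC grading weighs many more situational factors.
def confined_space_grade(percent_lel: float) -> int:
    """Return an illustrative leak grade for a reading taken in a confined space."""
    if percent_lel >= 80:
        return 1   # existing or probable hazard: immediate or continuous action
    if percent_lel >= 20:
        return 2   # non-hazardous now, but schedule repair
    return 3       # non-hazardous and expected to remain so

assert confined_space_grade(85) == 1
assert confined_space_grade(40) == 2
assert confined_space_grade(10) == 3
```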
Studies:
In 2012, Boston University professor Nathan Phillips and his students drove along all 785 miles (1,263 km) of Boston roads with a gas sensor, identifying 3300 leaks. The Conservation Law Foundation produced a map showing around 4000 leaks reported to the Massachusetts Department of Public Utilities. In July 2014, the Environmental Defense Fund released an interactive online map based on gas sensors attached to three mapping cars which already were being driven along Boston streets to update Google Earth Street View. This survey differed from the previous studies in that an estimate of leak severity was produced, rather than just leak detection. This map should help the gas utility to prioritize leak repairs, as well as raise public awareness of the problem. In 2017, Rhode Island released an estimated 15.7 million metric tons of greenhouse gases, about a third of which came from leaks in natural gas pipes. This figure, published in 2019, was calculated based on an assumed leakage rate of 2.7% (as that is the rate of leakage in the nearby city of Boston). The study's authors estimated that fixing the leaks would incur an annual cost of $1.6 billion to $4 billion.
Regulation:
Massachusetts Legislation passed in 2014 requires gas suppliers to make greater efforts to control some of the 20,000 documented leaks in the US state of Massachusetts. The new law requires grade 1 and 2 leaks to be repaired if the street above a gas pipe is dug up, and requires priority be given to leaks near schools. It provides a mechanism for increased revenue from ratepayers (up to 1.5% without further approval) to cover the cost of repairs and replacement of leak-prone materials (like cast iron and non-cathodically protected steel) on an accelerated basis. The law sets a target of 20 years for replacement of pipes made from leak-prone materials if feasible given the revenue cap; as of 2015, Columbia Gas of Massachusetts (formerly named "Bay State Gas"), Berkshire Gas, Liberty Utilities, National Grid, and Unitil say they will meet this target, but NSTAR says it will take 25 years to complete. Leaks, statistics on leak-prone materials, and financial statements are reported annually to the Department of Public Utilities, which also has responsibility for rate-setting.
Regulation:
Additional proposals not included in the law would have required grade 3 leaks to be repaired during road construction, and would have given priority to leaks that are killing trees or are near hospitals or churches. An attorney for the Conservation Law Foundation stated that the leaks were worth $38.8 million in lost natural gas, which also contributes 4% of the state's greenhouse gas emissions. A federal study prompted by US Senator Edward J. Markey concluded that Massachusetts consumers paid approximately $1.5 billion from 2000–2011 for gas which leaked and benefited no one. Markey has also backed legislation that would implement similar requirements at the national level, along with financing provisions for repairs.
History:
Catastrophic gas leaks, such as the Bhopal disaster, are well recognized as problems, but the more subtle effects of chronic low-level leaks have been slower to gain recognition.
Other contexts:
In work with dangerous gases (such as in a lab or industrial setting), a gas leak may require hazmat emergency response, especially if the leaked material is flammable, explosive, corrosive, or toxic. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Limestone**
Limestone:
Limestone (calcium carbonate, CaCO3) is a type of carbonate sedimentary rock which is the main source of the material lime. It is composed mostly of the minerals calcite and aragonite, which are different crystal forms of CaCO3. Limestone forms when these minerals precipitate out of water containing dissolved calcium. This can take place through both biological and nonbiological processes, though biological processes, such as the accumulation of corals and shells in the sea, have likely been more important for the last 540 million years. Limestone often contains fossils which provide scientists with information on ancient environments and on the evolution of life. About 20% to 25% of sedimentary rock is carbonate rock, and most of this is limestone. The remaining carbonate rock is mostly dolomite, a closely related rock, which contains a high percentage of the mineral dolomite, CaMg(CO3)2. Magnesian limestone is an obsolete and poorly-defined term used variously for dolomite, for limestone containing significant dolomite (dolomitic limestone), or for any other limestone containing a significant percentage of magnesium. Most limestone was formed in shallow marine environments, such as continental shelves or platforms, though smaller amounts were formed in many other environments. Much dolomite is secondary dolomite, formed by chemical alteration of limestone. Limestone is exposed over large regions of the Earth's surface, and because limestone is slightly soluble in rainwater, these exposures often are eroded to become karst landscapes. Most cave systems are found in limestone bedrock.
Limestone:
Limestone has numerous uses: as a chemical feedstock for the production of lime used for cement (an essential component of concrete), as aggregate for the base of roads, as white pigment or filler in products such as toothpaste or paints, as a soil conditioner, and as a popular decorative addition to rock gardens. Limestone formations contain about 30% of the world's petroleum reservoirs.
Description:
Limestone is composed mostly of the minerals calcite and aragonite, which are different crystal forms of calcium carbonate (CaCO3). Dolomite, CaMg(CO3)2, is an uncommon mineral in limestone, and siderite or other carbonate minerals are rare. However, the calcite in limestone often contains a few percent of magnesium. Calcite in limestone is divided into low-magnesium and high-magnesium calcite, with the dividing line placed at a composition of 4% magnesium. High-magnesium calcite retains the calcite mineral structure, which is distinct from dolomite. Aragonite does not usually contain significant magnesium. Most limestone is otherwise chemically fairly pure, with clastic sediments (mainly fine-grained quartz and clay minerals) making up less than 5% to 10% of the composition. Organic matter typically makes up around 0.2% of a limestone and rarely exceeds 1%. Limestone often contains variable amounts of silica in the form of chert or siliceous skeletal fragments (such as sponge spicules, diatoms, or radiolarians). Fossils are also common in limestone. Limestone is commonly white to gray in color. Limestone that is unusually rich in organic matter can be almost black in color, while traces of iron or manganese can give limestone an off-white to yellow to red color. The density of limestone depends on its porosity, which varies from 0.1% for the densest limestone to 40% for chalk. The density correspondingly ranges from 1.5 to 2.7 g/cm3. Although relatively soft, with a Mohs hardness of 2 to 4, dense limestone can have a crushing strength of up to 180 MPa. For comparison, concrete typically has a crushing strength of about 40 MPa. Although limestones show little variability in mineral composition, they show great diversity in texture. However, most limestone consists of sand-sized grains in a carbonate mud matrix. Because limestones are often of biological origin and are usually composed of sediment that is deposited close to where it formed, classification of limestone is usually based on its grain type and mud content.
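The quoted density range can be related to porosity with a simple two-component mixing rule. The rule and the calcite grain density of about 2.71 g/cm3 are standard rock-physics assumptions rather than statements from the text, so the numbers below are only a rough cross-check.

```python
# Rough cross-check (not from the text) of how porosity drives the quoted
# density range, using a two-component mixing rule with air-filled pores.
GRAIN_DENSITY = 2.71   # g/cm3, standard value for calcite
PORE_DENSITY = 0.0     # g/cm3, dry (air-filled) pore space

def bulk_density(porosity: float) -> float:
    """Bulk density of a dry limestone with the given fractional porosity."""
    return (1 - porosity) * GRAIN_DENSITY + porosity * PORE_DENSITY

print(round(bulk_density(0.001), 2))  # ~2.7 g/cm3, the densest limestone
print(round(bulk_density(0.40), 2))   # ~1.6 g/cm3, comparable to a porous chalk
```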
Description:
Grains: Most grains in limestone are skeletal fragments of marine organisms such as coral or foraminifera. These organisms secrete structures made of aragonite or calcite, and leave these structures behind when they die. Other carbonate grains composing limestones are ooids, peloids, and limeclasts (intraclasts and extraclasts). Skeletal grains have a composition reflecting the organisms that produced them and the environment in which they were produced. Low-magnesium calcite skeletal grains are typical of articulate brachiopods, planktonic (free-floating) foraminifera, and coccoliths. High-magnesium calcite skeletal grains are typical of benthic (bottom-dwelling) foraminifera, echinoderms, and coralline algae. Aragonite skeletal grains are typical of molluscs, calcareous green algae, stromatoporoids, corals, and tube worms. The skeletal grains also reflect specific geological periods and environments. For example, coral grains are more common in high-energy environments (characterized by strong currents and turbulence) while bryozoan grains are more common in low-energy environments (characterized by quiet water). Ooids (sometimes called ooliths) are sand-sized grains (less than 2 mm in diameter) consisting of one or more layers of calcite or aragonite around a central quartz grain or carbonate mineral fragment. These likely form by direct precipitation of calcium carbonate onto the ooid. Pisoliths are similar to ooids, but they are larger than 2 mm in diameter and tend to be more irregular in shape. Limestone composed mostly of ooids is called an oolite or sometimes an oolitic limestone. Ooids form in high-energy environments, such as the Bahama platform, and oolites typically show crossbedding and other features associated with deposition in strong currents. Oncoliths resemble ooids but show a radial rather than layered internal structure, indicating that they were formed by algae in a normal marine environment. Peloids are structureless grains of microcrystalline carbonate likely produced by a variety of processes. Many are thought to be fecal pellets produced by marine organisms. Others may be produced by endolithic (boring) algae or other microorganisms or through breakdown of mollusc shells. They are difficult to see in a limestone sample except in thin section and are less common in ancient limestones, possibly because compaction of carbonate sediments disrupts them. Limeclasts are fragments of existing limestone or partially lithified carbonate sediments. Intraclasts are limeclasts that originate close to where they are deposited in limestone, while extraclasts come from outside the depositional area. Intraclasts include grapestone, which is clusters of peloids cemented together by organic material or mineral cement. Extraclasts are uncommon, are usually accompanied by other clastic sediments, and indicate deposition in a tectonically active area or as part of a turbidity current.
Description:
Mud: The grains of most limestones are embedded in a matrix of carbonate mud. This is typically the largest fraction of an ancient carbonate rock. Mud consisting of individual crystals less than 5 μm (0.20 mils) in length is described as micrite. In fresh carbonate mud, micrite is mostly small aragonite needles, which may precipitate directly from seawater, be secreted by algae, or be produced by abrasion of carbonate grains in a high-energy environment. This is converted to calcite within a few million years of deposition. Further recrystallization of micrite produces microspar, with grains from 5 to 15 μm (0.20 to 0.59 mils) in diameter. Limestone often contains larger crystals of calcite, ranging in size from 0.02 to 0.1 mm (0.79 to 3.94 mils), that are described as sparry calcite or sparite. Sparite is distinguished from micrite by a grain size of over 20 μm (0.79 mils) and because sparite stands out under a hand lens or in thin section as white or transparent crystals. Sparite is distinguished from carbonate grains by its lack of internal structure and its characteristic crystal shapes. Geologists are careful to distinguish between sparite deposited as cement and sparite formed by recrystallization of micrite or carbonate grains. Sparite cement was likely deposited in pore space between grains, suggesting a high-energy depositional environment that removed carbonate mud. Recrystallized sparite is not diagnostic of depositional environment.
Description:
Other characteristics: Limestone outcrops are recognized in the field by their softness (calcite and aragonite both have a Mohs hardness of less than 4, well below common silicate minerals) and because limestone bubbles vigorously when a drop of dilute hydrochloric acid is dropped on it. Dolomite is also soft but reacts only feebly with dilute hydrochloric acid, and it usually weathers to a characteristic dull yellow-brown color due to the presence of ferrous iron. This is released and oxidized as the dolomite weathers. Impurities (such as clay, sand, organic remains, iron oxide, and other materials) will cause limestones to exhibit different colors, especially with weathered surfaces.
Description:
The makeup of a carbonate rock outcrop can be estimated in the field by etching the surface with dilute hydrochloric acid. This etches away the calcite and aragonite, leaving behind any silica or dolomite grains. The latter can be identified by their rhombohedral shape. Crystals of calcite, quartz, dolomite or barite may line small cavities (vugs) in the rock. Vugs are a form of secondary porosity, formed in existing limestone by a change in environment that increases the solubility of calcite. Dense, massive limestone is sometimes described as "marble". For example, the famous Portoro "marble" of Italy is actually a dense black limestone. True marble is produced by recrystallization of limestone during regional metamorphism that accompanies the mountain building process (orogeny). It is distinguished from dense limestone by its coarse crystalline texture and the formation of distinctive minerals from the silica and clay present in the original limestone.
Description:
Classification: Two major classification schemes, the Folk and Dunham, are used for identifying the types of carbonate rocks collectively known as limestone.
Description:
Folk classification: Robert L. Folk developed a classification system that places primary emphasis on the detailed composition of grains and interstitial material in carbonate rocks. Based on composition, there are three main components: allochems (grains), matrix (mostly micrite), and cement (sparite). The Folk system uses two-part names; the first refers to the grains and the second to the cement. For example, a limestone consisting mainly of ooids, with a crystalline matrix, would be termed an oosparite. It is helpful to have a petrographic microscope when using the Folk scheme, because it is easier to determine the components present in each sample.
Description:
Dunham classification: Robert J. Dunham published his system for limestone in 1962. It focuses on the depositional fabric of carbonate rocks. Dunham divides the rocks into four main groups based on the relative proportions of coarser clastic particles, using criteria such as whether the grains were originally in mutual contact, and therefore self-supporting, or whether the rock is characterized by the presence of frame builders and algal mats. Unlike the Folk scheme, Dunham deals with the original porosity of the rock. The Dunham scheme is more useful for hand samples because it is based on texture, not the grains in the sample. A revised classification was proposed by Wright (1992). It adds some diagenetic patterns to the classification scheme.
Description:
Other descriptive terms: Travertine is a term applied to calcium carbonate deposits formed in freshwater environments, particularly waterfalls, cascades and hot springs. Such deposits are typically massive, dense, and banded. When the deposits are highly porous, so that they have a spongelike texture, they are typically described as tufa. Secondary calcite deposited by supersaturated meteoric waters (groundwater) in caves is also sometimes described as travertine. This produces speleothems, such as stalagmites and stalactites. Coquina is a poorly consolidated limestone composed of abraded pieces of coral, shells, or other fossil debris. When better consolidated, it is described as coquinite. Chalk is a soft, earthy, fine-textured limestone composed of the tests of planktonic microorganisms such as foraminifera, while marl is an earthy mixture of carbonates and silicate sediments.
Formation:
Limestone forms when calcite or aragonite precipitate out of water containing dissolved calcium, which can take place through both biological and nonbiological processes. The solubility of calcium carbonate (CaCO3) is controlled largely by the amount of dissolved carbon dioxide (CO2) in the water. This is summarized in the reaction: CaCO3 + H2O + CO2 ⇌ Ca2+ + 2 HCO3−. Increases in temperature or decreases in pressure tend to reduce the amount of dissolved CO2 and precipitate CaCO3. Reduction in salinity also reduces the solubility of CaCO3, by several orders of magnitude for fresh water versus seawater.
Formation:
Near-surface waters of the Earth's oceans are oversaturated with CaCO3 by a factor of more than six. The failure of CaCO3 to rapidly precipitate out of these waters is likely due to interference by dissolved magnesium ions with nucleation of calcite crystals, the necessary first step in precipitation. Precipitation of aragonite may be suppressed by the presence of naturally occurring organic phosphates in the water. Although ooids likely form through purely inorganic processes, the bulk of CaCO3 precipitation in the oceans is the result of biological activity. Much of this takes place on carbonate platforms.
Formation:
The origin of carbonate mud, and the processes by which it is converted to micrite, continue to be a subject of research. Modern carbonate mud is composed mostly of aragonite needles around 5 μm (0.20 mils) in length. Needles of this shape and composition are produced by calcareous algae such as Penicillus, making this a plausible source of mud. Another possibility is direct precipitation from the water. A phenomenon known as whitings occurs in shallow waters, in which white streaks containing dispersed micrite appear on the surface of the water. It is uncertain whether this is freshly precipitated aragonite or simply material stirred up from the bottom, but there is some evidence that whitings are caused by biological precipitation of aragonite as part of a bloom of cyanobacteria or microalgae. However, stable isotope ratios in modern carbonate mud appear to be inconsistent with either of these mechanisms, and abrasion of carbonate grains in high-energy environments has been put forward as a third possibility. Formation of limestone has likely been dominated by biological processes throughout the Phanerozoic, the last 540 million years of the Earth's history. Limestone may have been deposited by microorganisms in the Precambrian, prior to 540 million years ago, but inorganic processes were probably more important and likely took place in an ocean more highly oversaturated in calcium carbonate than the modern ocean.
Formation:
Diagenesis: Diagenesis is the process in which sediments are compacted and turned into solid rock. During diagenesis of carbonate sediments, significant chemical and textural changes take place. For example, aragonite is converted to low-magnesium calcite. Diagenesis is the likely origin of pisoliths, concentrically layered particles ranging from 1 to 10 mm (0.039 to 0.394 inches) in diameter found in some limestones. Pisoliths superficially resemble ooids but have no nucleus of foreign matter, fit together tightly, and show other signs that they formed after the original deposition of the sediments.
Formation:
Silicification occurs early in diagenesis, at low pH and temperature, and contributes to fossil preservation. Silicification takes place through the reaction: CaCO3 + H2O + CO2 + H4SiO4 → SiO2 + Ca2+ + 2 HCO3− + 2 H2O. Fossils are often preserved in exquisite detail as chert. Cementing takes place rapidly in carbonate sediments, typically within less than a million years of deposition. Some cementing occurs while the sediments are still under water, forming hardgrounds. Cementing accelerates after the retreat of the sea from the depositional environment, as rainwater infiltrates the sediment beds, often within just a few thousand years. As rainwater mixes with groundwater, aragonite and high-magnesium calcite are converted to low-magnesium calcite. Cementing of thick carbonate deposits by rainwater may commence even before the retreat of the sea, as rainwater can infiltrate over 100 km (60 miles) into sediments beneath the continental shelf. As carbonate sediments are increasingly deeply buried under younger sediments, chemical and mechanical compaction of the sediments increases. Chemical compaction takes place by pressure solution of the sediments. This process dissolves mineral material from points of contact between grains and redeposits it in pore space, reducing the porosity of the limestone from an initial high value of 40% to 80% to less than 10%. Pressure solution produces distinctive stylolites, irregular surfaces within the limestone at which silica-rich sediments accumulate. These may reflect dissolution and loss of a considerable fraction of the limestone bed. At depths greater than 1 km (0.62 miles), burial cementation completes the lithification process. Burial cementation does not produce stylolites. When overlying beds are eroded, bringing limestone closer to the surface, the final stage of diagenesis takes place. This produces secondary porosity as some of the cement is dissolved by rainwater infiltrating the beds. This may include the formation of vugs, which are crystal-lined cavities within the limestone. Diagenesis may include conversion of limestone to dolomite by magnesium-rich fluids. There is considerable evidence of replacement of limestone by dolomite, including sharp replacement boundaries that cut across bedding. The process of dolomitization remains an area of active research, but possible mechanisms include exposure to concentrated brines in hot environments (evaporative reflux) or exposure to diluted seawater in delta or estuary environments (Dorag dolomitization). However, Dorag dolomitization has fallen into disfavor as a mechanism for dolomitization, with one 2004 review paper describing it bluntly as "a myth". Ordinary seawater is capable of converting calcite to dolomite, if the seawater is regularly flushed through the rock, as by the ebb and flow of tides (tidal pumping). Once dolomitization begins, it proceeds rapidly, so that there is very little carbonate rock containing mixed calcite and dolomite. Carbonate rock tends to be either almost all calcite/aragonite or almost all dolomite.
Occurrence:
About 20% to 25% of sedimentary rock is carbonate rock, and most of this is limestone. Limestone is found in sedimentary sequences as old as 2.7 billion years. However, the compositions of carbonate rocks show an uneven distribution in time in the geologic record. About 95% of modern carbonates are composed of high-magnesium calcite and aragonite. The aragonite needles in carbonate mud are converted to low-magnesium calcite within a few million years, as this is the most stable form of calcium carbonate. Ancient carbonate formations of the Precambrian and Paleozoic contain abundant dolomite, but limestone dominates the carbonate beds of the Mesozoic and Cenozoic. Modern dolomite is quite rare. There is evidence that, while the modern ocean favors precipitation of aragonite, the oceans of the Paleozoic and middle to late Cenozoic favored precipitation of calcite. This may indicate a lower Mg/Ca ratio in the ocean water of those times. This magnesium depletion may be a consequence of more rapid sea floor spreading, which removes magnesium from ocean water. The modern ocean and the ocean of the Mesozoic have been described as "aragonite seas". Most limestone was formed in shallow marine environments, such as continental shelves or platforms. Such environments form only about 5% of the ocean basins, but limestone is rarely preserved in continental slope and deep sea environments. The best environments for deposition are warm waters, which have both a high organic productivity and increased saturation of calcium carbonate due to lower concentrations of dissolved carbon dioxide. Modern limestone deposits are almost always in areas with very little silica-rich sedimentation, reflected in the relative purity of most limestones. Reef organisms are destroyed by muddy, brackish river water, and carbonate grains are ground down by much harder silicate grains. Unlike clastic sedimentary rock, limestone is produced almost entirely from sediments originating at or near the place of deposition.
Occurrence:
Limestone formations tend to show abrupt changes in thickness. Large moundlike features in a limestone formation are interpreted as ancient reefs, which when they appear in the geologic record are called bioherms. Many are rich in fossils, but most lack any connected organic framework like that seen in modern reefs. The fossil remains are present as separate fragments embedded in ample mud matrix. Much of the sedimentation shows indications of occurring in the intertidal or supratidal zones, suggesting sediments rapidly fill available accommodation space in the shelf or platform. Deposition is also favored on the seaward margin of shelves and platforms, where there is upwelling deep ocean water rich in nutrients that increase organic productivity. Reefs are common here, but when lacking, ooid shoals are found instead. Finer sediments are deposited close to shore. The lack of deep sea limestones is due in part to rapid subduction of oceanic crust, but is more a result of dissolution of calcium carbonate at depth. The solubility of calcium carbonate increases with pressure and even more with higher concentrations of carbon dioxide, which is produced by decaying organic matter settling into the deep ocean that is not removed by photosynthesis in the dark depths. As a result, there is a fairly sharp transition from water saturated with calcium carbonate to water unsaturated with calcium carbonate, the lysocline, which occurs at the calcite compensation depth of 4,000 to 7,000 m (13,000 to 23,000 feet). Below this depth, foraminifera tests and other skeletal particles rapidly dissolve, and the sediments of the ocean floor abruptly transition from carbonate ooze rich in foraminifera and coccolith remains (Globigerina ooze) to silicic mud lacking carbonates.
Occurrence:
In rare cases, turbidites or other silica-rich sediments bury and preserve benthic (deep ocean) carbonate deposits. Ancient benthic limestones are microcrystalline and are identified by their tectonic setting. Fossils typically are foraminifera and coccoliths. No pre-Jurassic benthic limestones are known, probably because carbonate-shelled plankton had not yet evolved. Limestones also form in freshwater environments. These limestones are not unlike marine limestone, but have a lower diversity of organisms and a greater fraction of silica and clay minerals characteristic of marls. The Green River Formation is an example of a prominent freshwater sedimentary formation containing numerous limestone beds. Freshwater limestone is typically micritic. Fossils of charophyte (stonewort), a form of freshwater green algae, are characteristic of these environments, where the charophytes produce and trap carbonates. Limestones may also form in evaporite depositional environments. Calcite is one of the first minerals to precipitate in marine evaporites.
Occurrence:
Limestone and living organisms: Most limestone is formed by the activities of living organisms near reefs, but the organisms responsible for reef formation have changed over geologic time. For example, stromatolites are mound-shaped structures in ancient limestones, interpreted as colonies of cyanobacteria that accumulated carbonate sediments, but stromatolites are rare in younger limestones. Organisms precipitate limestone both directly as part of their skeletons, and indirectly by removing carbon dioxide from the water by photosynthesis and thereby decreasing the solubility of calcium carbonate. Limestone shows the same range of sedimentary structures found in other sedimentary rocks. However, finer structures, such as lamination, are often destroyed by the burrowing activities of organisms (bioturbation). Fine lamination is characteristic of limestone formed in playa lakes, which lack the burrowing organisms. Limestones also show distinctive features such as geopetal structures, which form when curved shells settle to the bottom with the concave face downwards. This traps a void space that can later be filled by sparite. Geologists use geopetal structures to determine which direction was up at the time of deposition, which is not always obvious with highly deformed limestone formations. The cyanobacterium Hyella balani can bore through limestone, as can the green alga Eugamantia sacculata and the fungus Ostracolaba implexa.
Occurrence:
Micritic mud mounds: Micritic mud mounds are subcircular domes of micritic calcite that lack internal structure. Modern examples are up to several hundred meters thick and a kilometer across, and have steep slopes (with slope angles of around 50 degrees). They may be composed of peloids swept together by currents and stabilized by Thalassia grass or mangroves. Bryozoa may also contribute to mound formation by helping to trap sediments. Mud mounds are found throughout the geologic record, and prior to the early Ordovician, they were the dominant reef type in both deep and shallow water. These mud mounds likely are microbial in origin. Following the appearance of frame-building reef organisms, mud mounds were restricted mainly to deeper water.
Occurrence:
Organic reefs: Organic reefs form at low latitudes in shallow water, not more than a few meters deep. They are complex, diverse structures found throughout the fossil record. The frame-building organisms responsible for organic reef formation are characteristic of different geologic time periods: Archaeocyathids appeared in the early Cambrian; these gave way to sponges by the late Cambrian; later successions included stromatoporoids, corals, algae, bryozoa, and rudists (a form of bivalve mollusc). The extent of organic reefs has varied over geologic time, and they were likely most extensive in the middle Devonian, when they covered an area estimated at 5,000,000 km2 (1,900,000 sq mi). This is roughly ten times the extent of modern reefs. The Devonian reefs were constructed largely by stromatoporoids and tabulate corals, which were devastated by the late Devonian extinction. Organic reefs typically have a complex internal structure. Whole body fossils are usually abundant, but ooids and intraclasts are rare within the reef. The core of a reef is typically massive and unbedded, and is surrounded by a talus that is greater in volume than the core. The talus contains abundant intraclasts and is usually either floatstone, with 10% or more of grains over 2 mm in size embedded in abundant matrix, or rudstone, which is mostly large grains with sparse matrix. The talus grades to planktonic fine-grained carbonate mud, then noncarbonate mud away from the reef.
Limestone landscape:
Limestone is partially soluble, especially in acid, and therefore forms many erosional landforms. These include limestone pavements, pot holes, cenotes, caves and gorges. Such erosion landscapes are known as karsts. Limestone is less resistant to erosion than most igneous rocks, but more resistant than most other sedimentary rocks. It is therefore usually associated with hills and downland, and occurs in regions with other sedimentary rocks, typically clays. Karst regions overlying limestone bedrock tend to have fewer visible above-ground sources (ponds and streams), as surface water easily drains downward through joints in the limestone. While draining, water and organic acid from the soil slowly (over thousands or millions of years) enlarge these cracks, dissolving the calcium carbonate and carrying it away in solution. Most cave systems are found in limestone bedrock. Cooling groundwater or mixing of different groundwaters will also create conditions suitable for cave formation. Coastal limestones are often eroded by organisms which bore into the rock by various means. This process is known as bioerosion. It is most common in the tropics, and it is known throughout the fossil record. Bands of limestone emerge from the Earth's surface in often spectacular rocky outcrops and islands. Examples include the Rock of Gibraltar; the Burren in County Clare, Ireland; Malham Cove in North Yorkshire and the Isle of Wight, England; the Great Orme in Wales; Fårö near the Swedish island of Gotland; the Niagara Escarpment in Canada/United States; Notch Peak in Utah; the Ha Long Bay National Park in Vietnam; and the hills around the Lijiang River and Guilin city in China. The Florida Keys, islands off the south coast of Florida, are composed mainly of oolitic limestone (the Lower Keys) and the carbonate skeletons of coral reefs (the Upper Keys), which thrived in the area during interglacial periods when sea level was higher than at present. Unique habitats are found on alvars, extremely level expanses of limestone with thin soil mantles. The largest such expanse in Europe is the Stora Alvaret on the island of Öland, Sweden. Another area with large quantities of limestone is the island of Gotland, Sweden. Huge quarries in northwestern Europe, such as those of Mount Saint Peter (Belgium/Netherlands), extend for more than a hundred kilometers.
Uses:
Limestone is a raw material that is used globally in a variety of different ways, including in construction, in agriculture and as an industrial material. Limestone is very common in architecture, especially in Europe and North America. Many landmarks across the world, including the Great Pyramid and its associated complex in Giza, Egypt, were made of limestone. So many buildings in Kingston, Ontario, Canada were, and continue to be, constructed from it that it is nicknamed the 'Limestone City'. Limestone, metamorphosed by heat and pressure, produces marble, which has been used for many statues, buildings and stone tabletops. On the island of Malta, a variety of limestone called Globigerina limestone was, for a long time, the only building material available, and is still very frequently used on all types of buildings and sculptures. Limestone can be processed into various forms such as brick, cement, powdered or crushed stone, or filler. Limestone is readily available and relatively easy to cut into blocks or more elaborate carvings. Ancient American sculptors valued limestone because it was easy to work and good for fine detail. Going back to the Late Preclassic period (by 200–100 BCE), the Maya civilization (Ancient Mexico) created refined sculpture using limestone because of these excellent carving properties. The Maya would decorate the ceilings of their sacred buildings (known as lintels) and cover the walls with carved limestone panels. Carved on these sculptures were political and social stories, and this helped communicate messages of the king to his people. Limestone is long-lasting and stands up well to exposure, which explains why many limestone ruins survive. However, it is very heavy (density about 2.6 g/cm3), making it impractical for tall buildings, and relatively expensive as a building material.
Uses:
Limestone was most popular in the late 19th and early 20th centuries. Railway stations, banks and other structures from that era were made of limestone in some areas. It is used as a facade on some skyscrapers, but only in thin plates for covering, rather than solid blocks. In the United States, Indiana, most notably the Bloomington area, has long been a source of high-quality quarried limestone, called Indiana limestone. Many famous buildings in London are built from Portland limestone. Houses built in Odesa in Ukraine in the 19th century were mostly constructed from limestone and the extensive remains of the mines now form the Odesa Catacombs. Limestone was also a very popular building block in the Middle Ages in the areas where it occurred, since it is hard, durable, and commonly occurs in easily accessible surface exposures. Many medieval churches and castles in Europe are made of limestone. Beer stone was a popular kind of limestone for medieval buildings in southern England.
Uses:
Limestone is the raw material for the production of lime, which is used primarily for treating soils, purifying water and smelting copper. Lime is an important ingredient used in chemical industries. Limestone and (to a lesser extent) marble are reactive to acid solutions, making acid rain a significant problem for the preservation of artifacts made from this stone. Many limestone statues and building surfaces have suffered severe damage due to acid rain. Likewise, limestone gravel has been used to protect lakes vulnerable to acid rain, acting as a pH buffering agent. Acid-based cleaning chemicals can also etch limestone, which should only be cleaned with a neutral or mild alkali-based cleaner.
Uses:
Other uses include: It is the raw material for the manufacture of quicklime (calcium oxide), slaked lime (calcium hydroxide), cement and mortar.
Pulverized limestone is used as a soil conditioner to neutralize acidic soils (agricultural lime).
It is crushed for use as aggregate—the solid base for many roads as well as in asphalt concrete.
As a reagent in flue-gas desulfurization, where it reacts with sulfur dioxide for air pollution control.
In glass making, particularly in the manufacture of soda–lime glass.
As an additive in toothpaste, paper, plastics, paint, tiles, and other materials, serving as both a white pigment and a cheap filler.
As rock dust, to suppress methane explosions in underground coal mines.
Purified, it is added to bread and cereals as a source of calcium.
As a calcium supplement in livestock feed, such as for poultry (when ground up).
For remineralizing and increasing the alkalinity of purified water to prevent pipe corrosion and to restore essential nutrient levels.
In blast furnaces, limestone binds with silica and other impurities to remove them from the iron.
Uses:
It can aid in the removal of toxic components created by coal-burning plants and of impurities from layers of molten metal. Many limestone formations are porous and permeable, which makes them important petroleum reservoirs. About 20% of North American hydrocarbon reserves are found in carbonate rock. Carbonate reservoirs are very common in the petroleum-rich Middle East, and carbonate reservoirs hold about a third of all petroleum reserves worldwide. Limestone formations are also common sources of metal ores, because their porosity and permeability, together with their chemical activity, promote ore deposition in the limestone. The lead-zinc deposits of Missouri and the Northwest Territories are examples of ore deposits hosted in limestone.
Uses:
Scarcity: Limestone is a major industrial raw material that is in constant demand. This raw material has been essential in the iron and steel industry since the nineteenth century. Companies have never had a shortage of limestone, but supply has become a concern as demand continues to increase. The major potential threats to supply in the nineteenth century were regional availability and accessibility. The two main accessibility issues were transportation and property rights. Other problems were high capital costs on plants and facilities due to environmental regulations and the requirement of zoning and mining permits. These two dominant factors led to the selection and development of other materials designed as alternatives to limestone that suited economic demands. Limestone was classified as a critical raw material, and the potential risk of shortages drove industries to find new alternative materials and technological systems. This allowed limestone to no longer be classified as critical as replacement substances increased in production; minette ore is a common substitute, for example.
Uses:
Occupational safety and health: Powdered limestone as a food additive is generally recognized as safe, and limestone is not regarded as a hazardous material. However, limestone dust can be a mild respiratory and skin irritant, and dust that gets into the eyes can cause corneal abrasions. Because limestone contains small amounts of silica, inhalation of limestone dust could potentially lead to silicosis or cancer.
Uses:
United States: The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for limestone exposure in the workplace as 15 mg/m3 (0.0066 gr/cu ft) total exposure and 5 mg/m3 (0.0022 gr/cu ft) respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 (0.0044 gr/cu ft) total exposure and 5 mg/m3 (0.0022 gr/cu ft) respiratory exposure over an 8-hour workday.
Uses:
Graffiti: Removing graffiti from weathered limestone is difficult because it is a porous and permeable material. The surface is fragile, so usual abrasion methods run the risk of severe surface loss. Because it is an acid-sensitive stone, some cleaning agents cannot be used due to adverse effects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital contact tracing**
Digital contact tracing:
Digital contact tracing is a method of contact tracing relying on tracking systems, most often based on mobile devices, to determine contact between an infected patient and a user. It came to public prominence in the form of COVID-19 apps during the COVID-19 pandemic. Since the initial outbreak, many groups have developed nonstandard protocols designed to allow for wide-scale digital contact tracing, most notably BlueTrace and Exposure Notification. When considering the limitations of mobile devices, there are two competing ways to trace proximity: GPS and Bluetooth, each with its own drawbacks. Additionally, the protocols can be either centralized or decentralized, meaning contact history can be processed either by a central health authority or by individual clients in the network. On 10 April 2020, Google and Apple jointly announced that they would integrate functionality to support such Bluetooth-based apps directly into their Android and iOS operating systems.
History:
Digital contact tracing has existed as a concept since at least 2007, and it was proven to be effective in the first empirical investigation using Bluetooth data in 2014. However, it was largely held back by the necessity of widespread adoption. A 2018 patent application by Facebook discussed a Bluetooth proximity-based trust method. The concept came to prominence during the COVID-19 pandemic, where it was deployed on a wide scale for the first time through multiple government and private COVID-19 apps. Many countries, however, saw poor adoption, with Singapore's digital contact tracing app, TraceTogether, seeing an adoption rate of only 10-20%. COVID-19 apps tend to be voluntary rather than mandatory, which may also have an impact on the rate of adoption. Israel was the only country in the world to use its internal security agency (Shin Bet) to track citizens' geolocations to slow the spread of the virus. However, cellphone-based location tracking proved to be insufficiently accurate, as scores of Israeli citizens were falsely identified as carriers of COVID-19 and subsequently ordered to self-quarantine. In an attempt to contain the spread of the Omicron variant, Israel reinstated the use of Shin Bet counterterrorism surveillance measures for a limited period of time. Apps were often met with overwhelming criticism over concerns about the data health authorities were collecting. Experts also criticized protocols like Pan-European Privacy-Preserving Proximity Tracing and BlueTrace for their centralized contact log processing, which meant the government could determine whom a user had been in contact with. MIT SafePaths published the earliest paper, 'Apps Gone Rogue', on a decentralized GPS algorithm as well as the pitfalls of previous methods. MIT SafePaths was also the first to release a privacy-preserving Android and iOS GPS app. Covid Watch was the first organization to develop and open-source an anonymous, decentralized Bluetooth digital contact tracing protocol, publishing its white paper on the subject on 20 March 2020. The group was founded as a research collaboration between Stanford University and the University of Waterloo. The protocol they developed, the CEN Protocol, later renamed the TCN Protocol, was first released on 17 March 2020 and presented at Stanford HAI's COVID-19 and AI virtual conference on April 1. NOVID is the first digital contact tracing app which primarily uses ultrasound. Their ultrasound technology yields much higher accuracy than Bluetooth-based apps, and they are the only app with sub-meter contact tracing accuracy.
Methodologies:
Bluetooth proximity tracing Bluetooth, more specifically Bluetooth Low Energy (BLE), is used to track encounters between two phones. Typically, Bluetooth is used to transmit anonymous, time-shifting identifiers to nearby devices. Receiving devices then commit these identifiers to a locally stored contact history log. Following epidemiological recommendations, devices log only those encounters that exceed a duration threshold (e.g., more than 15 minutes) within a certain distance (e.g., less than 2 meters). Bluetooth protocols with encryption are perceived to have fewer privacy problems and lower battery usage than GPS-based schemes. Because a user's location is not logged as part of the protocols, the system is unable to track patients who may have become infected by touching a surface an ill patient has also touched. Another serious concern is the potential inaccuracy of Bluetooth at detecting contact events: received signal strength in BLE proximity tracing can fluctuate strongly because of line-of-sight versus non-line-of-sight conditions, the different BLE advertising channels, device placement, and possible Wi-Fi interference.
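The encounter-logging step described above can be illustrated with a minimal sketch. This is not the code of any particular protocol; the 15-minute and 2-meter thresholds are the illustrative figures from the paragraph, and the assumption that a distance estimate has already been derived from received signal strength is for illustration only.

```python
from collections import defaultdict

# Illustrative thresholds from the epidemiological example above (assumptions).
MIN_DURATION_S = 15 * 60   # log only encounters lasting longer than 15 minutes
MAX_DISTANCE_M = 2.0       # ...at an estimated distance below roughly 2 meters

def log_encounters(sightings, contact_log):
    """sightings: iterable of (ephemeral_id, timestamp_s, estimated_distance_m)
    tuples produced by scanning nearby BLE advertisements.
    Appends qualifying (ephemeral_id, first_seen, last_seen) entries to the
    locally stored contact_log."""
    seen = defaultdict(list)
    for eph_id, ts, dist in sightings:
        if dist <= MAX_DISTANCE_M:
            seen[eph_id].append(ts)
    for eph_id, times in seen.items():
        if max(times) - min(times) >= MIN_DURATION_S:
            contact_log.append((eph_id, min(times), max(times)))

# Example: one nearby device observed at close range for about 16 minutes.
log = []
log_encounters([("a1b2", 0, 1.2), ("a1b2", 500, 1.5), ("a1b2", 960, 0.9)], log)
print(log)  # [('a1b2', 0, 960)]
```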
Location tracking Location tracking can be achieved via cell phone tower networks or using GPS. Cell phone tower network-based location tracking has the advantage of eliminating the need to download an app. Location tracking enables calculating user position with certain levels of accuracy in 2D or 3D; the first contact tracing protocol of this type was deployed in Israel. The accuracy, however, is typically not sufficient for meaningful contact tracing. Smartphone GPS logging solutions are more private than Bluetooth-based solutions because the smartphone can passively record the GPS values; the concern with Bluetooth-based solutions is that the smartphone continuously emits an RF signal roughly every 200 ms, which can be spied on. On the other hand, digital contact tracing solutions that force users to release their location trails to a central system without encryption can lead to privacy problems.
GEO-QR code tagging Another method of tracking is assigning a QR code to a venue or place and having people scan the QR code with their mobile phones to tag their visits. With this method, people voluntarily check in to and out of the location, retain control over their privacy, and need not download or install any app. Should a positive COVID-19 case be identified later, such systems can detect any possible encounter within the venue between the positive individual and others who tagged their visits to the venue at the same time. Such methods have been used in Malaysia by the Malaysian government, and in Australia and New Zealand by the private sector under QR-code visitor check-in systems. In Australia and New Zealand, the respective local governments later sought to centralize contact tracing by requiring businesses to use the state's QR-code system.
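A venue-based check-in system of this kind reduces to finding overlapping visit intervals at the same venue. The following is a minimal, hedged sketch; the data layout (person, venue, check-in time, check-out time) is an assumption for illustration only and does not correspond to any named system.

```python
def overlapping_visitors(visits, positive_person):
    """visits: list of (person, venue, check_in_ts, check_out_ts) tuples.
    Returns the set of other people whose visit to a venue overlapped in time
    with a visit by positive_person."""
    positive_visits = [(venue, t_in, t_out)
                       for person, venue, t_in, t_out in visits
                       if person == positive_person]
    exposed = set()
    for person, venue, t_in, t_out in visits:
        if person == positive_person:
            continue
        for p_venue, p_in, p_out in positive_visits:
            # same venue and the two time intervals overlap
            if venue == p_venue and t_in < p_out and p_in < t_out:
                exposed.add(person)
    return exposed

visits = [("alice", "cafe", 100, 200),
          ("bob",   "cafe", 150, 250),   # overlaps alice at the cafe
          ("carol", "cafe", 300, 400)]   # same venue, but later
print(overlapping_visitors(visits, "alice"))  # {'bob'}
```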
Ultrasound Using ultrasound is another way to record contacts. Smartphones emit ultrasound signals which are detected by other smartphones. NOVID, which is the only digital contact tracing app with sub-meter contact tracing accuracy, primarily uses ultrasound.
CCTV with facial recognition CCTV with facial recognition can also be used to detect confirmed cases and those breaking control measures. The systems may or may not store identifying data or use a central database.
Reporting centralization:
One of the largest privacy concerns raised about protocols such as BlueTrace or PEPP-PT is the use of centralized report processing. In a centralized report processing protocol, a user must upload their entire contact log to a health authority administered server, where the health authority is then responsible for matching the log entries to contact details, ascertaining potential contact, and ultimately warning users of potential contact. Alternatively, anonymous decentralized report processing protocols, while still having a central reporting server, delegate the responsibility to process logs to clients on the network. Tokens exchanged by clients contain no intrinsic information or static identifiers. Protocols using this approach, such as TCN and DP-3T, have the client upload a number from which encounter tokens can be derived by individual devices. Clients then check these tokens against their local contact logs to determine if they have come in contact with an infected patient. Because the government neither processes nor has access to contact logs, this approach has major privacy benefits. However, this method also presents some issues, primarily the lack of human-in-the-loop reporting, leading to a higher occurrence of false positives, and potential scale issues, as some devices might become overwhelmed with a large number of reports. Anonymous decentralized reporting protocols are also less mature than their centralized counterparts, as governments were initially much keener to adopt centralized surveillance systems.
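The decentralized matching step can be sketched roughly as follows. The key-to-token derivation shown here (HMAC-SHA256 over a counter) is an assumption chosen for brevity and does not reproduce the actual derivation used by TCN, DP-3T or any other named protocol; only the overall shape — re-deriving tokens from an uploaded number and intersecting them with the local contact log — follows the description above.

```python
import hashlib
import hmac

def derive_tokens(secret_key: bytes, count: int):
    """Deterministically derive `count` encounter tokens from an uploaded key
    (hypothetical derivation for illustration)."""
    return {hmac.new(secret_key, str(i).encode(), hashlib.sha256).hexdigest()
            for i in range(count)}

def check_exposure(local_contact_log, published_keys, tokens_per_key=144):
    """Re-derive the tokens of each reported (infected) key on the client and
    check them against the locally stored contact log."""
    local = set(local_contact_log)
    return any(derive_tokens(key, tokens_per_key) & local
               for key in published_keys)

# Example: the local log happens to contain one token derived from a reported key.
reported_key = b"reported-secret"
local_log = list(derive_tokens(reported_key, 1))
print(check_exposure(local_log, [reported_key]))  # True
```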
Ephemeral IDs:
Ephemeral IDs, also known as EphIDs, Temporary IDs or Rolling Proximity IDs, are the tokens exchanged by clients during an encounter to uniquely identify themselves. These IDs change regularly, generally every 20 minutes, and do not consist of plain-text personally identifiable information. The variable nature of a client's identifier is needed to prevent tracking by malicious third parties who, by observing static identifiers over a large geographical area over time, could track users and deduce their identity. Because EphIDs are not static, there is theoretically no way a third party could track a client for a period longer than the lifetime of the EphID. There may, however, still be incidental leakage of static identifiers, as was the case in the BlueTrace apps TraceTogether and COVIDSafe before they were patched. Generally, there are two ways of generating Ephemeral IDs. Centralized protocols such as BlueTrace issue Temporary IDs from the central reporting server, where they are generated by encrypting a static User ID with a secret key known only to the health authority. Alternatively, anonymous decentralized protocols such as TCN and DP-3T have the clients deterministically generate the IDs from a secret key known only to the client. This secret key is later revealed and used by clients to determine contact with an infected patient.
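A rough sketch of the decentralized variant — deriving a rotating identifier from a client-held secret and the current time slot — is shown below. The 20-minute rotation period comes from the paragraph above; the HMAC construction and the identifier length are illustrative assumptions, not the scheme of any named protocol.

```python
import hashlib
import hmac
import time

ROTATION_PERIOD_S = 20 * 60  # a new ephemeral ID every 20 minutes (illustrative)

def current_ephemeral_id(client_secret: bytes, now=None) -> str:
    """Derive the ephemeral ID for the current time slot from a secret known
    only to the client (decentralized style)."""
    ts = time.time() if now is None else now
    slot = int(ts // ROTATION_PERIOD_S)
    digest = hmac.new(client_secret, slot.to_bytes(8, "big"), hashlib.sha256)
    return digest.hexdigest()[:32]

secret = b"client-only-secret"
print(current_ephemeral_id(secret, now=0))     # ID for the first 20-minute slot
print(current_ephemeral_id(secret, now=1500))  # 25 minutes later: a different ID
```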
Issues and controversies:
During the unfolding COVID-19 pandemic, reactions to digital contact tracing applications worldwide have at times been drastic and often polarized.
Despite holding the promise to drastically reduce contagion and allow for a relaxation of social distancing measures, digital contact tracing applications have been criticized by academia and the public alike. The main issues concern the technical efficacy of such systems and their ethical implications, in particular regarding privacy, freedoms and democracy. The US non-profit ForHumanity called for independent audit and governance of contact tracing and subsequently launched the first comprehensive audit, vetted by a team of global experts known as ForHumanity Fellows, covering privacy, algorithmic bias, trust, ethics and cybersecurity. NY State Senate Bill S-8448D, which passed in the Senate in July 2020, calls for independent audit of digital contact tracing.
Independent audit and governance Voluntary adoption of digital contact tracing has fallen short of some estimated thresholds for efficacy. This has been referred to as a "trust gap", and advocates for digital contact tracing have endeavored to identify ways to bridge it. Independent governance suggests that contact tracing authorities and technology providers do not have adequate trust from the traced populace and therefore require independent oversight that exists on behalf of the traced, for the purpose of looking after their best interests. Independent audit borrows from the financial accounting industry the process of third-party oversight assuring compliance with existing rules and best practices. The third-party auditor examines all details of digital contact tracing in the areas of ethics, trust, privacy, bias and cybersecurity. The audit provides oversight, transparency and accountability over the authority providing the digital contact tracing.
Technical feasibility The technical feasibility and necessity of digital contact tracing is the subject of debate, with its major proponents claiming it to be indispensable for stopping the spread of pandemics such as COVID-19, and its opponents raising points about its technical functioning and adoption rate by citizens. The conflict between opt-in voluntary usage by citizens in many countries and the necessity of an almost universal adoption rate is unresolved. According to a study published in Science, an adoption rate of around 60% of the total population is needed for digital contact tracing applications to be effective. In countries where adoption was made voluntary, like Singapore, the adoption rate remained below 20%. The efficacy of using Bluetooth technology to determine proximity is also subject to scrutiny, with critics pointing out that false positives could be reported due to the inaccuracy of the technology. Instances of this are interference by physical objects (e.g. two people in two adjacent rooms) and connections being made even at distances of 10–20 meters.
System requirements Smartphone-based digital contact tracing applications have system requirements such as a minimum Android/iOS version and enabled Bluetooth or GPS. These system requirements facilitate maintainability and technical effectiveness at the cost of the adoption rate. Smartphones stop receiving software updates a few years after release (2–3 years for Android, 5 years for iOS). Improvements to this ecosystem would benefit the adoption rate of future digital contact tracing applications.
Ethical issues Beyond doubts about the technical effectiveness of smartphone-based contact tracing systems, publics and academia are confronted with ethical issues about the use of smartphone data by central governments to track and direct citizen behaviour. The most pressing questions pertain to privacy and surveillance, liberty, and ownership. Around the world, governments and publics have taken different positions on this issue.
Privacy On privacy, the main problem with digital contact tracing concerns the type of information that can be collected about each person and the way the related data are treated by companies and institutions. The type of data collected and the approach used (centralized or decentralized) determine the severity of the issue: in other words, whether a privacy-first approach that sacrifices data for privacy is chosen, or a data-first approach that collects private information from citizens. Moreover, critics point out that claims of anonymity and protection of personal data, even if made by institutions, cannot be verified, and that individual user profiles can in several cases be traced back.
Surveillance Closely related to privacy is the issue of surveillance: too much personal data in a centralized governmental database could set a dangerous precedent for the ways governments are capable of "spying" on individual behaviour. The possibility that a wide-ranging adoption of digital contact tracing could set a dangerous precedent for surveillance and control has been treated abundantly by media and academia alike. In short, the main concern relates to the tendency of temporary measures, justified by an emergency situation, to be normalized and extended indefinitely in a society. Concerns about normalizing exceptional surveillance practices were raised in Israel, where existing cellphone surveillance measures used for counterterrorism purposes were employed for COVID-19 contact tracing.
Environment Electronic waste may result from the need to purchase a new smartphone to meet the system requirements of smartphone-based digital contact tracing applications. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Formic anhydride**
Formic anhydride:
Formic anhydride, also called methanoic anhydride, is an organic compound with the chemical formula C2H2O3 and a structural formula of (H(C=O)−)2O. It can be viewed as the anhydride of formic acid (HCOOH).
Preparation:
Formic anhydride can be obtained by reaction of formyl fluoride with excess sodium formate and a catalytic amount of formic acid in ether at −78 °C. It can also be produced by reacting formic acid with N,N′-dicyclohexylcarbodiimide ((C6H11−N=)2C) in ether at −10 °C. It can also be obtained by disproportionation of acetic formic anhydride.
Properties:
Formic anhydride is a liquid with a boiling point of 24 °C at 20 mmHg. It is stable in diethyl ether solution. It can be isolated by low-temperature, low-pressure distillation, but decomposes on heating above room temperature. At room temperature and higher, it decomposes through a decarbonylation reaction into formic acid and carbon monoxide. Due to its instability, formic anhydride is not commercially available and must be prepared fresh and used promptly.
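Written as a balanced equation, this decarbonylation is: C2H2O3 → HCOOH + CO (formic anhydride giving formic acid and carbon monoxide).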
The decomposition of formic anhydride may be catalyzed by formic acid. Formic anhydride can be detected in the gas-phase reaction of ozone with ethylene. The molecule is planar in the gas phase. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Merzling**
Merzling:
Merzling is a white grape variety used for wine. It was bred in 1960 by Johannes Zimmermann at the viticultural institute in Freiburg, Germany by crossing Seyve-Villard 5276 with the cross Riesling × Pinot gris.
The variety was initially known under its breeding code FR 993-60, and was later named after Merzhausen, a location on the southern edge of Freiburg where some of the vineyards of the institute are located. It received varietal protection in 1993.
Properties:
Merzling ripens early, gives high yields and shows good resistance against fungal diseases and spring frosts. Its wines are similar to those of Müller-Thurgau.
Offspring:
Due to its resistance against fungal diseases, Merzling and its offspring have been used as crossing partners for many other new crossings, including Baron, Bronner, Cabernet Cantor, Cabernet Carol, Cabernet Cortis, Helios, Monarch, Prior and Solaris.
Synonyms:
The only synonyms of Merzling are FR 993-60 and Freiburg 993-60. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sunchime**
Sunchime:
A sunchime is a device analogous to a windchime but which uses sunlight instead of wind. A number of embodiments are documented, including: an architectural sculpture by Jeff G. Smith in Sandcastle Retreat, Clearwater, Florida, using glass and steel elements suspended in a 12-foot-diameter (3.7 m) skylight; and a large public work of art in AZ Mills Mall, Tempe, Arizona, by Zischke Studio. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pocket computer**
Pocket computer:
A pocket computer was a 1980s-era user-programmable calculator-sized computer that had fewer screen lines, and often fewer characters per line, than the pocket-sized computers introduced beginning in 1989. Manufacturers included Casio, Hewlett-Packard, Sharp, Tandy/Radio Shack (selling Casio and Sharp models under their own TRS line) and many more. The last Sharp pocket computer, the PC-G850V (2001), is programmable in C, BASIC, and Assembler. An important feature of pocket computers was that all programming languages were available for the device itself, not downloaded from a cross-compiler on a larger computer.
The programming language was usually BASIC. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Xinetd**
Xinetd:
In computer networking, xinetd (Extended Internet Service Daemon) is an open-source super-server daemon which runs on many Unix-like systems and manages Internet-based connectivity. It offers a more secure alternative to the older inetd ("the Internet daemon"), which most modern Linux distributions have deprecated.
Description:
xinetd listens for incoming requests over a network and launches the appropriate service for that request. Requests are made using port numbers as identifiers and xinetd usually launches another daemon to handle the request. It can be used to start services with both privileged and non-privileged port numbers.
xinetd features access control mechanisms such as TCP Wrapper ACLs, extensive logging capabilities, and the ability to make services available based on time. It can place limits on the number of servers that the system can start, and has deployable defense mechanisms to protect against port scanners, among other things.
On some implementations of Mac OS X, this daemon starts and maintains various Internet-related services, including FTP and telnet. As an extended form of inetd, it offers enhanced security. It replaced inetd in Mac OS X v10.3, and subsequently launchd replaced it in Mac OS X v10.4. However, Apple has retained inetd for compatibility purposes.
Configuration:
Configuration of xinetd resides in the default configuration file /etc/xinetd.conf, and configuration of the services it supports resides in configuration files stored in the /etc/xinetd.d directory. The configuration for each service usually includes a switch to control whether xinetd should enable or disable the service.
An example configuration file for the RFC 868 time server:

# default: off
# description: An RFC 868 time server. This protocol provides a
# site-independent, machine readable date and time. The Time service sends back
# to the originating source the time in seconds since midnight on January first
# 1900.

# This is the tcp version.
service time
{
    disable     = yes
    type        = INTERNAL
    id          = time-stream
    socket_type = stream
    protocol    = tcp
    user        = root
    wait        = no
}

# This is the udp version.
service time
{
    disable     = yes
    type        = INTERNAL
    id          = time-dgram
    socket_type = dgram
    protocol    = udp
    user        = root
    wait        = yes
}

The lines beginning with the "#" character are comments and have no effect on the service. There are two service versions: the first is based on the Transmission Control Protocol (TCP), the second on the User Datagram Protocol (UDP). The type and planned usage of a service determine the necessary core protocol. Put simply, UDP cannot handle huge data transmissions, because it lacks the ability to rearrange packets in a specified order or guarantee their integrity, but it is faster than TCP; TCP has these functions, but is slower. Inside the braces there are two columns in each version: the first is the option name, the second is the value assigned to it.
The disable option is a switch to run a service or not. In most cases, the default state is yes. To activate the service, change it to no.
There are three types of services. The type is INTERNAL if the service is provided by xinetd itself, RPC when it is based on Remote Procedure Call (commonly listed in the /etc/rpc file), or UNLISTED when the service appears neither in the /etc/services nor in the /etc/rpc files.
The id is the unique identifier of the service.
The socket_type determines the way data is transmitted through the service. There are three types: stream, dgram and raw. The last one is useful for establishing a service based on a non-standard protocol.
With the user option, it is possible to choose a user to be the owner of the running service. It is highly recommended to choose a non-root user for security reasons.
When wait is set to yes, xinetd will not accept further requests for the service while a connection is active, so the number of connections is limited to one. This provides good protection when only one connection at a time should be established.
There are many more options available for xinetd. In most Linux distributions, the full list of possible options and their description is accessible with a "man xinetd.conf" command.
To apply the new configuration, a SIGHUP signal must be sent to the xinetd process to make it re-read the configuration files. This can be achieved with the following command: kill -SIGHUP "PID". PID is the process identifier of the xinetd process, which can be obtained with the command pgrep xinetd. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tap and flap consonants**
Tap and flap consonants:
In phonetics, a flap or tap is a type of consonantal sound, which is produced with a single contraction of the muscles so that one articulator (such as the tongue) is thrown against another.
Contrast with stops and trills:
The main difference between a tap or flap and a stop is that in a tap/flap there is no buildup of air pressure behind the place of articulation and consequently no release burst. Otherwise a tap/flap is similar to a brief stop.
Taps and flaps also contrast with trills, where the airstream causes the articulator to vibrate. Trills may be realized as a single contact, like a tap or flap, but are variable, whereas a tap/flap is limited to a single contact. When a trill is brief and made with a single contact it is sometimes erroneously described as an (allophonic) tap/flap, but a true tap or flap is an active articulation whereas a trill is a passive articulation. That is, for a tap or flap the tongue makes an active gesture to contact the target place of articulation, whereas with a trill the contact is due to the vibration caused by the airstream rather than any active movement.
Tap vs. flap:
Many linguists use the terms tap and flap indiscriminately. Peter Ladefoged proposed for a while that it might be useful to distinguish between them. However, his usage was inconsistent and contradicted itself even between different editions of the same text. One proposed version of the distinction was that a tap strikes its point of contact directly, as a very brief stop, whereas a flap strikes the point of contact tangentially: "Flaps are most typically made by retracting the tongue tip behind the alveolar ridge and moving it forward so that it strikes the ridge in passing." Later, however, he used the term flap in all cases. Subsequent work on the labiodental flap has clarified the issue: flaps involve retraction of the active articulator and a forward-striking movement. For linguists who do not make the distinction, alveolars are typically called taps, and other articulations are called flaps.
A few languages have been reported to contrast a tap and a flap at the same place of articulation. This is the case for Norwegian, in which the alveolar apical tap /ɾ/ and the post-alveolar/retroflex apical flap /ɽ/ have the same place of articulation for some speakers, and Kamviri, which also has apical alveolar taps and flaps.
IPA symbols:
The International Phonetic Alphabet identifies a number of tap and flap consonants. The Kiel Convention of the IPA recommended that for other taps and flaps, a homorganic consonant, such as a stop or trill, should be used with a breve diacritic: Tap or flaps: where no independent symbol for a tap is provided, the breve diacritic should be used, e.g. [ʀ̆] or [n̆].
However, the former could be mistaken for a short trill, and is more clearly transcribed ⟨ɢ̆⟩, whereas for a nasal tap the unambiguous transcription ⟨ɾ̃⟩ is generally used.
Types of taps and flaps:
Most of the alternative transcriptions in parentheses imply a tap rather than flap articulation, so for example the flap [ⱱ̟] and the tapped stop [b̆] are arguably distinct, as are flapped [ɽ̃] and tapped [ɳ̆].
Alveolar taps and flaps Spanish features a good illustration of an alveolar flap, contrasting it with a trill: pero /ˈpeɾo/ "but" vs. perro /ˈpero/ "dog". Among the Germanic languages, the tap allophone occurs in American and Australian English and in Northern Low Saxon. In American and Australian English it tends to be an allophone of intervocalic /t/ and /d/, leading to homophonous pairs such as "metal" / "medal" and "latter" / "ladder" – see tapping. In a number of Low Saxon dialects it occurs as an allophone of intervocalic /d/ or /t/; e.g. bäden /beeden/ → [ˈbeːɾn] 'to pray', 'to request', gah to Bedde! /gaa tou bede/ → [ˌɡɑːtoʊˈbeɾe] 'go to bed!', Water /vaater/ → [ˈvɑːɾɜ] 'water', Vadder /fater/ → [ˈfaɾɜ] 'father'. (In some dialects this has resulted in reanalysis and a shift to /r/; thus bären [ˈbeːrn], to Berre [toʊˈbere], Warer [ˈvɑːrɜ], Varrer [ˈfarɜ].) Occurrence varies; in some Low Saxon dialects it affects both /t/ and /d/, while in others it affects only /d/. Other languages with this are Portuguese, Korean, and Austronesian languages with /r/.
In Galician, Portuguese and Sardinian, a flap often appears instead of a former /l/. This is part of a wider phenomenon called rhotacism.
Retroflex flaps Most Indic and Dravidian languages have retroflex flaps. In Hindi there are three, a simple retroflex flap as in [bɐɽɑː] big, a murmured retroflex flap as in [koɽʱiː] leper, and a retroflex nasal flap in the Hindicized pronunciation of Sanskrit [mɐɽ̃i] ruby. Some of these may be allophonic.
A retroflex flap is also common in Norwegian dialects and some Swedish dialects.
Lateral taps and flaps Many of the languages of Africa, Asia, and the Pacific that do not distinguish [r] from l may have a lateral flap. However, it is also possible that many of these languages do not have a lateral–central contrast at all, so that even a consistently neutral articulation may be perceived as sometimes lateral [ɺ] or [l], sometimes central [ɾ]. This has been suggested to be the case for Japanese, for example. The Iwaidja language of Australia has both alveolar and retroflex lateral flaps. These contrast with lateral approximants at the same positions, as well as a retroflex tap [ɽ], alveolar tap [ɾ], and retroflex approximant [ɻ]. However, the flapped, or tapped, laterals in Iwaidja are distinct from 'lateral flaps' as represented by the corresponding IPA symbols (see below). Those phones consist of a flap component followed by a lateral component, whereas in Iwaidja the opposite is the case. For this reason, current IPA transcriptions of these sounds by linguists working on the language consist of an alveolar lateral followed by a superscript alveolar tap and a retroflex lateral followed by a superscript retroflex tap.
A velar lateral tap may exist as an allophone in a few languages of New Guinea, according to Peter Ladefoged and Ian Maddieson.
Non-coronal flaps The only common non-coronal flap is the labiodental flap, found throughout central Africa in languages such as Margi. In 2005, the IPA adopted a right-hook v for this sound (supported by some fonts: [ⱱ]). Previously, it had been transcribed with the use of the breve diacritic, [v̆], or other ad hoc symbols.
Other taps or flaps are much less common. They include an epiglottal tap; a bilabial flap in Banda, which may be an allophone of the labiodental flap; and a velar lateral tap as an allophone in Kanite and Melpa. These are often transcribed with the breve diacritic, as [w̆, ʟ̆]. Note here that, like a velar trill, a central velar flap or tap is not possible because the tongue and soft palate cannot move together easily enough to produce a sound.
If other flaps are found, the breve diacritic could be used to represent them, but it would more properly be combined with the symbol for the corresponding voiced stop. A palatal or uvular tap or flap, which unlike a velar tap is believed to be articulatorily possible, could be represented this way (by *[ɟ̆, ɢ̆~ʀ̆]). Though deemed impossible on the IPA chart, a velar tap has been reported to occur allophonically in the Kamviri dialect of the Kamkata-vari language and in Dàgáárè, though at least in the latter case it may in fact be a palatal tap.
Nasal taps and flaps Nasalized consonants include taps and flaps, although these are rarely phonemic. Many West African languages have a nasal flap [ɾ̃] (or [n̆]) as an allophone of /ɾ/ before a nasal vowel; Pashto, however, has a phonemic nasal retroflex lateral flap.
Tapped fricatives Voiced and voiceless tapped alveolar fricatives have been reported from a few languages. Flapped fricatives are possible but do not seem to be used. See voiced alveolar tapped fricative, voiceless alveolar tapped fricative. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tibial tuberosity advancement**
Tibial tuberosity advancement:
Tibial Tuberosity Advancement (TTA) is an orthopedic procedure to repair deficient cranial cruciate ligaments in dogs. It has also been used in cats. This procedure was developed by Dr. Slobodan Tepic and Professor Pierre Montavon at the School of Veterinary Medicine, University of Zurich, in Zurich, Switzerland, beginning in the late 1990s. Dr. Slobodan Tepic later founded KYON, a leading provider of veterinary orthopaedic implants, in 1999; KYON became the first veterinary orthopedic implant company offering this procedure to veterinarians. The cranial cruciate ligament (CrCL) in dogs provides the same function as the anterior cruciate ligament in humans. It stabilizes the knee joint, called the stifle joint in quadrupeds, and limits the tibia from sliding forward in relation to the femur. It is attached to the cranial (anterior) medial side of the intercondylar notch of the tibia at one end and the caudal (posterior) side of the lateral femoral condyle at the other end. It also helps to prevent the stifle (knee) joint from over-extending or rotating.
Trauma to the equivalent ligament in humans is common, and damage most frequently occurs during some form of sporting activity (including football, rugby and golf). The nature of the injury is very different in dogs. Rather than the ligament suddenly breaking due to excessive trauma, it usually degenerates slowly over time, rather like a fraying rope. This important difference is the primary reason why the treatment options recommended for cruciate ligament injury in dogs are so different from the treatment options recommended for humans.
In the vast majority of dogs, the cranial cruciate ligament (CrCL) ruptures as a result of long-term degeneration, whereby the fibres within the ligament weaken over time. The precise cause of this is not known, but genetic factors are probably most important, with certain breeds being predisposed (including Labradors, Rottweilers, Boxers, West Highland White Terriers and Newfoundlands). Supporting evidence for a genetic cause was primarily obtained by assessment of family lines, coupled with the knowledge that many animals will rupture the CrCL in both knees, often relatively early in life. Other factors such as obesity, individual conformation, hormonal imbalance and certain inflammatory conditions of the joint may also play a role. Uncorrected CrCL deficiencies have been associated with meniscal damage and degenerative joint diseases such as osteoarthritis. TTA is a surgical procedure designed to correct CrCL-deficient stifles. The objective of the TTA is to advance the tibial tuberosity, which changes the angle of the patellar ligament to neutralize the tibiofemoral shear force during weight bearing. A microsagittal saw is used to cut the tibial tuberosity free, a special titanium cage is then used to advance it, and a titanium plate holds the tibial tuberosity in position. By neutralizing the shear forces in the stifle caused by a ruptured or weakened CrCL, the joint becomes more stable without compromising joint congruency.
TTA appears to be a less invasive procedure than some other techniques for stabilizing the deficient cranial cruciate ligament, such as TPLO (Tibial Plateau Leveling Osteotomy) and TWO (Tibial Wedge Osteotomy), as TTA does not disrupt the primary loading axis of the tibia. Since KYON first developed the TTA procedure, it has pioneered a new, less invasive version of the procedure known as TTA-II. This new procedure delivers the same TTA outcomes with less trauma, fewer implants, a simplified technique and reduced cost.
Recently, TR BioSurgical has developed a bioscaffold to be used for veterinary osteotomies as a substitute for autologous cancellous bone grafting. In 2012, TTA RAPID was introduced by the German manufacturer RITA LEIBINGER Medical GmbH & Co. KG in cooperation with the University of Ghent, Belgium. The TTA RAPID implant is a biocompatible sponge-like construction which combines a wedge cage with a plate on top, so that only one implant is needed for the whole TTA surgery. It is called "rapid" because the implantation is very quick, easy to learn and offers high stability.
The surgery is based on the Maquet-Hole-Technique.
Alternative procedures:
Tibial-plateau-leveling osteotomy, Tightrope CCL, triple tibial osteotomy, Simitri Stable in Stride, cranial tibial wedge osteotomy | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multimedia studies**
Multimedia studies:
Multimedia studies is an interdisciplinary field of academic discourse focused on the understanding of technologies and cultural dimensions of linking traditional media sources with ones based on new media to support social systems.
History:
Multimedia studies as a discipline came out of the need for media studies to be made relevant to the new world of CD-ROMs and hypertext in the 1990s. Revolutionary books like Jakob Nielsen's Hypertext and Hypermedia laid the foundations for understanding multimedia alongside traditional cognitive science and interface design issues. Software like Authorware Attain, now owned by Adobe, made the design of multimedia systems accessible to those unskilled in programming and became major applications by the end of the 1990s. Recent challenges The Internet age that has been growing since the launch of Windows 98 has brought new challenges for the discipline, including developing new models and rules for the World Wide Web. Areas such as usability have had to develop specific guidelines for website design, and traditional concepts like genre, narrative theory, and stereotypes have had to be updated to take account of cyberculture. Cultural aspects of multimedia studies have been conceptualised by authors such as Lev Manovich, Arturo Escobar and Fred Forest. The increase in Internet trolling and so-called Internet addiction has thrown up new problems. Concepts like emotional design and affective computing are driving multimedia studies research to consider ways of becoming more seductive and able to take account of the needs of users.
Media studies 2.0 Some academics, such as David Gauntlett, have preferred the neologism, "Media Studies 2.0" to multimedia studies, in order to give it the feel of other fields like Web 2.0 and Classroom 2.0. The media studies 2.0 neologism has received strong criticism. Andy Medhurst at Sussex University for instance wrote of the media studies 2.0 neologism introduced by David Gauntlett, "Isn't it odd that whenever someone purportedly identifies a new paradigm, they see themselves as already a leading practitioner of it?"
Issues and concepts:
Media ecology and information ecology Cybercultures and new media Online communities and virtual communities Internet trolling and Internet addiction Captology
Universities offering degrees in multimedia studies:
University of the Philippines Open University CIIT College of Arts and Technology iAcademy Arizona State University (BS multimedia writing & technical communication) Aston University (BSc multimedia computing) Birmingham City University (BSc multimedia technology) University of East London (BSc multimedia studies) Glyndwr University (BA graphic design and multimedia) University of Mary Hardin-Baylor (BS multimedia & information technology) Middlesex University (BSc multimedia computing) Robert Gordon University (BSc multimedia development) University of Westminster (multimedia computing) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Haemolacria**
Haemolacria:
Haemolacria or hemolacria is a physical condition that causes a person to produce tears that are partially composed of blood.
Description:
Haemolacria can manifest as tears ranging from merely red-tinged to appearing to be entirely made of blood, and may also be indicative of a tumor in the lacrimal apparatus. It is most often provoked by local factors such as bacterial conjunctivitis, environmental damage or injuries. Acute haemolacria can occur in fertile women and seems to be induced by hormones, similarly to what happens in endometriosis.
Cases:
Twinkle Dwivedi From Lucknow, India, Dwivedi presented with a rare condition that appeared to cause her to bleed spontaneously from her eyes and other parts of her body without any visible wounds. Dwivedi was the subject of numerous medical research studies and TV shows, including Body Shock and a National Geographic documentary.
In the absence of a medical explanation for her condition, some religious explanations have been posed. It was suggested that she could have had an unknown disease, but more skeptical views hypothesized that the case might be explained by Münchausen syndrome by proxy, whereby her mother, seemingly the only one to witness her bleeding actually starting, was fabricating the story and somehow inducing the effect on the girl. Sanal Edamaruku observed in 2010 that the pattern seemed to match her menstrual cycle and believed that she was faking the symptoms. Calvino Inman, aged 22, was reported to weep tears of blood five times a day. Rashida Khatoon, from India, was reportedly crying blood up to five times a day in 2009, and fainting with every weeping. Débora Santos, age 17, from Brazil, was reported to have cried tears of blood several times in her life. Yaritza Oliva (not officially diagnosed), age 21, from Chile, was reported to have cried tears of blood several times a day in 2013. Linnie Ikeda (not officially diagnosed), age 25, from Waikele, Hawai'i on the island of 'O'ahu, was diagnosed after 2008 with Gardner–Diamond syndrome for her random bruising, but in 2010 had symptoms of splitting of her tongue, which would bleed profusely; in 2011, Ikeda started bleeding from her eyes. Marnie-Rae Harvey (not officially diagnosed), age 17, from the United Kingdom, initially started coughing up blood in 2013, and the bleeding has persisted in her tears since 2015. Sakhina Khatun, from Bhagwangola, Murshidabad, West Bengal, India, was reportedly crying blood many times a day in 2019, and fainting with every weeping.
In popular culture:
French author Marquis de Sade claimed to have "wept tears of blood" after his novel The 120 Days of Sodom was thought to have been lost in July 1789. However, the work was later recovered. It is unclear whether de Sade actually suffered haemolacria or was just using the phrase as a figure of speech.
Le Chiffre, the main antagonist of the 2006 film Casino Royale, suffers from haemolacria.
On the television series Manifest, Dr. Saanvi Bahl suffered from hemolacria and erratic blood pressure in the season 3 episode Bogey. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Code (set theory)**
Code (set theory):
In set theory, a code for a hereditarily countable set x ∈ Hℵ1 is a set E ⊂ ω×ω such that there is an isomorphism between (ω, E) and (X, ∈), where X is the transitive closure of {x}. If X is finite (with cardinality n), then use n×n instead of ω×ω and (n, E) instead of (ω, E).
According to the axiom of extensionality, the identity of a set is determined by its elements. And since those elements are also sets, their identities are determined by their elements, etc. So if one knows the element relation restricted to X, then one knows what x is. (We use the transitive closure of {x} rather than of x itself to avoid confusing the elements of x with elements of its elements, and so on.) A code includes the information identifying x and also information about the particular injection from X into ω which was used to create E. The extra information about the injection is non-essential, so there are many codes for the same set which are equally useful.
So codes are a way of mapping Hℵ1 into the powerset of ω×ω. Using a pairing function on ω (such as (n, k) ↦ (n² + 2·n·k + k² + n + 3·k)/2), we can map the powerset of ω×ω into the powerset of ω. And we can map the powerset of ω into the Cantor set, a subset of the real numbers. So statements about Hℵ1 can be converted into statements about the reals. Therefore, Hℵ1 ⊂ L(R).
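The flattening step can be illustrated with a minimal sketch using the pairing function quoted above; the finite example at the end (coding x = {∅} with the arbitrary injection 0 ↦ {∅}, 1 ↦ ∅) is chosen only for illustration.

```python
def pair(n: int, k: int) -> int:
    # The pairing function quoted above: (n, k) -> (n^2 + 2nk + k^2 + n + 3k)/2,
    # which equals (n + k)(n + k + 1)/2 + k (the Cantor pairing function).
    return (n * n + 2 * n * k + k * k + n + 3 * k) // 2

def encode_relation(E):
    # Map a (finite) relation E ⊂ ω×ω to a subset of ω.
    return {pair(n, k) for (n, k) in E}

# Code for x = {∅}: the transitive closure is X = {{∅}, ∅}. Choosing the
# injection 0 ↦ {∅}, 1 ↦ ∅, the membership fact ∅ ∈ {∅} becomes the single
# pair (1, 0), i.e. E = {(1, 0)}.
E = {(1, 0)}
print(encode_relation(E))  # {1}
```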
Codes are useful in constructing mice. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**S11 (classification)**
S11 (classification):
S11, SB11, SM11 are disability swimming classifications for blind swimmers.
Sport:
This classification is for swimming. In the classification title, S represents Freestyle, Backstroke and Butterfly strokes. SB means breaststroke. SM means individual medley. Jane Buckley, writing for the Sporting Wheelies, describes the swimmers in this classification as being: "unable to see at all and are considered totally blind (see IBSA B1 – appendix). Swimmers must wear blackened goggles if they swim in this class. They will also require someone to tap them when they are approaching a wall."
Getting classified:
Internationally, the classification is done by the International Blind Sports Association. In Australia, to be classified in this category, athletes contact the Australian Paralympic Committee or their state swimming governing body. In the United States, classification is handled by the United States Paralympic Committee on a national level. The classification test has three components: "a bench test, a water test, observation during competition." American swimmers are assessed by four people: a medical classifier, two general classifiers and a technical classifier.
At the Paralympic Games:
For the 2016 Summer Paralympics in Rio, the International Paralympic Committee had a zero classification at the Games policy. This policy was put into place in 2014, with the goal of avoiding last minute changes in classes that would negatively impact athlete training preparations. All competitors needed to be internationally classified with their classification status confirmed prior to the Games, with exceptions to this policy being dealt with on a case-by-case basis.
Competitions:
For this classification, organisers of the Paralympic Games have the option of including the following events on the Paralympic programme: 50m, 100m and 400m Freestyle, 100m Backstroke, 100m Breaststroke, 100m Butterfly, 200m Individual Medley, and 4 × 100 m Freestyle Relay and 4 × 100 m Medley Relay.
Records:
As of February 2013, in the S11 50 m Freestyle Long Course, the men's world record is held by Yang Bozan and the women's world record is held by Cecilia Camellini. In the S11 400 m Freestyle Long Course, the men's world record is held by the American John Morgan and the women's world record is held by the American Anastasia Pagonis.
Competitors:
Swimmers who have competed in this classification include Alexander Chekurov, Enhamed Enhamed and Junichi Kawai, who all won medals in their class at the 2008 Paralympics. American swimmers who have been classified by the United States Paralympic Committee as being in this class include Katie Pavlacka, Rio Popper, Julianna Raiche and Rylie Robinson. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cervicitis**
Cervicitis:
Cervicitis is inflammation of the uterine cervix. Cervicitis in women has many features in common with urethritis in men and many cases are caused by sexually transmitted infections. Non-infectious causes of cervicitis can include intrauterine devices, contraceptive diaphragms, and allergic reactions to spermicides or latex condoms.
Cervicitis affects over half of all women during their adult life. Cervicitis may ascend and cause endometritis and pelvic inflammatory disease (PID). Cervicitis may be acute or chronic.
Symptoms and signs:
Cervicitis may have no symptoms. If symptoms do manifest, they may include: abnormal vaginal bleeding after intercourse or between periods; unusual gray, white, or yellow vaginal discharge; painful sexual intercourse; pain in the vagina; pressure or heaviness in the pelvis; and frequent, painful urination.
Causes:
Cervicitis can be caused by any of a number of infections, of which the most common are chlamydia and gonorrhea, with chlamydia accounting for approximately 40% of cases. Other causes include Trichomonas vaginalis, herpes simplex virus, and Mycoplasma genitalium. While sexually transmitted infections (STIs) are the most common cause of cervicitis, there are other potential causes as well. These include vaginitis caused by bacterial vaginosis or Trichomonas vaginalis; a device inserted into the pelvic area (i.e. a cervical cap, IUD, pessary, etc.); an allergy to spermicides or latex in condoms; or exposure to a chemical, for example while douching. Inflammation can also be idiopathic, where no specific cause is found. While IUDs do not cause cervicitis, active cervicitis is a contraindication to placing an IUD. If a person with an IUD develops cervicitis, it usually does not need to be removed, if the person wants to continue using it. There are also certain behaviors that can place individuals at a higher risk for contracting cervicitis. High-risk sexual behavior, a history of STIs, many sexual partners, sex at an early age, and sexual partners who engage in high-risk sexual behavior or have had an STI can increase the likelihood of contracting cervicitis.
Diagnosis:
To diagnose cervicitis, a clinician will perform a pelvic exam. This exam includes a speculum exam with visual inspection of the cervix for abnormal discharge, which is usually purulent or bleeding from the cervix with little provocation. Swabs can be used to collect a sample of this discharge for inspection under a microscope and/or lab testing for gonorrhea, chlamydia, and Trichomonas vaginalis. A bimanual exam in which the clinician palpates the cervix to see if there is any associated pain should be done to assess for pelvic inflammatory disease.
Prevention:
The risk of contracting cervicitis from STIs can be reduced by using condoms during every sexual encounter. Condoms are effective against the spread of STIs like chlamydia and gonorrhea that cause cervicitis. Also, being in a long-term monogamous relationship with an uninfected partner can lower the risk of an STI. Ensuring that foreign objects like tampons are properly placed in the vagina and following instructions on how long to leave them inside, how often to change them, and how often to clean them can reduce the risk of cervicitis. In addition, avoiding potential irritants like douches and deodorant tampons can prevent cervicitis.
Treatment:
Non-infectious causes of cervicitis are primarily treated by eliminating or limiting exposure to the irritant. Antibiotics, usually azithromycin or doxycycline, or antiviral medications are used to treat infectious causes. Women at increased risk of sexually transmitted infections (i.e., less than 25 years of age and a new sexual partner, a sexual partner with other partners, or a sexual partner with a known sexually transmitted infection) should be treated presumptively for chlamydia and possibly gonorrhea, particularly if follow-up care cannot be ensured or diagnostic testing is not possible. For lower-risk women, deferring treatment until test results are available is an option. To reduce the risk of reinfection, women should abstain from sexual intercourse for seven days after treatment is started. Also, sexual partners (within the last sixty days) of anyone with infectious cervicitis should be referred for evaluation or treated through expedited partner therapy (EPT). EPT is the process by which a clinician treats the sexual partner of a patient diagnosed with a sexually transmitted infection without first meeting or examining the partner. Sexual partners should also avoid sexual intercourse until they and their partners are adequately treated. Untreated cervicitis is also associated with an increased susceptibility to HIV infection. Women with infectious cervicitis should be tested for other sexually transmitted infections, including HIV and syphilis. Cervicitis should be followed up: women with a specific diagnosis of chlamydia, gonorrhea, or trichomonas should see a clinician three months after treatment for repeat testing, because they are at higher risk of getting reinfected, regardless of whether their sex partners were treated. Treatment in pregnant women is the same as in those who are not pregnant. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GABBR2**
GABBR2:
Gamma-aminobutyric acid (GABA) B receptor, 2 (GABAB2) is a G-protein coupled receptor subunit encoded by the GABBR2 gene in humans.
Function:
B-type receptors for the neurotransmitter GABA (gamma-aminobutyric acid) inhibit neuronal activity through G protein-coupled second-messenger systems, which regulate the release of neurotransmitters and the activity of ion channels and adenylyl cyclase. See GABBR1 (MIM 603540) for additional background information on GABA-B receptors.[supplied by OMIM]
Interactions:
GABBR2 has been shown to interact with GABBR1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Momentum transfer**
Momentum transfer:
In particle physics, wave mechanics and optics, momentum transfer is the amount of momentum that one particle gives to another particle. It is also called the scattering vector as it describes the transfer of wavevector in wave mechanics.
In the simplest example of the scattering of two colliding particles with initial momenta p_i1 and p_i2, resulting in final momenta p_f1 and p_f2, the momentum transfer (a vector) is given by q = p_i1 − p_f1 = p_f2 − p_i2, where the last identity expresses momentum conservation. Momentum transfer is an important quantity because Δx = ℏ/|q| is a better measure for the typical distance resolution of the reaction than the momenta themselves.
Wave mechanics and optics:
A wave has momentum p = ℏk and is a vectorial quantity. The difference between the momentum of the scattered wave and that of the incident wave is called the momentum transfer. The wave number k is the magnitude of the wave vector k = p/ℏ and is related to the wavelength by k = 2π/λ. Momentum transfer is given in wavenumber units in reciprocal space: Q = k_f − k_i. Diffraction The momentum transfer plays an important role in the evaluation of neutron, X-ray and electron diffraction for the investigation of condensed matter. Laue–Bragg diffraction occurs on the atomic crystal lattice and conserves the wave energy, and thus is called elastic scattering; the wave numbers of the final and incident particles, k_f and k_i respectively, are equal, and only the direction changes, by a reciprocal lattice vector G = Q = k_f − k_i, with the relation to the lattice spacing G = 2π/d. As momentum is conserved, the transfer of momentum occurs to crystal momentum.
The presentation in reciprocal space is generic and does not depend on the type of radiation and wavelength used, but only on the sample system, which allows results obtained from many different methods to be compared. Some established communities such as powder diffraction employ the diffraction angle 2θ as the independent variable, which worked fine in the early years when only a few characteristic wavelengths such as Cu-Kα were available. The relationship to Q-space is Q = 2k sin(θ) = (4π/λ) sin(θ), with k = 2π/λ, which basically states that larger 2θ corresponds to larger Q. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
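As a small worked illustration of the last relation, the following sketch converts a diffraction angle and wavelength into the momentum transfer Q; the Cu-Kα wavelength in the example is approximate, and the choice of Å units is an assumption for illustration.

```python
import math

def momentum_transfer(two_theta_deg: float, wavelength: float) -> float:
    """Elastic momentum transfer Q = 4*pi*sin(theta)/lambda.
    two_theta_deg: scattering angle 2θ in degrees.
    wavelength: wavelength λ (e.g. in Å); Q is returned in reciprocal units."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength

# Example: Cu-Kα radiation (λ ≈ 1.5406 Å) scattered at 2θ = 30°.
print(momentum_transfer(30.0, 1.5406))  # ≈ 2.11 (Å⁻¹)
```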
**Gallocyanin stain**
Gallocyanin stain:
The gallocyanin stain, also known as the gallocyanin-chromalum stain, is a stain of the oxazine group for total nucleic acids. It is prepared from gallocyanin and is an ideal method for numerous slides that need to be stained serially, equivalently, and reproducibly. Structures containing basophilic compounds take on a bluish color.
History:
It has been known since the early work of Einarson (1932) that the gallocyanin dye worked well for nucleotide constituents. Gersch and colleagues at Chicago are often credited with the earliest efforts of using gallocyanin for staining.
Sandritter demonstrated that a stoichiometric relationship occurs between intensity of staining and quantity of nucleic acid present.
Function:
Its method of binding and specificity are still not completely known. However, it is thought that gallocyanin-Cr(H2O)4 selectively binds to nucleic acid phosphate groups, particularly within a pH range of 1.5-1.75. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dermal equivalent**
Dermal equivalent:
The dermal equivalent, also known as dermal replacement or neodermis, is an in vitro model of the dermal layer of skin. There is no single way of forming a dermal equivalent; however, the first dermal equivalent was constructed by seeding dermal fibroblasts into a collagen gel. This gel may then be allowed to contract as a model of wound contraction. This collagen gel contraction assay may be used to screen for treatments which promote or inhibit contraction and thus affect the development of a scar. Other cell types may be incorporated into the dermal equivalent to increase the complexity of the model. For example, keratinocytes may be seeded on the surface to create a skin equivalent, or macrophages may be incorporated to model the inflammatory phase of wound healing. A number of commercial dermal equivalents with different compositions and development methods are available. These include Integra, AlloDerm, and Dermagraft, among others.
Purpose:
Autotransplantation has been common practice for treating individuals who need skin transplants. However, patients with serious injuries such as burn victims often need repeated grafts or transplants, leading to numerous problems including limited supply of skin, preservation difficulties, and the possibility of disease transmission. This prompted the development of various techniques to create artificial skin, including dermal equivalents.
Now, the use of dermal equivalents has expanded from burn wounds to other areas such as various reconstructive surgeries and treatment of chronic wounds.
Risks There are potential risks in the application of any dermal equivalent, as with any skin grafting or skin substitution technique. These concerns include but are not limited to a negative immune response, possible infection, slow healing, pain, and scarring.
History:
The development of artificial skin and dermis began in the 20th century. It was prompted by the discovery of the ability to isolate and culture cells in vitro, first demonstrated in 1907 by American embryologist Ross Granville Harrison, who isolated and grew embryonic frog tissues in his laboratory. In 1975, keratinocytes, the cells that account for the majority of epidermal skin cells, were first isolated and successfully cultured in vitro by James G. Rheinwald and Howard Green. Afterwards, in 1981, a bilayer artificial skin, or dermal graft, was developed by John F. Burke, Ioannis Yannas, and other researchers, which was successful in covering "physiologically close to 60% of the body surface." Burke's dermal graft was one of the earliest developments of the dermal equivalent, or "neodermis". Years later, Integra artificial skin, now called Integra Dermal Regeneration Template (IDRT) by Integra LifeSciences, was developed from Burke et al.'s innovation. It became the first commercial product approved by the FDA for dermal replacements and was listed as one of the "Significant Medical Device Breakthroughs" in 1996.
Commercial products and applications:
Dermal equivalents vary in how they are developed and what they are used for. The following three are among the most commonly reviewed and assessed dermal equivalents.
Integra The initial research on the dermal equivalent leading to the Integra product resulted in a bilayer structure consisting of a dermal portion and an epidermal portion. The dermal portion is composed of bovine hide collagen and chondroitin 6-sulfate that is crosslinked with glutaraldehyde. The epidermal portion is composed of Silastic covering the dermis. For application, the bilayer structure is placed on the wound after removal of the eschar and left for several days. Then, the epidermal layer is removed and replaced with artificial epidermis. The dermal equivalent, or neodermis layer, is not removed, as it is suitable for growth of cells and vessels. The two-layer process, however, may potentially lead to an infection due to any unwanted accumulation between the layers. The primary use of Integra was for burn victims who required skin grafts.
Commercial products and applications:
Integra Dermal Regeneration Template Formerly known as Integra artificial skin, Integra Dermal Regeneration Template, or IDRT, was the first FDA approved product for dermal replacements. The Integra Dermal Regeneration Template’s bilayer structure is composed of bovine tendon collagen and chondroitin-6-sulfate for the dermal layer, and polysiloxane for the epidermal layer. The polysiloxane epidermal layer is semipermeable, allowing for controlled water vapor loss, flexible anti-bacterial support of the wound, and mechanical strength for the dermal equivalent. The dermal layer scaffold promotes vascularization and generation of a neodermis. The method of application is the same as that of its predecessor. IDRT carries low risks of immunogenic response as well as of disease transmission.
Commercial products and applications:
AlloDerm AlloDerm is the first type of acellular dermal matrix (ADM); it is derived from cadaveric skin and consists of the collagen fiber network remaining after removal of the epidermal layer and cellular components. It is widely used in dental surgeries for gingival grafting, abdominal hernia repair, oculoplastic and orbital surgeries, and breast surgeries. Due to its acellular structure, there is no immunogenic response caused by the application of AlloDerm.
Commercial products and applications:
Dermagraft Dermagraft is a human fibroblast–derived dermal replacement. It is derived from neonatal dermal fibroblasts implanted into a bioabsorbable polyglactin mesh scaffold along with extracellular matrix proteins that are secreted by the fibroblasts. It can promote re-epithelialization; however, there is a potential for an antigenic response. Dermagraft is mainly used for the treatment of chronic wounds such as various ulcers including diabetic foot ulcers and venous foot ulcers. It received premarket approval from the FDA in 2001 for the treatment of diabetic foot ulcers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Antimony oxychloride**
Antimony oxychloride:
Antimony oxychloride, known since the 15th century, has been known by a plethora of alchemical names. Since the compound functions as both an emetic and a laxative, it was originally used as a purgative.
History:
Its production was first described by Basil Valentine in Currus Triumphalis Antimonii. In 1659, Johann Rudolf Glauber gave a relatively exact chemical interpretation of the reaction.
Vittorio Algarotti introduced the substance into medicine, and derivatives of his name (algarot, algoroth) were associated with this compound for many years.
The exact composition was unknown for a very long time; it was variously suggested that the substance was a mixture of antimony trichloride and antimony oxide, or pure SbOCl. Today the hydrolysis of antimony trichloride is understood: first the oxychloride SbOCl is formed, which later forms Sb4O5Cl2.
Natural occurrence:
Neither SbOCl nor the latter compound occur naturally. However, onoratoite is a known Sb-O-Cl mineral, its composition being Sb8Cl2O11.
Alternative historical names:
mercurius vitæ ("mercury of life") powder of algaroth algarel Pulvis angelicus.
Synthesis:
Dissolving antimony trichloride in water yields antimony oxychloride: SbCl3 + H2O → SbOCl + 2 HCl | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
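The second hydrolysis step mentioned above, in which SbOCl converts to Sb4O5Cl2, can plausibly be written as the following illustrative balanced equation (not given in the source, but consistent with the compositions stated above): 4 SbOCl + H2O → Sb4O5Cl2 + 2 HCl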
**Rob Galbraith**
Rob Galbraith:
Rob Galbraith is a photographer and photojournalism teacher who became well-known as a photography writer with his Digital Photography Insights (DPI) website, known for its memory card benchmarks and its analysis of the Canon 1D autofocus system.In 2012 Rob Galbraith announced that he would be focusing on his photojournalism teaching job and thus he would cease posting updates to his website. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Debian-Installer**
Debian-Installer:
Debian-Installer is a system installer designed for the Debian Linux distribution. It originally appeared in the Debian release 3.1 (Sarge), released on June 6, 2005, although the first release of a Linux distribution that used it was Skolelinux (Debian-Edu) 1.0, released in June 2004. It is also one of two official installers available for Ubuntu, the other being called Ubiquity (itself based on parts of debian-installer) which was introduced in Ubuntu 6.06 (Dapper Drake).
Debian-Installer:
It makes use of cdebconf (a re-implementation of debconf in C) to perform configuration at install time.
Originally, only a text-mode front-end (based on ncurses) was supported. A graphical front-end (using GTK over DirectFB) was first introduced in Debian 4.0 (Etch). Since Debian 6.0 (Squeeze), it runs over Xorg instead of DirectFB.
debootstrap:
debootstrap is software which allows installation of a Debian base system into a subdirectory of another, already installed operating system. It needs access to a Debian repository and doesn't require an installation CD. It can also be installed and run from another operating system, or used for "cross-debootstrapping", i.e. building a rootfs for a machine of a different architecture, for instance OpenRISC. There is also a largely equivalent version written in C – cdebootstrap, which is used in debian-installer. debootstrap can be used to install Debian in a system without using an installation disk, but can also be used to run a different Debian flavor in a chroot environment. This way it is possible to create a full (minimal) Debian installation which can be used for testing purposes, or for building packages in a "clean" environment (e.g., as pbuilder does).
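As a rough, hedged illustration of the debootstrap workflow described above, the sketch below drives the tool from Python via subprocess and then enters the resulting directory with chroot. The suite name, target path, mirror URL, and the use of the --arch option are assumptions based on common debootstrap usage and should be checked against the debootstrap documentation.

```python
import subprocess

# Assumed typical invocation: debootstrap [--arch ARCH] SUITE TARGET [MIRROR]
# Suite name, target directory, and mirror URL below are illustrative placeholders.
def bootstrap_debian(suite="stable", target="/srv/debian-chroot",
                     mirror="http://deb.debian.org/debian", arch=None):
    cmd = ["debootstrap"]
    if arch:
        cmd += ["--arch", arch]      # e.g. a foreign architecture for cross-debootstrapping
    cmd += [suite, target, mirror]
    subprocess.run(cmd, check=True)  # needs root privileges and network access

def enter_chroot(target="/srv/debian-chroot"):
    # Run a shell inside the freshly bootstrapped base system.
    subprocess.run(["chroot", target, "/bin/bash"], check=True)

if __name__ == "__main__":
    bootstrap_debian()
    enter_chroot()
```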
Features:
Set language Select location Configure keyboard Configure network Setup users and passwords Configure clock Partition disk Create partition Format device LVM/Cryptsetup Install system base Configure package manager Configure mirrorlist Configure bootloader | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FAIRE-Seq**
FAIRE-Seq:
FAIRE-Seq (Formaldehyde-Assisted Isolation of Regulatory Elements) is a method in molecular biology used for determining the sequences of DNA regions in the genome associated with regulatory activity. The technique was developed in the laboratory of Jason D. Lieb at the University of North Carolina, Chapel Hill. In contrast to DNase-Seq, the FAIRE-Seq protocol doesn't require the permeabilization of cells or isolation of nuclei, and can analyse any cell type. In a study of seven diverse human cell types, DNase-seq and FAIRE-seq produced strong cross-validation, with each cell type having 1-2% of the human genome as open chromatin.
Workflow:
The protocol is based on the fact that the formaldehyde cross-linking is more efficient in nucleosome-bound DNA than it is in nucleosome-depleted regions of the genome. This method then segregates the non-cross-linked DNA that is usually found in open chromatin, which is then sequenced. The protocol consists of cross-linking, phenol extraction, and sequencing of the DNA in the aqueous phase.
Workflow:
FAIRE FAIRE uses the biochemical properties of protein-bound DNA to separate nucleosome-depleted regions in the genome. Cells will be subjected to cross-linking, ensuring that the interactions between nucleosomes and DNA are fixed. After sonication, the fragmented and fixed DNA is separated using a phenol-chloroform extraction. This method creates two phases, an organic and an aqueous phase. Due to their biochemical properties, the DNA fragments cross-linked to nucleosomes will preferentially sit in the organic phase. Nucleosome-depleted or ‘open’ regions, on the other hand, will be found in the aqueous phase. By specifically extracting the aqueous phase, only nucleosome-depleted regions will be purified and enriched.
Workflow:
Sequencing FAIRE-extracted DNA fragments can be analyzed in a high-throughput way using next-generation sequencing techniques. In general, libraries are made by ligating specific adapters to the DNA fragments, which allow them to cluster on a platform and be amplified; the DNA sequences are then read in parallel for millions of fragments.
Depending on the size of the genome FAIRE-seq is performed on, a minimum number of reads is required to achieve appropriate coverage of the data, ensuring that a proper signal can be determined. In addition, a reference or input genome, which has not been cross-linked, is often sequenced alongside to determine the level of background noise.
Note that the extracted FAIRE fragments can be quantified by an alternative method, quantitative PCR. However, this method does not allow a genome-wide/high-throughput quantification of the extracted fragments.
Sensitivity:
There are several aspects of FAIRE-seq that require attention when analysing and interpreting the data. For one, FAIRE-seq has been reported to have higher coverage at enhancer regions than at promoter regions, in contrast to the alternative method DNase-seq, which is known to show higher sensitivity towards promoter regions. In addition, FAIRE-seq has been reported to show a preference for internal introns and exons. In general, FAIRE-seq data are also believed to display a higher background level, making it a less sensitive method.
Computational analysis:
In a first step FAIRE-seq data are mapped to the reference genome of the model organism used.
Computational analysis:
Next, the identification of genomic regions with open chromatin is done using a peak calling algorithm. Different tools offer packages to do this (e.g. ChIPOTle, ZINBA and MACS2). ChIPOTle uses a sliding window of 300 bp to identify statistically significant signals. In contrast, MACS2 identifies the enriched signal by combining the parameter callpeak with other options like 'broad', 'broad cutoff', 'no model' or 'shift'. ZINBA is a generic algorithm for detection of enrichment in short-read datasets. It thus helps in the accurate detection of signal in complex datasets with a low signal-to-noise ratio.
Computational analysis:
BedTools is used to merge enriched regions residing close to each other into COREs (clusters of open regulatory elements). This helps in the identification of chromatin-accessible regions and gene-regulation patterns that would otherwise be undetectable, given the lower resolution FAIRE-seq often provides.
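As an illustration of the merging step described above (a simplified stand-in for what BedTools does, not its actual implementation), the following sketch merges enriched peaks that lie within a chosen distance of each other into CORE-like clusters. The peak coordinates and the gap threshold are made-up examples.

```python
def merge_peaks(peaks, max_gap=200):
    """Merge (chrom, start, end) peaks closer than max_gap bp into clusters (CORE-like regions)."""
    merged = []
    for chrom, start, end in sorted(peaks):
        if merged and merged[-1][0] == chrom and start - merged[-1][2] <= max_gap:
            # Extend the previous cluster instead of opening a new one.
            prev = merged[-1]
            merged[-1] = (chrom, prev[1], max(prev[2], end))
        else:
            merged.append((chrom, start, end))
    return merged

# Toy example: three nearby peaks collapse into one cluster, the distant one stays separate.
peaks = [("chr1", 1000, 1400), ("chr1", 1500, 1900), ("chr1", 2050, 2300), ("chr1", 10000, 10500)]
print(merge_peaks(peaks))
```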
Data is typically visualized as tracks (e.g. bigWig) and can be uploaded to the UCSC genome browser. The major limitation of this method, i.e. the low signal-to-noise ratio compared to other chromatin accessibility assays, makes the computational interpretation of these data very difficult.
Alternative methods:
There are several methods that can be used as an alternative to FAIRE-seq. DNase-seq uses the ability of the DNase I enzyme to cleave free/open/accessible DNA to identify and sequence open chromatin. The subsequently developed ATAC-seq employs the Tn5 transposase, which inserts specified fragments or transposons into accessible regions of the genome to identify and sequence open chromatin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gifted Education Programme (Singapore)**
Gifted Education Programme (Singapore):
The Gifted Education Programme (GEP) is an academic programme in Singapore, initially designed to identify the top 0.25% (later expanded to 0.5%, then 1%) of students from each academic year with outstanding intelligence. The tests are based on verbal, mathematical and spatial abilities (as determined by two rounds of tests). Selected students will then be transferred to schools offering the GEP. GEP classes are designed to fit the students' learning ability, and may cover subjects in greater breadth and depth. The curriculum is designed by the Gifted Education Branch and eschews the use of textbooks for notes that have been prepared by GEP teachers. The programme has now been expanded to 1% of the students from each academic year.
History:
The Gifted Education Programme was first implemented in Singapore in 1984 amid some public concern. It was initiated by the Ministry of Education (MOE) in line with its policy under the New Education System to allow each student to learn at his/her own pace. The MOE has a commitment to ensure that the potential of each pupil is recognised, nurtured and developed. It was recognised that intellectually gifted pupils should be given more suitable classes to reach their full potential. From its inauguration in two primary schools and two secondary schools, the programme has now expanded to nine primary schools (as of October 2004) and was at its peak before the introduction of the Integrated Programme (IP).
Primary Schools offering GEP:
As of 2020, nine primary schools offer GEP.
Primary Schools offering GEP:
Anglo-Chinese School (Primary) Catholic High School (Primary) Henry Park Primary School Nan Hua Primary School Nanyang Primary School Raffles Girls' Primary School Rosyth School Saint Hilda's Primary School Tao Nan School Impact of the Integrated Programme In 2004, five secondary schools started implementing Integrated Programmes with their affiliated Junior Colleges, and are officially no longer offering the GEP. However, they still have programmes within their respective Integrated Programmes to cater to gifted students. While the secondary schools that have implemented the Integrated Programme remain generally unaffected by the change, Victoria School, which continued to offer the GEP, saw a drastic decrease in enrolment.
Primary Schools offering GEP:
Secondary Schools that are offering GEP, or SBGE The Gifted Education Programme came to a close in secondary schools in 2008 and was replaced by the School-Based Gifted Education (SBGE) programme.
All of the secondary schools that offer the SBGE are IP schools. There are generally two classes per cohort/year/level for SBGE students, but sometimes there may only be one class per cohort, depending on the cohort size.
Primary Schools offering GEP:
Anglo-Chinese School (Independent) Dunman High School Hwa Chong Institution Nanyang Girls' High School NUS High School of Mathematics and Science Raffles Girls School (Secondary) Raffles Institution. Beginning in 2006, the MOE started to phase out the secondary school GEP due to the impact of the IP. However, GEP pupils who do not wish to take up the Integrated Programme after 2008 can enroll in schools with school-based special programmes at Secondary One. Examples of such schools are Anglo-Chinese School (Independent), Catholic High School, Methodist Girls' School and St. Joseph's Institution.
Selection Process:
At Primary Three (P3), all students, except those who opt-out, will take the first round of admission tests, the Screening Test. About 10% of students identified based on the Screening Test results will be invited to participate in the second round, the Selection Test. Based on the Selection Test results, the top 1% of the cohort will be identified and invited to join the Gifted Education Programme, usually by November of that year.
Selection Process:
English and Mathematics papers are included as part of the Screening Test, while another two papers, General Ability I and General Ability II, are included in the Selection Test.
Selection Process:
Before 2003, there was a third round of testing to allow entry for pupils who missed the chance in P3, after the PSLE. This last round of testing was offered to students who achieved 3 or more A*s for the PSLE. Students who enrolled at this stage were referred to as Supplementary Intake students. However, this practice was discontinued in 2003. The IP schools and the new NUS High School, specialising in Mathematics and Science, opened up opportunities for more pupils who were not already part of the primary school GEP, thus, there were ample opportunities to join these schools and therefore no need for a supplementary exercise to select students for the GEP at secondary schools.
Progress in the Programme:
The pupils will have to study in this programme from Primary 4 to 6, and after that, the pupils can choose to continue studying in the programme only, in the Integrated Programme, or in the mainstream (not the GEP). Students also have a variety of top secondary schools to choose from depending on their PSLE results. Once the school is chosen, they will automatically enter the Express stream unless they choose otherwise.
Distinction:
Research Project Studies (RPS), starting in Primary 4, is a programme to teach skills needed in research. Individualized Study Options (ISO) is a compulsory programme for pupils in Primary 5, wherein pupils do research on a specific topic. The students are asked to choose their own projects in Primary Five under Teacher Mentors. The student-teacher ratio is normally from 4:1 to 5:1. The Study Options given were: Individualized Research Studies (IRS), in which students research a topic and present their findings; the InnoVation Programme (IvP, formerly IP), in which students invent or improve things to solve everyday problems; and Future Problem Solving (FPS), in which students solve future problems society may face. Pupils in the GEP have to take Social Studies as a graded subject; based on the mainstream textbook syllabus, they study the content in greater depth. Lessons in the GEP are conducted with no textbooks or workbooks, with the exception of Chinese and Higher Chinese; lessons are more discussion-, worksheet-, and project-oriented.
Distinction:
Pupils in GEP learn poetry and literature (A Single Shard in Primary 4, The Giver in Primary 5, and Friedrich in Primary 6) as part of the Concept Unit under the English Language subject.
A Wrinkle in Time was used as the literature book for Primary 5 students until 2014 when it was replaced with The Giver. The main purpose is to show students how a dystopian society functions.
For English, students have to do different process writings based on the genres they have studied, including mysteries and fairy tales.
Distinction:
In Primary 6, a graded Mathematics Alternative Assessment (Math AA) is given. The pupils will have to choose from six or seven projects that GEP branch officers in the Ministry of Education (MOE) create. These projects are individual and include research, a product to be made and reflections. They will also be required to do a biography unit, which comprises an oral assignment and a written assignment.
Integration with mainstream:
In an article in The Straits Times on 3 November 2007, the MOE announced its new scheme to "encourage" greater integration between GEP and mainstream students, to combat elitism and encourage socialisation. GEP students in the nine primary GEP centres would spend up to 50% of their lesson time with the top 2% to 5% of the cohort, or the top mainstream students. Non-core subjects such as art, music, and physical education are conducted with the mainstream cohort. The announcement of the integration provoked much buzz in the blogosphere. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**U6atac minor spliceosomal RNA**
U6atac minor spliceosomal RNA:
U6atac minor spliceosomal RNA is a non-coding RNA which is an essential component of the minor U12-type spliceosome complex. The U12-type spliceosome is required for removal of the rarer class of eukaryotic introns (AT-AC, U12-type).U6atac snRNA is proposed to form a base-paired complex with another spliceosomal RNA U4atac via two stem loop regions. These interacting stem loops have been shown to be required for in vivo splicing. U6atac is the functional analog of U6 spliceosomal RNA in the major U2-type spliceosomal complex. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alpha-Pyrrolidinobutiophenone**
Alpha-Pyrrolidinobutiophenone:
α-Pyrrolidinobutiophenone (α-PBP) is a stimulant compound developed in the 1960s which has been reported as a novel designer drug. It can be thought of as the homologue lying between the two better known drugs α-PPP and α-PVP.
Legality:
In the United States, it is a Schedule I controlled substance. Sweden's public health agency suggested classifying α-PBP as a hazardous substance on November 10, 2014. As of October 2015, α-PBP is a controlled substance in China. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ghafara**
Ghafara:
In Islamic context, Ghafara (غفر) (v. past tense) or maghfira (forgiveness) is one of three ways of forgiveness, as written in the Qur'an and one of Allah's characteristics. It is to forgive, to cover up (sins) and to remit (absolution). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chalazion**
Chalazion:
A chalazion (plural chalazia or chalazions) or meibomian cyst is a cyst in the eyelid, usually due to a blocked meibomian gland, typically in the middle of the eyelid, red, and not painful. They tend to come on gradually over a few weeks. A chalazion may occur following a stye or from hardened oils blocking the gland. The blocked gland is usually the meibomian gland, but can also be the gland of Zeis. A stye and cellulitis may appear similar. A stye, however, is usually more sudden in onset, painful, and occurs at the edge of the eyelid. Cellulitis is also typically painful. Treatment is initiated with warm compresses. In addition, antibiotic/corticosteroid eyedrops or ointment may be used. If this is not effective, injecting corticosteroids into the lesion may be tried. If large, incision and drainage may be recommended. While relatively common, the frequency of the condition is unknown. It is most common in people 30–50 years of age, and equally common in males and females. The term is from the Greek khalazion (χαλάζιον) meaning "small hailstone".
Signs and symptoms:
Painless swelling on the eyelid Eyelid tenderness, typically none to mild Increased tearing Heaviness of the eyelid Redness of conjunctiva Complications A large chalazion can cause astigmatism due to pressure on the cornea. As laser eye surgery involves shaping the cornea by burning parts of it away, weakening its structure, people can be left predisposed post-operatively to deformation of the cornea from small chalazia. Complications of corticosteroid injection include hypopigmentation and fat atrophy, which are less likely to occur with the conjunctival approach of injection. A chalazion that recurs in the same area may rarely be a symptom of sebaceous carcinoma.
Diagnosis:
A chalazion or meibomian cyst can sometimes be mistaken for a stye.
Differential diagnosis Sebaceous gland adenoma Sebaceous gland carcinoma Sarcoid granuloma Foreign body granuloma
Treatment:
General treatment Chalazia will often disappear without further treatment within a few months, and virtually all will resorb within two years. Healing can be facilitated by applying a moist warm compress to the affected eye for approximately 10–15 minutes, 4 times per day. This promotes opening, drainage, and healing by softening the hardened oil that is occluding the duct. In addition, it is helpful to scrub the lid margin (at the base of the eyelashes) with a washcloth and mild (baby) shampoo, which removes oily debris. Topical antibiotic eye drops or ointment (e.g., chloramphenicol or fusidic acid) are sometimes used for the initial acute infection, but are otherwise of little value in treating a chalazion. If they continue to enlarge or fail to settle within a few months, smaller lesions can be injected with a corticosteroid.
Treatment:
Larger ones can be surgically removed using local anesthesia. This is usually done from underneath the eyelid to avoid a scar on the skin. If the chalazion is located directly under the eyelid's outer tissue, however, an excision from above may be more advisable so as not to inflict any unnecessary damage on the lid itself. Eyelid epidermis usually mends well, without leaving any visible scar. Depending on the chalazion's texture, the excision procedure varies: while fluid matter can easily be removed under minimal invasion, by merely puncturing the chalazion and exerting pressure upon the surrounding tissue, hardened matter usually necessitates a larger incision, through which it can be scraped out. Any residual matter should be metabolized in the course of the subsequent healing process, generally aided by regular application of dry heat. The excision of larger chalazia may result in visible hematoma around the lid, which will wear off within three or four days, whereas the swelling may persist for longer. Chalazion excision is an ambulant treatment and normally does not take longer than fifteen minutes. Nevertheless, owing to the risks of infection and severe damage to the eyelid, such procedures should only be performed by a medical professional.
Treatment:
Chalazia may recur, and they will usually be biopsied to rule out the possibility of a tumour.
Antibiotic/corticosteroid eyedrops or ointment A limited course of topical antibiotic/corticosteroid combination eyedrops or ointment such as tobramycin/dexamethasone may be effective in treating a chalazion.
Treatment:
Surgery Chalazion surgery is a simple procedure that is generally performed as a day operation, and the person does not need to remain in the hospital for further medical care. The eyelid is injected with a local anesthetic, a clamp is put on the eyelid, then the eyelid is turned over, an incision is made on the inside of the eyelid, and the chalazion is drained and scraped out with a curette. A scar on the upper lid can cause discomfort as some people feel the scar as they blink. As surgery damages healthy tissue (e.g., by scarring tissue or possibly even causing blepharitis), given other options, less invasive treatment is preferable. Chalazion removal surgery is performed under local or general anesthesia. Commonly, general anesthesia is administered in children to make sure they stay still and no injury to the eye occurs. Local anesthesia is used in adults and it is applied with a small injection into the eyelid. The discomfort of the injection is minimized with the help of an anesthetic cream, which is applied locally.
Treatment:
The chalazion can be removed in two ways, depending on the size of cyst. Relatively small chalazia are removed through a small cut at the back of the eyelid. The surgeon lifts the eyelid to access the back of its surface and makes an incision of approximately 3mm just on top of the chalazion. The lump is then removed, and pressure is applied for a few minutes to stop any oozing of blood that may occur because of the operation. Surgery of small chalazia does not require stitches, as the cut is at the back of the eyelid and therefore the cut cannot be seen, and the cosmetic result is excellent.
Treatment:
Larger chalazia are removed through an incision in front of the eyelid. Larger chalazia usually push on the skin of the eyelid, and this is the main reason why doctors prefer removing them this way. The incision is not usually larger than 3mm and it is made on top of the chalazion. The lump is removed and then pressure is applied to the incision to prevent oozing. This type of surgery is closed with very fine stitches. They are hardly visible and are usually removed within a week after the surgery has been performed. Although chalazia are rarely dangerous, it is common to send the chalazion or part of it to a laboratory to screen for cancer. When surgery for a chalazion is considered, people who take aspirin or any other blood-thinning medications are advised to stop taking them one week prior to the procedure as they may lead to uncontrollable bleeding. In rare cases, people are kept overnight in the hospital after chalazion surgery. This includes cases in which complications occurred and the person needs to be closely monitored. In most cases, however, people are able to go home after the operation has ended.
Treatment:
The recovery process is easy and quite fast. Most people with chalazion experience some very minor discomfort in the eye, which can be easily controlled by taking painkilling medication. People are, however, recommended to avoid getting water in the eye for up to 10 days after surgery. They may wash, bathe, or shower, but they must be careful to keep the area dry and clean. Makeup may be worn after at least one month post-operatively. People are recommended to not wear contact lenses in the affected eye for at least eight weeks to prevent infections and potential complications. Commonly, people receive eye drops to prevent infection and swelling in the eye and pain medication to help them cope with the pain and discomfort in the eyelid and eye. One can use paracetamol (acetaminophen) rather than aspirin to control the pain. Also, after surgery, a pad and protective plastic shield are used to apply pressure on the eye in order to prevent leakage of blood after the operation; this may be removed 6 to 8 hours after the procedure. People who undergo chalazion surgery are normally asked to visit their eye surgeon for post-op follow-up three to four weeks after surgery has been performed. Chalazion surgery is a safe procedure and complications seldom occur. Serious complications that require another operation are also very rare. Among potential complications, there are infection, bleeding, or the recurrence of the chalazion.
Treatment:
Steroid injection Because the inflammatory cells of chalazia are sensitive to steroids, intralesional or subcutaneous injection of soluble steroids, commonly 0.1 to 0.2 ml of triamcinolone acetonide (TA) into the lesion's center one or two times, is one option. The success rate is in the 77% to 93% range. It carries a quite small risk of central retinal artery obstruction, focal depigmentation in dark-skinned patients, and inadvertent ocular penetration. It is considered a simple and effective treatment option, one with high success rates. It may give the same results as surgical treatment (I&C). Larger, long-standing lesions are best treated surgically. Considering the surgical risks, steroid injection is believed to be a safer procedure in marginal lesions and lesions close to the lacrimal punctum. One injection alone has a success rate of about 80%; if requested, a second injection can be given 1–2 weeks later.
Treatment:
Carbon dioxide laser Chalazion excision using a CO2 laser is also a safer procedure, with minimal bleeding and no eye patching required. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SoundStorm**
SoundStorm:
SoundStorm is an Nvidia brand for a SIP block integrating 5.1 surround sound technology found on the die of their nForce and nForce2 chipsets for x86 CPUs. It is also the name of a certification awarded by Nvidia for products complying with its specifications.
Certification:
The SoundStorm certification ensured that many manufacturers produced solutions with high quality sound output. To achieve SoundStorm certification, a motherboard had to use the nForce or nForce2 chipsets and also include the specified discrete outputs. It was also necessary to meet certain sound quality levels as tested by Dolby Digital sound labs.
At the time SoundStorm was the only available solution capable of outputting Dolby Digital Live, coveted in home theater PCs.
Hardware:
The SoundStorm SIP block is said to consist of a series of fixed-function and general-purpose processing units providing a combined total of reportedly 4 billion operations per second. A fully programmable, Motorola 56300-based digital signal processor (DSP) is provided for effects processing but with very limited support under DirectX on the PC.
Hardware:
The DSP on the APU was normally driven by code largely derived from the 3D audio middleware company Sensaura. The Sensaura middleware was also used by the Windows drivers of nearly every sound card and audio codec other than those by Creative. Unlike the usual software implementations of the Sensaura code, the SoundStorm solution ran the same code on a hardware DSP, which resulted in extremely low CPU usage. It was also capable of realtime Dolby Digital 5.1 encoding. Compared to other audio solutions of the day, the difference in CPU usage when running popular multimedia applications was as much as 10-20%. While the Audigy offers similar performance, it does so at a much higher price point, and only as a discrete add-in solution. The nForce2 APU was a purely digital component; motherboard manufacturers still had to use codec chips such as the ALC650 from Realtek for the audio output functions, including the necessary digital-to-analog conversion (DAC). After the demise of SoundStorm, codec chips such as the Realtek 850 became standard integrated audio solutions, with audio processing functions offloaded onto the host processor. As such, the quality of the device drivers is very important to ensure reasonably low host processor usage, without audio quality issues.
Drivers:
Since the SoundStorm solution was a general-purpose DSP where code was uploaded to the card by the device drivers at boot time, this made it easy to add new functionality to SoundStorm. However, it also meant that it was not possible to create third-party device drivers for the SoundStorm, since they did not have access to the DSP code. Linux drivers for the SoundStorm actually talk directly to the audio codec (like a RealTek ALC650), bypassing the APU completely and doing all audio calculations on the CPU and leaving the SoundStorm DSP idle.
History:
Video game consoles Reportedly, SoundStorm development was originally funded by Microsoft for use in the Xbox gaming console. At the time of writing, a second-generation chip had reportedly been developed, this time with funding from Sony, as part of the PlayStation 3 project. It was hinted that SoundStorm might make a return to the PC scene, possibly as part of a multimedia graphics card along the lines of the original NV1 card, rather than as a discrete or onboard solution. While there did appear to be plans for a discrete product at one point, this never materialised.
History:
Discontinuation Nvidia decided the cost of including the SoundStorm SIP block on the dies of their chipsets was too high, and the block was not included in the nForce3 and later chipsets.
Alternatives:
Other manufacturers have since produced standalone sound cards based on C-Media chips such as the CMI8788 which also provide Dolby Digital and DTS encoding features. These manufacturers include Turtle Beach and Auzentech. A software alternative is redocneXk, which provides real-time AC3 encoding comparable to SoundStorm or Creative's Audigy2 and later sound cards. However, early versions of these alternatives may still be lagging behind the SoundStorm in terms of reliability, ease of use, and CPU usage.
Alternatives:
In October 2013 AMD presented products with AMD TrueAudio. A block of DSPs to be used to offload calculations for 3D sound. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Leftovers**
Leftovers:
Leftovers are surplus foods remaining unconsumed at the end of a meal, which may be put in containers with the intention of eating later. Inedible remains like bones are considered waste, not leftovers. Depending on the situation, the amount of food, and the type of food, leftovers may be saved or thrown away.
Leftovers:
The use of leftovers depends on where the meal was eaten, the preferences of the diner, and the local culture. Leftovers from meals at home are often eaten later. This is facilitated by the private environment and convenience of airtight containers and refrigeration. People may eat leftovers directly from the refrigerator, reheat them, or use them as ingredients to make a new dish.
Leftovers:
At restaurants, uneaten food from meals is sometimes taken by diners for later consumption. In the United States, such food is put in a so-called "doggy bag", notionally to be fed to pets, whether or not that is actually its fate.
Leftover cuisine:
New dishes made from leftovers are common in world cuisine. People invented many such dishes before refrigeration and reliable airtight containers existed. Besides capturing nutrition from otherwise inedible bones, stocks and broths provide a base for leftover scraps too small to be a meal themselves. Casseroles, paella, fried rice, shepherd's pies, and pizza can also be used for this purpose, and may even have been invented as a means of reusing leftovers. Among American university students, leftover pizza itself has acquired particular in-group significance, to the extent that the USDA's Food Safety and Inspection Service offers, as its first tip under "Food Safety Tips for College Students" by Louisa Graham, a discussion of the considerable risks of eating unrefrigerated pizza. At some holiday meals, such as Christmas and Thanksgiving in the United States, it is customary to prepare much more food than necessary, specifically so the host can send leftovers home with guests. Cold turkey is archetypal in the United States as a Thanksgiving leftover, with turkey meat often reappearing in sandwiches, soups, and casseroles for several days after the feast.
Leftover portions:
Leftovers have had a major impact on the consumption of food, particularly the size of portions, which have increased greatly. In general, food leftovers have both positive and negative impacts, depending on the person's eating habits. Portion size shapes the amount of intake a person considers appropriate: a smaller portion usually leads to smaller consumption, making a person believe they have not eaten enough and negatively impacting their eating habits, while a larger portion leads to a greater amount of leftovers and a smaller portion to fewer leftovers. Research has identified leftover food, and the increased consumption that comes with it, as one of the most influential factors in weight gain.
Chop suey:
The name of the Chinese-American dish chop suey is sometimes translated as "miscellaneous leftovers", although it is unlikely that actual leftovers were served at chop suey restaurants.
Doggy bag:
Diners in a restaurant may leave uneaten food for the restaurant to discard, or take it away for later consumption. To take the food away, the diner might request a container, or ask a server to package it. Such a container is colloquially called a doggy bag or doggie bag. This most likely derives from a pretense that the diner plans to give the food to a pet, rather than eat it themselves, and so may be a euphemism. The modern doggie bag came about in the 1940s. Some also speculate the name was born during World War II when food shortages encouraged people to limit waste, and pet food was scarce. In 1943, San Francisco cafés, in an initiative to prevent animal cruelty, offered patrons Pet Pakits, cartons that patrons could readily request to carry home leftovers. The term doggy bag was popularized in the 1970s etiquette columns of many newspapers. Doggy bags are most common in restaurants that offer a take-out food service as well as sit-down meals, and their prevalence as an accepted social custom varies widely by location. In some countries, especially in Europe, people may frown upon a diner asking for a doggy bag. Some restaurants wrap leftovers in tin foil, creating shapes such as swans or sea horses. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Third-wave coffee**
Third-wave coffee:
Third-wave coffee is a movement in coffee marketing emphasizing high quality. Beans are typically sourced from individual farms and are roasted more lightly to bring out their distinctive flavors. Though the term was coined in 1999, the approach originates in the 1970s, with roasters such as the Coffee Connection.
History:
The term "third-wave coffee" is generally attributed to the coffee professional Trish Rothgeb, who used the term in a 2003 article, alluding to the three waves of feminism. However, the specialty coffee broker and author, Timothy J. Castle, had already used the term in an article (Coffee's Third Wave) that he wrote for the Dec1999 / Jan 2000 issue of Tea & Coffee Asia, a magazine which is no longer in publication. Some opinionated background and a link to a pdf of the article can be found on Castle's blog, Coffee Curmudgeon. The first mention in the mainstream media was in 2005, in a National Public Radio piece about barista competitions.
History:
United States In the first wave of coffee, coffee consumers generally did not differentiate by origin or beverage type. Instant coffee, grocery store canned coffee, and diner coffee were all hallmarks of first wave coffee. First wave coffee focuses on low price and consistent taste. Many restaurants offered free refills.
History:
The second wave of coffee is generally credited to Peet's Coffee & Tea of Berkeley, California, which in the late 1960s began artisanal sourcing, roasting, and blending with a focus on highlighting countries of origin and their signature dark roast profile. Peet's Coffee inspired the founders of Starbucks of Seattle, Washington. The second wave of coffee introduced the concept of different origin countries to coffee consumption, beyond a generic cup of coffee. Fueled in large part by market competition between Colombian coffee producers and coffee producers from Brazil through the 1960s, coffee roasters highlighted flavor characteristics that varied depending on what countries coffees came from. While certain origin countries grew to be prized among coffee enthusiasts and professionals, the world's production of high-altitude grown arabica coffee, grown in countries within the tropical zone, became sought-after as each country had particular flavor profiles that were considered interesting and desirable. In addition to country of origin, the second wave of coffee introduced coffee-based beverages to the wider coffee-consuming world, particularly those traditional to Italy made with espresso.
History:
Third-wave coffee is often associated with the concept of 'specialty coffee,' referring either to specialty grades of green (raw and unroasted) coffee beans (distinct from commercial grade coffee), or specialty coffee beverages of high quality and craft.
History:
United Kingdom In the late twentieth century, instant coffee dominated the UK market. Inspired by the example of Starbucks, Seattle Coffee Company opened in London in 1995, opening over 50 stores before being taken over by Starbucks in 1998. Flat White, an early third-wave café, opened in 2005 and James Hoffmann's third-wave roastery Square Mile opened in 2008. From 2007 to 2009, the World Barista Championship was won by Londoners, starting with Hoffmann, and the 2010 edition of the competition was hosted in London. Hoffmann has since come to be regarded as a pioneer in the third-wave coffee movement in the UK, with The Globe and Mail describing him as "the godfather of London's coffee revolution".
Use of the term:
The third wave of coffee has been chronicled by publications such as The New York Times, LA Weekly, Los Angeles Times, La Opinión and The Guardian. In March 2008, the food critic Jonathan Gold of LA Weekly defined the third wave of coffee: The first wave of American coffee culture was probably the 19th-century surge that put Folgers on every table, and the second was the proliferation, starting in the 1960s at Peet's and moving smartly through the Starbucks grande decaf latte, of espresso drinks and regionally labeled coffee. We are now in the third wave of coffee connoisseurship, where beans are sourced from farms instead of countries, roasting is about bringing out rather than incinerating the unique characteristics of each bean, and the flavor is clean and hard and pure.
Use of the term:
The earlier term "specialty coffee" was coined in 1974, and refers narrowly to high-quality beans scoring 80 points or more on a 100-point scale.
Australia:
The third wave of coffee has been popular in Australia. Melbourne is known as the "capital of coffee" with its many cafes.
Current status:
Across the US and Canada, there are many third-wave roasters, and some stand-alone coffee shops or small chains that roast their own coffee. There are a few larger businesses, more prominent in roasting than in operating – the "Big Three of Third Wave Coffee" are Intelligentsia Coffee & Tea of Chicago; Stumptown Coffee Roasters of Portland, Oregon; and Counter Culture Coffee of Durham, North Carolina, all of which engage in direct trade sourcing. Intelligentsia has seven bars – four in Chicago, three in Los Angeles, together with one "lab" in New York. Stumptown has 11 bars – five bars in Portland, one in Seattle, two in New York, one in Los Angeles, one in Chicago, and one in New Orleans. Counter Culture has eight regional training centers – that do not function as retail stores – one in each of: Chicago, Atlanta, Asheville, Durham, Washington, D.C., Philadelphia, New York, and Boston. By comparison, Starbucks has over 23,000 cafes worldwide as of 2015. Both Intelligentsia Coffee & Tea and Stumptown Coffee Roasters were acquired by Peet's Coffee & Tea (itself part of JAB Holding Company) in 2015. At that time, Philz Coffee (headquartered in San Francisco), Verve Coffee Roasters (headquartered in Santa Cruz, California) and Blue Bottle Coffee (headquartered in Oakland, California) were also considered major players in third-wave coffee. In 2014, Starbucks invested around $20 million in a coffee roastery and tasting room in Seattle, targeting the third-wave market. Starbucks' standard cafes use automated espresso machines which are faster and require less training than conventional espresso machines used by third-wave competitors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pocket mask**
Pocket mask:
A pocket mask, or pocket face mask or CPR mask, is a device used to safely deliver rescue breaths during a cardiac arrest or respiratory arrest. The specific term "Pocket Mask" is the trademarked name for the product manufactured by Laerdal Medical AS. It is not to be confused with a bag valve mask (BVM).
Purpose:
A pocket mask is a small portable device used in the pre-hospital setting to provide adequate ventilation to a patient who is either in respiratory failure or cardiac arrest. The pocket mask is designed to be placed over the face of the patient, thus creating a seal enclosing both the mouth and nose. Air is then administered to the patient by an emergency responder, who exhales through a one-way filter valve, providing adequate ventilation to the patient. The emergency responder's exhaled breath delivers air containing up to about 16% oxygen.
Purpose:
Modern pocket masks have either a built-in one-way valve or an attachable, disposable filter to protect the emergency responder from the patient's potentially infectious bodily substances, such as vomit or blood. Many masks also have a built-in oxygen intake tube, allowing for administration of 50-60% oxygen. Without an external oxygen line, the provider's exhaled air still contains enough oxygen to sustain life, about 16%, compared with approximately 21% in Earth's atmosphere.
Usage:
While a pocket mask is not as efficient as a bag valve mask, it does have its advantages when only one rescuer is available. As suggested by its name, the pocket mask benefits from somewhat easier portability when compared to the bag valve mask. Also, in contrast to the bag valve mask, which requires two hands to operate (one to form a seal and the other to squeeze the bag), the pocket mask allows for both of the rescuer's hands to be on the patient's head. This hand placement provides a superior seal on the patient's face, and allows the responder to perform a jaw thrust on patients who may have a spinal injury. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cooperative coevolution**
Cooperative coevolution:
Cooperative Coevolution (CC) is an evolutionary computation method inspired by biological evolution. It divides a large problem into subcomponents and solves them independently in order to solve the large problem. The subcomponents are also called species. The subcomponents are implemented as subpopulations, and the only interaction between subpopulations is in the cooperative evaluation of each individual of the subpopulations. The general CC framework is nature-inspired: individuals of a particular species mate amongst themselves, while mating between different species is not feasible. The cooperative evaluation of each individual in a subpopulation is done by concatenating the current individual with the best individuals from the rest of the subpopulations, as described by M. Potter. The cooperative coevolution framework has been applied to real-world problems such as pedestrian detection systems, large-scale function optimization and neural network training.
Cooperative coevolution:
It has also been further extended into another method, called Constructive cooperative coevolution.
Pseudocode:
i := 0
for each subproblem S do
    initialise a subpopulation Pop_0(S)
    calculate fitness of each member in Pop_0(S)
while termination criteria not satisfied do
    i := i + 1
    for each subproblem S do
        select Pop_i(S) from Pop_{i-1}(S)
        apply genetic operators to Pop_i(S)
        calculate fitness of each member in Pop_i(S) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
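A minimal Python sketch of the cooperative-evaluation idea described above, assuming a toy separable objective. The decomposition, the simple mutation-based variation, and all parameter values are placeholders rather than part of any published CC implementation.

```python
import random

# Toy problem: minimise the sum of squares over a 4-dimensional vector,
# decomposed into 4 one-dimensional subcomponents (one subpopulation each).
DIMS, POP_SIZE, GENERATIONS = 4, 20, 50

def fitness(vector):
    return -sum(x * x for x in vector)          # higher is better

pops = [[random.uniform(-5, 5) for _ in range(POP_SIZE)] for _ in range(DIMS)]
best = [pop[0] for pop in pops]                 # current best collaborator per subpopulation

for _ in range(GENERATIONS):
    for d in range(DIMS):
        def evaluate(ind):
            # Cooperative evaluation: combine this individual with the best
            # individuals from all other subpopulations (Potter-style).
            collab = best[:d] + [ind] + best[d + 1:]
            return fitness(collab)
        # Keep the fitter half as parents and mutate them to produce children.
        pops[d].sort(key=evaluate, reverse=True)
        parents = pops[d][:POP_SIZE // 2]
        children = [p + random.gauss(0, 0.3) for p in parents]
        pops[d] = parents + children
        best[d] = max(pops[d], key=evaluate)

print("best solution:", best, "fitness:", fitness(best))
```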
**CD ripper**
CD ripper:
A CD ripper, CD grabber, or CD extractor is software that rips raw digital audio in Compact Disc Digital Audio (CD-DA) format tracks on a compact disc to standard computer sound files, such as WAV or MP3.
A more formal term used for the process of ripping audio CDs is digital audio extraction (DAE).
History:
In the early days of computer CD-ROM drives and audio compression mechanisms (such as MP2), CD ripping was considered undesirable by copyright holders, with some attempting to retrofit copy protection into the simple ISO9660 standard. As time progressed, most music publishers became more open to the idea that since individuals had bought the music, they should be able to create a copy for their own personal use on their own computer. This is not yet entirely true; even with some current digital music delivery mechanisms, there are considerable restrictions on what an end user can do with their paid for (and therefore personally licensed) audio. Windows Media Player's default behavior is to add copy protection measures to ripped music, with a disclaimer that if this is not done, the end user is held entirely accountable for what is done with their music. This suits most users who simply want to store their music on a memory stick, MP3 player or portable hard disk and listen to it on any PC or compatible device.
Etymology:
The Jargon File entry for rip notes that the term originated in Amiga slang, where it referred to the extraction of multimedia content from program data.
Design:
As an intermediate step, some ripping programs save the extracted audio in a lossless format such as WAV, FLAC, or even raw PCM audio. The extracted audio can then be encoded with a lossy codec like MP3, Vorbis, WMA or AAC. The encoded files are more compact and are suitable for playback on digital audio players. They may also be played back in a media player program on a computer.
Design:
Most ripping programs will assist in tagging the encoded files with metadata. The MP3 file format, for example, allows tags with title, artist, album and track number information. Some will try to identify the disc being ripped by looking up network services like AMG's LASSO, FreeDB, Gracenote's CDDB, GD3 or MusicBrainz, or attempt text extraction if CD-Text has been stored.
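As one hedged example of the tagging step, the sketch below writes basic ID3 tags to an already-encoded MP3 file using the third-party mutagen library (not mentioned in the text; shown only as one common way to do this). The file path and tag values are placeholders, and the file is assumed to already carry an ID3 tag.

```python
from mutagen.easyid3 import EasyID3

def tag_mp3(path, title, artist, album, track):
    # EasyID3 exposes common ID3 frames through simple dictionary-style keys.
    # Assumes the MP3 file already has an ID3 tag; otherwise one must be added first.
    audio = EasyID3(path)
    audio["title"] = title
    audio["artist"] = artist
    audio["album"] = album
    audio["tracknumber"] = str(track)
    audio.save()

# Hypothetical usage on a freshly ripped track:
# tag_mp3("track01.mp3", "Some Title", "Some Artist", "Some Album", 1)
```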
Design:
Some all-in-one ripping programs can simplify the entire process by ripping and burning the audio to disc in one step, possibly re-encoding the audio on-the-fly in the process.
Some CD ripping software is specifically intended to provide an especially accurate or "secure" rip, including Exact Audio Copy, cdda2wav, CDex and cdparanoia.
Compact disc seek jitter:
In the context of digital audio extraction from compact discs, seek jitter causes extracted audio samples to be doubled-up or skipped entirely if the Compact Disc drive re-seeks. The problem occurs because the Red Book does not require block-accurate addressing during seeking. As a result, the extraction process may restart a few samples early or late, resulting in doubled or omitted samples. These glitches often sound like tiny repeating clicks during playback. A successful approach to correction in software involves performing overlapping reads and fitting the data to find overlaps at the edges. Most extraction programs perform seek jitter correction. CD manufacturers avoid seek jitter by extracting the entire disc in one continuous read operation, using special CD drive models at slower speeds so the drive does not re-seek.
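A highly simplified sketch of the overlap-matching correction described above: consecutive reads are made to overlap, and the splice point is found by locating where the tail of the previous block reappears in the new block. Real rippers operate on raw CD sectors with far more robust matching; the sample values here are just small illustrative integers.

```python
def stitch(previous, new_block, overlap=8):
    """Append new_block to previous, aligning on the last `overlap` samples of
    previous to compensate for a re-seek that restarted slightly early or late."""
    anchor = previous[-overlap:]
    for shift in range(len(new_block) - overlap + 1):
        if new_block[shift:shift + overlap] == anchor:
            return previous + new_block[shift + overlap:]   # drop the duplicated samples
    # No overlap found (severe jitter); fall back to naive concatenation.
    return previous + new_block

# Toy example: the drive re-delivered the last three samples (7, 8, 9) at the
# start of the second read; stitching removes the duplication.
first = [1, 2, 3, 4, 5, 6, 7, 8, 9]
second = [7, 8, 9, 10, 11, 12]
print(stitch(first, second, overlap=3))   # -> [1, 2, ..., 12]
```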
Optical drive properties:
Properties of an optical drive that help in achieving a perfect rip are a small sample offset (ideally zero), no jitter, no caching (or caching that can be disabled), and a correct implementation and feedback of the C1 and C2 error states. There are databases listing these features for multiple brands and versions of optical drives. Also, EAC has the ability to autodetect some of these features by a test-rip of a known reference CD.
Examples:
Notable CD ripper applications exist for BSD and Linux, Mac OS X, and Windows. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reaction step**
Reaction step:
A reaction step of a chemical reaction is defined as: "An elementary reaction, constituting one of the stages of a stepwise reaction in which a reaction intermediate (or, for the first step, the reactants) is converted into the next reaction intermediate (or, for the last step, the products) in the sequence of intermediates between reactants and products". To put it simply, it is an elementary reaction which goes from one reaction intermediate to another or to the final product. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**IBIS Interconnect Modeling Specification**
IBIS Interconnect Modeling Specification:
The IBIS Interconnect Modeling Specification (ICM) in electronic circuit simulation is a behavioral, ASCII-based file format. The ICM is used for distributing passive interconnect modeling information. The format and style of ICM are highly similar to the Input Output Buffer Information Specification (IBIS), and both specifications are managed by the same organization, the IBIS Open Forum.
IBIS Interconnect Modeling Specification:
Interconnects under ICM may be represented through tabular frequency-dependent RLGC matrices or through S-parameters in separate Touchstone files. ICM models define interconnects as consisting of one or more segments. Segment topologies are described in terms of the arrangements of their nodes relative to pin or port lists. The electrical behaviors for each segment are then defined. Interconnects may be grouped into families with similar characteristics or sharing identical segment definitions. As of 2006, ICM version 1.1 has been standardized in the US through both the GEIA and ANSI as ANSI GEIA-STD-0001. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Temporal Process Language**
Temporal Process Language:
In theoretical computer science, Temporal Process Language (TPL) is a process calculus which extends Robin Milner's CCS with the notion of multi-party synchronization, which allows multiple processes to synchronize on a global 'clock'. This clock measures time not concretely, but rather as an abstract signal which defines when the entire process can step onward.
Informal definition:
TPL is a conservative extension of CCS, with the addition of a special action called σ representing the passage of time – the ticking of an abstract clock. As in CCS, TPL features action prefixing, and prefixed processes can be described as patient; that is to say, a process a.P will idly accept the ticking of the clock, written a.P →σ a.P. Key to the use of abstract time is the timeout operator ⌊E⌋(F), which presents two processes: one describing the behaviour before the clock ticks, and one describing the behaviour once it does, i.e.
Informal definition:
⌊E⌋(F) →σ F, provided process E does not prevent the clock from ticking.
⌊E⌋(F) →a E′, provided E can perform action a to become E′.
In TPL, there are two ways to prevent the clock from ticking. The first is via the presence of the Ω operator: for example, in the process a.P + Ω the clock is prevented from ticking. It can be said that the action a is insistent, i.e. it insists on acting before the clock can tick again.
The second way in which ticking can be prevented is via the concept of maximal progress, which states that silent actions (i.e. τ actions) always take precedence over and thus suppress σ actions. Thus, if two parallel processes are capable of synchronizing at a given instant, it is not possible for the clock to tick.
Thus a simple way of viewing multi-party synchronization is that a group of composed processes will allow time to pass provided none of them prevent it, i.e. the system agrees that it is time to move on.
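The behaviour described above can be illustrated with a small sketch; the class names, method names, and string encodings of actions below are hypothetical illustrations of the timeout rules and maximal progress, not part of TPL or of the source text.

```python
# A minimal, illustrative sketch (not part of TPL itself) of the timeout
# operator and maximal progress described above.

SIGMA = "sigma"  # the abstract clock tick
TAU = "tau"      # the silent action

class Nil:
    """0 -- the inactive process; it only lets the clock tick."""
    def steps(self):
        return [(SIGMA, self)]

class Prefix:
    """a.P -- patient action prefix: offers 'a' and idles under the clock."""
    def __init__(self, action, cont):
        self.action, self.cont = action, cont
    def steps(self):
        # a.P --a--> P   and   a.P --sigma--> a.P (patience)
        return [(self.action, self.cont), (SIGMA, self)]

class Timeout:
    """⌊E⌋(F): behaves as E on ordinary actions; becomes F when the clock ticks."""
    def __init__(self, e, f):
        self.e, self.f = e, f
    def steps(self):
        moves = [(a, p) for (a, p) in self.e.steps() if a != SIGMA]
        # Maximal progress: the clock may tick only if E offers no silent action.
        if all(a != TAU for (a, _) in moves):
            moves.append((SIGMA, self.f))
        return moves

# Example: ⌊a.0⌋(0) can either do 'a' (dropping the timeout) or tick to 0.
print([label for (label, _) in Timeout(Prefix("a", Nil()), Nil()).steps()])
# -> ['a', 'sigma']
```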
Formal definition:
Syntax Let a be a non-silent action name, α be any action name (including τ, the silent action) and X be a process label used for recursion.
Proc ::= α.Proc ∣ ⌊Proc⌋(Proc) ∣ Proc + Proc ∣ Proc | Proc ∣ rec X.Proc ∣ X ∣ Ω ∣ Proc∖a ∣ 0 (here ∣ separates the grammar alternatives, and the fourth alternative, Proc | Proc, is parallel composition written with the CCS operator |) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Isopullulanase**
Isopullulanase:
The enzyme isopullulanase (EC 3.2.1.57) has systematic name pullulan 4-glucanohydrolase (isopanose-forming), and catalyses the hydrolysis of pullulan to isopanose (6-α-maltosylglucose). It has no activity on starch. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wine Grapes**
Wine Grapes:
Wine Grapes - A complete guide to 1,368 vine varieties, including their origins and flavours is a reference book about varieties of wine grapes. The book covers all grape varieties that were known to produce commercial quantities of wine at the time of writing, which meant 1,368 of the known 10,000 varieties. It is written by British Masters of Wine Jancis Robinson and Julia Harding in collaboration with Swiss grape geneticist Dr. José Vouillamoz.
Description:
Wine Grapes is not an update of Robinson's earlier book Vines, Grapes & Wines, first published in 1986. Wine Grapes is a brand new and original work that is much more extensive in coverage, and also incorporates recent findings from ampelographic research, including DNA profiling of grape varieties. Wine Grapes includes a catalog listing of 1,368 grape varieties from across the globe, including 377 Italian, 204 French and 77 Portuguese wine grape varieties. Coverage includes a listing of synonyms as well as the genetic relationships between varieties derived from DNA analysis.
Description:
DNA findings According to Jancis Robinson and José Vouillamoz, Wine Grapes includes details about almost 300 previously unpublished relationships between different grape varieties, including the extensive family tree of the Pinot grape, which shows a genetic link between the Burgundian wine grape Pinot noir and the Rhône grape Syrah and the Bordeaux varieties Cabernet Sauvignon, Cabernet franc, Merlot and Malbec. The book also details how the Jura wine grape Savagnin blanc descended from the Pinot grape and went on to be the parent vine of several white wine varieties including Chenin blanc, Grüner Veltliner, Sauvignon blanc, Petit Manseng and Verdelho. Along with research assistant and Master of Wine Julia Harding, Robinson and Vouillamoz go on to detail the long history of discovering the parentage of the American wine variety Zinfandel, including past research that pointed to a connection between the Italian variety Primitivo and the Croatian wine grape Crljenak Kaštelanski. After years of research and DNA testing of vines from vineyards across the globe, a single 90-year-old grape vine from the garden of an elderly lady in Split, Croatia provided the evidence to show that Zinfandel was originally a Croatian grape known as Tribidrag that had been cultivated in Croatia since the 15th century.
Description:
Other findings reported in Wine Grapes include the discovery that several Malvasia vines growing in central and southern Italy are actually the Spanish wine grape Tempranillo, and that Cabernet franc may have originated in the Basque Country of Spain. Additionally, the origins of Mourvèdre, Grenache and Carignan, assumed to be French varieties, are shown to be likely Spanish instead. Another detail revealed by DNA testing and reported in Wine Grapes is that several northern Italian white wine grape varieties, including Favorita, Pigato and Vermentino, are genetically the same grape vine, which is known under the synonym Rolle in France but has no genetic relationship to the Rollo grape of Liguria.
Colour plates:
A number of grape varieties are illustrated by reproductions of colour plates from Ampélographie. Traité général de viticulture written by Pierre Viala and Victor Vermorel and published in 1901–1910. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Personal information manager**
Personal information manager:
A personal information manager (often referred to as a PIM tool or, more simply, a PIM) is a type of application software that functions as a personal organizer. The acronym PIM is now, more commonly, used in reference to personal information management as a field of study. As an information management tool, a PIM tool's purpose is to facilitate the recording, tracking, and management of certain types of "personal information".
Scope:
Personal information can include any of the following: address books; alerts; a digital calendar with calendar dates, such as anniversaries, appointments, birthdays, events, and meetings; education records; email addresses; fax communications; itineraries; instant message archives; legal documents; lists (such as reading lists and task lists); medical information, such as healthcare provider contact information, medical history, and prescriptions; passwords and login credentials; personal file collections (digital and physical), such as documents, music, photos, videos and similar; personal diaries/journals/memos/notes; project management features; recipes; reference materials (including scientific references and websites of interest); RSS/Atom feeds; reminders; voicemail communications
Synchronization:
Some PIM/PDM software products are capable of synchronizing data over a computer network, including mobile ad hoc networks (MANETs). This feature typically stores the personal data on cloud drives, allowing for continuous concurrent data updates/access on the user's computers, including desktop computers, laptop computers, and mobile devices such as personal digital assistants or smartphones.
History:
Prior to the introduction of the term "Personal digital assistant" ("PDA") by Apple in 1992, handheld personal organizers such as the Psion Organiser and the Sharp Wizard were also referred to as "PIMs". The time management and communications functions of PIMs largely migrated from PDAs to smartphones, with Apple, RIM (Research In Motion, now BlackBerry), and others all manufacturing smartphones that offer most of the functions of earlier PDAs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Practice of Programming**
The Practice of Programming:
The Practice of Programming (ISBN 0-201-61586-X) by Brian W. Kernighan and Rob Pike is a 1999 book about computer programming and software engineering, published by Addison-Wesley. According to the preface, the book is about "topics like testing, debugging, portability, performance, design alternatives, and style", which, according to the authors, "are not usually the focus of computer science or programming courses". It treats these topics in case studies, featuring implementations in several programming languages (mostly C, but also C++, AWK, Perl, Tcl and Java).
The Practice of Programming:
The Practice of Programming has been translated into twelve languages. Eric S. Raymond, in The Art of Unix Programming, calls it "recommended reading for all C programmers (indeed for all programmers in any language)". A 2008 review on LWN.net found that TPOP "has aged well due to its focus on general principles" and that "beginners will benefit most but experienced developers will appreciate [...] the later chapters". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SpeedStep**
SpeedStep:
Enhanced SpeedStep is a series of dynamic frequency scaling technologies (codenamed Geyserville and including SpeedStep, SpeedStep II, and SpeedStep III) built into some Intel microprocessors that allow the clock speed of the processor to be dynamically changed (to different P-states) by software. This allows the processor to meet the instantaneous performance needs of the operation being performed, while minimizing power draw and heat generation. EIST (SpeedStep III) was introduced in several Prescott 6 series processors in the first quarter of 2005, namely the Pentium 4 660. Intel Speed Shift Technology (SST) was introduced with the Intel Skylake processors. Enhanced Intel SpeedStep Technology is sometimes abbreviated as EIST. Intel's trademark of "INTEL SPEEDSTEP" was cancelled due to the trademark being invalidated in 2012.
Explanation:
Running a processor at high clock speeds allows for better performance. However, when the same processor is run at a lower frequency (speed), it generates less heat and consumes less power. In many cases, the core voltage can also be reduced, further reducing power consumption and heat generation. By using SpeedStep, users can select the balance of power conservation and performance that best suits them, or even change the clock speed dynamically as the processor burden changes.
Explanation:
The power consumed by a CPU with a capacitance C, running at frequency f and voltage V is approximately: P = CV²f. For a given processor, C is a fixed value. However, V and f can vary considerably. For example, for a 1.6 GHz Pentium M, the clock frequency can be stepped down in 200 MHz decrements over the range from 1.6 to 0.6 GHz. At the same time, the voltage requirement decreases from 1.484 to 0.956 V. The result is that the power consumption theoretically goes down by a factor of 6.4. In practice, the effect may be smaller because some CPU instructions use less energy per tick of the CPU clock than others. For example, when an operating system is not busy, it tends to issue x86 halt (HLT) instructions, which suspend operation of parts of the CPU for a time period, so it uses less energy per tick of the CPU clock than when executing productive instructions in its normal state. For a given rate of work, a CPU running at a higher clock rate will execute a greater proportion of HLT instructions. The simple equation which relates power, voltage and frequency above also does not take into account the static power consumption of the CPU. This tends not to change with frequency, but does change with temperature and voltage. Hot electrons, and electrons exposed to a stronger electric field are more likely to migrate across a gate as "gate leakage" current, leading to an increase in static power consumption.
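The ~6.4× figure quoted above follows directly from the formula; a quick check (treating the capacitance C as constant, as the text assumes) is:

```python
# Quick check of the ~6.4x dynamic-power reduction quoted above for a Pentium M
# stepped from 1.6 GHz at 1.484 V down to 0.6 GHz at 0.956 V (C assumed constant).
def dynamic_power_ratio(f_hi, v_hi, f_lo, v_lo):
    """Ratio of dynamic power P = C * V**2 * f between two operating points."""
    return (v_hi ** 2 * f_hi) / (v_lo ** 2 * f_lo)

print(round(dynamic_power_ratio(1.6e9, 1.484, 0.6e9, 0.956), 1))  # ~6.4
```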
Explanation:
Older processors such as the Pentium 4-M, which use older versions of SpeedStep, have fewer clock-speed increments. SpeedStep technology is partly responsible for the reduced power consumption of Intel's Pentium M processor, part of the Centrino brand.
Known issues:
Microsoft has reported that there may be problems previewing video files when SpeedStep (or the AMD equivalent PowerNow!) is enabled under Windows 2000 or Windows XP.
Operating system support:
Solaris has supported SpeedStep since OpenSolaris SXDE 9/07.
Older versions of Microsoft Windows, Windows 2000 and earlier, need a special driver and dashboard application to access the SpeedStep feature. Intel's website specifically states that such drivers must come from the computer manufacturer; there are no generic drivers supplied by Intel which will enable SpeedStep for older Windows versions if one cannot obtain a manufacturer's driver.
Operating system support:
Under Microsoft Windows XP, SpeedStep support is built into the power management console under the control panel. In Windows XP a user can regulate processor speed indirectly by changing power schemes. The "Home/Office Desk" setting disables SpeedStep, the "Portable/Laptop" power scheme enables SpeedStep, and the "Max Battery" uses SpeedStep to slow the processor to minimal power levels as the battery weakens. The SpeedStep settings for power schemes, either built-in or custom, cannot be modified from the control panel's GUI, but can be modified using the POWERCFG.EXE command-line utility.
Operating system support:
The Linux kernel has a subsystem called "cpufreq", tunable by power-scheme and command line, devoted to the control of the operating frequency and voltage of a CPU. Linux runs on Intel, AMD, and other makes of CPU.
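As a minimal illustration of inspecting the cpufreq state from user space on Linux, the sketch below reads the standard cpufreq sysfs attributes for CPU 0; exact availability of these files depends on the kernel version and the active cpufreq or Speed Shift driver.

```python
# Minimal sketch: read the current cpufreq governor and frequency for CPU0.
# The paths are the standard Linux cpufreq sysfs attributes; their presence
# depends on the kernel and the active frequency-scaling driver.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(attr: str) -> str:
    return (CPUFREQ / attr).read_text().strip()

if __name__ == "__main__":
    print("governor:", read("scaling_governor"))
    print("current frequency (kHz):", read("scaling_cur_freq"))
    print("available governors:", read("scaling_available_governors"))
```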
Newer versions of Windows (Windows 10) and the Linux kernel support Intel Speed Shift Technology. In contrast, AMD has supplied and supported drivers for its competing PowerNow! technology that work on Windows 2000, ME, 98, and NT. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Discrete differential geometry**
Discrete differential geometry:
Discrete differential geometry is the study of discrete counterparts of notions in differential geometry. Instead of smooth curves and surfaces, there are polygons, meshes, and simplicial complexes. It is used in the study of computer graphics, geometry processing and topological combinatorics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nuts and bolts (general relativity)**
Nuts and bolts (general relativity):
In physics, in the theory of general relativity, spacetimes with at least a 1-parameter group of isometries can be classified according to the fixed point-sets of the action. Isolated fixed points are called nuts. The other possibility is that the fixed point set is a metric 2-sphere, called a bolt. The number of nuts and bolts can also be related to topological invariants, such as the Euler characteristic. This classification is widely used in the analysis of gravitational instantons. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sodium ethoxide**
Sodium ethoxide:
Sodium ethoxide, also referred to as sodium ethylate, is the ionic, organic compound with the formula C2H5ONa, or NaOEt (Et = ethyl). It is a white solid, although impure samples appear yellow or brown. It dissolves in polar solvents such as ethanol. It is commonly used as a strong base.
Preparation:
Few procedures have been reported to prepare the anhydrous solid. Instead, the material is typically prepared as a solution in ethanol. It is commercially available, including as a solution in ethanol. It is easily prepared in the laboratory by treating sodium metal with absolute ethanol: 2C2H5OH + 2Na → 2C2H5ONa + H2. The reaction of sodium hydroxide with anhydrous ethanol suffers from incomplete conversion to the alkoxide.
Structure:
The crystal structure of sodium ethoxide has been determined by X-ray crystallography. It consists of layers of alternating Na+ and O− centres with disordered ethyl groups covering the top and bottom of each layer. The ethyl layers pack back-to-back resulting in a lamellar structure. The reaction of sodium and ethanol sometimes forms other products such as the disolvate NaOEt·2EtOH. Its crystal structure has been determined, although the structure of other phases in the Na/EtOH system remain unknown.
Reactions:
Sodium ethoxide is commonly used as a base in the Claisen condensation and malonic ester synthesis. Sodium ethoxide may either deprotonate the α-position of an ester molecule, forming an enolate, or the ester molecule may undergo a nucleophilic substitution called transesterification. If the starting material is an ethyl ester, trans-esterification is irrelevant since the product is identical to the starting material. In practice, the alcohol/alkoxide solvating mixture must match the alkoxy components of the reacting esters to minimize the number of different products.
Reactions:
Many alkoxides are prepared by salt metathesis from sodium ethoxide.
Stability:
Sodium ethoxide is prone to reaction with both water and carbon dioxide in the air. This leads to degradation of stored samples over time, even in solid form. The physical appearance of degraded samples may not be obvious, but samples of sodium ethoxide gradually turn dark on storage. It has been reported that even newly-obtained commercial batches of sodium ethoxide show variable levels of degradation, which has been identified as a major source of irreproducibility when the reagent is used in Suzuki reactions.
Stability:
In moist air, NaOEt hydrolyses rapidly to sodium hydroxide (NaOH). The conversion is not obvious, and typical samples of NaOEt are contaminated with NaOH. In moisture-free air, solid sodium ethoxide can form sodium ethyl carbonate from fixation of carbon dioxide from the air. Further reactions lead to degradation into a variety of other sodium salts and diethyl ether. This instability can be prevented by storing sodium ethoxide under an inert (N2) atmosphere.
Safety:
Sodium ethoxide is a strong base, and is therefore corrosive. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Speed garage**
Speed garage:
Speed garage (occasionally known as plus-8) is a genre of electronic dance music, associated with the UK garage scene, of which it is regarded as one of its subgenres.
Characteristics:
Speed garage features sped-up NY garage 4-to-the-floor rhythms that are combined with breakbeats. Snares are placed over the 2nd and the 4th kick drums, as well as in other places of the drum pattern. Speed garage tunes have warped, heavy basslines, influenced by jungle and reggae. Sweeping bass is typical for speed garage. It is also typical for speed garage tunes to have a breakdown. Speed garage tunes sometimes featured timestretched vocals. As it is heavily influenced by jungle, speed garage makes heavy use of jungle and dub sound effects, such as gunshots and sirens.
Characteristics:
A widely regarded pioneer of the speed garage sound is record producer, DJ and remixer Armand van Helden, whose Dark Garage remix of the Sneaker Pimps' "Spin Spin Sugar" in 1996 helped bring the style of speed garage into the mainstream arena.
Notable songs/remixes:
The following is a list of notable songs and official remixes which not only charted but were popular within the speed garage scene: "Sugar Is Sweeter (Armand's Drum 'n' Bass Mix)" (1996) / "Spin Spin Sugar (Armand's Dark Garage Mix)" (1997) / "Digital (Armand Van Helden's Speed Garage Mix)" (1997) – Armand van Helden "Dancing for Heaven" (1995) / "Saved My Life" (1996) – Todd Edwards "Gunman" (1997) / "Kung-Fu" (1998) – 187 Lockdown "Deeper" (1997) / "God Is a DJ (Serious Danger Remix)" (1998) – Serious Danger "Hype Funk (Dub)" (1997) – Reach & Spin "RipGroove" (1997) – Double 99 "Vol. 1 (What You Want What You Need)" (1997) – Industry Standard "I Refuse (What You Want)" (1997) – Somore featuring Damon Trueitt "Oh Boy" (1997) – The Fabulous Baker Boys "Ripped in 2 Minutes" (1998) – A vs B "A London Thing" (1997) – Scott Garcia "Something Goin' On (Loop Da Loop Uptown / Downtown Mix)" (1997) – Loop Da Loop "Superstylin'" (2001) – Groove Armada | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Surface plasmon resonance microscopy**
Surface plasmon resonance microscopy:
Surface plasmon resonance microscopy (SPRM), also called surface plasmon resonance imaging (SPRI), is a label free analytical tool that combines the surface plasmon resonance of metallic surfaces with imaging of the metallic surface.
Surface plasmon resonance microscopy:
The heterogeneity of the refractive index of the metallic surface imparts high-contrast images, caused by the shift in the resonance angle. SPRM can achieve sub-nanometer thickness sensitivity, and its lateral resolution reaches values on the micrometer scale. SPRM is used to characterize surfaces such as self-assembled monolayers, multilayer films, metal nanoparticles, oligonucleotide arrays, and binding and reduction reactions. Surface plasmon polaritons are surface electromagnetic waves coupled to oscillating free electrons of a metallic surface that propagate along a metal/dielectric interface. Since the polaritons are highly sensitive to small changes in the refractive index of the metallic material, they can be used as a biosensing tool that does not require labeling. SPRM measurements can be made in real time, such as measuring binding kinetics of membrane proteins in single cells, or DNA hybridization.
History:
The concept of classical SPR has existed since 1968, but the SPR imaging technique was introduced in 1988 by Rothenhäusler and Knoll. Capturing a high-resolution image of low-contrast samples was a near-impossible task for optical measuring techniques until the introduction of the SPRM technique in 1988. In the SPRM technique, plasmon surface polariton (PSP) waves are used for illumination. In simple words, SPRI technology is an advanced version of classical SPR analysis, where the sample is monitored without a label through the use of a CCD camera. SPRI technology, with the aid of a CCD camera, gives the advantage of recording sensorgrams and SPR images, and simultaneously analyzing hundreds of interactions.
Principles:
Surface plasmons or surface plasmon polaritons are generated by coupling of an electric field with free electrons in a metal. SPR waves propagate along the interface between a dielectric and a conducting layer rich in free electrons. As shown in Figure 2, when light passes from a medium of high refractive index to a second medium with a lower refractive index, the light is totally reflected under certain conditions. In order to get total internal reflection (TIR), θ1 and θ2 should be within a certain range that can be explained through Snell's law. When light passes from a high-refractive-index medium to a lower-refractive-index medium, it is refracted at an angle θ2, which is defined in Equation 1.
Principles:
In the TIR process a small portion of the electric field intensity leaks into medium 2 (η1 > η2). The light leaked into medium 2 penetrates as an evanescent wave. The intensity and penetration depth of the evanescent wave can be calculated according to Equations 2 and 3, respectively.
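The referenced equations are not reproduced in this text; the textbook relations consistent with the description (Snell's law, the critical angle for TIR, and the evanescent-wave decay) can be written as follows, though they are not necessarily the exact Equations 1–3 of the source:

```latex
% Snell's law (the relation behind Equation 1):
n_1 \sin\theta_1 = n_2 \sin\theta_2
% Total internal reflection occurs for incidence angles above the critical angle:
\theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right), \qquad n_1 > n_2
% Evanescent-wave intensity and penetration depth (standard forms of Equations 2 and 3):
I(z) = I_0\, e^{-z/d}, \qquad
d = \frac{\lambda}{4\pi\sqrt{n_1^{2}\sin^{2}\theta_1 - n_2^{2}}}
```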
Figure 3 shows a schematic representation of surface plasmons coupled to electron density oscillations. The light wave is trapped on the surface of the metal layer by collective coupling to the electrons of the metal surface. When the oscillation frequencies of the electron plasma and of the light's electric field match, they enter into resonance.
Recently, the leakage light inside the metal surface has been imaged.
Principles:
Radiation of different wavelengths (green, red and blue) was converted into surface plasmon polaritons through the interaction of the photons at the metal/dielectric interface. Two different metal surfaces were used: gold and silver. The propagation length of the SPP along the x-y plane (metal plane) in each metal and at each photon wavelength was compared. The propagation length is defined as the distance traveled by the SPP along the metal before its intensity decreases by a factor of 1/e, as defined in Equation 4. Figure 4 shows the leakage light, captured by a color CCD camera, of the green, red and blue photons in gold (a) and silver (b) films. In part c) of Figure 4, the intensity of the surface plasmon polaritons versus distance is shown. It was determined that the leakage light intensity is proportional to the intensity in the waveguide.
Principles:
where δSPP is the propagation length, ε'm and ε''m are the real and imaginary parts of the relative permittivity of the metal, and λ0 is the free-space wavelength. The metallic film is capable of absorbing light due to the coherent oscillation of the conduction-band electrons induced by the interaction with an electromagnetic field.
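Equation 4 itself is not reproduced in the text; a standard textbook expression for the SPP propagation length, consistent with the quantities defined above and assuming a dielectric of permittivity εd adjacent to the metal, is:

```latex
% Textbook SPP propagation length (assumption: adjacent dielectric permittivity \varepsilon_d):
\delta_{\mathrm{SPP}} \;=\; \frac{\lambda_0}{2\pi}
\left(\frac{\varepsilon'_m + \varepsilon_d}{\varepsilon'_m\,\varepsilon_d}\right)^{3/2}
\frac{(\varepsilon'_m)^{2}}{\varepsilon''_m}
```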
Electrons in the conduction band induce polarization after interaction with the electric field of the radiation. A net charge difference is created in the surface of the metal film, creating a collective dipolar oscillation of electrons with the same phase.
When the electron motion matches the frequency of the electromagnetic field, the absorption of incident radiation occurs. The oscillation frequency of gold surface plasmons is found in the visible region of the electromagnetic spectrum, giving a red color while silver gives yellow color.
Principles:
Nanorods exhibit two absorption peaks in the UV-vis region due to longitudinal and transverse oscillations; for gold nanorods the transverse oscillation generates a peak at 520 nm, while the longitudinal oscillation generates absorption at longer wavelengths, within a range of 600 to 800 nm. Silver nanoparticles shift their light absorption wavelengths to higher energy levels: the peak blue-shifts from 408 nm to 380 nm and 372 nm when the particles change from sphere to rod and to wire, respectively.
Principles:
The absorption intensity and wavelength of gold and silver depend on the size and shape of the particles. In Figure 5, the size and shape of the silver nanoparticles influenced the intensity of the scattered light and the maximum wavelength of the silver nanoparticles. The triangular-shaped particles appear red with a maximum of scattered light at 670–680 nm, the pentagonal particles appear green (620–630 nm), and the spherical particles, which have higher absorption energies (440–450 nm), appear blue.
Principles:
Plasmon excitation methods Surface plasmon polaritons are quasiparticles, composed of electromagnetic waves coupled to free electrons of the conduction band of metals.
One of the most widely used methods to couple p-polarized light to the metal-dielectric interface is prism-based coupling.
Principles:
Prism couplers are the most widely used to excite surface plasmon polaritons. This method is also called Kretschmann–Raether configuration, where TIR creates an evanescent wave that couples the free electrons of the metal surface. High numerical aperture objective lenses have been explored as a variant of prism-coupling to excite surface plasmon polaritons. Waveguide coupling is also used to create surface plasmons.
Principles:
Prism coupling The Kretschmann–Raether configuration is used to achieve resonance between light and the free electrons of the metal surface. In this configuration a prism with a high refractive index is interfaced with a metal film. Light from a source propagates through the prism and is made incident on the metal film. As a consequence of TIR, some light leaks through the metal film, forming an evanescent wave in the dielectric medium, as in Figure 6.
Principles:
The evanescent wave penetrates a characteristic distance into the less optically dense medium, where it is attenuated. Figure 6 shows the Kretschmann–Raether configuration, where a prism with refractive index η1 is coupled to a dielectric surface with refractive index η2; the incidence angle of the light is θ.
The interaction between the light and the surface polaritons in TIR can be explained by using Fresnel multilayer reflection; the amplitude reflection coefficient (rpmd) is expressed as in Equation 5.
The power reflection coefficient R is defined from it (Equation 6). In Figure 7, a schematic representation of the Otto prism-coupling configuration is shown. In the figure, the air gap is drawn rather thick only for illustration; in reality, the air gap between the prism and the metal layer is very thin.
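Equations 5 and 6 are not reproduced in the text; the standard three-layer (prism/metal/dielectric) Fresnel forms consistent with the notation rpmd are as follows, assuming a metal film of thickness d and a normal wave-vector component kzm in the metal:

```latex
% Standard three-layer Fresnel forms (prism p / metal m / dielectric d);
% d is the metal film thickness, k_{zm} the normal wave-vector component in the metal:
r_{pmd} \;=\; \frac{r_{pm} + r_{md}\, e^{2 i k_{zm} d}}{1 + r_{pm}\, r_{md}\, e^{2 i k_{zm} d}},
\qquad
R \;=\; \bigl|\, r_{pmd} \,\bigr|^{2}
```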
Principles:
Waveguide coupling Electromagnetic waves are conducted through an optical waveguide. When light enters the region with a thin metal layer, it evanescently penetrates through the metal layer, exciting a surface plasmon wave (SPW). In the waveguide-coupling configuration, the waveguide is created when the refractive index of the grating is greater than that of the substrate. Incident radiation propagates along the waveguide layer with the high refractive index.
Principles:
In Figure 8, electromagnetic waves are guided through a wave-guiding layer; once the optical waves reach the interface between the wave-guiding layer and the metal, an evanescent wave is created. The evanescent wave excites the surface plasmon at the metal-dielectric interface.
Grating coupling Due to the periodic grating, phase matching between the incident light and the guided mode is easy to obtain.
According to Equation 7, the propagation vector (Kz) in the z direction can be tuned by changing the periodicity Λ. The grating vector can be modified, and the angle of resonant excitation can be controlled.
In Figure 9, q is the diffraction order; it can take the value of any integer (positive, negative or zero).
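Equation 7 is not reproduced in the text; the standard grating-coupling phase-matching condition, written with the grating period Λ and diffraction order q named above (the refractive index nd of the incidence medium is an assumption), is:

```latex
% Standard grating-coupling phase-matching condition; n_d (incidence-medium
% refractive index) is an assumption, \Lambda the grating period, q the diffraction order:
k_{sp} \;=\; \frac{2\pi}{\lambda}\, n_d \sin\theta \;+\; q\,\frac{2\pi}{\Lambda}
```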
Resonance measurement methods The propagation constant of a monochromatic beam of light parallel to the surface is defined by Equation 8.
Principles:
where θ is the angle of incidence, ksp is the propagation constant of the surface plasmon, and n(p) is the refractive index of the prism. When the wave vector of the SPW, ksp, matches the wave vector of the incident light, kx, the SPW is excited; here εd and εm represent the dielectric constants of the dielectric and the metal, while λ is the wavelength of the incident light. kx and ksp can be represented as in the expressions shown below. The surface plasmons are evanescent waves that have their maximum intensity at the interface and decay exponentially away from the phase boundary to a penetration depth.
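The quantities referenced above as Equations 8–10 are not reproduced in the text; their textbook forms, using the variables just defined, are:

```latex
% Textbook forms of the quantities referenced as Equations 8-10:
k_x \;=\; \frac{2\pi}{\lambda}\, n_p \sin\theta, \qquad
k_{sp} \;=\; \frac{2\pi}{\lambda}\sqrt{\frac{\varepsilon_d\,\varepsilon_m}{\varepsilon_d + \varepsilon_m}},
\qquad \text{resonance when } k_x = k_{sp}
```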
Principles:
The propagation of the surface plasmons is intensely affected by a thin film coating on the conducting layer. The resonance angle θ shifts, when the metal surface is coated with a dielectric material, due to the change of the propagation vector k of the surface plasmon.
This sensitivity is due to the shallow penetration depth of the evanescent wave. Materials with a high amount of free electrons are used. Metal films of roughly 50 nm made of copper, titanium, chromium and gold are used. However, Au is the most common metal used in SPR as well as in SPRM.
Scanning angle SPR is the most widely used method for detecting biomolecular interactions.
Principles:
It measures the reflectance percentage (%R) from a prism/metal film assembly as a function of the incident angle at a fixed excitation wavelength. When the angle of incidence matches the propagation constant of the interface, this mode is excited at the expense of the reflected light. As a consequence, the reflectivity at the resonance angle is damped, showing a dip. The propagation constant of the polaritons can be modified by varying the dielectric material. This modification causes the resonance angle to shift, as in the example shown in Figure 10, from θ1 to θ2, due to the change in the surface plasmon propagation constant.
Principles:
The resonance angle can be found by using Equation 11.
Principles:
where n1, n2 and ng are the refractive indices of medium 1, medium 2 and the metal layer, respectively. Using TIR, two-dimensional imaging makes it possible to resolve spatial differences in %R at a fixed angle θ. A beam of monochromatic light is used to irradiate the sample at a fixed incident angle. The SPR image is created from the reflected light detected by a CCD camera.
Principles:
The minimum value of %R at the resonance angle provides the SPRM contrast. Huang and collaborators developed a microscope with a high-numerical-aperture (NA) objective, which improves the lateral resolution at the expense of the longitudinal resolution.
Principles:
Lateral resolution The resolution of conventional light microscopy is limited by the light diffraction limit. In SPRM, the excited surface plasmons adopt a horizontal configuration relative to the incident light beam. The polaritons will travel along the metal-dielectric interface, for a determined period, until they decay back into photons. Therefore, the resolution achieved by SPRM is determined by the propagation length ksp of the surface plasmons parallel to the incident plane.
Principles:
The separation between two areas should be approximately the magnitude of ksp in order for them to be resolved. Berger, Kooyman and Greve showed that the lateral resolution can be tuned by changing the excitation wavelength; better resolution is achieved when the excitation energy increases. Equations 4 and 12 define the magnitude of the wave vector of the surface plasmons,
where n2 is the refractive index of medium 2, ng is the refractive index of the metal film, and λ is the excitation wavelength.
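Equation 12 is not reproduced in the text; a standard form of the SPP wave vector written with the refractive indices defined in the sentence above (taking each permittivity as the squared refractive index) is:

```latex
% Standard SPP wave-vector magnitude written with refractive indices (\varepsilon = n^2):
k_{sp} \;=\; \frac{2\pi}{\lambda}\,\sqrt{\frac{n_g^{2}\, n_2^{2}}{n_g^{2} + n_2^{2}}}
```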
Instrumentation:
The surface plasmon resonance microscope is based on surface plasmon resonance and records the desired images of the structures present on the substrate using an instrument equipped with a CCD camera. In the past decade, SPR sensing has been demonstrated to be an exceedingly powerful technique and has been used quite extensively in the research and development of materials, biochemistry and the pharmaceutical sciences. The SPRM instrument works with the combination of the following main components: a light source (typically a He-Ne laser) whose beam travels through a prism attached to a glass slide coated with a thin metal film (typically gold or silver), where the light beam reflects at the gold/solution interface at an angle greater than the critical angle. The reflected light from the interface surface area is recorded by a CCD detector, and an image is recorded. Although the above-mentioned components are the most important ones for SPRM, additional accessories such as polarizers, filters, beam expanders, focusing lenses, a rotating stage, etc., similar to those of several other imaging methods, are installed and used in the instrumentation for an effective microscopic technique as demanded by the application. Figure 12 shows a typical SPRM. Depending on the application, and to optimize the imaging technique, researchers modify this basic instrumentation with design changes that even include altering the source beam. One such design change that resulted in a different SPRM is an objective-type configuration, as shown in Figure 11, with some modification of the optical configuration. SPRi systems are currently manufactured by well-known biomedical instrumentation manufacturers such as GE Life Sciences, HORIBA, Biosensing USA, etc. The cost of SPRi systems ranges from USD 100k to 250k, although simple demonstration prototypes can be made for about USD 2,000.
Sample preparation:
To perform measurements with SPRM, sample preparation is a critical step. There are two factors that can be affected by the immobilization step: one is the reliability and reproducibility of the acquired data; it is important to ensure stability of the recognition element, such as antibodies, proteins or enzymes, under the experimental conditions. Moreover, the stability of the immobilized specimens will affect the sensitivity and/or the limit of detection (LOD). One of the most popular immobilization methods is a self-assembled monolayer (SAM) on a gold surface. Jenkins and collaborators (2001) used mercaptoethanol patches surrounded by a SAM composed of octadecanethiol (ODT) to study the adsorption of egg-phosphatidylcholine on the ODT SAM. A pattern of ODT-mercaptoethanol was made onto a 50 nm gold film. The gold film was obtained through thermal evaporation on a LaSFN 9 glass. The lipid vesicles were deposited on the ODT SAM through adsorption, giving a final multilayer thickness greater than 80 Å. 11-Mercaptoundecanoic acid self-assembled monolayers (MUA-SAM) were formed on gold-coated BK7 slides. A PDMS plate was masked on the MUA-SAM chip. Clenbuterol (CLEN) was attached to BSA molecules through an amide bond between the carboxylic group of BSA and the amine group of the CLEN molecules. In order to immobilize BSA on the gold surface, the spots created through PDMS masking were functionalized with sulfo-NHS and EDC; subsequently a 1% BSA solution was poured into the spots and incubated for 1 hour. Non-immobilized BSA was rinsed out with PBS and a CLEN solution was poured onto the spots; unimmobilized CLEN was removed through a PBS rinse. An alkanethiol SAM was prepared in order to simultaneously measure the concentrations of horseradish peroxidase (Px), human immunoglobulin E (IgE), human choriogonadotropin (hCG) and human immunoglobulin G (IgG) through SPR. Alkanethiols with carbon chains of 11 and 16 carbons were self-assembled on the sensor chip. The antibodies were attached to the C16 alkanethiol, which had a terminal carboxylic group. The micro-patterned electrode was fabricated by gold deposition on microscope slides. PDMS stamping was used to produce an array of hydrophilic/hydrophobic surfaces; ODT treatment followed by immersion in 2-mercaptoethanol solutions rendered a functionalized surface for lipid membrane deposition. The patterned electrode was characterized through SPRM. In Figure 14B, the SPRM image reveals the size of the pockets, which was 100 μm × 100 μm, and they were 200 μm apart. As seen in the image, the remarkable contrast of the image is due to the high sensitivity of the technique.
Applications:
SPRM is a useful technique for measuring the concentration of biomolecules in solution, detecting binding molecules, and real-time monitoring of molecular interactions. It can be used as a biosensor for surface interactions of biological molecules: antigen-antibody binding, mapping and sorption kinetics. For example, one possible contributor to type 1 diabetes in children is the high-level presence of cow's milk antibodies IgG, IgA and IgM (mainly IgA) in their serum. Cow's milk antibodies can be detected in milk and serum samples using SPRM.
Applications:
SPRM is also advantageous for detecting the site-specific attachment of B or T lymphocytes on an antibody array. This technique is convenient for studying label-free, real-time interactions of cells on the surface, so SPRM can serve as a diagnostic tool for cell-surface adhesion kinetics.
Besides its merits, SPRM also has limitations. It is not applicable for detecting low-molecular-weight molecules. Although it is label-free, it requires very clean experimental conditions. The sensitivity of SPRM can be improved by coupling it with MALDI-MS.
There are a number of applications of SPRM, some of which are described here.
Membrane proteins Membrane proteins are responsible for the regulation of cellular responses to extracellular signals. It has been challenging to investigate the involvement of membrane proteins in disease biomarkers and therapeutic targets and their binding kinetics with their ligands. Traditional approaches could not reveal clear structures and functions of membrane proteins.
Applications:
In order to understand the structural details of membrane proteins, there is a need for an alternative analytical tool which can provide three-dimensional and sequential resolution for monitoring membrane proteins. Atomic force microscopy (AFM) is an excellent method for obtaining high-spatial-resolution images of membrane proteins, but it might not be helpful for investigating their binding kinetics. Fluorescence-based microscopy (FLM) can be used to study the interactions of membrane proteins in individual cells, but it requires the development of proper labels and needs different tactics for different target proteins.
Applications:
Furthermore, the host protein may be affected by the labeling. The binding kinetics of membrane proteins in single living cells can be studied via a label-free imaging method based on SPR microscopy without extracting the proteins from the cell membranes, which helps scientists work with the actual conformations of the membrane proteins. Furthermore, the distribution and local binding activities of membrane proteins in each cell can be mapped and calculated. SPR microscopy (SPRM) makes it possible to perform optical and fluorescence imaging of the same sample simultaneously, which combines the advantages of both label-based and label-free detection methods in a single setup.
Applications:
Detection of DNA hybridization SPR imaging is used to study multiple adsorption interactions in an array format under the same experimental conditions. Nelson and his coworkers introduced a multistep procedure to create DNA arrays on gold surfaces for use with SPR imaging. Affinity interactions can be studied for a variety of target molecules, e.g. proteins and nucleic acids. Mismatching of bases in the DNA sequence leads to a number of lethal diseases like Lynch syndrome, which carries a high risk of colon cancer. SPR imaging is useful for monitoring the adsorption of molecules on the gold surface, which is possible because of the change in reflectivity from the surface. First, the G-G mismatch pair is stabilized by attaching the ligand, a naphthyridine dimer, through hydrogen bonding, which makes hairpin structures in double-stranded DNA on the gold surface. Binding of the dimer with DNA enhances the free energy of hybridization, which causes a change in the index of refraction.
Applications:
A DNA array is fabricated to test the G–G mismatch stabilizing properties of the naphthyridine dimer. Each of the four immobilized sequences in the array differed by one base. The position of this base is indicated by an X in sequence 1, as shown in Figure 16. The SPR difference image is only detected for the sequence having a cytosine (C) base at the X position in sequence 1, the complementary sequence to sequence 2. However, the SPR difference image corresponding to the addition of sequence 2 in the presence of the naphthyridine dimer shows that, in addition to its complement, sequence 2 also hybridizes to the sequence that forms a G–G mismatch. These results demonstrate that SPR imaging is a promising tool for monitoring single-base mismatches and screening the hybridized molecules.
Applications:
Antibody binding to protein arrays SPR imaging can be used to study the binding of antibodies to a protein array. A protein array attached via amine functionalities on the gold surface is used to study the binding of antibodies. Immobilization of the proteins was done by flowing protein solutions through PDMS microchannels. Then the PDMS was removed from the surface and solutions of antibody were flowed over the array. A three-component protein array containing the proteins human fibrinogen, ovalbumin, and bovine IgG is shown in Figure 17, in SPR images obtained by Kariuki and co-workers. The contrast in the array is due to differences in refractive index resulting from local binding of antibodies. These images show that there is a high degree of antibody binding specificity and a small degree of non-specific adsorption of the antibody to the array background, which can be improved by modifying the array background. Based on these results, the SPR imaging technique can be adopted as a diagnostic tool for studying antibody interactions with protein arrays.
Applications:
Coupled with mass spectrometry Discovery and validation of protein biomarkers are crucial for disease diagnosis. Coupling of SPRM with a MALDI mass spectrometer (SUPRA-MS) enables the multiplex quantification of binding and molecular characterization on the basis of different masses. SUPRA-MS was used to detect, identify and characterize the potential breast cancer biomarker, the LAG3 protein, introduced into human plasma. Gold chips were prepared from glass slides by coating them with thin layers of chromium and gold in a sputtering process. The gold surface was functionalized using a solution of 11-mercapto-1-undecanol (11-MUOH) and 16-mercapto-1-hexadecanoic acid (16-MHA). This self-assembled monolayer was activated with sulfo-NHS and EDC. A pattern of sixteen droplets was deposited on the macroarray. Immunoglobulin G antibodies were spotted against lymphocyte activation gene 3 (α-LAG3) and rat serum albumin (α-RSA). After placing the biochip in the SPRi instrument and running buffer solution in the flow cell, α-LAG3 was injected. A special imaging station was used on the proteins that are attached; this station can also be placed on the MALDI instrument. Before placing on the MALDI, the captured proteins were reduced, digested and loaded with matrix in order to avoid contamination. Antigen density is directly proportional to the change in reflectivity ΔR because the evanescent wave penetration depth Lzc is larger than the thickness of the immobilized antigen layer.
Applications:
where ∂n/∂c is the refractive index increment of the molecule and S_SPR is the sensitivity of the prism reflectivity.
Applications:
A clean mass spectrum was obtained for the LAG3 protein due to good tryptic digestion and the homogeneity of the matrix (α-cyano-4-hydroxycinnamic acid). A relatively high-intensity m/z peak of the LAG3 protein was found at 1,422.70 amu with an average Mascot score of 87.9 ± 2.4. Validation of the MS results was further confirmed by MS-MS analysis. These results are similar to those of the classical analytical method of in-gel digestion. A S/N greater than 10, 100% reliability and detection at the femtomole level on-chip prove the credibility of this coupling technique. One can determine protein-protein interactions and on-chip peptide distribution with high spatial resolution using this technique.
Applications:
DNA aptamers Aptamers are particular DNA ligands that target biomolecules such as proteins. An SPR imaging platform is a good choice for characterizing aptamer-protein interactions. To study the aptamer-protein interaction, oligonucleotides are first grafted through the formation of a thiol self-assembled monolayer (SAM) on a gold substrate using a piezoelectric dispensing system. Thiol groups are introduced on the DNA nucleotides via N-hydroxysuccinimide (NHS). Target oligonucleotides bearing a primary amine group at their 5′ end are conjugated to HS-C11-NHS in phosphate buffer solution at pH 8.0 for one hour at room temperature. The aptamer-grafted biosensor is placed in the SPRM after rinsing. Thrombin is then co-injected with an excess of cytochrome C to test signal specificity. The concentration of free thrombin is determined from a calibration curve obtained by plotting the initial slope of the signal at the beginning of injection against concentration. The interaction of thrombin with the aptamer can be monitored on the microarray in real time during injections of thrombin at different concentrations. The solution-phase dissociation constant KDsol (3.16 ± 1.16 nM) is calculated from the measured concentrations of free thrombin.
Applications:
Here [THR---APT] = cTHR – [THR] is the equilibrium concentration of thrombin attached to aptamers in solution, and [APT] = cAPT – [THR---APT] is the concentration of free aptamers in solution.
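A minimal sketch of the calculation implied by these relations, assuming the standard mass-action definition KD = [THR][APT]/[THR---APT] (the function name and the numerical values are hypothetical illustrations only):

```python
# Solution-phase dissociation constant from total concentrations and the
# measured free-thrombin concentration, using the relations quoted above.
def kd_solution(c_thr, c_apt, free_thr):
    """KD = [THR][APT] / [THR---APT] (all concentrations in the same units)."""
    bound = c_thr - free_thr   # [THR---APT] = cTHR - [THR]
    free_apt = c_apt - bound   # [APT] = cAPT - [THR---APT]
    return free_thr * free_apt / bound

# Hypothetical numbers for illustration only (nM):
print(kd_solution(c_thr=10.0, c_apt=5.0, free_thr=7.0))  # ~4.7 nM
```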
Applications:
The surface-phase dissociation constant KDsurf (3.84 ± 0.68) is obtained by fitting a Langmuir adsorption isotherm to the equilibrium signals. The two dissociation constants are significantly different because KDsurf depends on the surface grafting density, as shown in Figure 19. This dependence extrapolates linearly at low grafting density (σ) to the solution-phase affinity. The difference in the SPRi image can give us information regarding the presence of binding and its specificity, but it is not suitable for quantification of free protein in the case of multiple affinity sites. Real-time monitoring of the interaction is possible by using SPRM to study the kinetics and affinity of the interactions.
Applications:
Detection of polymer interaction Although surface plasmon resonance imaging (SPRi) is mostly used in biology to characterize interactions between two biological molecules, it is also useful for monitoring the interactions between two polymers. In this approach, one polymer, called the host polymer (HP), is immobilized on the surface of a biochip and the other polymer, designated the guest polymer (GP), is introduced onto the SPRi biochip to study the interactions; for example, a host polymer of amine-functionalized poly(β-cyclodextrin) and a guest polymer of PEG(ada)4. An SPRi biochip was used for immobilization of HP at different concentrations. An array of HP active sites was produced on the chip. The attachment of HP was done through its amino groups to N-hydroxysuccinimide functionalities on the gold surface. First, the SPRi system was filled with running buffer solution, followed by placing the SPRi biochip into the analysis chamber. Two solutions of GP, at concentrations of 1 g/L and 0.1 g/L, were injected into the flow cell. The association and dissociation of the two polymers can be monitored in real time on the basis of the change in reflectivity, and images from the SPRM can be differentiated on the basis of white spots (association phase) and black spots (dissociation phase). PEG without adamantyl groups did not show adsorption on the β-cyclodextrin cavities. On the other hand, there was not any adsorption of GP without HP on the chip. The change in SPRi response at the reaction sites is provided by the capture of kinetic curves and real-time images from the CCD camera. Local changes in light reflectivity are directly related to the quantity of target molecules at each point. Variations at the surface of the chip provide comprehensive knowledge of molecular binding and kinetic processes.
Applications:
Bio-mineralization One important class of biomaterials is polymer hydroxyapatite, which is remarkably useful in the field of bone regeneration because of its resemblance to natural bone material. The advantage of hydroxyapatite, Ca10(PO4)6(OH)2, is that it starts to form inside the bone tissue through mineralization, which also promotes osteointegration. Biomineralization is also called calcification; in it, calcium cations come from cells and physiological fluids while phosphate anions are produced from the hydrolysis of phosphoesters and phosphoproteins as well as from the body fluids. This phenomenon is also tested in in vitro studies. For in vitro studies, polyamidoamine (PAMAM) dendrimers with amino- and carboxylic-acid external reactive shells are considered as the sensing phase. These dendrimers have to be immobilized on the gold surface, but they are not reactive toward gold. Hence, thiol groups have to be introduced at the terminals of the dendrimers so that the dendrimers can be attached to the gold surface. Carboxylic groups are functionalized with N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS) solutions in phosphate buffer. The functional groups (amide, amino and carboxyl) act as ionic pumps capturing calcium ions from the test fluids; the calcium cations then bind with phosphate anions to generate calcium-phosphate mineral nuclei on the dendrimer surface. SPRM is expected to be sensitive enough to provide important quantitative information on the occurrence and kinetics of mineralization. This detection of the mineralization is based on the specific mass change induced by the formation and growth of mineral nuclei. Nucleation and progress in mineralization can be monitored by SPRM as shown in Figure 20. PAMAM-containing sensors are fixed on the SPRi analysis platform and then exposed to experimental fluids in the flow cell as shown in Figure 21. SPRM is not adapted to sense the origin and nature of the mass change, but it detects the modification of the refractive index due to mineral precipitation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Form grabbing**
Form grabbing:
Form grabbing is a form of malware that works by retrieving authorization and log-in credentials from a web data form before it is passed over the Internet to a secure server. This allows the malware to avoid HTTPS encryption. This method is more effective than keylogger software because it will acquire the user’s credentials even if they are input using virtual keyboard, auto-fill, or copy and paste. It can then sort the information based on its variable names, such as email, account name, and password. Additionally, the form grabber will log the URL and title of the website the data was gathered from.
History:
The method was invented in 2003 by the developer of a variant of a trojan horse called Downloader.Barbew, which attempts to download Backdoor.Barbew from the Internet and bring it over to the local system for execution. However, it was not popularized as a well known type of malware attack until the emergence of the infamous banking trojan Zeus in 2007. Zeus was used to steal banking information by man-in-the-browser keystroke logging and form grabbing. Like Zeus, the Barbew trojan was initially spammed to large numbers of individuals through e-mails masquerading as big-name banking companies. Form grabbing as a method first advanced through iterations of Zeus that allowed the module to not only detect the grabbed form data but to also determine how useful the information taken was. In later versions, the form grabber was also privy to the website where the actual data was submitted, leaving sensitive information more vulnerable than before.
Known occurrences:
A trojan known as Tinba (Tiny Banker Trojan), first discovered in 2012, was built with form grabbing and is able to steal online banking credentials. Another program called Weyland-Yutani BOT was the first software designed to attack the macOS platform and can work on Firefox. The web-inject templates in Weyland-Yutani BOT were different from existing ones such as those of Zeus and SpyEye. Another known occurrence is the British Airways breach in September 2018. In the British Airways case, the organization's servers appear to have been compromised directly, with the attackers modifying one of the JavaScript files (the Modernizr JavaScript library, version 2.6.2) to include a PII/credit-card logging script that would grab the payment information and send it to a server controlled by the attacker, hosted on the "baways[.]com" domain with an SSL certificate issued by the "Comodo" Certificate Authority.
Known occurrences:
The British Airways mobile application also loads a webpage built with the same CSS and JavaScript components as the main website, including the malicious script installed by Magecart. Thus, the payments made using the British Airways mobile app were also affected.
Countermeasures:
Due to the recent increase in keylogging and form grabbing, antivirus companies are adding additional protection to counter the efforts of key-loggers and prevent the collection of passwords. These efforts have taken different forms across antivirus companies, such as Safepay, password managers, and others. To further counter form grabbing, users' privileges can be limited, which would prevent them from installing Browser Helper Objects (BHOs) and other form-grabbing software. Administrators should add a list of malicious servers to their firewalls. New countermeasures that use out-of-band communication to circumvent form grabbers and man-in-the-browser attacks are also emerging; examples include FormL3SS. Those that circumvent the threat use a different communication channel to send the sensitive data to the trusted server; thus, no information is entered on the compromised device. Alternative initiatives such as Fidelius use added hardware to protect the input/output of the compromised or believed-compromised device. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Conductive deafness-ptosis-skeletal anomalies syndrome**
Conductive deafness-ptosis-skeletal anomalies syndrome:
Conductive deafness-ptosis-skeletal anomalies syndrome, also known as Jackson Barr syndrome, is a rare, presumably autosomal recessive genetic disorder characterized by conductive hearing loss associated with external auditory canal-middle ear atresia (which worsens during ear infections), ptosis, and skeletal anomalies consisting of clinodactyly of the fifth fingers, radial head dislocation and internal rotation of the hips. Additional findings include a thin nose, delayed hair growth, and teeth dysplasia. It has been described in two American sisters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pudendal canal**
Pudendal canal:
The pudendal canal (also called Alcock's canal) is an anatomical structure formed by the obturator fascia (fascia of the obturator internus muscle) lining the lateral wall of the ischioanal fossa. The internal pudendal artery and veins, and pudendal nerve pass through the pudendal canal, and the perineal nerve arises within it.
Clinical significance:
Pudendal nerve entrapment can occur when the pudendal nerve is compressed while it passes through the pudendal canal.
History:
The pudendal canal is also known as Alcock's canal, named after Benjamin Alcock. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Atlanto-occipital joint**
Atlanto-occipital joint:
The atlanto-occipital joint (Capsula articularis atlantooccipitalis) is an articulation between the atlas bone and the occipital bone. It consists of a pair of condyloid joints. It is a synovial joint.
Structure:
The atlanto-occipital joint is an articulation between the atlas bone and the occipital bone. It consists of a pair of condyloid joints. It is a synovial joint.
Ligaments The ligaments connecting the bones are: the two articular capsules, the posterior atlanto-occipital membrane, and the anterior atlanto-occipital membrane. Capsule The capsules of the atlantooccipital articulation surround the condyles of the occipital bone, and connect them with the articular processes of the atlas: they are thin and loose.
Function:
The movements permitted in this joint are: (a) flexion and extension around the mediolateral axis, which give rise to the ordinary forward and backward nodding of the head.
(b) slight lateral motion, lateroflexion, to one or other side around the anteroposterior axis. Flexion is produced mainly by the action of the longi capitis and recti capitis anteriores; extension by the recti capitis posteriores major and minor, the obliquus capitis superior, the semispinalis capitis, splenius capitis, sternocleidomastoideus, and upper fibers of the trapezius.
The recti laterales are concerned in the lateral movement, assisted by the trapezius, splenius capitis, semispinalis capitis, and the sternocleidomastoideus of the same side, all acting together.
Clinical significance:
Dislocation The atlanto-occipital joint may be dislocated, especially from violent accidents such as traffic collisions. This may be diagnosed using CT scans or magnetic resonance imaging of the head and neck. Surgery may be used to fix the joint and any associated bone fractures. Neck movement may be reduced long after this injury. Such injuries may also lead to hypermobility, which may be diagnosed with radiographs. This is especially true if traction is used during treatment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |