**Fungisporin**
Fungisporin:
Fungisporin is an antibiotic with the molecular formula C28H36N4O4 which is produced by Aspergillus and Penicillium species. The cyclic peptide is a tetramer, consisting of one each of the two enantiomeric forms of phenylalanine and of valine.
**RDock**
RDock:
rDock (previously RiboDock) is an open-source molecular docking software package that can be used for docking small molecules against proteins and nucleic acids. It is primarily designed for high-throughput virtual screening and binding mode prediction.
History:
The development of rDock started in 1998 at RiboTargets (later Vernalis (R&D) Ltd). The software was originally called RiboDock. Development continued until 2006, when the software was licensed to the University of York for academic distribution and maintenance.
History:
Six years later, in 2012, Vernalis and the University of York decided to release rDock as open-source software to allow its further development by the wider community. The version that was released as open source was developed and supported by the University of Barcelona on SourceForge. Development on SourceForge stalled after June 2014 and the repository is considered deprecated after the migration to GitHub. A fork named RxDock continued the development of rDock from April 2019 until March 2022 on GitLab. As of April 2022, the RxDock project development activity is very low.
**Class council**
Class council:
The term class council is derived from the classroom assembly ("Réunion coopérative", "Conseil") of Freinet pedagogy.
Freinet organized the class like an (agricultural) cooperative society.
Much like farmers organize the cultivation and marketing of their products together in a cooperative, the pupils plan and organize their learning themselves.
The class council is an agency of self-determination in which all pupils participate with equal rights.
An alternative model of the class council is based on work of Rudolf Dreikurs, a pupil of Alfred Adler.
Dreikurs developed the class council from individual psychology, without reference to Freinet pedagogy, as an instrument of problem solving and mediation rather than as an agency of self-determination.
In Freinet pedagogy:
Tasks and fields of activity The class council can:
deliberate on excursions.
deliberate on teaching methods.
deliberate on subject matter.
decide on cooperations.
formulate missions and requests to working groups.
present and discuss learning results.
plan the social life of the class.
plan the expenses of the class.
discuss relations and problems with other classes.
discuss issues of self-determination.
The intent is to promote a sense of community, community spirit and a democratic attitude. Communicative competencies and independent study skills are also meant to benefit. In the class council pupils can present self-determined learning projects and tasks which are useful to run a democratic class council.
Valuable class institution The class council is a means to promote citizenship education. The individual learning goals and the social interests of the pupils are the focus of the class council. The self-determination of the class also includes economic experience, since excursions, common property and consumption items require funding.
Differences between curricular educational objectives and individual learning goals of the pupils may lead to a conflict of interests for the teacher and the class. Experiences in class councils have shown that pupils are eager to follow extra-curricular learning objectives.
Administration of a class council During the introduction stage the class teacher can perform the necessary administrative tasks.
In a later stage the pupils can rotate the offices of chairman, keeper of the minutes, rule monitor, time monitor and other tasks.
The pupils can also determine the agenda.
Agenda The agenda for a meeting of the class council can be prepared by pupils. A useful means is a bulletin board/wall newspaper that is used to collect contributions in different categories (e.g. criticism, appreciation, proposals). All pupils in a class can make contributions to the bulletin board.
Role of the teacher during the class council The teacher is meant to actively reduce his influence and to allow for democratic and social processes in the group.
A class left to its own devices may run the risk of encouraging oligarchs.
A successful instrument of self-determination supposedly develops best with the support of the teacher in the role of an advisor.
Rules The self-given rules of the class council should suffice as long as they are not in contradiction with school policy or the law.
Prearranged rules may be helpful but undermine the self-determination of the pupils.
The view of what constitutes an orderly class council may be quite different for pupils and for teachers.
Pupils need scope for development and the chance to gain their own experience by experimenting with and inventing their own democratic behaviors and rules.
Criticism on contemporary class councils from the perspective of Freinet pedagogy:
The class council today is often removed from its pedagogical context in Freinet pedagogy and is modified and reduced. Freinet pedagogues criticize that the class council loses its democratic and emancipatory aspects and becomes an extension of class teacher and school administration.
Even in the BLK's toolkit of democratic building blocks, the class council is primarily a means for problem solving.
**Difference due to memory**
Difference due to memory:
Difference due to memory (Dm) indexes differences in neural activity during the study phase of an experiment for items that subsequently are remembered compared to items that are later forgotten. It is mainly discussed as an event-related potential (ERP) effect that appears in studies employing a subsequent memory paradigm, in which ERPs are recorded when a participant is studying a list of materials and trials are sorted as a function of whether they go on to be remembered or not in the test phase. For meaningful study material, such as words or line drawings, items that are subsequently remembered typically elicit a more positive waveform during the study phase (see Main Paradigms for further information on subsequent memory). This difference typically occurs in the range of 400–800 milliseconds (ms) and is generally greatest over centro-parietal recording sites, although these characteristics are modulated by many factors.
History:
The first report of subsequently remembered items eliciting a more positive ERP waveform than subsequently forgotten items during the study phase was by Sanquist et al., in 1980. This paper looked at a subset of the participants' ERPs at the study phase and found those trials subsequently remembered had a more positive waveform in the time range of the late positive complex (LPC), approximately 450–750 ms after stimulus presentation. In the early and mid 1980s, several studies noted modulation of the P300 (P3b) component due to subsequent memory, with items that are remembered having a larger amplitude. In 1987, Paller, Kutas and Mayes, consistent with previous reports, observed that subsequently remembered items elicited more positivity in the later portions of the waveform compared to items later forgotten; they termed these observed differences at the study phase as "the difference due to memory" or Dm effect. Since this seminal paper by Paller, Kutas and Mayes, a wealth of research using ERPs has been conducted using the Dm effect and detailing the multitude of factors that influence the manifestation of the Dm and, by inference, encoding success. Additionally, the Dm has been studied using intracranial recordings and in a variety of functional magnetic resonance imaging (fMRI) studies.
Main paradigms:
Overwhelmingly, the paradigm used to elicit a Dm effect in ERPs has been the "subsequent memory paradigm." An experiment employing a subsequent memory paradigm generally consists of two phases, a study phase (encoding phase) and a test phase (retrieval phase), with ERPs from scalp electrodes being recorded during each phase, time locked to stimulus onset. In the study phase, a series of items is displayed to the participant, usually one at a time; these items are most often words but pictures and abstract figures have also been used (though with less consistent Dm effects; see "Functional Sensitivity"). The test phase normally mixes together items that were shown during the study phase with others that are being shown for the first time, and the participant must classify each item as being "old" (if it was in the study phase) or "new" (if it is the first time it has been seen).
Main paradigms:
Critically for the Dm effect, the responses a participant makes to the old items in the test phase are used to backsort trials in the study phase as either "subsequently remembered" or "subsequently forgotten." If during the test phase a participant correctly classifies an old item as old, it falls into the "subsequently remembered" trial type for the study phase. On the other hand, if a person incorrectly calls an old item new at the test phase, or fails to respond "old" to an old item, this item becomes classified as "subsequently forgotten." The ERP waveforms, during the study phase, of all subsequently remembered trials are compared with those of all subsequently forgotten trials and a greater positivity is generally seen for the subsequently remembered trials.
Main paradigms:
For example, in the study phase of a subsequent memory paradigm, a participant may see the words "frog," "tree," and "car." Following the study phase the test phase occurs and the participant sees the words "shirt," "car," and "frog," and must say if each word is old or new. If the participant correctly classifies "car" as old, it becomes a subsequently remembered trial; however, if the subject incorrectly says "frog" is new, it is a subsequently forgotten trial. The neural activity elicited by the first presentation of "car" and "frog" at the study phase is then compared and the Dm effect is derived from this comparison.
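As an illustration of how this backsorting might be carried out in practice, the following is a minimal sketch in Python, assuming hypothetical arrays of study-phase EEG epochs and test-phase responses; the array shapes, sampling rate and the 400–800 ms averaging window are illustrative choices, not taken from any specific study.

```python
import numpy as np

def dm_difference_wave(study_epochs, test_correct, sfreq=250.0, t_min=-0.2):
    """Backsort study-phase epochs by subsequent memory and return the Dm difference wave.

    study_epochs : array (n_items, n_channels, n_samples), study-phase ERP epochs
    test_correct : boolean array (n_items,), True if the item was later judged "old"
    """
    test_correct = np.asarray(test_correct, dtype=bool)

    # Backsorting: split study trials by what happened at test
    remembered = study_epochs[test_correct]    # subsequently remembered
    forgotten = study_epochs[~test_correct]    # subsequently forgotten

    # Average within each class and subtract; the remembered-minus-forgotten
    # difference wave is the Dm effect
    dm = remembered.mean(axis=0) - forgotten.mean(axis=0)

    # Mean amplitude of the difference in a 400-800 ms post-stimulus window,
    # the range in which the Dm is typically reported
    times = np.arange(study_epochs.shape[-1]) / sfreq + t_min
    window = (times >= 0.4) & (times <= 0.8)
    return dm, dm[:, window].mean(axis=1)      # (channels x samples), (channels,)

# Toy usage with random placeholder data: 120 items, 32 channels, 1.2 s epochs at 250 Hz
rng = np.random.default_rng(0)
dm_wave, dm_window_mean = dm_difference_wave(
    rng.normal(size=(120, 32, 300)), rng.random(120) > 0.4)
print(dm_wave.shape, dm_window_mean.shape)     # (32, 300) (32,)
```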
Main paradigms:
A "continuous recognition paradigm" has also been known to elicit a Dm effect. In the continuous recognition paradigm, study and test phases are not separate entities, but rather, items are continuously presented and the participant is instructed to respond to an item as "old" if it has been seen before (generally presented a second time) in this continual stream of item presentation. Items that were correctly called "old" are the subsequently remembered trials, and items that were "missed" (not called old upon second presentation) make up the subsequently forgotten trials. The neural activity for subsequently remembered and forgotten trials is then compared for the first presentation of the items, and a Dm effect is computed.
Component characteristics:
Broadly speaking, the Dm ERP effect is any difference in neural activity recorded during the study phase of an experiment that differentiates subsequently remembered items and subsequently forgotten items. Typically, this difference is seen in the form of subsequently remembered items eliciting waveforms that are more positive than subsequently forgotten items during encoding of the item. Most often, the difference between subsequently remembered and subsequently forgotten items emerges at approximately 400 ms post stimulus onset and is sustained until 800 or 900 ms, though this can vary depending on the stimuli used and experimental instructions. The timing of this enhanced positivity suggests that the Dm may be a modulation of several ERP components, including the N400 component, with subsequently remembered items eliciting a less negative amplitude, as well as the P300 or an LPC, where items that are later remembered yield a more positive amplitude in this waveform. In terms of scalp topography, the Dm effect is generally largest over centro-parietal recording sites. However, a Dm effect with a more anterior distribution can be observed by varying the instructions participants receive; this is discussed further below.
Functional sensitivity:
The canonical characteristics of the Dm effect described above give a general description of the component; however, the strength, timing and topographical distribution of the effect, and even whether it is seen at all, are sensitive to a variety of experimental manipulations.
Functional sensitivity:
Incidental versus intentional encoding A large number of Dm ERP studies employ an incidental encoding approach to the subsequent memory paradigm. In this case the participant pays attention to the items presented during the study phase, unaware that a memory test will follow. This was the approach used by Paller, Kutas and Mayes in the first Dm study, and this technique reliably elicits a Dm effect. Experiments wherein the participant is explicitly told to remember the items presented during the study phase (intentional encoding) because a memory test will follow have yielded slightly differing results. Several studies have indeed recorded a Dm effect using intentional encoding instructions, but this effect sometimes differs from the Dm effect from incidental encoding. In a direct comparison of incidental vs. intentional encoding, Munte et al. (1988) found a stronger Dm effect for the incidental encoding condition. Moreover, the Dm effect for the intentional encoding condition appeared later than the Dm for incidental encoding, and also showed a more frontal topography compared to the centro-parietal distribution observed in incidental encoding. This effect of a delayed and more frontal distribution for intentional encoding paradigms was also seen in two other reports.
Functional sensitivity:
Levels of processing and rehearsal at encoding Perhaps the most well known manipulation during the subsequent memory paradigm is how the participant is instructed to encode or process the material during the study phase. Generally speaking, participants may be instructed to observe the items at study and make a judgment regarding each item; crucially, this judgment may be of the "shallow" variety, such as deciding if the word presented contains more than two vowels, or it may be a "deeper" judgment (e.g. is this item edible?). These deeper judgments are more of the semantic variety and typically lead to a better representation of the item. This is also reflected in the Dm effect. In the seminal paper by Paller, Kutas and Mayes (1987), participants made shallow judgments based on the physical properties of the word or deeper judgments reflective of more semantic information of the word. The Dm effect for words encoded in a semantic fashion was more positive than the Dm effect observed for words non-semantically encoded. It is important to note that a Dm effect can be seen for shallower processing as well, as was the case in one of the shallow processing tasks in the Paller, Kutas and Mayes (1987) paper, as well as in Friedman, Ritter and Snodgrass (1996). In 1997, Weyerts et al. found that both recognition memory and the Dm effect were larger for pairs of words that were relationally encoded (e.g. are these two words semantically related) versus non-relationally encoded (e.g. can the color white be associated with one of these words). This further suggests that the Dm effect may be enhanced when items are encoded on a semantic level.
Functional sensitivity:
Also, the Dm effect seems sensitive to the type of rehearsal strategies a participant performs. Specifically, Fabiani, Karis and Donchin found that P300 modulation at encoding (particularly for "isolates", stimuli presented in a deviant font relative to all other stimuli) correlated with later memory for subjects who engaged in rote rehearsal (such as simply repeating the word in one's head) but not for those who undertook elaborative rehearsal, which emphasizes linking the current word to other words presented and pre-existing knowledge. However, in the 1990 report as well as a report by Karis, Fabiani and Donchin (1984), a later positivity emerged in frontal electrodes corresponding to subsequent memory, and this was greater for those in the elaborative rehearsal condition.
Functional sensitivity:
Type of memory at retrieval The Dm effect has been shown to be sensitive to how participants are asked to display their memory for previous items. In a 1988 paper by Paller, McCarthy and Wood, a greater Dm effect was observed for items that were freely recalled with no external cues, compared to items that were presented and the subject was asked if he or she recognizes the item as old. This is suggestive of the Dm effect being larger for stronger representations, as recall is generally more difficult than recognition.
Functional sensitivity:
In a similar vein, Friedman & Trott (2000) found that young adult participants displayed a robust Dm effect when they not only remembered seeing a word, but could also remember some details of the context of when it was presented. In comparison, a Dm effect for items that were subsequently judged as old, but only from a general sense of familiarity, did not emerge. A Dm effect was found in both conditions for older adults.
Functional sensitivity:
Stimuli A host of studies have found a Dm effect when presenting words as stimuli. However, experiments using pictures or abstract figures have found less consistent Dm effects. Experiments using a continuous recognition paradigm have found a Dm effect for pictures of everyday objects. Van Petten and Senkfor (1996) did not find a Dm effect when they presented participants with abstract drawings; however, a Dm effect was observed in the same group of participants when words were used as stimuli. A similar pattern of results is described by Fox, Michie and Coltheart (1990). Coupling the results of Dm effects for words and common pictures with the lack of Dm effects for abstract figures suggests the Dm effect may be contingent on using meaningful stimuli or some pre-existing knowledge of the stimuli.
Functional sensitivity:
False memories In an elegant report by Gonsalves and Paller (2000), the Dm effect was found to be greater for false memories compared to correctly classified memories. In the study phase of this subsequent memory paradigm, participants saw a word which was followed either by a picture of that word or a blank box, in which case participants were asked to imagine a picture of the word they just saw. In the test phase, participants were shown a word and asked if it was presented with a picture during the study phase. 30% of the time participants erroneously said a picture accompanied a word when it had only been imagined by the participant. The waveform at the study phase of trials in which the participant falsely recalled studying the word with a picture elicited a more positive going amplitude compared to the trials where the participant correctly said only the word was presented. Gonsalves and Paller (2000) interpreted this as indicating that better imagery at encoding led to greater source confusions at retrieval (“did I actually see this or just imagine it?”). More generally, this study demonstrates that backsorting procedures need not be limited to simply items remembered versus forgotten, but could include a wide range of more complex comparisons as long as test phase behaviors can be linked to specific study phase events.
Sources:
To the extent that greater positivity for subsequently remembered items spans several ERP components (P300, N400, and an LPC), coupled with differing topographical distributions depending on task, it is likely that the neural generators of the Dm effect are widespread in the brain. Pinning down the location in the brain that gives rise to any ERP component is very difficult if not impossible because of the inverse problem.
Sources:
However, evidence from other cognitive neuroscience techniques can help to shed light on this question. Given that the Dm effect seems to be reflective of mnemonic processes at encoding, one brain area likely to play a role is the medial temporal lobe (MTL), as it is well known that this brain area gives rise to the type of memory observed in Dm studies. Egler et al. (1997) recorded electrical activity directly from the MTL in patients about to undergo surgery for temporal lobe epilepsy. While recording directly from the MTL, participants were shown novel stimuli and then later had a memory test for those stimuli; it was reported that the magnitude of the electrical activity from the MTL during the initial presentation of the stimuli correlated with subsequent memory performance.
Sources:
Additionally, fMRI studies using subsequent memory paradigms have found evidence suggesting areas of the MTL are involved in the Dm effect, though the precise areas involved and their contributions are unclear. Further, several fMRI studies have reported prefrontal cortex (PFC) activity during study predictive of subsequent memory, as well as activity in the fusiform gyrus. Taken together, these findings from complementary cognitive neuroscience methods suggest the neural events at encoding that lead to successful later memory are diffuse in the brain and unfold on multiple time scales. The Dm effect seen in ERPs likely represents a subset of these encoding processes.
Theory:
Considering that the Dm is a comparison of neural activity during encoding, and that this activity is predictive of subsequent memory, it is likely the Dm indexes some difference between subsequently remembered vs. forgotten materials at encoding, presumably reflective of learning. The nature of this difference is not entirely clear though. Van Petten and Senkfor (1996) suggest there may be a "family of Dm effects" that occur dependent on a variety of factors, and this seems quite plausible given the wide range of differences observed in the Dm as a function of stimuli used, encoding instructions, orienting tasks and types of retrieval decisions. Future research using different manipulations of the subsequent memory paradigm, as well as combining methods such as ERPs and fMRI or transcranial magnetic stimulation and fMRI, has great potential to lead to further understanding of the Dm effect and, more generally, of the neural and cognitive factors that promote later memory under different circumstances.
**Fossil word**
Fossil word:
A fossil word is a word that is broadly obsolete but remains in current use due to its presence within an idiom, word sense, or phrase. An example of a word sense is 'ado' in 'much ado'. An example of a phrase is 'in point' (relevant), which is retained in the larger phrases 'case in point' (also 'case on point' in the legal context) and 'in point of fact', but is rarely used outside of a legal context.
English-language examples:
ado, as in "without further ado" or "with no further ado" or "much ado about nothing", although the homologous form "to-do" remains attested ("make a to-do", "a big to-do", etc.)
bandy, as in "bandy about" or "bandy-legged"
bated, as in "wait with bated breath", although the derived term "abate" remains in non-idiom-specific use
beck, as in "at one's beck and call", although the verb form "beckon" is still used in non-idiom-specific use
champing, as in "champing at the bit", where "champ" is an obsolete precursor to "chomp", in current use
coign, as in "coign of vantage"
deserts, as in "just deserts", although singular "desert" in the sense of "state of deserving" occurs in non-idiom-specific contexts including law and philosophy. "Dessert" is a French loanword, meaning "removing what has been served," and has only a distant etymological connection.
dint, as in "by dint of"
dudgeon, as in "in high dudgeon"
eke, as in "eke out"
fettle, as in "in fine fettle", although the verb, 'to fettle', remains in specialized use in metal casting.
fro, as in "to and fro"
goodly, as in "goodly number"
helter skelter, as in "scattered helter skelter about the office", from Middle English skelten, "to hasten"
inclement, as in "inclement weather"
jetsam, as in "flotsam and jetsam", except in legal contexts (especially admiralty, property, and international law)
kith, as in "kith and kin"
lam, as in "on the lam"
lo, as in "lo and behold"
loggerheads, as in "at loggerheads" or loggerhead turtle
muchness, as in "much of a muchness"
shebang, as in "the whole shebang", although the word is now used as an unrelated common noun in programmers' jargon.
shrive, preserved only in inflected forms occurring only as part of fixed phrases: 'shrift' in "short shrift" and 'shrove' in "Shrove Tuesday"
span and spick, as in "spick and span"
turpitude, as in "moral turpitude"
vim, as in "vim and vigor"
wedlock, as in "out of wedlock"
wend, as in "wend your way"
yore, as in "of yore", usually "days of yore"
"Born fossils"
These words were formed from other languages, by elision, or by mincing of other fixed phrases.
caboodle, as in "kit and caboodle" (evolved from "kit and boodle", itself a fixed phrase borrowed as a unit from Dutch kitte en boedel)
druthers, as in "if I had my druthers..." (formed by elision from "would rather" and never occurring outside this phrase to begin with)
tarnation, as in "what in tarnation...?" (evolved in the context of fixed phrases formed by mincing of previously fixed phrases that include the term "damnation")
nother, as in "a whole nother..." (fixed phrase formed by rebracketing another as a nother, then inserting whole for emphasis; almost never occurs outside this phrase)
**Meteor shower**
Meteor shower:
A meteor shower is a celestial event in which a number of meteors are observed to radiate, or originate, from one point in the night sky. These meteors are caused by streams of cosmic debris called meteoroids entering Earth's atmosphere at extremely high speeds on parallel trajectories. Most meteors are smaller than a grain of sand, so almost all of them disintegrate and never hit the Earth's surface. Very intense or unusual meteor showers are known as meteor outbursts and meteor storms, which produce at least 1,000 meteors an hour, most notably from the Leonids. The Meteor Data Centre lists over 900 suspected meteor showers of which about 100 are well established. Several organizations point to viewing opportunities on the Internet. NASA maintains a daily map of active meteor showers.
Historical developments:
A meteor shower in August 1583 was recorded in the Timbuktu manuscripts.
Historical developments:
In the modern era, the first great meteor storm was the Leonids of November 1833. One estimate is a peak rate of over one hundred thousand meteors an hour, but another, done as the storm abated, estimated more than two hundred thousand meteors during the 9 hours of the storm, over the entire region of North America east of the Rocky Mountains. American Denison Olmsted (1791–1859) explained the event most accurately. After spending the last weeks of 1833 collecting information, he presented his findings in January 1834 to the American Journal of Science and Arts, published in January–April 1834, and January 1836. He noted the shower was of short duration and was not seen in Europe, and that the meteors radiated from a point in the constellation of Leo. He speculated the meteors had originated from a cloud of particles in space. Work continued, and researchers gradually came to understand the annual nature of showers, though the occurrence of storms continued to perplex them. The actual nature of meteors was still debated during the 19th century. Meteors were conceived as an atmospheric phenomenon by many scientists (Alexander von Humboldt, Adolphe Quetelet, Julius Schmidt) until the Italian astronomer Giovanni Schiaparelli ascertained the relation between meteors and comets in his work "Notes upon the astronomical theory of the falling stars" (1867). In the 1890s, Irish astronomer George Johnstone Stoney (1826–1911) and British astronomer Arthur Matthew Weld Downing (1850–1917) were the first to attempt to calculate the position of the dust at Earth's orbit. They studied the dust ejected in 1866 by comet 55P/Tempel-Tuttle before the anticipated Leonid shower return of 1898 and 1899. Meteor storms were expected, but the final calculations showed that most of the dust would be far inside Earth's orbit. The same results were independently arrived at by Adolf Berberich of the Königliches Astronomisches Rechen Institut (Royal Astronomical Computation Institute) in Berlin, Germany. Although the absence of meteor storms that season confirmed the calculations, the advance of much better computing tools was needed to arrive at reliable predictions.
Historical developments:
In 1981, Donald K. Yeomans of the Jet Propulsion Laboratory reviewed the history of meteor showers for the Leonids and the history of the dynamic orbit of Comet Tempel-Tuttle. A graph from it was adapted and re-published in Sky and Telescope. It showed relative positions of the Earth and Tempel-Tuttle and marks where Earth encountered dense dust. This showed that the meteoroids are mostly behind and outside the path of the comet, but paths of the Earth through the cloud of particles resulting in powerful storms were very near paths of nearly no activity.
Historical developments:
In 1985, E. D. Kondrat'eva and E. A. Reznikov of Kazan State University first correctly identified the years when dust was released which was responsible for several past Leonid meteor storms. In 1995, Peter Jenniskens predicted the 1995 Alpha Monocerotids outburst from dust trails. In anticipation of the 1999 Leonid storm, Robert H. McNaught, David Asher, and Finland's Esko Lyytinen were the first to apply this method in the West. In 2006 Jenniskens published predictions for future dust trail encounters covering the next 50 years. Jérémie Vaubaillon continues to update predictions based on observations each year for the Institut de Mécanique Céleste et de Calcul des Éphémérides (IMCCE).
Radiant point:
Because meteor shower particles are all traveling in parallel paths and at the same velocity, they will appear to an observer below to radiate away from a single point in the sky. This radiant point is caused by the effect of perspective, similar to parallel railroad tracks converging at a single vanishing point on the horizon. Meteor showers are normally named after the constellation from which the meteors appear to originate. This "fixed point" slowly moves across the sky during the night due to the Earth turning on its axis, the same reason the stars appear to slowly march across the sky. The radiant also moves slightly from night to night against the background stars (radiant drift) due to the Earth moving in its orbit around the Sun. See IMO Meteor Shower Calendar 2017 (International Meteor Organization) for maps of drifting "fixed points." When the moving radiant is at the highest point it will reach in the observer's sky that night, the Sun will be just clearing the eastern horizon. For this reason, the best viewing time for a meteor shower is generally slightly before dawn — a compromise between the maximum number of meteors available for viewing and the brightening sky, which makes them harder to see.
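The geometry behind the radiant can be made concrete with a small numerical sketch: parallel 3D trajectories projected through a pinhole camera converge, when traced backward, toward the projection of their common direction of motion. The Python snippet below uses made-up coordinates purely for illustration.

```python
import numpy as np

def project(point, f=1.0):
    """Pinhole projection onto the image plane z = f (observer at the origin, looking along +z)."""
    x, y, z = point
    return np.array([f * x / z, f * y / z])

# All meteoroids of a shower share one velocity direction (parallel trajectories).
v = np.array([0.3, -0.2, -1.0])            # arbitrary common direction, moving toward the observer
starts = np.array([[5.0, 2.0, 50.0],        # arbitrary entry points high in the sky
                   [-3.0, 4.0, 60.0],
                   [1.0, -6.0, 55.0]])

# The radiant is the vanishing point of the direction the meteors come FROM, i.e. -v.
print("radiant:", project(-v * 1e9))

# Tracing each path backward, the projected positions all approach that same point.
for s in starts:
    print([np.round(project(s - t * v), 3) for t in (0.0, 10.0, 100.0, 1000.0)])
```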
Naming:
Meteor showers are named after the nearest constellation, or bright star with a Greek or Roman letter assigned that is close to the radiant position at the peak of the shower, whereby the grammatical declension of the Latin possessive form is replaced by "id" or "ids." Hence, meteors radiating from near the star Delta Aquarii (declension "-i") are called the Delta Aquariids. The International Astronomical Union's Task Group on Meteor Shower Nomenclature and the IAU's Meteor Data Center keep track of meteor shower nomenclature and which showers are established.
Origin of meteoroid streams:
A meteor shower results from an interaction between a planet, such as Earth, and streams of debris from a comet. Comets can produce debris by water vapor drag, as demonstrated by Fred Whipple in 1951, and by breakup. Whipple envisioned comets as "dirty snowballs," made up of rock embedded in ice, orbiting the Sun. The "ice" may be water, methane, ammonia, or other volatiles, alone or in combination. The "rock" may vary in size from a dust mote to a small boulder. Dust mote sized solids are orders of magnitude more common than those the size of sand grains, which, in turn, are similarly more common than those the size of pebbles, and so on. When the ice warms and sublimates, the vapor can drag along dust, sand, and pebbles.
Origin of meteoroid streams:
Each time a comet swings by the Sun in its orbit, some of its ice vaporizes, and a certain number of meteoroids will be shed. The meteoroids spread out along the entire trajectory of the comet to form a meteoroid stream, also known as a "dust trail" (as opposed to a comet's "gas tail" caused by the tiny particles that are quickly blown away by solar radiation pressure).
Origin of meteoroid streams:
Recently, Peter Jenniskens has argued that most of our short-period meteor showers are not from the normal water vapor drag of active comets, but the product of infrequent disintegrations, when large chunks break off a mostly dormant comet. Examples are the Quadrantids and Geminids, which originated from a breakup of asteroid-looking objects, (196256) 2003 EH1 and 3200 Phaethon, respectively, about 500 and 1000 years ago. The fragments tend to fall apart quickly into dust, sand, and pebbles and spread out along the comet's orbit to form a dense meteoroid stream, which subsequently evolves into Earth's path.
Dynamical evolution of meteoroid streams:
Shortly after Whipple predicted that dust particles traveled at low speeds relative to the comet, Milos Plavec was the first to offer the idea of a dust trail, when he calculated how meteoroids, once freed from the comet, would drift mostly in front of or behind the comet after completing one orbit. The effect is simple celestial mechanics – the material drifts only a little laterally away from the comet while drifting ahead or behind the comet because some particles make a wider orbit than others. These dust trails are sometimes observed in comet images taken at mid infrared wavelengths (heat radiation), where dust particles from the previous return to the Sun are spread along the orbit of the comet (see figures).
Dynamical evolution of meteoroid streams:
The gravitational pull of the planets determines where the dust trail would pass by Earth orbit, much like a gardener directing a hose to water a distant plant. Most years, those trails would miss the Earth altogether, but in some years, the Earth is showered by meteors. This effect was first demonstrated from observations of the 1995 alpha Monocerotids, and from earlier not widely known identifications of past Earth storms.
Dynamical evolution of meteoroid streams:
Over more extended periods, the dust trails can evolve in complicated ways. For example, the orbits of some repeating comets, and of the meteoroids leaving them, are in resonant orbits with Jupiter or one of the other large planets, so that a whole number of revolutions of one equals a different whole number of revolutions of the other. This creates a shower component called a filament.
Dynamical evolution of meteoroid streams:
A second effect is a close encounter with a planet. When the meteoroids pass by Earth, some are accelerated (making wider orbits around the Sun), others are decelerated (making shorter orbits), resulting in gaps in the dust trail in the next return (like opening a curtain, with grains piling up at the beginning and end of the gap). Also, Jupiter's perturbation can dramatically change sections of the dust trail, especially for short-period comets, when the grains approach the giant planet at their furthest point along the orbit around the Sun, moving most slowly. As a result, the trail has a clumping, a braiding or a tangling of crescents, of each release of material.
Dynamical evolution of meteoroid streams:
The third effect is that of radiation pressure which will push less massive particles into orbits further from the Sun – while more massive objects (responsible for bolides or fireballs) will tend to be affected less by radiation pressure. This makes some dust trail encounters rich in bright meteors, others rich in faint meteors.
Over time, these effects disperse the meteoroids and create a broader stream. The meteors we see from these streams are part of annual showers, because Earth encounters those streams every year at much the same rate.
When the meteoroids collide with other meteoroids in the zodiacal cloud, they lose their stream association and become part of the "sporadic meteors" background. Long since dispersed from any stream or trail, they form isolated meteors, not a part of any shower. These random meteors will not appear to come from the radiant of the leading shower.
Famous meteor showers:
Perseids and Leonids In most years, the most visible meteor shower is the Perseids, which peak on 12 August of each year at over one meteor per minute. NASA has a tool to calculate how many meteors per hour are visible from one's observing location.
Famous meteor showers:
The Leonid meteor shower peaks around 17 November of each year. Roughly every 33 years, the Leonid shower produces a meteor storm, peaking at rates of thousands of meteors per hour. Leonid storms gave birth to the term meteor shower when it was first realised that, during the November 1833 storm, the meteors radiated from near the star Gamma Leonis. The last Leonid storms were in 1999, 2001 (two), and 2002 (two). Before that, there were storms in 1767, 1799, 1833, 1866, 1867, and 1966. When the Leonid shower is not storming, it is less active than the Perseids.
Other meteor showers Established meteor showers Official names are given in the International Astronomical Union's list of meteor showers.
Extraterrestrial meteor showers:
Any other Solar System body with a reasonably transparent atmosphere can also have meteor showers. As the Moon is in the neighborhood of Earth it can experience the same showers, but will have its own phenomena due to its lack of an atmosphere per se, such as vastly increasing its sodium tail. NASA maintains an ongoing database of observed impacts on the Moon, kept by the Marshall Space Flight Center, whether from a shower or not.
Extraterrestrial meteor showers:
Many planets and moons have impact craters dating back large spans of time, but new craters, perhaps even related to meteor showers, are possible. Mars, and thus its moons, is known to have meteor showers. These have not been observed on other planets as yet but may be presumed to exist. For Mars in particular, the showers are different from the ones seen on Earth because of the different orbits of Mars and Earth relative to the orbits of comets. The Martian atmosphere has less than one percent of the density of Earth's at ground level; at their upper edges, where meteoroids strike, the two are more similar. Because of the similar air pressure at the altitudes where meteors occur, the effects are much the same. Only the relatively slower motion of the meteoroids due to increased distance from the Sun should marginally decrease meteor brightness. This is somewhat balanced because the slower descent means that Martian meteors have more time to ablate. On March 7, 2004, the panoramic camera on Mars Exploration Rover Spirit recorded a streak which is now believed to have been caused by a meteor from a Martian meteor shower associated with comet 114P/Wiseman-Skiff. A strong display from this shower was expected on December 20, 2007. Other showers speculated about are a "Lambda Geminid" shower associated with the Eta Aquariids of Earth (i.e., both associated with Comet 1P/Halley), a "Beta Canis Major" shower associated with Comet 13P/Olbers, and "Draconids" from 5335 Damocles. Isolated massive impacts have been observed at Jupiter: the 1994 Comet Shoemaker–Levy 9, which formed a brief trail as well, and successive events since then (see List of Jupiter events). Meteors or meteor showers have been discussed for most of the objects in the Solar System with an atmosphere: Mercury, Venus, Saturn's moon Titan, Neptune's moon Triton, and Pluto.
**Software quality assurance**
Software quality assurance:
Software quality assurance (SQA) is a means and practice of monitoring all software engineering processes, methods, and work products to ensure compliance with defined standards. It may include ensuring conformance to standards or models, such as ISO/IEC 9126 (now superseded by ISO 25010), SPICE or CMMI. It includes standards and procedures that managers, administrators or developers may use to review and audit software products and activities to verify that the software meets quality criteria which link to standards. SQA encompasses the entire software development process, including requirements engineering, software design, coding, code reviews, source code control, software configuration management, testing, release management and software integration. It is organized into goals, commitments, abilities, activities, measurements, and verification and validation.
Purpose:
SQA involves a three-pronged approach: organization-wide policies, procedures and standards; project-specific policies, procedures and standards; and compliance with appropriate procedures. Guidelines for the application of ISO 9001:2015 to computer software are described in ISO/IEC/IEEE 90003:2018. External entities can be contracted as part of process assessments to verify that projects are standard-compliant. More specifically in the case of software, ISO/IEC 9126 (now superseded by ISO 25010) should be considered and applied for software quality.
Activities:
Quality assurance activities take place at each phase of development. Analysts use application technology and techniques to achieve high-quality specifications and designs, such as model-driven design. Engineers and technicians find bugs and problems with related software quality through testing activities. Standards and process deviations are identified and addressed throughout development by project managers or quality managers, who also ensure that changes to functionality, performance, features, architecture and components (in general: changes to product or service scope) are made only after appropriate review, e.g. as part of change control boards.
**MODY 3**
MODY 3:
MODY 3 or HNF1A-MODY is a form of maturity-onset diabetes of the young. It is caused by mutations of the HNF1-alpha gene, a homeobox gene on human chromosome 12. This is the most common type of MODY in populations with European ancestry, accounting for about 70% of all cases in Europe. HNF1α is a transcription factor (also known as transcription factor 1, TCF1) that is thought to control a regulatory network (including, among other genes, HNF1α) important for differentiation of beta cells. Mutations of this gene lead to reduced beta cell mass or impaired function. MODY 1 and MODY 3 diabetes are clinically similar. About 70% of people develop this type of diabetes by age 25 years, but it occurs at much later ages in a few. This type of diabetes can often be treated with sulfonylureas with excellent results for decades. However, the loss of insulin secretory capacity is slowly progressive and most eventually need insulin.
MODY 3:
This is the form of MODY which can most resemble diabetes mellitus type 1, and one of the incentives for diagnosing it is that insulin may be discontinued or deferred in favor of oral sulfonylureas. Some people treated with insulin for years due to a presumption of type 1 diabetes have been able to switch to oral medication and discontinue injections. Long-term diabetic complications can occur if blood glucose levels are not adequately controlled.
MODY 3:
High-sensitivity measurements of C-reactive protein may help to distinguish between HNF1A-MODY and other forms of diabetes.
**Bovine papular stomatitis**
Bovine papular stomatitis:
Bovine papular stomatitis is a farmyard pox caused by Bovine papular stomatitis virus (BPSV), which can spread from infected cattle to cause disease in milkers, farmers and veterinarians. There is usually one or only a few skin lesions, typically on the hands or forearm. The disease is generally mild. BPSV is a member of the family Poxviridae and the genus Parapoxvirus. Spread typically occurs by direct contact with the infected animal, but has been reported in people without direct contact. It may appear similar to foot-and-mouth disease. It occurs worldwide in cattle. In other animals the lesions are reddish, raised, sometimes ulcerative lesions on the lips, muzzle, and in the mouth. It usually occurs before the age of two years.
**Cabal (video game)**
Cabal (video game):
Cabal (カベール, Kabēru) is a 1988 arcade shooter video game originally developed by TAD Corporation and published in Japan by Taito, in North America by Fabtek and in Europe by Capcom. In the game, the player controls a commando, viewed from behind, trying to destroy various enemy military bases. The game was innovative for the era, but only a mild success in the arcades, and became better known for its various home conversions.
Gameplay:
Cabal has one-player and two-player-simultaneous modes of gameplay. Each player assumes the role of an unnamed commando trying to destroy several enemy military bases. There are five stages with four screens each. The player starts with a stock of three lives and uses a gun with limitless ammunition and a fixed number of grenades to fend off enemy troops and attack the base. The commando is seen from behind and starts behind a protective wall which can be damaged and shattered by enemy fire. To stay alive, the player needs to avoid enemy bullets by running left or right, hiding behind cover, or using a dodge-roll. An enemy gauge at the bottom of the screen depletes as foes are destroyed and certain structures (which collapse rather than shatter) are brought down. When the enemy gauge is emptied, the level is successfully completed, all of the remaining buildings onscreen collapse, and the player progresses to the next stage. If a player is killed, they are immediately revived at the cost of one life, or the game ends if they have no lives remaining. Boss fights, however, restart from the beginning if the only remaining player dies.
Gameplay:
From time to time, power-ups are released from objects destroyed onscreen. Some power-ups give special weapons such as an extremely fast-firing machine gun or an automatic shotgun with a lower firing rate and larger area of effect. Others grant extra grenades or additional points.
The arcade cabinet is a standard upright cabinet. Each player uses a trackball to move their character from side to side and move the crosshairs about the screen. On later board revisions, a joystick was installed instead, with an optional sub-PCB for use with a trackball. With a trackball, dodge-rolling is done by pushing the trackball to maximum speed.
Gameplay:
Cabal was somewhat innovative in that it featured a 3D perspective in which the player character was situated in the foreground with an over-the-shoulder camera view, similar to modern third-person shooters. Players cannot move the character while firing (holding down the fire button gives players control of the aiming cursor), and when moving the character to avoid incoming bullets, the aiming cursor moves along in tandem. This creates the need for a careful balance between offensive and defensive tactics, separating Cabal from run-and-gun shooters which relied more on reflexes. Advanced gameplay involves destructible asset management in balancing dodging (which gets riskier as the number of enemy projectiles on screen increases) with the safer alternative of taking cover behind a protective but limited durability wall.
Ports and related releases:
Cabal was ported to several home computers of the era, including DOS computers, the Amstrad CPC, Commodore 64, ZX Spectrum, Atari ST and Amiga. It was also ported to the Nintendo Entertainment System console by Rare. A version for the Atari Lynx was previewed and even slated to be published in April 1992, but it was never released by Fabtek. When converting the game to the Nintendo Entertainment System, Rare were given a Cabal cabinet but did not have access to the game's source code, so they had to play the game over and over and redraw the graphics from memory. To accommodate the many layers and sprites of the arcade game, programmer Anthony Ball used a common coding trick: swapping sprites from left to right every other frame. This has the negative side effect of causing the sprites to flicker when they reach the console's limit of eight per line, but Ball, like many programmers of the era, found this an acceptable trade-off for including all the game's content, and in a 2016 interview he said he is happy with the quality of the conversion. Cabal was followed in 1990 by Blood Bros., though the sequel had a western theme as opposed to Cabal's Vietnam-era theme.
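The sprite-swapping trick mentioned above can be illustrated with a short, generic simulation (this is not Rare's actual code, which has not been published): the NES hardware draws at most eight sprites per scanline and silently drops the rest in priority order, so reversing the order on alternate frames means a different subset is dropped each frame and every sprite is shown at least intermittently, which the eye perceives as flicker rather than disappearance.

```python
# Toy simulation of sprite cycling: not Rare's implementation, just the general idea.
SPRITES_PER_LINE = 8   # NES PPU limit per scanline

def drawn_this_frame(sprites_on_line, frame):
    """Return which sprites get drawn, given that only the first 8 in priority order survive."""
    order = sprites_on_line if frame % 2 == 0 else list(reversed(sprites_on_line))
    return order[:SPRITES_PER_LINE]

sprites = [f"spr{i}" for i in range(12)]       # 12 sprites sharing one scanline: 4 too many
for frame in range(4):
    print(frame, sorted(drawn_this_frame(sprites, frame)))

# Across any two consecutive frames every sprite appears at least once,
# so nothing stays invisible; it just flickers.
assert set(drawn_this_frame(sprites, 0)) | set(drawn_this_frame(sprites, 1)) == set(sprites)
```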
Reception and legacy:
In Japan, Game Machine listed Cabal in their November 1, 1988 issue as the eighth most-successful table arcade unit of the month. The arcade version was reviewed by Clare Edgeley in Computer and Video Games magazine. She gave it a positive review, while comparing it favorably with Operation Wolf (1987) and Combat School (1987). Nick Kelly of Commodore User rated Cabal seven out of ten, comparing it favorably with Gryzor (1987) and Devastators (1988). The ZX Spectrum version won the award for best advert of the year according to the readers of Crash. The game's success inspired many "Cabal clones," such as NAM-1975 (1990) and Wild Guns (1994).
**Modulus modulus**
Modulus modulus:
Modulus modulus, commonly known as the buttonsnail, is a species of small sea snail, a marine gastropod mollusk in the family Modulidae.
Distribution:
The distribution of this species includes both the east and west coast of Florida.
Description:
The maximum recorded shell length is 16.5 mm. The overall shape of the shell is button-like, with a gray or brown streaked, ridge-sculptured body whorl and a low spire.
Habitat:
The minimum recorded depth is 0 m. The maximum recorded depth is 105 m.
It is found in shell grit and coral sand, and among sea grass beds at depths of 2 to 3 feet (about 0.6 to 0.9 m).
**Apodicticity**
Apodicticity:
"Apodictic", also spelled "apodeictic" (Ancient Greek: ἀποδεικτικός, "capable of demonstration"), is an adjectival expression from Aristotelean logic that refers to propositions that are demonstrably, necessarily or self-evidently true. Apodicticity or apodixis is the corresponding abstract noun, referring to logical certainty.
Apodicticity:
Apodictic propositions contrast with assertoric propositions, which merely assert that something is (or is not) true, and with problematic propositions, which assert only the possibility of something's being true. Apodictic judgments are clearly provable or logically certain. For instance, "Two plus two equals four" is apodictic, because it is true by definition. "Chicago is larger than Omaha" is assertoric. "A corporation could be wealthier than a country" is problematic. In Aristotelian logic, "apodictic" is opposed to "dialectic", as scientific proof is opposed to philosophical reasoning. Kant contrasted "apodictic" with "problematic" and "assertoric" in the Critique of Pure Reason, on page A70/B95.
Apodictic a priorism:
Hans Reichenbach, one of the founders of logical positivism, offered a modified version of Immanuel Kant's a priorism by distinguishing between apodictic a priorism and constitutive a priorism.
**Landmap**
Landmap:
Landmap was a service based at the University of Manchester, England, which provided UK academia with a free-of-charge spatial data download service, using Open Geospatial Consortium (OGC) standards for maximum interoperability, which was enhanced and supported by a range of teaching and learning materials. The service was hosted at the Mimas datacentre from 2007 until 2013, and was funded by the government via Jisc.
Landmap:
The spatial data and the learning materials are primarily for students, lecturers and researchers and can be accessed only through Shibboleth or Athens or similar UK university/institution authentication.
End of service:
Jisc funding for the Landmap service terminated on 31 December 2013.
The Landmap data purchased by Jisc have been transferred to NERC's Centre for Environmental Data Analysis (CEDA) and can be accessed from there. The Learning Zone hosted by Landmap can be accessed on the Landmap Legacy site.
History:
The Landmap Project began late 1999 as a joint project between Mimas, UCL Department of Geomatic Engineering, and other project partners. The project produced orthorectified satellite image mosaics of Landsat, SPOT and ERS radar data and a high resolution Digital Elevation Model (DEM) for the British Isles. The DEM was quality assured using a Kinematic Global Positioning Survey (KGPS). The outputs of the Landmap Project were provided as a basic download service to the UK and Irish academic community.
History:
The Landmap Service followed a JISC subscription-based model for service cost recovery between August 2004 and July 2007. In 2005 Landmap obtained an agreement with the European Space Agency to provide ENVISAT Advanced Synthetic Aperture Radar (ASAR) data to the UK academic community through the Category 1 Project 'Monitoring the UK with ENVISAT ASAR and MERIS'. During this time a pre-processing chain was developed to provide orthorectified GeoTIFF ASAR data and to deliver this data through the Landmap website. In 2007 Landmap presented the work done as part of the Category 1 Project at the biannual ESA Fringe Workshop. In 2006 the Image Processing Course for Erdas, ENVI, IDRISI Kilimanjaro and PCI Geomatica was released onto the Landmap website. The course materials were authored by IS Limited and access was protected using Athens.
History:
The Landmap service was awarded five years' funding by Jisc from 1 August 2007, which removed the subscription element of the service and allowed for a modest budget for data acquisition and some e-learning content creation. This allowed all members of the academic community to obtain spatial data from the service free of charge, making the size and quantity of users for an academic institution irrelevant in terms of accessing spatial data.
History:
New data acquisitions included the Cities Revealed datasets provided by The GeoInformation Group and colour infrared data from Bluesky. During this time there was also an agreement with Infoterra for tasking the TopSat satellite with requests by the UK academic community for images practically anywhere in the world. These images obtained by TopSat were provided on the Landmap website and displayed in Google Earth.
History:
Landmap started to work with e-learning technology company Telaman to help create a pedagogy for the new Learning Zone area of the website. New e-learning content was sought from the academic and commercial spatial science community. The Learning Zone expanded greatly in 2008/2009 with new content provided by The GeoInformation Group and lecturers from the academic community.
Conference papers:
Millin-Chalabi, G., Schumm, J., Gupta, B., Tun, Y., Kandeh, J. and Kitmitto, K. (2011) The Landmap Service: Reaching New Horizons in Data Management and E-Learning. RSPSoc Annual Conference 2011, Bournemouth University, UK.
Millin, G. et al. (2009) The GeoKnowledge Project: Providing E-Learning Resources for UK Academia. RSPSoc Annual Conference 2009, University of Leicester, UK.
**Divergent series**
Divergent series:
In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit.
If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges. However, convergence is a stronger condition: not all series whose terms approach zero converge. A counterexample is the harmonic series 1 + 1/2 + 1/3 + 1/4 + 1/5 + ⋯ = ∑_{n=1}^∞ 1/n.
The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme.
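For reference, a short sketch of the standard grouping argument (essentially Oresme's) is given below; each bracketed block of terms exceeds 1/2, so the partial sums grow without bound.

```latex
% Oresme's grouping argument for the divergence of the harmonic series:
% every bracketed group of terms is greater than 1/2.
\sum_{n=1}^{\infty}\frac{1}{n}
 = 1 + \frac{1}{2}
   + \underbrace{\left(\tfrac{1}{3}+\tfrac{1}{4}\right)}_{>\,1/2}
   + \underbrace{\left(\tfrac{1}{5}+\tfrac{1}{6}+\tfrac{1}{7}+\tfrac{1}{8}\right)}_{>\,1/2}
   + \cdots
 \;>\; 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots
```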
Divergent series:
In specialized mathematical contexts, values can be objectively assigned to certain series whose sequences of partial sums diverge, in order to make meaning of the divergence of the series. A summability method or summation method is a partial function from the set of series to values. For example, Cesàro summation assigns Grandi's divergent series 1−1+1−1+⋯ the value 1/2. Cesàro summation is an averaging method, in that it relies on the arithmetic mean of the sequence of partial sums. Other methods involve analytic continuations of related series. In physics, there are a wide variety of summability methods; these are discussed in greater detail in the article on regularization.
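As a brief illustration of the averaging idea, here is how Cesàro summation assigns the value 1/2 to Grandi's series: the partial sums oscillate between 1 and 0, but their arithmetic means converge.

```latex
% Partial sums of 1 - 1 + 1 - 1 + ... oscillate: 1, 0, 1, 0, ...
% Their arithmetic means converge to 1/2, which Cesàro summation takes as the sum.
s_n = \sum_{k=0}^{n}(-1)^k \in \{0,1\}, \qquad
t_n = \frac{s_0 + s_1 + \cdots + s_n}{n+1} \;\longrightarrow\; \frac{1}{2}.
```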
History:
Before the 19th century, divergent series were widely used by Leonhard Euler and others, but often led to confusing and contradictory results. A major problem was Euler's idea that any divergent series should have a natural sum, without first defining what is meant by the sum of a divergent series. Augustin-Louis Cauchy eventually gave a rigorous definition of the sum of a (convergent) series, and for some time after this, divergent series were mostly excluded from mathematics. They reappeared in 1886 with Henri Poincaré's work on asymptotic series. In 1890, Ernesto Cesàro realized that one could give a rigorous definition of the sum of some divergent series, and defined Cesàro summation. (This was not the first use of Cesàro summation, which was used implicitly by Ferdinand Georg Frobenius in 1880; Cesàro's key contribution was not the discovery of this method, but his idea that one should give an explicit definition of the sum of a divergent series.) In the years after Cesàro's paper, several other mathematicians gave other definitions of the sum of a divergent series, although these are not always compatible: different definitions can give different answers for the sum of the same divergent series; so, when talking about the sum of a divergent series, it is necessary to specify which summation method one is using.
Examples:
1 − 1 + 1 − 1 + ⋯ “ = ” 1/2
1 − 2 + 3 − 4 + ⋯ “ = ” 1/4
1 − 1 + 2 − 6 + 24 − 120 + ⋯ “ = ” 0.596 347 …
1 − 2 + 4 − 8 + ⋯ “ = ” 1/3
1 + 2 + 4 + 8 + ⋯ “ = ” −1
1 + 1 + 1 + 1 + ⋯ “ = ” −1/2
1 + 2 + 3 + 4 + ⋯ “ = ” −1/12
Theorems on methods for summing divergent series:
A summability method M is regular if it agrees with the actual limit on all convergent series. Such a result is called an Abelian theorem for M, from the prototypical Abel's theorem. More subtle are partial converse results, called Tauberian theorems, from a prototype proved by Alfred Tauber. Here partial converse means that if M sums the series Σ, and some side-condition holds, then Σ was convergent in the first place; without any side-condition such a result would say that M only summed convergent series (making it useless as a summation method for divergent series).
Theorems on methods for summing divergent series:
The function giving the sum of a convergent series is linear, and it follows from the Hahn–Banach theorem that it may be extended to a summation method summing any series with bounded partial sums. This is called the Banach limit. This fact is not very useful in practice, since there are many such extensions, inconsistent with each other, and also since proving such operators exist requires invoking the axiom of choice or its equivalents, such as Zorn's lemma. They are therefore nonconstructive.
Theorems on methods for summing divergent series:
The subject of divergent series, as a domain of mathematical analysis, is primarily concerned with explicit and natural techniques such as Abel summation, Cesàro summation and Borel summation, and their relationships. The advent of Wiener's tauberian theorem marked an epoch in the subject, introducing unexpected connections to Banach algebra methods in Fourier analysis.
Summation of divergent series is also related to extrapolation methods and sequence transformations as numerical techniques. Examples of such techniques are Padé approximants, Levin-type sequence transformations, and order-dependent mappings related to renormalization techniques for large-order perturbation theory in quantum mechanics.
Properties of summation methods:
Summation methods usually concentrate on the sequence of partial sums of the series. While this sequence does not converge, we may often find that when we take an average of larger and larger numbers of initial terms of the sequence, the average converges, and we can use this average instead of a limit to evaluate the sum of the series. A summation method can be seen as a function from a set of sequences of partial sums to values. If A is any summation method assigning values to a set of sequences, we may mechanically translate this to a series-summation method AΣ that assigns the same values to the corresponding series. There are certain properties it is desirable for these methods to possess if they are to arrive at values corresponding to limits and sums, respectively.
Properties of summation methods:
Regularity. A summation method is regular if, whenever the sequence s converges to x, A(s) = x. Equivalently, the corresponding series-summation method evaluates AΣ(a) = x.
Properties of summation methods:
Linearity. A is linear if it is a linear functional on the sequences where it is defined, so that A(k r + s) = k A(r) + A(s) for sequences r, s and a real or complex scalar k. Since the terms an+1 = sn+1 − sn of the series a are linear functionals on the sequence s and vice versa, this is equivalent to AΣ being a linear functional on the terms of the series.
Properties of summation methods:
Stability (also called translativity). If s is a sequence starting from s0 and s′ is the sequence obtained by omitting the first value and subtracting it from the rest, so that s′n = sn+1 − s0, then A(s) is defined if and only if A(s′) is defined, and A(s) = s0 + A(s′). Equivalently, whenever a′n = an+1 for all n, then AΣ(a) = a0 + AΣ(a′). Another way of stating this is that the shift rule must be valid for the series that are summable by this method. The third condition is less important, and some significant methods, such as Borel summation, do not possess it.
One can also give a weaker alternative to the last condition. Finite re-indexability. If a and a′ are two series such that there exists a bijection f: N → N such that ai = a′f(i) for all i, and if there exists some N ∈ N such that ai = a′i for all i > N, then AΣ(a) = AΣ(a′). (In other words, a′ is the same series as a, with only finitely many terms re-indexed.) This is a weaker condition than stability, because any summation method that exhibits stability also exhibits finite re-indexability, but the converse is not true.
A desirable property for two distinct summation methods A and B to share is consistency: A and B are consistent if for every sequence s to which both assign a value, A(s) = B(s). (Using this language, a summation method A is regular iff it is consistent with the standard sum Σ.) If two methods are consistent, and one sums more series than the other, the one summing more series is stronger.
Properties of summation methods:
There are powerful numerical summation methods that are neither regular nor linear, for instance nonlinear sequence transformations like Levin-type sequence transformations and Padé approximants, as well as the order-dependent mappings of perturbative series based on renormalization techniques.
Taking regularity, linearity and stability as axioms, it is possible to sum many divergent series by elementary algebraic manipulations. This partly explains why many different summation methods give the same answer for certain series.
Properties of summation methods:
For instance, whenever r ≠ 1, the geometric series G(r, c) = ∑_{k=0}^∞ c r^k can be evaluated regardless of convergence: G(r, c) = c + r G(r, c) (stability), so (1 − r) G(r, c) = c (linearity), hence G(r, c) = c/(1 − r) unless it is infinite. More rigorously, any summation method that possesses these properties and which assigns a finite value to the geometric series must assign this value. However, when r is a real number larger than 1, the partial sums increase without bound, and averaging methods assign a limit of infinity.
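As a concrete instance of this algebraic reasoning, the same two axioms give the value −1 listed in the examples above for 1 + 2 + 4 + 8 + ⋯ (taking r = 2 and c = 1):

```latex
% Let S "=" 1 + 2 + 4 + 8 + ... . Stability and linearity give:
S = 1 + 2\,(1 + 2 + 4 + \cdots) = 1 + 2S
\quad\Longrightarrow\quad (1-2)\,S = 1
\quad\Longrightarrow\quad S = -1.
```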
Classical summation methods:
The two classical summation methods for series, ordinary convergence and absolute convergence, define the sum as a limit of certain partial sums. These are included only for completeness; strictly speaking they are not true summation methods for divergent series since, by definition, a series is divergent only if these methods do not work. Most but not all summation methods for divergent series extend these methods to a larger class of sequences.
Classical summation methods:
Absolute convergence Absolute convergence defines the sum of a sequence (or set) of numbers to be the limit of the net of all partial sums ak1 + ... + akn, if it exists. It does not depend on the order of the elements of the sequence, and a classical theorem says that a series is absolutely convergent in this sense if and only if the series of absolute values converges in the standard sense.
Classical summation methods:
Sum of a series Cauchy's classical definition of the sum of a series a0 + a1 + ... defines the sum to be the limit of the sequence of partial sums a0 + ... + an. This is the default definition of convergence of a series.
Nørlund means:
Suppose pn is a sequence of positive terms, starting from p0. Suppose also that pn / (p0 + p1 + ⋯ + pn) → 0.
If now we transform a sequence s by using p to give weighted means, setting tm = (pm s0 + pm−1 s1 + ⋯ + p0 sm) / (p0 + p1 + ⋯ + pm), then the limit of tn as n goes to infinity is an average called the Nørlund mean Np(s).
The Nørlund mean is regular, linear, and stable. Moreover, any two Nørlund means are consistent.
Nørlund means:
Cesàro summation The most significant of the Nørlund means are the Cesàro sums. Here, if we define the sequence pk by pn^k = (n + k − 1 choose k − 1), then the Cesàro sum Ck is defined by Ck(s) = N(pk)(s). Cesàro sums are Nørlund means if k ≥ 0, and hence are regular, linear, stable, and consistent. C0 is ordinary summation, and C1 is ordinary Cesàro summation. Cesàro sums have the property that if h > k, then Ch is stronger than Ck.
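For instance, taking k = 1 gives the constant weights pn = 1, so the Nørlund mean reduces to the plain arithmetic mean of the partial sums used in ordinary (C, 1) Cesàro summation:

```latex
% With k = 1 the weights are p_n = \binom{n}{0} = 1, so the Norlund mean
% becomes the arithmetic mean of the partial sums s_0, ..., s_m.
t_m = \frac{s_0 + s_1 + \cdots + s_m}{m+1}, \qquad
C_1(s) = \lim_{m\to\infty} t_m .
```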
Abelian means:
Suppose λ = {λ0, λ1, λ2,...} is a strictly increasing sequence tending towards infinity, and that λ0 ≥ 0. Suppose f(x) = ∑_{n=0}^∞ an e^{−λn x} converges for all real numbers x > 0. Then the Abelian mean Aλ is defined as lim_{x→0+} f(x).
More generally, if the series for f only converges for large x but can be analytically continued to all positive real x, then one can still define the sum of the divergent series by the limit above.
A series of this type is known as a generalized Dirichlet series; in applications to physics, this is known as the method of heat-kernel regularization.
Abelian means are regular and linear, but not stable and not always consistent between different choices of λ. However, some special cases are very important summation methods.
Abelian means:
Abel summation If λn = n, then we obtain the method of Abel summation. Here f(x) = ∑_{n=0}^∞ an e^{−nx} = ∑_{n=0}^∞ an z^n, where z = exp(−x). Then the limit of f(x) as x approaches 0 through positive reals is the limit of the power series for f(z) as z approaches 1 from below through positive reals, and the Abel sum A(s) is defined as lim_{z→1−} ∑_{n=0}^∞ an z^n.
Abelian means:
Abel summation is interesting in part because it is consistent with but more powerful than Cesàro summation: A(s) = Ck(s) whenever the latter is defined. The Abel sum is therefore regular, linear, stable, and consistent with Cesàro summation.
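A standard worked example: for Grandi's series an = (−1)^n the power series has a closed form inside the unit disk, and the Abel sum agrees with the Cesàro value 1/2.

```latex
% Abel summation of 1 - 1 + 1 - 1 + ... : the power series sums to 1/(1+z)
% for |z| < 1, and its limit as z -> 1^- defines the Abel sum.
\sum_{n=0}^{\infty}(-1)^n z^n = \frac{1}{1+z} \quad (|z|<1),
\qquad A(s) = \lim_{z\to 1^-}\frac{1}{1+z} = \frac{1}{2}.
```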
Lindelöf summation If λn = n log(n), then (indexing from one) we have f(x) = a1 + a2 2^{−2x} + a3 3^{−3x} + ⋯.
Then L(s), the Lindelöf sum (Volkov 2001), is the limit of f(x) as x goes to positive zero. The Lindelöf sum is a powerful method when applied to power series among other applications, summing power series in the Mittag-Leffler star.
If g(z) is analytic in a disk around zero, and hence has a Maclaurin series G(z) with a positive radius of convergence, then L(G(z)) = g(z) in the Mittag-Leffler star. Moreover, convergence to g(z) is uniform on compact subsets of the star.
Analytic continuation:
Several summation methods involve taking the value of an analytic continuation of a function.
Analytic continuation:
Analytic continuation of power series If Σ an x^n converges for small complex x and can be analytically continued along some path from x = 0 to the point x = 1, then the sum of the series can be defined to be the value at x = 1. This value may depend on the choice of path. One of the first examples of potentially different sums for a divergent series, using analytic continuation, was given by Callet, who observed that if 1 ≤ m < n then (1 − x^m)/(1 − x^n) = (1 + x + ⋯ + x^{m−1})/(1 + x + ⋯ + x^{n−1}) = 1 − x^m + x^n − x^{n+m} + x^{2n} − ⋯. Evaluating at x = 1, one gets 1 − 1 + 1 − 1 + ⋯ = m/n.
Analytic continuation:
However, the gaps in the series are key. For m = 1, n = 3, for example, we actually would get 1 − 1 + 0 + 1 − 1 + 0 + 1 − 1 + ⋯ = 1/3, so different sums correspond to different placements of the 0's.
Analytic continuation:
Euler summation Euler summation is essentially an explicit form of analytic continuation. If a power series converges for small complex z and can be analytically continued to the open disk with diameter from −1/(q + 1) to 1 and is continuous at 1, then its value at 1 is called the Euler or (E,q) sum of the series Σan. Euler used it before analytic continuation was defined in general, and gave explicit formulas for the power series of the analytic continuation.
Analytic continuation:
The operation of Euler summation can be repeated several times, and this is essentially equivalent to taking an analytic continuation of a power series to the point z = 1.
Analytic continuation of Dirichlet series This method defines the sum of a series to be the value of the analytic continuation of the Dirichlet series f(s) = a1/1^s + a2/2^s + a3/3^s + ⋯ at s = 0, if this exists and is unique. This method is sometimes confused with zeta function regularization.
If s = 0 is an isolated singularity, the sum is defined by the constant term of the Laurent series expansion.
Analytic continuation:
Zeta function regularization If the series f(s) = 1/a1^s + 1/a2^s + 1/a3^s + ⋯ (for positive values of the an) converges for large real s and can be analytically continued along the real line to s = −1, then its value at s = −1 is called the zeta regularized sum of the series a1 + a2 + ... Zeta function regularization is nonlinear. In applications, the numbers ai are sometimes the eigenvalues of a self-adjoint operator A with compact resolvent, and f(s) is then the trace of A^{−s}. For example, if A has eigenvalues 1, 2, 3, ... then f(s) is the Riemann zeta function, ζ(s), whose value at s = −1 is −1/12, assigning a value to the divergent series 1 + 2 + 3 + 4 + .... Other values of s can also be used to assign values for the divergent sums ζ(0) = 1 + 1 + 1 + ... = −1/2, ζ(−2) = 1 + 4 + 9 + ... = 0 and in general ζ(−s) = ∑_{n=1}^∞ n^s = 1^s + 2^s + 3^s + ⋯ = −B_{s+1}/(s + 1), where Bk is a Bernoulli number.
Integral function means:
If J(x) = Σ pn x^n is an integral function, then the J sum of the series a0 + ... is defined to be lim_{x→∞} [∑n pn (a0 + ⋯ + an) x^n] / [∑n pn x^n], if this limit exists.
There is a variation of this method where the series for J has a finite radius of convergence r and diverges at x = r. In this case one defines the sum as above, except taking the limit as x tends to r rather than infinity.
Borel summation In the special case when J(x) = ex this gives one (weak) form of Borel summation.
Integral function means:
Valiron's method Valiron's method is a generalization of Borel summation to certain more general integral functions J. Valiron showed that under certain conditions it is equivalent to defining the sum of a series as lim_{n→+∞} √(H(n)/(2π)) ∑_{h∈Z} e^{−(1/2) h² H(n)} (a0 + ⋯ + ah), where H is the second derivative of G and c(n) = e^{−G(n)}, and a0 + ... + ah is to be interpreted as 0 when h < 0.
Moment methods:
Suppose that dμ is a measure on the real line such that all the moments μn = ∫ x^n dμ are finite. If a0 + a1 + ... is a series such that a(x) = a0 x^0/μ0 + a1 x^1/μ1 + ⋯ converges for all x in the support of μ, then the (dμ) sum of the series is defined to be the value of the integral ∫ a(x) dμ if it is defined. (If the numbers μn increase too rapidly then they do not uniquely determine the measure μ.) Borel summation For example, if dμ = e^{−x} dx for positive x and 0 for negative x then μn = n!, and this gives one version of Borel summation, where the value of a sum is given by ∫_0^∞ e^{−t} ∑ an t^n/n! dt.
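As a worked check, applying this version of Borel summation to Grandi's series again reproduces the value 1/2: the Borel transform sums to a decaying exponential and the resulting integral converges.

```latex
% Borel summation of 1 - 1 + 1 - 1 + ... :
% the transformed series sums to e^{-t}, and the weighted integral converges.
\sum_{n=0}^{\infty}(-1)^n\,\frac{t^n}{n!} = e^{-t},
\qquad \int_0^{\infty} e^{-t}\, e^{-t}\, dt = \frac{1}{2}.
```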
Moment methods:
There is a generalization of this depending on a variable α, called the (B′,α) sum, where the sum of a series a0 + ... is defined to be ∫_0^∞ e^{−t} ∑ an t^{nα}/Γ(nα + 1) dt if this integral exists. A further generalization is to replace the sum under the integral by its analytic continuation from small t.
Miscellaneous methods:
BGN hyperreal summation This summation method works by using an extension to the real numbers known as the hyperreal numbers. Since the hyperreal numbers include distinct infinite values, these numbers can be used to represent the values of divergent series. The key method is to designate a particular infinite value that is being summed, usually ω, which is used as a unit of infinity. Instead of summing to an arbitrary infinity (as is typically done with ∞), the BGN method sums to the specific hyperreal infinite value labeled ω. Therefore, the summations are of the form ∑_{x=1}^ω f(x). This allows the usage of standard formulas for finite series such as arithmetic progressions in an infinite context. For instance, using this method, the sum of the progression 1 + 2 + 3 + … is ω²/2 + ω/2, or, using just the most significant infinite hyperreal part, ω²/2.
Hausdorff transformations Hardy (1949, chapter 11).
Miscellaneous methods:
Hölder summation
Hutton's method In 1812 Hutton introduced a method of summing divergent series by starting with the sequence of partial sums, and repeatedly applying the operation of replacing a sequence s0, s1, ... by the sequence of averages (s0 + s1)/2, (s1 + s2)/2, ..., and then taking the limit (Hardy 1949, p. 21).
Ingham summability The series a1 + ... is called Ingham summable to s if lim_{x→∞} ∑_{1≤n≤x} an (n/x) [x/n] = s, where [x/n] denotes the integer part.
Albert Ingham showed that if δ is any positive number then (C,−δ) (Cesàro) summability implies Ingham summability, and Ingham summability implies (C,δ) summability Hardy (1949, Appendix II).
Lambert summability The series a1 + ... is called Lambert summable to s if lim_{y→0+} ∑_{n≥1} an (n y e^{−ny})/(1 − e^{−ny}) = s.
If a series is (C,k) (Cesàro) summable for any k then it is Lambert summable to the same value, and if a series is Lambert summable then it is Abel summable to the same value Hardy (1949, Appendix II).
Le Roy summation The series a0 + ... is called Le Roy summable to s if lim_{ζ→1−} ∑n [Γ(1 + ζn)/Γ(1 + n)] an = s.
Hardy (1949, 4.11). Mittag-Leffler summation The series a0 + ... is called Mittag-Leffler (M) summable to s if lim_{δ→0} ∑n an/Γ(1 + δn) = s.
Miscellaneous methods:
Hardy (1949, 4.11). Ramanujan summation Ramanujan summation is a method of assigning a value to divergent series used by Ramanujan and based on the Euler–Maclaurin summation formula. The Ramanujan sum of a series f(0) + f(1) + ... depends not only on the values of f at integers, but also on values of the function f at non-integral points, so it is not really a summation method in the sense of this article.
Miscellaneous methods:
Riemann summability The series a1 + ... is called (R,k) (or Riemann) summable to s if lim_{h→0} ∑n an (sin(nh)/(nh))^k = s.
Hardy (1949, 4.17). The series a1 + ... is called R2 summable to s if lim_{h→0} ∑n (sin²(nh)/(n²h)) (a1 + ⋯ + an) = s.
Riesz means If λn form an increasing sequence of real numbers and Aλ(x) = a0 + ⋯ + an for λn < x ≤ λn+1, then the Riesz (R,λ,κ) sum of the series a0 + ... is defined to be lim_{ω→∞} (κ/ω^κ) ∫_0^ω Aλ(x) (ω − x)^{κ−1} dx.
Vallée-Poussin summability The series a1 + ... is called VP (or Vallée-Poussin) summable to s if lim_{m→∞} [a0 + a1 m/(m + 1) + a2 m(m − 1)/((m + 1)(m + 2)) + ⋯] = s, where Γ(x) is the gamma function.
Hardy (1949, 4.17). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acceleration (special relativity)**
Acceleration (special relativity):
Accelerations in special relativity (SR) follow, as in Newtonian Mechanics, by differentiation of velocity with respect to time. Because of the Lorentz transformation and time dilation, the concepts of time and distance become more complex, which also leads to more complex definitions of "acceleration". SR as the theory of flat Minkowski spacetime remains valid in the presence of accelerations, because general relativity (GR) is only required when there is curvature of spacetime caused by the energy–momentum tensor (which is mainly determined by mass). However, since the amount of spacetime curvature is not particularly high on Earth or its vicinity, SR remains valid for most practical purposes, such as experiments in particle accelerators.One can derive transformation formulas for ordinary accelerations in three spatial dimensions (three-acceleration or coordinate acceleration) as measured in an external inertial frame of reference, as well as for the special case of proper acceleration measured by a comoving accelerometer. Another useful formalism is four-acceleration, as its components can be connected in different inertial frames by a Lorentz transformation. Also equations of motion can be formulated which connect acceleration and force. Equations for several forms of acceleration of bodies and their curved world lines follow from these formulas by integration. Well known special cases are hyperbolic motion for constant longitudinal proper acceleration or uniform circular motion. Eventually, it is also possible to describe these phenomena in accelerated frames in the context of special relativity, see Proper reference frame (flat spacetime). In such frames, effects arise which are analogous to homogeneous gravitational fields, which have some formal similarities to the real, inhomogeneous gravitational fields of curved spacetime in general relativity. In the case of hyperbolic motion one can use Rindler coordinates, in the case of uniform circular motion one can use Born coordinates.
Acceleration (special relativity):
Concerning the historical development, relativistic equations containing accelerations can already be found in the early years of relativity, as summarized in early textbooks by Max von Laue (1911, 1921) or Wolfgang Pauli (1921). For instance, equations of motion and acceleration transformations were developed in the papers of Hendrik Antoon Lorentz (1899, 1904), Henri Poincaré (1905), Albert Einstein (1905), Max Planck (1906), and four-acceleration, proper acceleration, hyperbolic motion, accelerating reference frames, Born rigidity, have been analyzed by Einstein (1907), Hermann Minkowski (1907, 1908), Max Born (1909), Gustav Herglotz (1909), Arnold Sommerfeld (1910), von Laue (1911), Friedrich Kottler (1912, 1914), see section on history.
Three-acceleration:
In accordance with both Newtonian mechanics and SR, three-acceleration or coordinate acceleration a = (ax, ay, az) is the first derivative of velocity u = (ux, uy, uz) with respect to coordinate time, or the second derivative of the location r = (x, y, z) with respect to coordinate time: a = du/dt = d²r/dt². However, the theories sharply differ in their predictions in terms of the relation between three-accelerations measured in different inertial frames. In Newtonian mechanics, time is absolute by t′ = t in accordance with the Galilean transformation, therefore the three-acceleration derived from it is the same in all inertial frames: a = a′. On the contrary, in SR both r and t depend on the Lorentz transformation, therefore also three-acceleration a and its components vary in different inertial frames. When the relative velocity between the frames is directed in the x-direction by v = vx, with γv = 1/√(1 − v²/c²) as Lorentz factor, the Lorentz transformation takes its standard special form, referred to in the following as (1a); for arbitrary velocities v = (vx, vy, vz) of magnitude |v| = v it takes the general form (1b). In order to find the transformation of three-acceleration, one has to differentiate the spatial coordinates r and r′ of the Lorentz transformation with respect to t and t′, from which the transformation of three-velocity (also called velocity-addition formula) between u and u′ follows, and eventually, by another differentiation with respect to t and t′, the transformation of three-acceleration between a and a′ follows. Starting from (1a), this procedure gives the transformation (1c) for accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity; starting from (1b), it gives the result (1d) for the general case of arbitrary directions of velocities and accelerations. This means: if there are two inertial frames S and S′ with relative velocity v, then in S the acceleration a of an object with momentary velocity u is measured, while in S′ the same object has an acceleration a′ and the momentary velocity u′. As with the velocity addition formulas, these acceleration transformations guarantee that the resultant speed of the accelerated object can never reach or surpass the speed of light.
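The displayed equations labelled (1c) are not reproduced in this copy; as a sketch of the standard textbook form (under the assumed convention that S′ moves with velocity +v along the x-axis of S, so that unprimed quantities are obtained from primed ones), the parallel and perpendicular components read:

```latex
% Transformation of three-acceleration for a boost v along x (standard form;
% a_z transforms like a_y with u'_y replaced by u'_z):
a_x = \frac{a'_x}{\gamma_v^{3}\left(1+\dfrac{v u'_x}{c^2}\right)^{3}},
\qquad
a_y = \frac{a'_y}{\gamma_v^{2}\left(1+\dfrac{v u'_x}{c^2}\right)^{2}}
      \;-\; \frac{\dfrac{v u'_y}{c^2}\,a'_x}{\gamma_v^{2}\left(1+\dfrac{v u'_x}{c^2}\right)^{3}}.
```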
Four-acceleration:
If four-vectors are used instead of three-vectors, namely R as four-position and U as four-velocity, then the four-acceleration A = (At, Ax, Ay, Az) = (At, Ar) of an object is obtained by differentiation with respect to proper time τ instead of coordinate time, A = dU/dτ, and can be expressed (2a) in terms of the object's three-acceleration a and its momentary three-velocity u of magnitude |u| = u, with the corresponding Lorentz factor γ = 1/√(1 − u²/c²). If only the spatial part is considered, and when the velocity is directed in the x-direction by u = ux and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered, the expression reduces to Ar = (γ⁴ax, γ²ay, γ²az). Unlike the three-acceleration previously discussed, it is not necessary to derive a new transformation for four-acceleration, because as with all four-vectors, the components of A and A′ in two inertial frames with relative speed v are connected by a Lorentz transformation analogous to (1a, 1b). Another property of four-vectors is the invariance of the inner product A² = −At² + Ar² and of its magnitude |A| = √(A²), which gives in this case the expression (2b), equal to the magnitude of the proper acceleration discussed in the next section.
Proper acceleration:
In infinitesimally small durations there is always one inertial frame which momentarily has the same velocity as the accelerated body, and in which the Lorentz transformation holds. The corresponding three-acceleration a0 = (ax0, ay0, az0) in these frames can be directly measured by an accelerometer, and is called proper acceleration or rest acceleration. The relation of a0 in a momentary inertial frame S′ and a measured in an external inertial frame S follows from (1c, 1d) with a′ = a0, u′ = 0, u = v and γ = γv. So in terms of (1c), when the velocity is directed in the x-direction by u = ux = v = vx and when only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered, the component relation (3a) follows, as sketched below. Generalized by (1d) for arbitrary directions of u of magnitude |u| = u: a0 = γ²[a + (a·u)(u/u²)(γ − 1)] and a = (1/γ²)[a0 − (a0·u)(u/u²)(1 − 1/γ)]. There is also a close relationship to the magnitude of four-acceleration: as it is invariant, it can be determined in the momentary inertial frame S′, in which Ar′ = a0, and in which dt′/dτ = 1 implies d²t′/dτ² = At′ = 0. Thus the magnitude of four-acceleration corresponds to the magnitude of proper acceleration. By combining this with (2b), an alternative method for the determination of the connection between a0 in S′ and a in S is given, namely |a0| = |A| = √(γ⁴[a² + γ²(u·a/c)²]), from which (3a) follows again when the velocity is directed in the x-direction by u = ux and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered.
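In component form, the relation referred to as (3a) reads as follows (a sketch of the standard result for velocity u directed along x, consistent with the longitudinal γ³ and transverse γ² factors quoted in the history section):

```latex
% Proper acceleration components for velocity u along the x-axis:
a^{0}_{x} = \gamma^{3} a_{x}, \qquad
a^{0}_{y} = \gamma^{2} a_{y}, \qquad
a^{0}_{z} = \gamma^{2} a_{z}, \qquad
\gamma = \frac{1}{\sqrt{1-u^{2}/c^{2}}}.
```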
Acceleration and force:
Assuming constant mass m, the four-force F as a function of three-force f is related to four-acceleration (2a) by F = mA (4a). From this follows the relation between three-force and three-acceleration for arbitrary directions of the velocity (4b), as well as its special form (4c) for the case when the velocity is directed in the x-direction by u = ux and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered. Therefore, the Newtonian definition of mass as the ratio of three-force and three-acceleration is disadvantageous in SR, because such a mass would depend both on velocity and direction. Consequently, the following mass definitions used in older textbooks are not used anymore: m∥ = fx/ax = mγ³ as "longitudinal mass", m⊥ = fy/ay = fz/az = mγ as "transverse mass". The relation (4b) between three-acceleration and three-force can also be obtained from the equation of motion f = dp/dt = d(mγu)/dt, where p is the three-momentum. The corresponding transformation of three-force between f in S and f′ in S′ (when the relative velocity between the frames is directed in the x-direction by v = vx and only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity are considered) follows by substitution of the relevant transformation formulas for u, a, mγ, and d(mγ)/dt, or from the Lorentz transformed components of four-force, with the result (4e); or, generalized for arbitrary directions of u as well as v with magnitude |v| = v, the result (4f).
Proper acceleration and proper force:
The force f0 in a momentary inertial frame measured by a comoving spring balance can be called proper force. It follows from (4e, 4f) by setting f′ = f0 and u′ = 0 as well as u = v and γ = γv. Thus by (4e), when only accelerations parallel (x-direction) or perpendicular (y-, z-direction) to the velocity u = ux = v = vx are considered, the component relation (5a) follows. Generalized by (4f) for arbitrary directions of u of magnitude |u| = u: f0 = γf − (f·u)(u/u²)(γ − 1) and f = f0/γ + (f0·u)(u/u²)(1 − 1/γ). Since in momentary inertial frames one has four-force F = (0, f0) and four-acceleration A = (0, a0), equation (4a) produces the Newtonian relation f0 = ma0; therefore (3a, 4c, 5a) can be summarized as (5b). By that, the apparent contradiction in the historical definitions of transverse mass m⊥ can be explained. Einstein (1905) described the relation between three-acceleration and proper force, m⊥Einstein = f0y/ay = f0z/az = mγ², while Lorentz (1899, 1904) and Planck (1906) described the relation between three-acceleration and three-force, m⊥Lorentz = fy/ay = fz/az = mγ.
Curved world lines:
By integration of the equations of motion one obtains the curved world lines of accelerated bodies corresponding to a sequence of momentary inertial frames (here, the expression "curved" refers to the form of the worldlines in Minkowski diagrams, which should not be confused with the "curved" spacetime of general relativity). In connection with this, the so-called clock hypothesis or clock postulate has to be considered: the proper time of comoving clocks is independent of acceleration, that is, the time dilation of these clocks as seen in an external inertial frame only depends on their relative velocity with respect to that frame. Two simple cases of curved world lines are now provided by integration of equation (3a) for proper acceleration: a) Hyperbolic motion: the constant, longitudinal proper acceleration α = ax0 = axγ³ by (3a) leads to the worldline (6a) sketched below. The worldline corresponds to the hyperbolic equation c⁴/α² = (x + c²/α)² − c²t², from which the name hyperbolic motion is derived. These equations are often used for the calculation of various scenarios of the twin paradox or Bell's spaceship paradox, or in relation to space travel using constant acceleration.
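The worldline itself, given explicitly by the displayed equations (6a) that are missing from this copy, is commonly parametrized by proper time τ as follows (a sketch of the standard form for a body starting from rest at x = 0, consistent with the hyperbola quoted above):

```latex
% Hyperbolic motion with constant proper acceleration alpha, starting from rest at x = 0:
ct(\tau) = \frac{c^{2}}{\alpha}\,\sinh\!\frac{\alpha\tau}{c},
\qquad
x(\tau) = \frac{c^{2}}{\alpha}\!\left(\cosh\!\frac{\alpha\tau}{c} - 1\right),
% which satisfies  (x + c^2/alpha)^2 - (ct)^2 = c^4/alpha^2 .
```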
Curved world lines:
b) The constant, transverse proper acceleration ay0=ayγ2 by (3a) can be seen as a centripetal acceleration, leading to the worldline of a body in uniform rotation where v=rΩ0 is the tangential speed, r is the orbital radius, Ω0 is the angular velocity as a function of coordinate time, and Ω=γΩ0 as the proper angular velocity.
Curved world lines:
A classification of curved worldlines can be obtained by using the differential geometry of triple curves, which can be expressed by spacetime Frenet-Serret formulas. In particular, it can be shown that hyperbolic motion and uniform circular motion are special cases of motions having constant curvatures and torsions, satisfying the condition of Born rigidity. A body is called Born rigid if the spacetime distance between its infinitesimally separated worldlines or points remains constant during acceleration.
Accelerated reference frames:
Instead of inertial frames, these accelerated motions and curved worldlines can also be described using accelerated or curvilinear coordinates. The proper reference frame established that way is closely related to Fermi coordinates. For instance, the coordinates for an hyperbolically accelerated reference frame are sometimes called Rindler coordinates, or those of a uniformly rotating reference frame are called rotating cylindrical coordinates (or sometimes Born coordinates). In terms of the equivalence principle, the effects arising in these accelerated frames are analogous to effects in a homogeneous, fictitious gravitational field. In this way it can be seen, that the employment of accelerating frames in SR produces important mathematical relations, which (when further developed) play a fundamental role in the description of real, inhomogeneous gravitational fields in terms of curved spacetime in general relativity.
History:
For further information see von Laue, Pauli, Miller, Zahar, Gourgoulhon, and the historical sources in history of special relativity.
History:
1899: Hendrik Lorentz derived the correct (up to a certain factor ϵ ) relations for accelerations, forces and masses between a resting electrostatic systems of particles S0 (in a stationary aether), and a system S emerging from it by adding a translation, with k as the Lorentz factor: 1ϵ2 , 1kϵ2 , 1kϵ2 for f/f0 by (5a); 1k3ϵ , 1k2ϵ , 1k2ϵ for a/a0 by (3a); k3ϵ , kϵ , kϵ for f/(ma) , thus longitudinal and transverse mass by (4c); Lorentz explained that he has no means of determining the value of ϵ . If he had set ϵ=1 , his expressions would have assumed the exact relativistic form.
History:
1904: Lorentz derived the previous relations in a more detailed way, namely with respect to the properties of particles resting in the system Σ′ and the moving system Σ , with the new auxiliary variable l equal to 1/ϵ compared to the one in 1899, thus: F(Σ)=(l2,l2k,l2k)F(Σ′) for f as a function of f0 by (5a); mj(Σ)=(l2,l2k,l2k)mj(Σ′) for ma as a function of ma0 by (5b); j(Σ)=(lk3,lk2,lk2)j(Σ′) for a as a function of a0 by (3a); m(Σ)=(k3l,kl,kl)m(Σ′) for longitudinal and transverse mass as a function of the rest mass by (4c, 5b).
History:
This time, Lorentz could show that l=1 , by which his formulas assume the exact relativistic form. He also formulated the equation of motion F=dGdt with G=e26πc2Rklw which corresponds to (4d) with f=dpdt=d(mγu)dt , with l=1 , F=f , G=p , w=u , k=γ , and e2/(6πc2R)=m as electromagnetic rest mass. Furthermore, he argued, that these formulas should not only hold for forces and masses of electrically charged particles, but for other processes as well so that the earth's motion through the aether remains undetectable.
History:
1905: Henri Poincaré introduced the transformation of three-force (4e): X1′=kl3ρρ′(X1+ϵΣX1ξ),Y1′=ρρ′Y1l3,Z1′=ρρ′Z1l3 with ρρ′=kl3(1+ϵξ) , and k as the Lorentz factor, ρ the charge density. Or in modern notation: ϵ=v , ξ=ux , (X1,Y1,Z1)=f , and ΣX1ξ=f⋅u . As Lorentz, he set l=1 1905: Albert Einstein derived the equations of motions on the basis of his special theory of relativity, which represent the relation between equally valid inertial frames without the action of a mechanical aether. Einstein concluded, that in a momentary inertial frame k the equations of motion retain their Newtonian form: μd2ξdτ2=ϵX′,μd2ηdτ2=ϵY′,μd2ζdτ2=ϵZ′ This corresponds to f0=ma0 , because μ=m and (d2ξdτ2,d2ηdτ2,d2ζdτ2)=a0 and (ϵX′,ϵY′,ϵZ′)=f0 . By transformation into a relatively moving system K he obtained the equations for the electrical and magnetic components observed in that frame: d2xdt2=ϵμ1β3X,d2ydt2=ϵμ1β(Y−vVN),d2zdt2=ϵμ1β(Z+vVM) This corresponds to (4c) with a=fm(1γ3,1γ,1γ) , because μ=m and (d2xdt2,d2ydt2,d2zdt2)=a and [ϵX,ϵ(Y−vVN),ϵ(Z+vVM)]=f and β=γ . Consequently, Einstein determined the longitudinal and transverse mass, even though he related it to the force (ϵX′,ϵY′,ϵZ′)=f0 in the momentary rest frame measured by a comoving spring balance, and to the three-acceleration a in system K longitudinal mass transverse mass This corresponds to (5b) with ma(γ3,γ2,γ2)=f(1,γ,γ)=f0 1905: Poincaré introduces the transformation of three-acceleration (1c): dξ′dt′=dξdt1k3μ3,dη′dt′=dηdt1k2μ2−dξdtηϵk2μ3,dζ′dt′=dζdt1k2μ2−dξdtζϵk2μ3 where (ξ,η,ζ)=u as well as k=γ and ϵ=v and μ=1+ξϵ=1+uxv Furthermore, he introduced the four-force in the form: k0X1,k0Y1,k0Z1,k0T1 where k0=γ0 and (X1,Y1,Z1)=f and T1=ΣX1ξ=f⋅u 1906: Max Planck derived the equation of motion etc.
History:
with e(x˙Ex+y˙Ey+z˙Ez)=m(x˙x¨+y˙y¨+z˙z¨)(1−q2c2)3/2 and etc.
and etc.
The equations correspond to (4d) with f=dpdt=d(mγu)dt=mγ3((a⋅u)uc2)+mγa , with X=fx and q=v and x˙x¨+y˙y¨+z˙z¨=u⋅a , in agreement with those given by Lorentz (1904).1907: Einstein analyzed a uniformly accelerated reference frame and obtained formulas for coordinate dependent time dilation and speed of light, analogous to those given by Kottler-Møller-Rindler coordinates.
History:
1907: Hermann Minkowski defined the relation between the four-force (which he called the moving force) and the four acceleration mddτdxdτ=Rx,mddτdydτ=Ry,mddτdzdτ=Rz,mddτdtdτ=Rt corresponding to mA=F 1908: Minkowski denotes the second derivative x,y,z,t with respect to proper time as "acceleration vector" (four-acceleration). He showed, that its magnitude at an arbitrary point P of the worldline is c2/ϱ , where ϱ is the magnitude of a vector directed from the center of the corresponding "curvature hyperbola" (German: Krümmungshyperbel) to P 1909: Max Born denotes the motion with constant magnitude of Minkowski's acceleration vector as "hyperbolic motion" (German: Hyperbelbewegung), in the course of his study of rigidly accelerated motion. He set p=dx/dτ (now called proper velocity) and q=−dt/dτ=1+p2/c2 as Lorentz factor and τ as proper time, with the transformation equations x=−qξ,y=η,z=ζ,t=pc2ξ which corresponds to (6a) with ξ=c2/α and sinh (ατ/c) . Eliminating p Born derived the hyperbolic equation x2−c2t2=ξ2 , and defined the magnitude of acceleration as b=c2/ξ . He also noticed that his transformation can be used to transform into a "hyperbolically accelerated reference system" (German: hyperbolisch beschleunigtes Bezugsystem).
History:
1909: Gustav Herglotz extends Born's investigation to all possible cases of rigidly accelerated motion, including uniform rotation.
History:
1910: Arnold Sommerfeld brought Born's formulas for hyperbolic motion in a more concise form with l=ict as the imaginary time variable and φ as an imaginary angle: cos sin φ He noted that when r,y,z are variable and φ is constant, they describe the worldline of a charged body in hyperbolic motion. But if r,y,z are constant and φ is variable, they denote the transformation into its rest frame.
History:
1911: Sommerfeld explicitly used the expression "proper acceleration" (German: Eigenbeschleunigung) for the quantity v˙0 in v˙=v˙0(1−β2)3/2 , which corresponds to (3a), as the acceleration in the momentary inertial frame.
1911: Herglotz explicitly used the expression "rest acceleration" (German: Ruhbeschleunigung) instead of proper acceleration. He wrote it in the form γl0=β3γl and γt0=β2γt which corresponds to (3a), where β is the Lorentz factor and γl0 or γt0 are the longitudinal and transverse components of rest acceleration.
History:
1911: Max von Laue derived in the first edition of his monograph "Das Relativitätsprinzip" the transformation for three-acceleration by differentiation of the velocity addition q˙x=(cc2−v2c2+vqx′)3q˙x′,q˙y=(cc2−v2c2+vqx′)2(q˙x′−vqy′q˙x′c2+vqx′), equivalent to (1c) as well as to Poincaré (1905/6). From that he derived the transformation of rest acceleration (equivalent to 3a), and eventually the formulas for hyperbolic motion which corresponds to (6a): ±qx=±dxdt=cbtc2+b2t2,±(x−x0)=cbc2+b2t2, thus x2−c2t2=x2−u2=c4/b2,y=η,z=ζ and the transformation into a hyperbolic reference system with imaginary angle φ cos sin tan φ=LX He also wrote the transformation of three-force as Kx=Kx′+vc2(q′K′)1+vqx′c2,Ky=Ky′1−β21+vqx′c2,Kz=Kz′1−β21+vqx′c2, equivalent to (4e) as well as to Poincaré (1905).
History:
1912–1914: Friedrich Kottler obtained general covariance of Maxwell's equations, and used four-dimensional Frenet-Serret formulas to analyze the Born rigid motions given by Herglotz (1909). He also obtained the proper reference frames for hyperbolic motion and uniform circular motion.
History:
1913: von Laue replaced in the second edition of his book the transformation of three-acceleration by Minkowski's acceleration vector for which he coined the name "four-acceleration" (German: Viererbeschleunigung), defined by Y˙=dYdτ with Y as four-velocity. He showed, that the magnitude of four-acceleration corresponds to the rest acceleration q˙0 by |Y|˙=1c|q˙0| which corresponds to (3b). Subsequently, he derived the same formulas as in 1911 for the transformation of rest acceleration and hyperbolic motion, and the hyperbolic reference frame. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tealight**
Tealight:
A tealight (also tea-light, tea light, tea candle, or informally tea lite, t-lite or t-candle) is a candle in a thin metal or plastic cup so that the candle can liquefy completely while lit. They are typically small, circular, usually wider than their height, and inexpensive. Tealights derive their name from their use in teapot warmers, but are also used as food warmers in general, e.g. fondue. Tealights are a popular choice for accent lighting and for heating scented oil. A benefit that they have over taper candles is that they do not drip.
Tealight:
Tealights may be set afloat on water for decorative effect. Because of their small size and low level of light, multiple tealights are often burned simultaneously. Longer-burning tealights may be called nightlights. They are also lit for religious purposes.
Varieties:
Tealights can come in many different shapes and sizes, small and large, as well as burn times and scents. However, tealights are commonly short and cylindrical, approximately 38 mm (1.5 in) in diameter by 16 mm (0.63 in) high, with white unscented wax.
A standard tealight has a power output of around 32 watts, depending on the wax used. When used in batches of fifty or more, such as at a party, the two most desired criteria are "least amount of smoke" and long burning time.
Cup design:
The wick is tethered to a piece of metal to stop it from floating to the top of the molten wax and burning out before the wax does.
Tealights have been protected under several patented designs. In some cases, the standard tea light metal cup has been replaced with a clear plastic cup, sometimes made out of polycarbonate plastic. The clear cup allows more light to escape the holder. However, the metal cups are much more common.
Holders:
When not placed on a tray, tealights are placed in a special holder, which may be pierced or have partly clear walls to allow light to pass through.
Holders:
From small pockets of glass, metal, ceramic, and other materials to larger, more elaborate tea light lamps, holders come in a wide range of styles, colours, and shapes. They have an appropriately sized cup to use a tealight candle, either scented or unscented. Discount stores, gift stores, and home decor stores often carry an array of holders for these small candles.
Electrical:
Electric tealights have become increasingly popular as newer technology becomes available. They can feature incandescent or LED bulbs, the latter becoming the preferred format as LEDs become more-efficient and brighter. They can come in many different colours to set a mood, match a decor or augment the container's design. Some can also simulate a moving flame with various mechanical or electronic animations.
Electrical:
Electric tealights are not useful as a source of heat, so they are not suitable for chafing dishes or other food warmers.
Safety:
The use of tealights may be prohibited by safety regulations, such as in hospitals. Electric tealights are much safer than flame-based tealights, and can be left unattended as there is no open flame or heat to worry about. This allows them to be placed inside freestanding lace structures, or in candle holders made from paper, wood or other flammable materials. They can also be made much smaller to fit where a large flame-based tealight cannot.
**Haversack**
Haversack:
A haversack, musette bag, or small pack is a bag with a single shoulder strap. Although similar to a backpack, the single shoulder strap differentiates this type from other backpacks. There are exceptions to this general rule.
Origins:
The word haversack is an adaptation of the German Hafersack and also the Dutch haverzak meaning "oat sack", (which more properly describes a small cloth bag on a strap worn over one shoulder and originally referred to the bag of oats carried as horse fodder). The term was adopted by both the English and French (as havresac) cavalry in the 17th century. The word haver likewise means "oats" in Northern English and Scottish dialects.The haversack, especially when used in the military, was generally square and about 12 inches (30 cm) per side with a button-down flap to close it. When empty, the bag could be folded in three and an extra button on the back of the bag would allow it to be refixed in this position. For the military, this made it neat and, when held to the side in its folded form by the soldier's belt, it became part of the uniform of many regiments in the British army.
Origins:
During the American Revolutionary War, soldiers used haversacks to carry their individual food rations for the day, when the mission did not call for a full rucksack.
Commonwealth usage:
In Australia, India and other commonwealth countries in South Asia the word haversack is synonymous with rucksack or other similar terms and is casually used to describe any big backpack.
U.S. Army:
Haversacks were in use during the American Civil War, as is recounted in Ulysses Grant's memoirs: "In addition to the supplies transported by boat, the men were to carry forty rounds of ammunition in the cartridge-boxes and four days' rations in haversacks."In 1910, the U.S. Army adopted the M1910 haversack as the standard pack for all infantrymen. The pack is essentially a sheet of canvas that folds around its contents (clothing, daily rations, and assorted personal items), and is held together by adjustable straps that thread through loops. A "tail" threaded onto the bottom of the haversack with a leather strap is intended to hold the bedroll and can be detached from the haversack without disturbing the contents of the pack. Shoulder straps and a single rear strap are designed to attach to a cartridge belt in a suspender configuration. The exterior of the pack has grommets for attaching a bayonet scabbard, a mess kit pouch, and a canvas carrier for a short-handled shovel (entrenching tool).The M1910 haversack continued production during the interwar years with minor modifications:"An upgraded haversack was developed in 1928 that had quick release buckles and a web strap and buckle closure on the meat can pouch replacing the metal button. However, the M-1928 haversack did not go into production until 1940, and older haversacks continued to be issued until stocks were exhausted." The M-1944 Combat Pack was developed from the much lighter and user-friendly US Marine Corps M-1941 Jungle pack which was developed during the Banana Wars which required a lighter pack in the tropics. The M-1944 pack had some shortcomings and a new M-1945 began replacing earlier packs in February 1945. The two packs had incompatible combat and cargo packs because of different release buckles.The new two-part design, based on the Marine M-1941 jungle pack, used a much smaller back pack (for rations, clothes, ammunition, and messkit), and a separate cargo bag that attached to the bottom for extra clothes, shoes, and miscellaneous other items. The upper field pack had the same type of grommet tabs and loops as the M-1928 for attaching a bayonet and entrenchment tool plus straps for securing a "horseshoe" bedroll.The M-1936 field bag was a copy of the British officers Musette bag of World War I and was issued to officers, engineers and mounted personnel. It was a smaller pack lacking shoulder straps and could be attached to a set of cotton web suspenders or carried by a single general purpose shoulder strap. It was intended to carry rations, mess gear, and other essential items and was smaller as less essential gear would be carried on a vehicle.
U.S. Marine Corps:
The Marines carried the M-1910 haversack and the somewhat-improved M-1928 haversack in both world wars, but they also developed their own exclusive pack system in 1941. The M-1910 haversack was considered too overweight and cumbersome for jungle fighting in the tropics of Central America during the years of the Banana Wars.
A more versatile two-part M-1941 system was devised. It has an upper "marching pack" for rations, poncho and clothes, and a lower knapsack for extra shoes and utilities. The exterior of the upper pack had loops and grommet tabs for attaching a bayonet, shovel, bedroll, extra canteen, and first-aid pouch. It was issued in tan or khaki canvas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Posterior inferior iliac spine**
Posterior inferior iliac spine:
The posterior inferior iliac spine (Sweeney's Tubercle) is an anatomical landmark that describes a bony "spine", or projection, at the posterior and inferior surface of the iliac bone.
It is one of two such spines on the posterior surface, the other being the posterior superior iliac spine. These two spines are separated by a bony notch. They appear as two dimples in the skin, at the level of the lower back.
The posterior inferior iliac spine corresponds with the posterior extremity of the auricular surface. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Label (Mac OS)**
Label (Mac OS):
In Apple's Macintosh operating systems, labels are a set of seven distinct colored and named metadata attributes that can be applied to items (files, folders and disks) in the filesystem. Labels were introduced in Macintosh System 7, released in 1991, and they were an improvement on the ability to colorize items in earlier versions of the Finder. Labels remained a feature of the Macintosh operating system through the end of Mac OS 9 in late 2001, but they were omitted from Mac OS X versions 10.0 to 10.2, before being reintroduced in version 10.3 in 2003, though not without criticism. During the short time period when Mac OS X lacked labels, third-party software replicated the feature.
In classic Mac OS:
In classic Mac OS versions 7 through 9, applying a label to an item causes the item's icon to be tinted in that color when using a color computer monitor (as opposed to the black-and-white monitors of early Macs), and labels can be used as a search and sorting criterion. There is a choice of seven colors because three bits are reserved for the label color: 001 through 111, and 000 for no label. The names of the colors can be changed to represent categories assigned to the label colors. Both label colors and names can be customized in the classic Mac OS systems; however, Mac OS 8 and 9 provided this functionality through the Labels tab in the Finder Preferences dialog, while System 7 provided a separate Labels control panel. Labels in Mac OS 9 and earlier, once customized, were specific to an individual install; booting into another install, be it on another Mac or different disk would show different colors and names unless set identically. A colorless label could be produced by changing a label's color to black or white.
In Mac OS X and later:
Mac OS X versions 10.3 to 10.8 apply the label color to the background of item names, except when an item is selected in column view, which changes the item name to the standard highlight color except for a label-colored dot after the name. Beginning in OS X 10.9, the label-colored background of item names is replaced with a small label-colored dot, and becomes a kind of tag.
Relation to tags:
The Mac operating system has allowed users to assign multiple arbitrary tags as extended file attributes to any item ever since OS X 10.9 was released in 2013. These tags coexist with the legacy label system for backward compatibility, so that multiple colored (or colorless) tags can be added to a single item, but only the last colored tag applied to an item will set the legacy label that will be seen when viewing the item in the older operating systems. Labeled items that were created in the older operating systems will superficially seem to be tagged in OS X 10.9 and later even though they are only labeled and lack the newer tag extended file attributes (until they are edited in the new system). Since label colors can be changed in classic Mac OS but are standardized and unchangeable in the newer operating systems, someone who wants to synchronize the label colors between a classic and modern system can change the label colors in classic Mac OS to match the newer system. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Paul Mackenzie**
Paul Mackenzie:
Paul B. Mackenzie (born 1950) is a theoretical physicist at the Fermi National Accelerator Laboratory. He did graduate work in physics at Cornell University where he was a student of G. Peter Lepage. He is an expert on Lattice Gauge Theory. He is the chair of the Executive Committee of USQCD, the US collaboration for developing the necessary supercomputing hardware and software for quantum chromodynamics formulated on a lattice.
Selected publications:
Mackenzie has published 71 scientific papers listed in the INSPIRE-HEP Literature Database. The most widely cited of them, "Viability of lattice perturbation theory" in Physical Review D 48 (5), pp. 2250–2264 (1993), had been cited 589 times by March 2009. The second most widely cited, "On the elimination of scale ambiguities in perturbative quantum chromodynamics", Physical Review D 28 (1), pp. 228–235 (1983), had been cited 406 times. Both papers are with Lepage, and the second also with Stan Brodsky.
**The FEBS Journal**
The FEBS Journal:
The FEBS Journal is a biweekly peer-reviewed scientific journal published by John Wiley & Sons on behalf of the Federation of European Biochemical Societies. It covers research on all aspects of biochemistry, molecular biology, cell biology, and the molecular bases of disease. The editor-in-chief is Seamus Martin (Trinity College Dublin), who took over from Richard Perham (University of Cambridge) in 2014. Content is available for free 1 year after publication, except review content, which is available immediately. The journal also publishes special and virtual issues focusing on a specific theme.
The FEBS Journal:
Since 2021, the journal has given an annual award, "The FEBS Journal Richard Perham Prize", for an outstanding research paper published in the journal. The winners receive a €5,000 cash prize (to be divided equally between the first and last authors) and the senior author of the study is invited to give a talk at the FEBS Annual Congress. The journal also gives more frequent poster prize awards to early-career scientists presenting at conferences.
History:
The journal was established in 1906 by Carl Neuberg, who also served as the first editor-in-chief. Its original name was Biochemische Zeitschrift. It was renamed to the European Journal of Biochemistry in 1967, with Claude Liébecq as editor-in-chief, succeeded by Richard Perham, during whose tenure the name became the FEBS Journal, in 2005.
Notable papers:
During the early years the Biochemische Zeitschrift published numerous papers important in the history of biochemistry, including that of Michaelis and Menten.
Notable papers:
The two name changes make it difficult to compare all the most notable papers published in the journal, but some are the following: Biochemische Zeitschrift Michaelis, L.; Menten, M.L. (1913). "Die Kinetik der Invertinwirkung" [The kinetics of invertase action]. Biochemische Zeitschrift. 49 (17): 333–369. (3667 citations)Warburg, O.; Christian, W. (1942). "Isolierung und Kristallisation des Gärungsferments Enolase" [Isolation and crystallization of yeast enolase]. Biochem. Z. 310: 384–421. (2188 citations)Hagedorn, H.C.; Jensen, B.N. (1923). "On the micro-determination of blood-sugar by means of ferric cyanide". Biochem. Z. 135: 46–58. (1237 citations) European Journal of Biochemistry Laskey, Ronald A.; Mills, Anthony D. (1975). "Quantitative Film Detection of 3H and 14C in Polyacrylamide Gels by Fluorography". European Journal of Biochemistry. 56 (2): 335–341. doi:10.1111/j.1432-1033.1975.tb02238.x. PMID 1175627. (9207 citations)Marklund, S.; Marklund, G. (1974). "Involvement of superoxide anion radical in autoxidation of pyrogallol and a convenient assay for superoxide-dismutase". Eur. J. Biochem. 47 (3): 469–474. doi:10.1111/j.1432-1033.1974.tb03714.x. PMID 4215654. (4971 citations)Bonner, William M.; Laskey, Ronald A. (1974). "A Film Detection Method for Tritium-Labelled Proteins and Nucleic Acids in Polyacrylamide Gels". European Journal of Biochemistry. 46 (1): 83–88. doi:10.1111/j.1432-1033.1974.tb03599.x. PMID 4850204. (4925 citations) FEBS Journal Gialeli, Chrisostomi; Theocharis, Achilleas D.; Karamanos, Nikos K. (2011). "Roles of matrix metalloproteinases in cancer progression and their pharmacological targeting". FEBS Journal. 278 (1): 16–27. doi:10.1111/j.1742-4658.2010.07919.x. PMID 21087457. S2CID 2260074. (964 citations)Zelensky, Alex N.; Gready, Jill E. (2005). "The C-type lectin-like domain superfamily". FEBS Journal. 272 (24): 6179–6217. doi:10.1111/j.1742-4658.2005.05031.x. PMID 16336259. S2CID 7084402. (807 citations)Yoshida, Hiderou (2007). "ER stress and diseases". FEBS Journal. 274 (3): 630–658. doi:10.1111/j.1742-4658.2007.05639.x. PMID 17288551. S2CID 25715028. (758 citations)
Abstracting and indexing:
The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2019 impact factor of 4.392. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Non-blocking I/O (Java)**
Non-blocking I/O (Java):
java.nio (NIO stands for New Input/Output) is a collection of Java programming language APIs that offer features for intensive I/O operations. It was introduced with the J2SE 1.4 release of Java by Sun Microsystems to complement the existing standard I/O. NIO was developed under the Java Community Process as JSR 51. An extension to NIO that offers a new file system API, called NIO.2, was released with Java SE 7 ("Dolphin").
Features and organization:
The APIs of NIO were designed to provide access to the low-level I/O operations of modern operating systems. Although the APIs are themselves relatively high-level, the intent is to facilitate an implementation that can directly use the most efficient operations of the underlying platform.
Features and organization:
The Java NIO APIs are provided in the java.nio package and its subpackages. The documentation by Oracle identifies the following features:
Buffers for data of primitive types
Character set encoders and decoders
A pattern-matching facility based on Perl-style regular expressions (in package java.util.regex)
Channels, a new primitive I/O abstraction
A file interface that supports locks and memory mapping of files up to Integer.MAX_VALUE bytes (2 GiB)
A multiplexed, non-blocking I/O facility for writing scalable servers
NIO buffers
NIO data transfer is based on buffers (java.nio.Buffer and related classes). These classes represent a contiguous extent of memory, together with a small number of data transfer operations. Although theoretically these are general-purpose data structures, the implementation may select memory for alignment or paging characteristics, which are not otherwise accessible in Java. Typically, this would be used to allow the buffer contents to occupy the same physical memory used by the underlying operating system for its native I/O operations, thus allowing the most direct transfer mechanism and eliminating the need for any additional copying. In most operating systems, provided the particular area of memory has the right properties, transfer can take place without using the CPU at all. The NIO buffer is intentionally limited in features in order to support these goals.
Features and organization:
There are buffer classes for all of Java's primitive types except boolean; these typed buffers can share memory with byte buffers, allowing arbitrary interpretation of the underlying bytes.
Features and organization:
Usage
NIO buffers maintain several pointers that dictate the function of their accessor methods. The NIO buffer implementation contains a rich set of methods for modifying these pointers: The flip() method, rather than performing a "flip" or paging function in the canonical sense, moves the position pointer to the origin of the underlying array (if any) and the limit pointer to the former position of the position pointer.
Features and organization:
Three get() methods are supplied for transferring data out of a NIO buffer. The bulk implementation, rather than performing a "get" in the traditional sense, "puts" the data into a specified array. The "offset" argument supplied to this method refers not to the offset from within the buffer from which to read, nor an offset from the position pointer, but rather the offset from 0 within the target array.
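A minimal sketch of these pointer semantics, using only standard java.nio.ByteBuffer methods (the buffer size and contents here are arbitrary illustrations):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BufferPointerDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);   // position=0, limit=16, capacity=16

        buf.put(new byte[] {10, 20, 30, 40});       // position advances to 4

        // flip(): limit becomes the old position (4), position returns to 0,
        // so the buffer is now ready to be read back.
        buf.flip();

        // Bulk get(): "puts" the data into the target array. The offset (1) is an
        // offset into the TARGET array, not into the buffer.
        byte[] target = new byte[6];
        buf.get(target, 1, 4);                      // target = {0, 10, 20, 30, 40, 0}

        System.out.println(Arrays.toString(target));
    }
}
```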
Features and organization:
Unless using the absolute get() and put() methods, any get() or put() is conducted from the position pointer. Should one need to read from a different position within the underlying array, whilst not adjusting the writing position, the mark() and reset() methods have been supplied.
The mark() method effectively stores the position of the position pointer by setting the mark pointer to the position of the position pointer. The reset() method causes the position pointer to move to the mark pointer's position.
Upon invocation of the clear() method or the flip() method the mark pointer is discarded.
The clear() method does not zero the buffer's contents, but it does return the limit pointer to the upper boundary of the underlying array and the position pointer to zero.
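The mark/reset and clear behavior can be seen in a small sketch like the following (again with arbitrary example data):

```java
import java.nio.ByteBuffer;

public class MarkResetClearDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {1, 2, 3, 4, 5});

        buf.get();                      // position = 1
        buf.mark();                     // mark pointer remembers position 1
        buf.get();                      // position = 2
        buf.get();                      // position = 3
        buf.reset();                    // position moves back to the mark (1)
        System.out.println(buf.get());  // prints 2 again

        buf.clear();                    // position = 0, limit = capacity; contents are NOT zeroed
        System.out.println(buf.get());  // prints 1; the old data is still there
    }
}
```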
put() and get() operations for NIO buffers are not thread safe.
A java.nio.MappedByteBuffer obtained from a java.nio.channels.FileChannel via map() can cover at most Integer.MAX_VALUE bytes (2 GiB) per mapping; regions beyond this limit can be accessed using a mapping whose offset is greater than zero.
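A sketch of how such mappings might be obtained; the file name "data.bin" is a placeholder and error handling is omitted:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappingDemo {
    public static void main(String[] args) throws Exception {
        // "data.bin" is a placeholder file name used only for illustration.
        try (RandomAccessFile file = new RandomAccessFile("data.bin", "r");
             FileChannel channel = file.getChannel()) {

            // A single mapping cannot exceed Integer.MAX_VALUE bytes (2 GiB)...
            long firstSize = Math.min(channel.size(), Integer.MAX_VALUE);
            MappedByteBuffer first =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, firstSize);

            // ...but regions beyond that limit can be reached with a non-zero offset.
            if (channel.size() > Integer.MAX_VALUE) {
                long restSize = Math.min(channel.size() - Integer.MAX_VALUE, Integer.MAX_VALUE);
                MappedByteBuffer second =
                        channel.map(FileChannel.MapMode.READ_ONLY, Integer.MAX_VALUE, restSize);
            }
        }
    }
}
```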
Features and organization:
Channels
Channels (classes implementing the interface java.nio.channels.Channel) are designed to provide for bulk data transfers to and from NIO buffers. This is a low-level data transfer mechanism that exists in parallel with the classes of the higher-level I/O library (packages java.io and java.net). A channel implementation can be obtained from a high-level data transfer class such as java.io.FileInputStream, java.io.RandomAccessFile, java.net.ServerSocket, or java.net.Socket, and vice versa. Channels are analogous to the "file descriptors" found in Unix-like operating systems.
Features and organization:
File channels (java.nio.channels.FileChannel) can use arbitrary buffers but can also establish a buffer directly mapped to file contents as a memory-mapped file. They can also interact with file system locks. Similarly, socket channels (java.nio.channels.SocketChannel and java.nio.channels.ServerSocketChannel) allow for data transfer between sockets and NIO buffers.
Features and organization:
FileChannel can be used to copy a file, which is potentially far more efficient than an old-style read/write loop with a byte array. Typical code for this is shown in the sketch below.
Selectors
A selector (java.nio.channels.Selector and subclasses) provides a mechanism for waiting on channels and recognizing when one or more become available for data transfer. When a number of channels are registered with the selector, it enables blocking of the program flow until at least one channel is ready for use, or until an interruption condition occurs.
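One common form of the channel-to-channel copy mentioned above (a sketch rather than canonical code; the transferTo loop is needed because a single call may copy fewer bytes than requested, and the file names are placeholders):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;

public class ChannelCopy {
    // Copies a file channel-to-channel.
    public static void copy(String from, String to) throws Exception {
        try (FileChannel in  = new FileInputStream(from).getChannel();
             FileChannel out = new FileOutputStream(to).getChannel()) {
            long position = 0;
            long size = in.size();
            while (position < size) {
                // transferTo may copy fewer bytes than requested, so loop until done.
                position += in.transferTo(position, size - position, out);
            }
        }
    }
}
```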
Features and organization:
Although this multiplexing behavior could be implemented with threads, the selector can provide a significantly more efficient implementation using lower-level operating system constructs. A POSIX-compliant operating system, for example, exposes this concept directly through system calls such as select(). A notable application of this design is the common paradigm in server software of simultaneously waiting for activity on a number of sessions.
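A compact sketch of this pattern with a non-blocking ServerSocketChannel; the port number and buffer size are arbitrary choices, and error handling is omitted:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));   // port 8080 is an arbitrary example
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                      // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) < 0) {  // -1 means the peer closed the connection
                        key.cancel();
                        client.close();
                    }
                }
            }
        }
    }
}
```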
Features and organization:
Character sets
In Java, a character set is a mapping between Unicode characters (or a subset of them) and bytes.
The java.nio.charset package of NIO provides facilities for identifying character sets and providing encoding and decoding algorithms for new mappings.
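For illustration, encoding and decoding through a Charset might look like the following sketch (StandardCharsets is a Java 7 convenience; Charset.forName("UTF-8") works on older releases):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        Charset utf8 = StandardCharsets.UTF_8;   // or Charset.forName("UTF-8")

        // Encode: Unicode characters -> bytes
        ByteBuffer bytes = utf8.encode("héllo");

        // Decode: bytes -> Unicode characters
        CharBuffer chars = utf8.decode(bytes);
        System.out.println(chars);               // prints héllo
    }
}
```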
Reception
It has been noted as unexpected that a Channel associated with a java.io.RandomAccessFile closes the underlying file descriptor when the thread is interrupted, whereas RandomAccessFile's own read method does not do this.
JDK 7 and NIO.2:
JDK 7 includes a java.nio.file package which, together with the Path class (also new to JDK 7), provides extended capabilities for filesystem tasks: for example, it can work with symbolic and hard links and can dump large directory listings into buffers more quickly than the old File class does.
The java.nio.file package and its related package, java.nio.file.attribute, provide comprehensive support for file I/O and for accessing the file system. A zip file system provider is also available in JDK 7.
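A small sketch of these NIO.2 facilities; the paths used are placeholders, and creating symbolic links may require OS-level permissions:

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Nio2Demo {
    public static void main(String[] args) throws Exception {
        // "/tmp/example" and "target.txt" are placeholder names for illustration only.
        Path dir = Paths.get("/tmp/example");

        // Streamed directory listing, instead of building one large File[] array.
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            for (Path entry : entries) {
                String suffix = Files.isSymbolicLink(entry)
                        ? " -> " + Files.readSymbolicLink(entry)
                        : "";
                System.out.println(entry.getFileName() + suffix);
            }
        }

        // Creating a symbolic link (requires appropriate OS permissions).
        Files.createSymbolicLink(dir.resolve("link"), dir.resolve("target.txt"));
    }
}
```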
JDK 7 and NIO.2:
java.nio.file.LinkOption is an example of emulating extensible enums with interfaces. In Java, it is not possible to have one enum extend another. However, it is possible to emulate an extensible enum type by having an enum implement one or more interfaces. LinkOption is an enum type that implements both the OpenOption and CopyOption interfaces, which emulates the effect of an extensible enum type. A small downside to this approach is that implementations cannot be inherited between the various enum types. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
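A sketch of the same idiom; the interface and enum names below are invented for illustration and are not part of the java.nio.file API:

```java
// The "extensible enum via interface" idiom that LinkOption follows.
interface CopyFlag { }

enum BasicCopyFlag implements CopyFlag {
    REPLACE_EXISTING, COPY_ATTRIBUTES
}

enum ExtendedCopyFlag implements CopyFlag {
    FOLLOW_LINKS, VERIFY_CHECKSUM
}

public class ExtensibleEnumDemo {
    // Callers accept the interface, so both enum types (and any future ones) work here,
    // even though one enum cannot literally extend another.
    static void copy(String from, String to, CopyFlag... flags) {
        for (CopyFlag flag : flags) {
            System.out.println("option: " + flag);
        }
    }

    public static void main(String[] args) {
        copy("a.txt", "b.txt", BasicCopyFlag.REPLACE_EXISTING, ExtendedCopyFlag.FOLLOW_LINKS);
    }
}
```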
**Permutation (music)**
Permutation (music):
In music, a permutation (order) of a set is any ordering of the elements of that set; that is, a specific arrangement of a set of discrete entities or parameters, such as pitch, dynamics, or timbre. Different permutations may be related by transformation, through the application of zero or more operations, such as transposition, inversion, retrogradation, circular permutation (also called rotation), or multiplicative operations (such as the cycle of fourths and cycle of fifths transforms). These may produce reorderings of the members of the set, or may simply map the set onto itself.
Permutation (music):
Order is particularly important in the theories of composition techniques originating in the 20th century such as the twelve-tone technique and serialism. Analytical techniques such as set theory take care to distinguish between ordered and unordered collections. In traditional theory concepts like voicing and form include ordering; for example, many musical forms, such as rondo, are defined by the order of their sections.
Permutation (music):
The permutations resulting from applying the inversion or retrograde operations are categorized as the prime form's inversions and retrogrades, respectively. Applying both inversion and retrograde to a prime form produces its retrograde-inversions, considered a distinct type of permutation. Permutation may be applied to smaller sets as well. However, transformation operations of such smaller sets do not necessarily result in a permutation of the original set. Here is an example of non-permutation of trichords, using retrogradation, inversion, and retrograde-inversion, combined in each case with transposition, as found within the tone row (or twelve-tone series) from Anton Webern's Concerto: If the first three notes are regarded as the "original" cell, then the next three are its transposed retrograde-inversion (backwards and upside down), the next three are the transposed retrograde (backwards), and the last three are its transposed inversion (upside down). Not all prime series have the same number of variations because the transposed and inverse transformations of a tone row may be identical, a quite rare phenomenon: less than 0.06% of all series admit 24 forms instead of 48. One technique facilitating twelve-tone permutation is the use of number values corresponding with musical letters. The first note of the first of the primes, actually prime zero (commonly mistaken for prime one), is represented by 0. The rest of the numbers are counted half-step-wise such that: B = 0, C = 1, C♯/D♭ = 2, D = 3, D♯/E♭ = 4, E = 5, F = 6, F♯/G♭ = 7, G = 8, G♯/A♭ = 9, A = 10, and A♯/B♭ = 11.
Permutation (music):
Prime zero is chosen entirely by the composer. To obtain the retrograde of any given prime, the numbers are simply rewritten backwards. To obtain the inversion of any prime, each number value is subtracted from 12 (modulo 12, so that 0 remains 0) and the resulting number placed in the corresponding matrix cell (see twelve-tone technique). The retrograde inversion is the values of the inversion numbers read backwards. Therefore:
A given prime zero (derived from the notes of Anton Webern's Concerto): 0, 11, 3, 4, 8, 7, 9, 5, 6, 1, 2, 10
The retrograde: 10, 2, 1, 6, 5, 9, 7, 8, 4, 3, 11, 0
The inversion: 0, 1, 9, 8, 4, 5, 3, 7, 6, 11, 10, 2
The retrograde inversion: 2, 10, 11, 6, 7, 3, 5, 4, 8, 9, 1, 0
More generally, a musical permutation is any reordering of the prime form of an ordered set of pitch classes or, with respect to twelve-tone rows, any ordering at all of the set consisting of the integers modulo 12. In that regard, a musical permutation is a combinatorial permutation from mathematics as it applies to music. Permutations are in no way limited to the twelve-tone serial and atonal musics, but are just as well utilized in tonal melodies, especially during the 20th and 21st centuries, notably in Rachmaninoff's Rhapsody on a Theme of Paganini for piano and orchestra. Cyclical permutation (also called rotation) is the maintenance of the original order of the tone row with the only change being the initial pitch class, with the original order following after. A secondary set may be considered a cyclical permutation beginning on the sixth member of a hexachordally combinatorial row. The tone row from Berg's Lyric Suite, for example, is realized thematically and then cyclically permuted (0 is bolded for reference):
5 4 **0** 9 7 2 8 1 3 6 t e
3 6 t e 5 4 **0** 9 7 2 8 1 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
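The arithmetic described above is mechanical enough to express in a few lines of code. The following sketch (in Java, chosen only for illustration) reproduces the row forms listed above from the Webern prime:

```java
import java.util.Arrays;

public class ToneRowForms {
    // Prime zero taken from the article's Webern example.
    static final int[] PRIME = {0, 11, 3, 4, 8, 7, 9, 5, 6, 1, 2, 10};

    // Retrograde: the row read backwards.
    static int[] retrograde(int[] row) {
        int[] out = new int[row.length];
        for (int i = 0; i < row.length; i++) out[i] = row[row.length - 1 - i];
        return out;
    }

    // Inversion: each value subtracted from 12, modulo 12, so that 0 stays 0.
    static int[] inversion(int[] row) {
        int[] out = new int[row.length];
        for (int i = 0; i < row.length; i++) out[i] = (12 - row[i]) % 12;
        return out;
    }

    public static void main(String[] args) {
        System.out.println("Retrograde:           " + Arrays.toString(retrograde(PRIME)));
        System.out.println("Inversion:            " + Arrays.toString(inversion(PRIME)));
        System.out.println("Retrograde inversion: " + Arrays.toString(retrograde(inversion(PRIME))));
    }
}
```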
**Cheesecloth**
Cheesecloth:
Cheesecloth is a loose-woven gauze-like carded cotton cloth used primarily in cheesemaking and cooking.
Grades:
Cheesecloth is available in at least seven different grades, from open to extra-fine weave. Grades are distinguished by the number of threads per inch in each direction.
Uses:
Food preparation
The primary use of cheesecloth is in some styles of cheesemaking, where it is used to remove whey from cheese curds, and to help hold the curds together as the cheese is formed. Cheesecloth is also used in straining stocks and custards, bundling herbs, making tofu and ghee, and thickening yogurt. Queso blanco and queso fresco are Spanish and Mexican cheeses that are made from whole milk using cheesecloth. Italian Ricotta is made from cow, sheep or goat acidified whey, traditionally formed in cheesecloth cones. Quark is a type of German unsalted cheese that is sometimes formed with cheesecloth. Paneer is a kind of Indian fresh cheese that is commonly made with cheesecloth. Fruitcake is wrapped in rum-infused cheesecloth during the process of "feeding" the fruitcake as it ripens.
Uses:
Manufacturing, testing and preservation
Cheesecloth can also be used for several printmaking processes, including lithography, for wiping up gum arabic. In intaglio, a heavily starched cheesecloth called tarlatan is used for wiping away excess ink from the printing surface. Cheesecloth #60 is used in product safety and regulatory testing for potential fire hazards. Cheesecloth is wrapped tightly over the device under test, which is then subjected to simulated conditions such as lightning surges conducted through power or telecom cables, power faults, etc. The device may be destroyed but must not ignite the cheesecloth. This is to ensure that the device can fail safely and not start electrical fires in its vicinity.
Uses:
Cheesecloth made to United States Federal Standard CCC-C-440 is used to test the durability of optical coatings per United States Military Standard MIL-C-48497. The optics are exposed to a 95–100% humidity environment at 120 °F (49 °C) for 24 hours, and then a 1⁄4 inch (6.4 mm) thick by 3⁄8 in (9.5 mm) wide pad of cheesecloth is rubbed over the optical surface for at least 50 strokes under a force of at least 1 pound-force (4.4 N). The optical surface is examined for streaks or scratches, and then its optical performance is measured to ensure that no deterioration occurred. Cheesecloth is used in India and Pakistan for making summer shirts. Cheesecloth shirts were popular for beachwear during the 1960s and 1970s in the United States. Cheesecloth has been used to create the illusion of "ectoplasm" during spirit channelling or other ghost-related phenomena. Cheesecloth also has a use in anatomical dissection laboratories to slow the process of desiccation. The cloth can be soaked with a preservative solution such as formalin and then wrapped around the specimen, or at other times simply wrapped first and then sprayed with water. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ambrein**
Ambrein:
Ambrein is a triterpene alcohol that is the chief constituent of ambergris, a secretion from the digestive system of the sperm whale, and has been suggested as the possible active component producing the supposed aphrodisiac effects of ambergris. Although ambrein itself is odorless, it serves as the biological precursor for a number of aromatic derivatives such as ambroxan and is thought to possess fixative properties for other odorants.
Ambrein:
It has been shown to act as an analgesic and to increase sexual behavior in rats, providing some support for its traditional aphrodisiac use.
Apart from its supposed aphrodisiac effects, ambrein has been shown to decrease spontaneous contractions of smooth muscle in rats, guinea pigs, and rabbits. It reduces these contractions by acting as an antagonist that interferes with the entry of Ca2+ ions from outside the cell.
Discovery:
In 1946, Ruzicka and Lardon "established that the fragrance of ambergris is based on the triterpene (named) ambrein".
Biosynthesis:
Ambrein is synthesized from the common triterpenoid precursor squalene. A squalene-hopene cyclase (SHC) catalyzes cyclization of squalene into the monocyclic 3-deoxyachilleol A. Tetraprenyl-beta-curcumene synthase (BmeTC) then converts 3-deoxyachilleol A into the tricyclic ambrein. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Homokaryotic**
Homokaryotic:
Homokaryotic (adj.) is a term used to refer to multinucleate cells in which all nuclei are genetically identical. In multinucleate cells, the nuclei share one common cytoplasm, as is found in the hyphal cells or mycelium of filamentous fungi. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Orange Peel (horse)**
Orange Peel (horse):
Orange Peel (foaled 1919) was a Thoroughbred stallion that had a significant influence on the breeding of sport horses.
Orange Peel had a great influence on the breeding of show jumpers. He sired 19 sons from 1924 to 1940, and his descendants remain very successful: 26 of the top 100 show jumping sires of 1990 had him in their pedigree.
One of Orange Peel's greatest descendants was his grandson, the Anglo-Norman Ibrahim, who produced such great sires as Quastor (1960) and Almé Z (1966). Sons of Orange Peel include The Last Orange, the sire of Ibrahim, Jus de Pomme, and Plein d'Espoirs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hearing loss**
Hearing loss:
Hearing loss is a partial or total inability to hear. Hearing loss may be present at birth or acquired at any time afterwards. Hearing loss may occur in one or both ears. In children, hearing problems can affect the ability to acquire spoken language, and in adults it can create difficulties with social interaction and at work. Hearing loss can be temporary or permanent. Hearing loss related to age usually affects both ears and is due to cochlear hair cell loss. In some people, particularly older people, hearing loss can result in loneliness. Deaf people usually have little to no hearing. Hearing loss may be caused by a number of factors, including: genetics, ageing, exposure to noise, some infections, birth complications, trauma to the ear, and certain medications or toxins. A common condition that results in hearing loss is chronic ear infections. Certain infections during pregnancy, such as cytomegalovirus, syphilis and rubella, may also cause hearing loss in the child. Hearing loss is diagnosed when hearing testing finds that a person is unable to hear 25 decibels in at least one ear. Testing for poor hearing is recommended for all newborns. Hearing loss can be categorized as mild (25 to 40 dB), moderate (41 to 55 dB), moderate-severe (56 to 70 dB), severe (71 to 90 dB), or profound (greater than 90 dB). There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss. About half of hearing loss globally is preventable through public health measures. Such practices include immunization, proper care around pregnancy, avoiding loud noise, and avoiding certain medications. The World Health Organization recommends that young people limit exposure to loud sounds and the use of personal audio players to an hour a day in an effort to limit exposure to noise. Early identification and support are particularly important in children. For many, hearing aids, sign language, cochlear implants and subtitles are useful. Lip reading is another useful skill some develop. Access to hearing aids, however, is limited in many areas of the world. As of 2013 hearing loss affects about 1.1 billion people to some degree. It causes disability in about 466 million people (5% of the global population), and moderate to severe disability in 124 million people. Of those with moderate to severe disability 108 million live in low and middle income countries. Of those with hearing loss, it began during childhood for 65 million. Those who use sign language and are members of Deaf culture may see themselves as having a difference rather than a disability. Many members of Deaf culture oppose attempts to cure deafness and some within this community view cochlear implants with concern as they have the potential to eliminate their culture. The terms hearing impairment or hearing loss are often viewed negatively as emphasizing what people cannot do, although the terms are still regularly used when referring to deafness in medical contexts.
Definition:
Hearing loss is defined as diminished acuity to sounds which would otherwise be heard normally. The terms hearing impaired or hard of hearing are usually reserved for people who have a relative inability to hear sound in the speech frequencies. Hearing loss can occur when sound waves enter the ears and damage their sensitive tissues. The severity of hearing loss is categorized according to the increase in intensity of sound above the usual level required for the listener to detect it.
Definition:
Deafness is defined as a degree of loss such that a person is unable to understand speech, even in the presence of amplification. In profound deafness, even the highest intensity sounds produced by an audiometer (an instrument used to measure hearing by producing pure tone sounds through a range of frequencies) may not be detected. In total deafness, no sounds at all, regardless of amplification or method of production, can be heard.
Definition:
Speech perception is another aspect of hearing which involves the perceived clarity of a word rather than the intensity of sound made by the word. In humans, this is usually measured with speech discrimination tests, which measure not only the ability to detect sound, but also the ability to understand speech. There are very rare types of hearing loss that affect speech discrimination alone. One example is auditory neuropathy, a variety of hearing loss in which the outer hair cells of the cochlea are intact and functioning, but sound information is not faithfully transmitted by the auditory nerve to the brain. Use of the terms "hearing impaired", "deaf-mute", or "deaf and dumb" to describe deaf and hard of hearing people is discouraged by many in the deaf community as well as advocacy organizations, as they are offensive to many deaf and hard of hearing people.
Definition:
Hearing standards
Human hearing extends in frequency from 20 to 20,000 Hz, and in intensity from 0 dB to 120 dB HL or more. 0 dB does not represent absence of sound, but rather the softest sound an average unimpaired human ear can hear; some people can hear down to −5 or even −10 dB. Sound is generally uncomfortably loud above 90 dB, and 115 dB represents the threshold of pain. The ear does not hear all frequencies equally well: hearing sensitivity peaks around 3,000 Hz. There are many qualities of human hearing besides frequency range and intensity that cannot easily be measured quantitatively. However, for many practical purposes, normal hearing is defined by a frequency versus intensity graph, or audiogram, charting sensitivity thresholds of hearing at defined frequencies. Because of the cumulative impact of age and exposure to noise and other acoustic insults, 'typical' hearing may not be normal.
Signs and symptoms:
difficulty using the telephone
loss of sound localization
difficulty understanding speech, especially of children and women whose voices are of a higher frequency.
Signs and symptoms:
difficulty understanding speech in the presence of background noise (cocktail party effect)
sounds or speech sounding dull, muffled or attenuated
need for increased volume on television, radio, music and other audio sources
Hearing loss is sensory, but may have accompanying symptoms:
pain or pressure in the ears
a blocked feeling
There may also be accompanying secondary symptoms:
hyperacusis, heightened sensitivity with accompanying auditory pain to certain intensities and frequencies of sound, sometimes defined as "auditory recruitment"
tinnitus, ringing, buzzing, hissing or other sounds in the ear when no external sound is present
vertigo and disequilibrium
tympanophonia, also known as autophonia, abnormal hearing of one's own voice and respiratory sounds, usually as a result of a patulous (constantly open) eustachian tube or dehiscent superior semicircular canals
disturbances of facial movement (indicating a possible tumour or stroke) or in persons with Bell's palsy
Complications
Hearing loss is associated with Alzheimer's disease and dementia. The risk increases with the degree of hearing loss. There are several hypotheses, including cognitive resources being redistributed to hearing and social isolation from hearing loss having a negative effect. According to preliminary data, hearing aid usage can slow down the decline in cognitive functions. Hearing loss is responsible for causing thalamocortical dysrhythmia in the brain, which is a cause of several neurological disorders including tinnitus and visual snow syndrome.
Signs and symptoms:
Cognitive decline
Hearing loss is an increasing concern, especially in aging populations. The prevalence of hearing loss increases about two-fold for each decade increase in age after age 40. While the secular trend might decrease individual-level risk of developing hearing loss, the prevalence of hearing loss is expected to rise due to the aging population in the US. Another concern about the aging process is cognitive decline, which may progress to mild cognitive impairment and eventually dementia. The association between hearing loss and cognitive decline has been studied in various research settings. Despite the variability in study design and protocols, the majority of these studies have found a consistent association between age-related hearing loss and cognitive decline, cognitive impairment, and dementia. The association between age-related hearing loss and Alzheimer's disease was found to be nonsignificant, and this finding supports the hypothesis that hearing loss is associated with dementia independently of Alzheimer pathology. There are several hypotheses about the underlying causal mechanism for age-related hearing loss and cognitive decline. One hypothesis is that this association can be explained by common etiology or shared neurobiological pathology with decline in other physiological systems. Another possible cognitive mechanism emphasizes the individual's cognitive load. As people develop hearing loss in the process of aging, the cognitive load demanded by auditory perception increases, which may lead to changes in brain structure and eventually to dementia. One other hypothesis suggests that the association between hearing loss and cognitive decline is mediated through various psychosocial factors, such as decrease in social contact and increase in social isolation. Findings on the association between hearing loss and dementia have significant public health implications, since about 9% of dementia cases are associated with hearing loss.
Signs and symptoms:
Falls
Falls have important health implications, especially for an aging population in which they can lead to significant morbidity and mortality. Elderly people are particularly vulnerable to the consequences of injuries caused by falls, since older individuals typically have greater bone fragility and poorer protective reflexes. Fall-related injury can also place burdens on the financial and health care systems. In the literature, age-related hearing loss is found to be significantly associated with incident falls. There is also a potential dose-response relationship between hearing loss and falls: greater severity of hearing loss is associated with increased difficulties in postural control and increased prevalence of falls. The underlying causal link between hearing loss and falls is yet to be elucidated. There are several hypotheses that indicate that there may be a common process between decline in the auditory system and increase in incident falls, driven by physiological, cognitive, and behavioral factors. This evidence suggests that treating hearing loss has the potential to increase health-related quality of life in older adults.
Signs and symptoms:
Depression
Depression is one of the leading causes of morbidity and mortality worldwide. In older adults, the suicide rate is higher than it is for younger adults, and more suicide cases are attributable to depression. Different studies have been done to investigate potential risk factors that can give rise to depression in later life. Some chronic diseases have been found to be significantly associated with the risk of developing depression, such as coronary heart disease, pulmonary disease, vision loss and hearing loss. Hearing loss can contribute to a decrease in health-related quality of life, an increase in social isolation and a decline in social engagement, which are all risk factors for an increased risk of developing depression symptoms.
Signs and symptoms:
Spoken language ability
Post-lingual deafness is hearing loss that is sustained after the acquisition of language, which can occur due to disease, trauma, or as a side-effect of a medicine. Typically, hearing loss is gradual and often detected by family and friends of affected individuals long before the patients themselves will acknowledge the disability. Post-lingual deafness is far more common than pre-lingual deafness. Those who lose their hearing later in life, such as in late adolescence or adulthood, face their own challenges, living with the adaptations that allow them to live independently.
Signs and symptoms:
Prelingual deafness is profound hearing loss that is sustained before the acquisition of language, which can occur due to a congenital condition or through hearing loss before birth or in early infancy. Prelingual deafness impairs an individual's ability to acquire a spoken language, but deaf children can acquire spoken language through support from cochlear implants (sometimes combined with hearing aids). Non-signing (hearing) parents of deaf babies (90–95% of cases) usually go with an oral approach without the support of sign language, as these families lack previous experience with sign language and cannot competently provide it to their children without learning it themselves. In some cases (late implantation or insufficient benefit from cochlear implants) this brings a risk of language deprivation for the deaf baby, who would not have a sign language to fall back on if spoken language is not acquired successfully. The 5–10% of deaf babies born into signing families have the potential for age-appropriate development of language due to early exposure to a sign language by sign-competent parents, and thus the potential to meet language milestones in sign language in lieu of spoken language.
Causes:
Hearing loss has multiple causes, including ageing, genetics, perinatal problems and acquired causes like noise and disease. For some kinds of hearing loss the cause is unknown.
Causes:
There is a progressive loss of ability to hear high frequencies with aging known as presbycusis. For men, this can start as early as 25, and for women at 30. Although genetically variable, it is a normal concomitant of ageing and is distinct from hearing losses caused by noise exposure, toxins or disease agents. Common conditions that can increase the risk of hearing loss in elderly people are high blood pressure, diabetes, or the use of certain medications harmful to the ear. While everyone loses hearing with age, the amount and type of hearing loss is variable. Noise-induced hearing loss (NIHL), also known as acoustic trauma, typically manifests as elevated hearing thresholds (i.e. less sensitivity or muting). Noise exposure is the cause of approximately half of all cases of hearing loss, causing some degree of problems in 5% of the population globally. The majority of hearing loss is not due to age, but due to noise exposure. Various governmental, industry and standards organizations set noise standards. Many people are unaware of the presence of environmental sound at damaging levels, or of the level at which sound becomes harmful. Common sources of damaging noise levels include car stereos, children's toys, motor vehicles, crowds, lawn and maintenance equipment, power tools, gun use, musical instruments, and even hair dryers. Noise damage is cumulative; all sources of damage must be considered to assess risk. In the US, 12.5% of children aged 6–19 years have permanent hearing damage from excessive noise exposure. The World Health Organization estimates that half of those between 12 and 35 are at risk from using personal audio devices that are too loud. Hearing loss in adolescents may be caused by loud noise from toys, music by headphones, and concerts or events. Hearing loss can be inherited. Around 75–80% of all these cases are inherited by recessive genes, 20–25% are inherited by dominant genes, 1–2% are inherited by X-linked patterns, and fewer than 1% are inherited by mitochondrial inheritance. Syndromic deafness occurs when there are other signs or medical problems aside from deafness in an individual, such as Usher syndrome, Stickler syndrome, Waardenburg syndrome, Alport's syndrome, and neurofibromatosis type 2. Nonsyndromic deafness occurs when there are no other signs or medical problems associated with the deafness in an individual. Fetal alcohol spectrum disorders are reported to cause hearing loss in up to 64% of infants born to alcoholic mothers, from the ototoxic effect on the developing fetus plus malnutrition during pregnancy from the excess alcohol intake. Premature birth can be associated with sensorineural hearing loss because of an increased risk of hypoxia, hyperbilirubinaemia, ototoxic medication and infection as well as noise exposure in the neonatal units. Also, hearing loss in premature babies is often discovered far later than a similar hearing loss would be in a full-term baby because normally babies are given a hearing test within 48 hours of birth, but doctors must wait until the premature baby is medically stable before testing hearing, which can be months after birth. The risk of hearing loss is greatest for those weighing less than 1500 g at birth.
Causes:
Disorders responsible for hearing loss include auditory neuropathy, Down syndrome, Charcot–Marie–Tooth disease variant 1E, autoimmune disease, multiple sclerosis, meningitis, cholesteatoma, otosclerosis, perilymph fistula, Ménière's disease, recurring ear infections, strokes, superior semicircular canal dehiscence, Pierre Robin, Treacher-Collins, Usher Syndrome, Pendred Syndrome, and Turner syndrome, syphilis, vestibular schwannoma, and viral infections such as measles, mumps, congenital rubella (also called German measles) syndrome, several varieties of herpes viruses, HIV/AIDS, and West Nile virus.
Causes:
Some medications may reversibly affect hearing. These medications are considered ototoxic. They include loop diuretics such as furosemide and bumetanide, non-steroidal anti-inflammatory drugs (NSAIDs) both over-the-counter (aspirin, ibuprofen, naproxen) and prescription (celecoxib, diclofenac, etc.), paracetamol, quinine, and macrolide antibiotics. Others may cause permanent hearing loss. The most important group is the aminoglycosides (main member gentamicin) and platinum-based chemotherapeutics such as cisplatin and carboplatin. In addition to medications, hearing loss can also result from specific chemicals in the environment: metals, such as lead; solvents, such as toluene (found in crude oil, gasoline and automobile exhaust, for example); and asphyxiants. Combined with noise, these ototoxic chemicals have an additive effect on a person's hearing loss. Hearing loss due to chemicals starts in the high frequency range and is irreversible. It damages the cochlea with lesions and degrades central portions of the auditory system. For some ototoxic chemical exposures, particularly styrene, the risk of hearing loss can be higher than from being exposed to noise alone. The effect is greatest when the combined exposure includes impulse noise. A 2018 informational bulletin by the US Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) introduces the issue, provides examples of ototoxic chemicals, lists the industries and occupations at risk and provides prevention information. There can be damage either to the ear, whether the external or middle ear, to the cochlea, or to the brain centers that process the aural information conveyed by the ears. Damage to the middle ear may include fracture and discontinuity of the ossicular chain. Damage to the inner ear (cochlea) may be caused by temporal bone fracture. People who sustain head injury are especially vulnerable to hearing loss or tinnitus, either temporary or permanent.
Pathophysiology:
Sound waves reach the outer ear and are conducted down the ear canal to the eardrum, causing it to vibrate. The vibrations are transferred by the 3 tiny ear bones of the middle ear to the fluid in the inner ear. The fluid moves hair cells (stereocilia), and their movement generates nerve impulses which are then taken to the brain by the cochlear nerve. The auditory nerve takes the impulses to the brainstem, which sends the impulses to the midbrain. Finally, the signal goes to the auditory cortex of the temporal lobe to be interpreted as sound. Hearing loss is most commonly caused by long-term exposure to loud noises, from recreation or from work, that damage the hair cells, which do not grow back on their own. Older people may lose their hearing from long exposure to noise, changes in the inner ear, changes in the middle ear, or from changes along the nerves from the ear to the brain.
Diagnosis:
Identification of a hearing loss is usually conducted by a general practitioner medical doctor, otolaryngologist, certified and licensed audiologist, school or industrial audiometrist, or other audiometric technician. Diagnosis of the cause of a hearing loss is carried out by a specialist physician (audiovestibular physician) or otorhinolaryngologist.
Diagnosis:
Hearing loss is generally measured by playing generated or recorded sounds, and determining whether the person can hear them. Hearing sensitivity varies according to the frequency of sounds. To take this into account, hearing sensitivity can be measured for a range of frequencies and plotted on an audiogram. Another method for quantifying hearing loss is a hearing test using a mobile application or hearing aid application, which includes a hearing test. Hearing diagnosis using a mobile application is similar to the audiometry procedure. Audiograms obtained using mobile applications can be used to adjust hearing aid applications. Yet another method for quantifying hearing loss is a speech-in-noise test, which gives an indication of how well one can understand speech in a noisy environment. The otoacoustic emissions test is an objective hearing test that may be administered to toddlers and children too young to cooperate in a conventional hearing test. Auditory brainstem response testing is an electrophysiological test used to test for hearing deficits caused by pathology within the ear, the cochlear nerve and also within the brainstem.
Diagnosis:
A case history (usually a written form, with questionnaire) can provide valuable information about the context of the hearing loss, and indicate what kind of diagnostic procedures to employ. Examinations include otoscopy, tympanometry, and differential testing with the Weber, Rinne, Bing and Schwabach tests. In case of infection or inflammation, blood or other body fluids may be submitted for laboratory analysis. MRI and CT scans can be useful to identify the pathology of many causes of hearing loss.
Diagnosis:
Hearing loss is categorized by severity, type, and configuration. Furthermore, a hearing loss may exist in only one ear (unilateral) or in both ears (bilateral). Hearing loss can be temporary or permanent, sudden or progressive. The severity of a hearing loss is ranked according to the range of nominal thresholds at which a sound must be presented in order to be detected by an individual. It is measured in decibels of hearing loss, or dB HL. There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss. An additional problem which is increasingly recognised is auditory processing disorder, which is not a hearing loss as such but a difficulty perceiving sound. The shape of an audiogram shows the relative configuration of the hearing loss, such as a Carhart notch for otosclerosis, a 'noise' notch for noise-induced damage, high-frequency rolloff for presbycusis, or a flat audiogram for conductive hearing loss. In conjunction with speech audiometry, it may indicate central auditory processing disorder, or the presence of a schwannoma or other tumor.
Diagnosis:
People with unilateral hearing loss or single-sided deafness (SSD) have difficulty hearing conversation on their impaired side, localizing sound, and understanding speech in the presence of background noise. One reason for the hearing problems these patients often experience is the head shadow effect. Idiopathic sudden hearing loss is a condition in which a person has an immediate decrease in the sensitivity of their sensorineural hearing that does not have a known cause. This type of loss is usually only on one side (unilateral) and the severity of the loss varies. A common threshold of a "loss of at least 30 dB in three connected frequencies within 72 hours" is sometimes used, however there is no universal definition or international consensus for diagnosing idiopathic sudden hearing loss.
Prevention:
It is estimated that half of cases of hearing loss are preventable. About 60% of hearing loss in children under the age of 15 can be avoided. There are a number of effective preventative strategies, including: immunization against rubella to prevent congenital rubella syndrome, immunization against H. influenza and S. pneumoniae to reduce cases of meningitis, and avoiding or protecting against excessive noise exposure. The World Health Organization also recommends immunization against measles, mumps, and meningitis, efforts to prevent premature birth, and avoidance of certain medication as prevention. World Hearing Day is a yearly event to promote actions to prevent hearing damage.
Prevention:
Avoiding exposure to loud noise can help prevent noise-induced hearing loss. 18% of adults exposed to loud noise at work for five years or more report hearing loss in both ears, compared to 5.5% of adults who were not exposed to loud noise at work. Different programs exist for specific populations such as school-age children, adolescents and workers. Education regarding noise exposure increases the use of hearing protectors. However, hearing protection devices (HPDs) issued without individual selection, training and fit testing do not significantly reduce the risk of hearing loss. The use of antioxidants is being studied for the prevention of noise-induced hearing loss, particularly for scenarios in which noise exposure cannot be reduced, such as during military operations.
Prevention:
Workplace noise regulation
Noise is widely recognized as an occupational hazard. In the United States, the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) work together to provide standards and enforcement on workplace noise levels. The hierarchy of hazard controls demonstrates the different levels of controls to reduce or eliminate exposure to noise and prevent hearing loss, including engineering controls and personal protective equipment (PPE). Other programs and initiatives have been created to prevent hearing loss in the workplace. For example, the Safe-in-Sound Award was created to recognize organizations that can demonstrate results of successful noise control and other interventions. Additionally, the Buy Quiet program was created to encourage employers to purchase quieter machinery and tools. By purchasing less noisy power tools like those found on the NIOSH Power Tools Database and limiting exposure to ototoxic chemicals, great strides can be made in preventing hearing loss. Companies can also provide personal hearing protector devices tailored to both the worker and type of employment. Some hearing protectors universally block out all noise, and some allow for certain noises to be heard. Workers are more likely to wear hearing protector devices when they are properly fitted. Interventions to prevent noise-induced hearing loss often have many components. A 2017 Cochrane review found that stricter legislation might reduce noise levels. Providing workers with information on their sound exposure levels was not shown to decrease exposure to noise. Ear protection, if used correctly, can reduce noise to safer levels, but often, providing it is not sufficient to prevent hearing loss. Engineering noise out and other solutions such as proper maintenance of equipment can lead to noise reduction, but further field studies on resulting noise exposures following such interventions are needed. Other possible solutions include improved enforcement of existing legislation and better implementation of well-designed prevention programmes, which have not yet been proven conclusively to be effective. The conclusion of the Cochrane review was that further research could modify what is now known regarding the effectiveness of the evaluated interventions. The Institute for Occupational Safety and Health of the German Social Accident Insurance has created a hearing impairment calculator based on the ISO 1999 model for studying threshold shift in relatively homogeneous groups of people, such as workers with the same type of job. The ISO 1999 model estimates how much hearing impairment in a group can be ascribed to age and noise exposure. The result is calculated via an algebraic equation that uses the A-weighted sound exposure level, how many years the people were exposed to this noise, how old the people are, and their sex. The model's estimations are only useful for people without hearing loss due to non-job-related exposure and can be used for prevention activities.
Prevention:
Screening
The United States Preventive Services Task Force recommends neonatal hearing screening for all newborns, as the first three years of life are believed to be the most important for language development. Universal neonatal hearing screenings have now been widely implemented across the U.S., with rates of newborn screening increasing from less than 3% in the early 1990s to 98% in 2009. Newborns whose screening reveals a high index of suspicion of hearing loss are referred for additional diagnostic testing with the goal of providing early intervention and access to language. The American Academy of Pediatrics advises that children should have their hearing tested several times throughout their schooling:
When they enter school
At ages 6, 8, and 10
At least once during middle school
At least once during high school
While the American College of Physicians indicated that there is not enough evidence to determine the utility of screening in adults over 50 years old who do not have any symptoms, the American Speech-Language-Hearing Association recommends that adults should be screened at least every decade through age 50 and at three-year intervals thereafter, to minimize the detrimental effects of the untreated condition on quality of life. For the same reason, the US Office of Disease Prevention and Health Promotion included among its Healthy People 2020 objectives: to increase the proportion of persons who have had a hearing examination.
Management:
Management depends on the specific cause if known, as well as the extent, type and configuration of the hearing loss. Sudden hearing loss due to an underlying nerve problem may be treated with corticosteroids. Most hearing loss, that resulting from age and noise, is progressive and irreversible, and there are currently no approved or recommended treatments. A few specific kinds of hearing loss are amenable to surgical treatment. In other cases, treatment is addressed to underlying pathologies, but any hearing loss incurred may be permanent. Some management options include hearing aids, cochlear implants, middle ear implants, assistive technology, and closed captioning. The choice depends on the level of hearing loss, type of hearing loss, and personal preference. Hearing aid applications are one of the options for hearing loss management. For people with bilateral hearing loss, it is not clear if bilateral hearing aids (hearing aids in both ears) are better than a unilateral hearing aid (hearing aid in one ear).
Management:
Idiopathic sudden hearing loss
For people with idiopathic sudden hearing loss, different treatment approaches have been suggested that are usually based on the suspected cause of the sudden hearing loss. Treatment approaches may include corticosteroid medications, rheological drugs, vasodilators, anesthetics, and other medications chosen based on the suspected underlying pathology that caused the sudden hearing loss. The evidence supporting most treatment options for idiopathic sudden hearing loss is very weak, and the adverse effects of these different medications are a consideration when deciding on a treatment approach.
Epidemiology:
Globally, hearing loss affects about 10% of the population to some degree. It caused moderate to severe disability in 124.2 million people as of 2004 (107.9 million of whom are in low and middle income countries). Of these, 65 million acquired the condition during childhood. At birth, ~3 per 1000 in developed countries and more than 6 per 1000 in developing countries have hearing problems. Hearing loss increases with age. In those between 20 and 35, rates of hearing loss are 3%, while in those 44 to 55 it is 11% and in those 65 to 85 it is 43%. A 2017 report by the World Health Organization estimated the costs of unaddressed hearing loss and the cost-effectiveness of interventions, for the health-care sector, for the education sector and as broad societal costs. Globally, the annual cost of unaddressed hearing loss was estimated to be in the range of $750–790 billion international dollars.
Epidemiology:
The International Organization for Standardization (ISO) developed the ISO 1999 standards for the estimation of hearing thresholds and noise-induced hearing impairment. They used data from two noise and hearing study databases, one presented by Burns and Robinson (Hearing and Noise in Industry, Her Majesty's Stationery Office, London, 1970) and the other by Passchier-Vermeer (1968). As factors such as race can affect the expected distribution of pure-tone hearing thresholds, several other national or regional datasets exist, from Sweden, Norway, South Korea, the United States and Spain. In the United States, hearing is one of the health outcomes measured by the National Health and Nutrition Examination Survey (NHANES), a survey research program conducted by the National Center for Health Statistics. It examines the health and nutritional status of adults and children in the United States. Data from the United States in 2011-2012 found that rates of hearing loss had declined among adults aged 20 to 69 years, when compared with the results from an earlier time period (1999-2004). It also found that adult hearing loss is associated with increasing age, sex, ethnicity, educational level, and noise exposure. Nearly one in four adults had audiometric results suggesting noise-induced hearing loss. Almost one in four adults who reported excellent or good hearing had a similar pattern (5.5% on both sides and 18% on one side). Among people who reported exposure to loud noise at work, almost one third had such changes.
Social and cultural aspects:
People with extreme hearing loss may communicate through sign languages. Sign languages convey meaning through manual communication and body language instead of acoustically conveyed sound patterns. This involves the simultaneous combination of hand shapes, orientation and movement of the hands, arms or body, and facial expressions to express a speaker's thoughts. "Sign languages are based on the idea that vision is the most useful tool a deaf person has to communicate and receive information". Deaf culture refers to a tight-knit cultural group of people whose primary language is signed, and who practice social and cultural norms which are distinct from those of the surrounding hearing community. This community does not automatically include all those who are clinically or legally deaf, nor does it exclude every hearing person. According to Baker and Padden, it includes any person or persons who "identifies him/herself as a member of the Deaf community, and other members accept that person as a part of the community," an example being children of deaf adults with normal hearing ability. It includes the set of social beliefs, behaviors, art, literary traditions, history, values, and shared institutions of communities that are influenced by deafness and which use sign languages as the main means of communication. Members of the Deaf community tend to view deafness as a difference in human experience rather than a disability or disease. When used as a cultural label, especially within the culture, the word deaf is often written with a capital D and referred to as "big D Deaf" in speech and sign. When used as a label for the audiological condition, it is written with a lower case d. There are also multiple educational institutions for both deaf and Deaf people, which usually use sign language as the main language of instruction. Famous institutions include Gallaudet University and the National Technical Institute for the Deaf in the US, and the National University Corporation of Tsukuba University of Technology in Japan.
Research:
Stem cell transplant and gene therapy
A 2005 study achieved successful regrowth of cochlea cells in guinea pigs. However, the regrowth of cochlear hair cells does not imply the restoration of hearing sensitivity, as the sensory cells may or may not make connections with neurons that carry the signals from hair cells to the brain. A 2008 study has shown that gene therapy targeting Atoh1 can cause hair cell growth and attract neuronal processes in embryonic mice. Some hope that a similar treatment will one day ameliorate hearing loss in humans. Research reported in 2012 achieved growth of cochlear nerve cells, resulting in hearing improvements in gerbils, using stem cells. Also reported in 2013 was regrowth of hair cells in deaf adult mice using a drug intervention, resulting in hearing improvement. The Hearing Health Foundation in the US has embarked on a project called the Hearing Restoration Project. Action on Hearing Loss in the UK is also aiming to restore hearing. Researchers reported in 2015 that genetically deaf mice which were treated with TMC1 gene therapy recovered some of their hearing. In 2017, additional studies were performed to treat Usher syndrome, and here a recombinant adeno-associated virus seemed to outperform the older vectors.
Research:
Audition
Besides research studies seeking to improve hearing, such as the ones listed above, research studies on the deaf have also been carried out in order to understand more about audition. Pijil and Shwarz (2005) conducted their study on the deaf who lost their hearing later in life and, hence, used cochlear implants to hear. They discovered further evidence for rate coding of pitch, a system that codes information about frequency by the rate at which neurons fire in the auditory system, especially for lower frequencies, as these are coded by neurons firing from the basilar membrane in a synchronous manner. Their results showed that the subjects could identify different pitches that were proportional to the frequency stimulated by a single electrode. The lower frequencies were detected when the basilar membrane was stimulated, providing even further evidence for rate coding. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Identity matrix**
Identity matrix:
In linear algebra, the identity matrix of size n is the n×n square matrix with ones on the main diagonal and zeros elsewhere. It has unique properties; for example, when the identity matrix represents a geometric transformation, the object remains unchanged by the transformation. In other contexts, it is analogous to multiplying by the number 1.
Terminology and notation:
The identity matrix is often denoted by In, or simply by I if the size is immaterial or can be trivially determined by the context.
Terminology and notation:
The term unit matrix has also been widely used, but the term identity matrix is now standard. The term unit matrix is ambiguous, because it is also used for a matrix of ones and for any unit of the ring of all n×n matrices. In some fields, such as group theory or quantum mechanics, the identity matrix is sometimes denoted by a boldface one, 1, or called "id" (short for identity). Less frequently, some mathematics books use U or E to represent the identity matrix, standing for "unit matrix" and the German word Einheitsmatrix respectively. In terms of a notation that is sometimes used to concisely describe diagonal matrices, the identity matrix can be written as In = diag(1, 1, ..., 1). The identity matrix can also be written using the Kronecker delta notation: the (i, j) entry of In is δij, which equals 1 when i = j and 0 otherwise.
Properties:
When A is an m×n matrix, it is a property of matrix multiplication that Im A = A In = A. In particular, the identity matrix serves as the multiplicative identity of the matrix ring of all n×n matrices, and as the identity element of the general linear group GL(n), which consists of all invertible n×n matrices under the matrix multiplication operation. The identity matrix is itself invertible, being an involutory matrix equal to its own inverse. In this group, two square matrices have the identity matrix as their product exactly when they are the inverses of each other.
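As a minimal illustration of these properties (the matrices below are arbitrary examples, not taken from the text), a short NumPy sketch:

```python
import numpy as np

m, n = 2, 3
A = np.arange(m * n).reshape(m, n)   # an arbitrary m-by-n matrix
I_m, I_n = np.eye(m), np.eye(n)      # identity matrices of sizes m and n

# Im A = A In = A: the identity matrix acts as the multiplicative identity.
assert np.allclose(I_m @ A, A)
assert np.allclose(A @ I_n, A)

# The identity matrix is involutory: it equals its own inverse.
assert np.allclose(np.linalg.inv(I_n), I_n)

# Two square matrices multiply to the identity exactly when they are mutual inverses.
B = np.array([[2.0, 1.0], [1.0, 1.0]])
assert np.allclose(B @ np.linalg.inv(B), I_m)
```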
Properties:
When n×n matrices are used to represent linear transformations from an n -dimensional vector space to itself, the identity matrix In represents the identity function, for whatever basis was used in this representation.
Properties:
The i th column of an identity matrix is the unit vector ei, a vector whose i th entry is 1 and 0 elsewhere. The determinant of the identity matrix is 1, and its trace is n. The identity matrix is the only idempotent matrix with non-zero determinant; that is, it is the only matrix such that, when multiplied by itself, the result is itself, and all of its rows and columns are linearly independent. The principal square root of an identity matrix is itself, and this is its only positive-definite square root. However, every identity matrix with at least two rows and columns has an infinitude of symmetric square roots. The rank of an identity matrix In equals its size n, i.e., rank(In) = n. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
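The determinant, trace, rank, and square-root claims above can be checked numerically. The sketch below uses a Householder reflection, a standard construction (not mentioned in the text) that yields one of the infinitely many symmetric square roots of the identity:

```python
import numpy as np

n = 3
I = np.eye(n)

# Determinant 1, trace n, rank n.
assert np.isclose(np.linalg.det(I), 1.0)
assert np.isclose(np.trace(I), n)
assert np.linalg.matrix_rank(I) == n

# A Householder reflection H = I - 2*v*v^T/(v^T*v) is symmetric and squares
# to the identity; varying v yields infinitely many symmetric square roots.
v = np.array([[1.0], [2.0], [-1.0]])     # any non-zero vector works
H = I - 2.0 * (v @ v.T) / (v.T @ v)
assert np.allclose(H, H.T)   # symmetric
assert np.allclose(H @ H, I) # squares to the identity
```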
**Metamerism (biology)**
Metamerism (biology):
In biology, metamerism is the phenomenon of having a linear series of body segments fundamentally similar in structure, though not all such structures are entirely alike in any single life form because some of them perform special functions.
In animals, metameric segments are referred to as somites or metameres. In plants, they are referred to as metamers or, more concretely, phytomers.
In animals:
In animals, zoologists define metamery as a mesodermal event resulting in serial repetition of unit subdivisions of ectoderm and mesoderm products. Endoderm is not involved in metamery. Segmentation is not the same concept as metamerism: segmentation can be confined only to ectodermally derived tissue, e.g., in the Cestoda tapeworms. Metamerism is far more important biologically since it results in metameres - also called somites - that play a critical role in advanced locomotion.
In animals:
One can divide metamerism into two main categories: homonomous metamery is a strict serial succession of metameres. It can be grouped into two more classifications known as pseudometamerism and true metamerism. An example of pseudometamerism is in the class Cestoda. The tapeworm is composed of many repeating segments - primarily for reproduction and basic nutrient exchange. Each segment acts independently from the others, which is why it is not considered true metamerism. Another worm, the earthworm in phylum Annelida, can exemplify true metamerism. In each segment of the worm, a repetition of organs and muscle tissue can be found. What differentiates the Annelids from Cestoda is that the segments in the earthworm all work together for the whole organism. It is believed that segmentation evolved for many reasons, including a higher degree of motion. Taking the earthworm, for example: the segmentation of the muscular tissue allows the worm to move in an inching pattern. The circular muscles work to allow the segments to elongate one by one, and the longitudinal muscles then work to shorten the elongated segments. This pattern continues down the entirety of the worm, allowing it to inch along a surface. Each segment is allowed to work independently, but towards the movement of the whole worm.
In animals:
heteronomous metamery is the condition where metameres have grouped together to perform similar tasks. The extreme example of this is the insect head (5 metameres), thorax (3 metameres), and abdomen (11 metameres, not all discernible in all insects). The process that results in the grouping of metameres is called "tagmatization", and each grouping is called a tagma (plural: tagmata). In organisms with highly derived tagmata, such as the insects, much of the metamerism within a tagma may not be trivially distinguishable. It may have to be sought in structures that do not necessarily reflect the grouped metameric function (e.g. the ladder nerve system or somites do not reflect the unitary structure of a thorax). In addition, an animal may be classified as "pseudometameric", meaning that it has clear internal metamerism but no corresponding external metamerism - as is seen, for example, in Monoplacophora.
In animals:
Humans and other chordates are conspicuous examples of organisms that have metameres intimately grouped into tagmata. In the Chordata the metameres of each tagma are fused to such an extent that few repetitive features are directly visible. Intensive investigation is necessary to discern the metamerism in the tagmata of such organisms. Examples of detectable evidence of vestigially metameric structures include branchial arches and cranial nerves.
In animals:
Some schemes regard the concept of metamerism as one of the four principles of construction of the human body, common to many animals, along with general bilateral symmetry (or zygomorphism), pachymerism (or tubulation), and stratification. More recent schemes also include three other concepts: segmentation (conceived as different from metamerism), polarity and endocrinosity.
In plants:
A metamer is one of several segments that share in the construction of a shoot, or into which a shoot may be conceptually (at least) resolved. In the metameristic model, a plant consists of a series of 'phytons' or phytomers, each consisting of an internode and its upper node with the attached leaf. As Asa Gray (1850) wrote: The branch, or simple stem itself, is manifestly an assemblage of similar parts, placed one above another in a continuous series, developed one from another in successive generations. Each one of these joints of stem, bearing its leaf at the apex, is a plant element; or as we term it a phyton,—a potential plant, having all the organs of vegetation, namely, stem, leaf, and in its downward development even a root, or its equivalent. This view of the composition of the plant, though by no means a new one, has not been duly appreciated. I deem it essential to a correct philosophical understanding of the plant.
In plants:
Some plants, particularly grasses, demonstrate a rather clear metameric construction, but many others either lack discrete modules or their presence is more arguable. Phyton theory has been criticized as an over-ingenious, academic conception which bears little relation to reality. Eames (1961) concluded that "concepts of the shoot as consisting of a series of structural units have been obscured by the dominance of the stem- and leaf-theory. Anatomical units like these do not exist: the shoot is the basic unit." Even so, others still consider comparative study along the length of the metameric organism to be a fundamental aspect of plant morphology. Metameric conceptions generally segment the vegetative axis into repeating units along its length, but constructs based on other divisions are possible. The pipe model theory conceives of the plant (especially trees) as made up of unit pipes ('metamers'), each supporting a unit amount of photosynthetic tissue. Vertical metamers are also suggested in some desert shrubs in which the stem is modified into isolated strips of xylem, each having continuity from root to shoot. This may enable the plant to abscise a large part of its shoot system in response to drought, without damaging the remaining part.
In plants:
In vascular plants, the shoot system differs fundamentally from the root system in that the former shows a metameric construction (repeated units of organs; stem, leaf, and inflorescence), while the latter does not. The plant embryo represents the first metamer of the shoot in spermatophytes or seed plants.
Plants (especially trees) are considered to have a 'modular construction,' a module being an axis in which the entire sequence of aerial differentiation is carried out from the initiation of the meristem to the onset of sexuality (e.g. flower or cone development) which completes its development. These modules are considered to be developmental units, not necessarily structural. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NEC µPD7720**
NEC µPD7720:
The NEC μPD7720 is a fixed-point digital signal processor from NEC (currently Renesas Electronics). Announced in 1980, it became, along with the Texas Instruments TMS32010, one of the most popular DSPs of its day.
Background:
In the late 1970s, telephone engineers were attempting to create technology with sufficient performance to enable digital touch-tone dialing. Existing digital signal processing solutions required over a hundred chips and consumed significant amounts of power. Intel responded to this potential market by introducing the Intel 2920, an integrated processor that, while it had both digital-to-analog and analog-to-digital converters, lacked additional features (such as a hardware multiplier) that would be found in later processors. Announcements for the first "real" DSPs, the NEC μPD7720 and the Bell Labs DSP-1 chip, occurred the following year at the 1980 IEEE International Solid-State Circuits Conference. The μPD7720 first became available in 1981 and commercially available in late 1982 at a cost of ¥20,000 (around $82 at the time, or roughly $304 in 2023 dollars). Beyond their initial use in telephony, these processors found applications in disk drive and graphics controllers, speech synthesis and modems.
Architecture:
Detailed descriptions of the μPD7720 architecture are found in Chance (1990), Sweitzer (1984) and Simpson (1984). Briefly, the NEC μPD7720 runs at a 4 MHz clock frequency with 128-word data RAM, 512-word data ROM, and 512-word program memory; its VLIW-like instruction format allows an ALU operation, an address-register increment/decrement, and a move operation to be performed in a single cycle.
Variants:
The NEC μPD77C25, which succeeded the μPD7720, runs at 8 MHz frequency with 256-word data RAM, 1,024-word data ROM, and 2,048-word program memory. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cartographic design**
Cartographic design:
Cartographic design or map design is the process of crafting the appearance of a map, applying the principles of design and knowledge of how maps are used to create a map that has both aesthetic appeal and practical function. It shares this dual goal with almost all forms of design; it also shares with other design, especially graphic design, the three skill sets of artistic talent, scientific reasoning, and technology. As a discipline, it integrates design, geography, and geographic information science.
Cartographic design:
Arthur H. Robinson, considered the father of cartography as an academic research discipline in the United States, stated that a map not properly designed "will be a cartographic failure." He also claimed, when considering all aspects of cartography, that "map design is perhaps the most complex."
History:
From ancient times to the 20th century, cartography was a craft or trade. Most map makers served several years as an apprentice, learning the skills of the master, with little room for innovation other than adapting to changing production technology. That said, there were notable exceptions, such as the occasional introduction of a novel Map projection, and the advent of thematic mapping in the 19th century highlighted by the work of Charles Dupin and Charles Joseph Minard in France. As late as 1948, Erwin Raisz's General Cartography, the standard English textbook on the subject, reads as a set of instructions on how to construct maps in keeping with tradition, with very little reflection on why it is done that way. This was despite the fact that Raisz himself was a very creative designer, developing techniques as varied as cartograms and a style of Terrain depiction on physiographic maps that few have been able to replicate. Advances in cartographic production technology in the 20th century, especially the advent and widespread availability of color Offset printing, then a multitude of advances spurred on by World War II, such as Photolithography, gave cartographers a larger palette of design options, and made it easier to creatively innovate. This was synchronized with the widespread expansion of higher education, during which most cartography training transitioned from an apprenticeship to a college degree (typically using Raisz's textbook in America). The new generation of cartography professionals and professors began to reflect on why some maps seemed to be better (in beauty and function) than others, and to think of ways to improve design. Perhaps chief among them was Arthur H. Robinson, whose short but seminal work The Look of Maps (1952) set the stage for the future of cartographic design, both for his early theorizing about map design, and for his honest acknowledgment of what was not yet known, soon spawning dozens of PhD dissertations. His subsequent textbook, Elements of Cartography (1953), was a marked departure from the past, with a major focus on design, claiming to "present cartography as an intellectual art and science rather than as a sterile system of drafting and drawing procedures." Since the 1950s, a significant focus of cartography as an academic discipline has been the cartographic communication school of thought, seeking to improve design standards through increased scientific understanding of how maps are perceived and used, typically based on cognate disciplines such as psychology (especially perception, Gestalt psychology, and psychophysical experimentation), Human vision, and geography. This focus began to be challenged towards the end of the 1980s by the study of critical cartography, which drew attention to the influence of social and political forces on map design. A second major research track has been the investigation of the design opportunities offered by changing technology, especially computer graphics starting in the 1960s, geographic information systems starting in the 1970s, and the Internet starting in the 1990s. However, as much or more of the recent innovation in cartographic design has been at the hands of professional cartographers and their sharing of resources and ideas through organisations such as the International Cartographic Association and through national mapping societies such as the North American Cartographic Information Society and the British Cartographic Society.
Map types:
A wide variety of different types of maps have been developed, and are available to use for different purposes. In addition to the general principles of cartographic design, some types of visualization have their own design needs, constraints, and best practices.
Map types:
Terrain/Relief/Topography. Several methods have been developed for visualizing elevation and the shape of the Earth's surface. Some techniques date back hundreds or thousands of years and are difficult to replicate digitally, such as hill profiles and hachures; others, such as shaded relief and contour lines, are much easier to produce in GIS than using manual tools. Some of these methods are designed for analytical use, such as measuring slope on contours, but most are intended to produce an intuitive visual representation of the terrain.
Map types:
A Choropleth map visualizes statistical data that has been aggregated into a priori districts (such as countries or counties) using area symbols based on the visual variables of color and/or pattern. Choropleth maps are by far the most popular kind of thematic maps due to the widespread availability of aggregated statistical data (such as census data), but the nature of aggregate data can result in significant misinterpretation issues, such as the Ecological fallacy and the Modifiable areal unit problem, which can be somewhat mitigated by careful design.
Map types:
A Dasymetric map is a hybrid type that uses additional data sources to refine the boundaries of a choropleth map (especially through excluding uninhabited areas), thereby mitigating some of the sources of misinterpretation.
Map types:
A Proportional symbol map visualizes statistical data using point symbols, often circles, scaled by the visual variable of size. The underlying data may be of point features, or it may be the same aggregate data used in choropleth maps. In the latter case, the two map types are often complementary, as variables that are inappropriate to represent in one type are well-suited for the other.
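As a hedged sketch of how the visual variable of size is often applied to proportional symbols (making symbol area, rather than radius, proportional to the value is a common convention, not a rule stated above; the data are hypothetical):

```python
import math

def circle_radius(value, max_value, max_radius_mm=10.0):
    """Scale a circle so its area, not its radius, is proportional to the value."""
    return max_radius_mm * math.sqrt(value / max_value)

# Hypothetical populations; the larger value gets a circle of twice the radius,
# i.e. four times the area, matching the 4:1 ratio of the data.
for place, pop in {"Town A": 25_000, "Town B": 100_000}.items():
    print(place, round(circle_radius(pop, max_value=100_000), 2), "mm")
```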
Map types:
A Cartogram purposefully distorts the size of areal features proportional to a chosen variable, such as total population, and thus may be thought of as a hybrid between choropleth and proportional symbol maps. Several automated and manual techniques have been developed to construct cartograms, each having advantages and disadvantages. Frequently, the resultant shapes are filled as a choropleth map representing a variable thought to relate in some way to the area variable.
Map types:
An Isarithmic map (or isometric or isopleth or contour) represents a continuous field by interpolating lines wherein the field variable has equal value (an isoline). The lines themselves and/or the intervening regions may be symbolized. Some choropleth maps may be thought of as rough approximations of isarithmic maps, and dasymetric maps as slightly better approximations.
Map types:
A Continuous tone map represents a continuous field as smoothly transitioning color (hue, value, and/or saturation), usually based on a raster grid. Some have considered this to be a special type of unclassified isarithmic map, while others consider it to be something fundamentally different. A Chorochromatic map (or area-class) visualizes a discrete/nominal field as a set of regions of homogeneous value.
Map types:
A Dot distribution map (or dot density) visualizes the density of an aggregate group as representative dots (each of which may represent a single individual or a constant number of individuals). The source data may be the actual point locations of the individuals, or choropleth-type aggregate district statistics.
Map types:
A Flow map focuses on lines of movement. A wide variety of flow maps exist, depending on whether flow volume is represented (usually using visual variables such as stroke weight or color value), and whether the route of flow is shown accurately (such as a navigation route on a Road map) or schematically (such as a Transit map or airline route map). Although these are called separate "maps," they should be thought of as single map layers, which may be combined with other thematic or feature layers in a single map composition. A bivariate map uses one or more of the methods above to represent two variables simultaneously; three or more variables produce a multivariate map.
Design process:
As map production and reproduction technology has advanced, the process of designing and producing maps has changed considerably. Most notably, GIS and graphics software not only makes it easier and faster to create a map, but also facilitates a non-linear editing process that is more flexible than in the days of manual cartography. There is still a general procedure that cartographers typically follow: Planning: The iterative nature of modern cartography makes this step somewhat less involved than before, but it is still crucial to have some form of plan. Typically, this involves answering several questions: What is the purpose of the map? Maps serve a wide variety of purposes; they may be descriptive (showing the accurate location of geographic features to be used in a variety of ways, like a street map), exploratory (showing the distribution of phenomena and their properties, to look for underlying patterns and processes, like many thematic maps), explanatory (educating the audience about a specific topic), or even rhetorical (trying to convince the audience to believe or do something).
Design process:
Who is the audience? Maps will be more useful if they cater to the intended audience. This audience could range from the cartographer herself (desiring to learn about a topic by mapping it), to focused individuals or groups, to the general public. Several characteristics of the audience can aid this process, if they can be determined, such as: their level of knowledge about the subject matter and the region being covered; their skill in map reading and understanding of geographic principles (e.g., do they know what 1:100,000 means?); and their needs, motivations and biases.
Design process:
Is a map the best solution? There are times when a map could be made, but a chart, photograph, text, or other tool may better serve the purpose.
What datasets are needed? The typical map will require data to serve several roles, including information about the primary purpose, as well as supporting background information.
What medium should be used? Different mapping media, such as posters, brochures, folded maps, page maps, screen displays, and web maps have advantages and disadvantages for different purposes, audiences, and usage contexts.
Design process:
Data Collection: In the era of Geographic information systems, it seems like vast amounts of data are available for every conceivable topic, but they must be found and obtained. Frequently, available datasets are not perfect matches for the needs of the project at hand, and must be augmented or edited. Also, it is still common for there to be no available data on the specific topic, requiring the cartographer to create them, or derive them from existing data using GIS tools.
Design process:
Design and Implementation: This step involves making decisions about all of the aspects of map design, as listed below, and implementing them using computer software. In the manual drafting era, this was a very linear process of careful decision making, in which some aspects needed to be implemented before others (often, projection first). However, current GIS and graphics software enables interactive editing of all of these aspects interchangeably, leading to a non-linear, iterative process of experimentation, evaluation, and refinement.
Design process:
Production and Distribution: The last step is to produce the map in the chosen medium, and distribute it to the audience. This could be as simple as a desktop printer, or sending it to a press, or developing an interactive Web mapping site.
Design process:
Cartographic design is one part of a larger process in which maps play a central role. This cartographic process begins with a real or imagined environment or setting. As map makers gather data on the subject they are mapping (usually through technology and/or remote sensing), they begin to recognize and detect patterns that can be used to classify and arrange the data for map creation (i.e., they think about the data and its patterns as well as how to best visualize them on a map). After this, the cartographer compiles the data and experiments with the many different methods of map design and production (including generalization, symbolization, and other production methods) in an attempt to encode and portray the data on a map that will allow the map user to decode and interpret the map in the way that matches the intended purpose of the map maker. Next, the user of the map reads and analyzes the map by recognizing and interpreting the symbols and patterns that are found on the map. This leads the user to take action and draw conclusions based on the information that they find on the map. In this way, maps help shape how we view the world based on the spatial perspectives and viewpoints that they help create in our mind.
Design process:
Goals While maps serve a variety of purposes, and come in a variety of styles, most designs share common goals. Some of the most commonly stated include: Accuracy, the degree to which the information on the map corresponds to the nature of the real world. Traditionally, this was the primary determinant of quality cartography. It is now accepted, due largely to studies in Critical cartography, that no dataset or map is a perfect reproduction of reality, and that the subjective biases and motivations of the cartographer are virtually impossible to circumvent. That said, maps can still be crafted to be as accurate as possible, to be honest about their shortcomings, and to leverage their subjectivity.
Design process:
Functionality, the usefulness of the map to achieve its purpose. During much of the latter 20th century, this was the primary goal of academic cartography, especially the Cartographic Communication school of thought: to determine how to make the most efficient maps as conduits of information.
Clarity, the degree to which the map makes its purpose obvious and its information easy to access. Clarity can be achieved through removing all but the most important information, but this comes at the expense of other goals.
Richness, the volume and diversity of information the reader can glean from the map. Even maps with a narrowly-defined purpose often require the reader to see patterns in large amounts of data.
Design process:
Aesthetic appeal, a positive emotional reaction to the overall appearance of the map. Maps may be appreciated as "beautiful," but other positive affects include "interesting," "engaging," "convincing," and "motivating." Aesthetic reactions can be negative as well, such as "ugly," "cluttered," "confusing," "complicated," "annoying," or "off-putting." These goals often seem to be in conflict, and it may be tempting to prioritize one over the others. However, quality design in cartography, as in any other design field, is about finding creative and innovative solutions to achieve multiple goals. According to Edward Tufte, What is to be sought in designs for the display of information is the clear portrayal of complexity. Not the complication of the simple; rather the task of the designer is to give visual access to the subtle and the difficult--that is, the revelation of the complex.
Design process:
In fact, good design can produce synergistic results. Even aesthetics can have practical value: potential map users are more likely to pick up, and more likely to spend time with, a beautiful map than one that is difficult to look at. In turn, the practical value of maps has gained aesthetic appeal, favoring those that exude a feeling of being "professional," "authoritative," "well-crafted," "clear," or "informative." In 1942, cartographer John K. Wright said, An ugly map, with crude colors, careless line work, and disagreeable, poorly arranged lettering may be intrinsically as accurate as a beautiful map, but it is less likely to inspire confidence.
Design process:
Rudolf Arnheim, an art theorist, said this about the relationship between maps and aesthetics in 1976: The aesthetic or artistic qualities of maps are sometimes thought to be simply matters of so-called good taste, of harmonious color schemes and sensory appeal. In my opinion, those are secondary concerns. The principal task of the artist, be he a painter or a map designer, consists of translating the relevant aspects of the message into the expressive qualities of the medium in such a way that the information comes across as a direct impact of perceptual forces. This distinguishes the mere transmission of facts from the arousal of meaningful experience.
Design process:
More recently, cartographers have recognised the central role of aesthetics in cartographic design and called for greater focus on how this role functions over time and space. For example, in 2005, Dr Alex Kent (former President of the British Cartographic Society) recommended: It will thus be more useful to cartographers and the development of cartography in general to undertake further research towards understanding the role of aesthetics in cartography than to pursue universal principles. Some possible topics for investigation include: 1. A history of the development of aesthetics in cartography; 2. An exploration of geographical variations in cartographic aesthetics; and 3. A critical examination of the factors influencing aesthetic decisions in contemporary mapmaking.
Map purpose and selection of information:
Robinson codified the mapmaker's understanding that a map must be designed foremost with consideration to the audience and its needs, stating that from the very beginning of mapmaking, maps "have been made for some particular purpose or set of purposes". The intent of the map should be illustrated in a manner in which the percipient (the map reader) acknowledges its purpose in a timely fashion. The principle of figure-ground refers to this notion of engaging the user by offering a clear presentation, leaving no confusion concerning the purpose of the map. This will enhance the user's experience and keep their attention. If the user is unable to identify what is being demonstrated in a reasonable fashion, the map may be regarded as useless.
Map purpose and selection of information:
Making a meaningful map is the ultimate goal. Alan MacEachren explains that a well-designed map "is convincing because it implies authenticity". An interesting map will no doubt engage a reader. Information richness or a map that is multivariate shows relationships within the map. Showing several variables allows comparison, which adds to the meaningfulness of the map. This also generates hypotheses and stimulates ideas and perhaps further research. In order to convey the message of the map, the creator must design it in a manner which will aid the reader in the overall understanding of its purpose. The title of a map may provide the "needed link" necessary for communicating that message, but the overall design of the map fosters the manner in which the reader interprets it. In the 21st century it is possible to find a map of virtually anything from the inner workings of the human body to the virtual worlds of cyberspace. Therefore, there are now a huge variety of different styles and types of map – for example, one area which has evolved a specific and recognisable variation is the maps used by public transport organisations to guide passengers, namely urban rail and metro maps, many of which are loosely based on 45 degree angles as originally perfected by Harry Beck and George Dow.
Aspects of design:
Unlike cognate disciplines such as Graphic design, Cartography is constrained by the fact that geographic phenomena are where and what they are. However, within that framework the cartographer has a great deal of control over many aspects of the map.
Aspects of design:
Cartographic data and generalization The widespread availability of data from Geographic information systems, especially free data such as OpenStreetMap, has greatly shortened the time and cost of creating most maps. However, this part of the design process is still not trivial. Existing GIS data, often created for management or research purposes, is not always in a form that is most suited to a particular map purpose, and data frequently need to be augmented, edited, or updated to be useful. Some sources, especially in Europe, refer to the former as a Digital Landscape Model, and spatial data that are fine-tuned for map design as a Digital Cartographic Model. A significant part of this transformation is generalization, a set of procedures for adjusting the amount of detail (geometry and attributes) in datasets to be appropriate for a given map. All maps portray a small, strategic sample of the infinite amount of potential information in the real world; the strategy for that sample is largely driven by the scale, purpose, and audience of the map. The cartographer is thus constantly making judgements about what to include, what to leave out and what to show in a slightly incorrect place. Most often, generalization starts with detailed data created for a larger scale, and strategically removes information deemed to be unnecessary for a smaller scale map. This issue assumes more importance as the scale of the map gets smaller (i.e. the map shows a larger area) because the information shown on the map takes up more space on the ground. For example, a 2 mm thick highway symbol on a map at a scale of 1:1,000,000 occupies a space 2 km wide, leaving no room for roadside features. On the Ordnance Survey's first digital maps in the late 1980s, at scales of 1:250,000 and 1:625,000, the absolute positions of major roads were sometimes moved hundreds of meters from their true locations (the generalization technique of displacement) because of the overriding need to annotate the features.
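The scale arithmetic behind the highway example can be made explicit; a minimal sketch, with illustrative function and variable names:

```python
def ground_width_m(symbol_width_mm: float, scale_denominator: int) -> float:
    """Ground distance, in metres, covered by a map symbol of a given width."""
    return symbol_width_mm / 1000.0 * scale_denominator

# A 2 mm highway symbol on a 1:1,000,000 map covers 2,000 m (2 km) on the ground.
print(ground_width_m(2.0, 1_000_000))  # 2000.0
```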
Aspects of design:
Projections Because the Earth is (nearly) spherical, any planar representation (a map) requires it to be flattened in some way, known as a projection. Most map projections are implemented using mathematical formulas and computer algorithms based on geographic coordinates (latitude, longitude). All projections generate distortions such that shapes and areas cannot both be conserved simultaneously, and distances can never all be preserved. The mapmaker must choose a suitable map projection according to the space to be mapped and the purpose of the map; this decision process becomes increasingly important as the scope of the map increases; while a variety of projections would be indistinguishable on a city street map, there are dozens of drastically different ways of projecting the entire world, with extreme variations in the type, degree, and location of distortion.
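As a concrete example of such formulas, here is a sketch of the spherical Mercator projection, a standard projection chosen here only for illustration (it is not singled out by the text); it converts longitude and latitude in degrees to planar coordinates in metres:

```python
import math

R = 6_378_137.0  # sphere radius in metres, a common choice for web maps

def mercator(lon_deg: float, lat_deg: float) -> tuple[float, float]:
    """Spherical Mercator: x = R * lambda, y = R * ln(tan(pi/4 + phi/2))."""
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    return R * lam, R * math.log(math.tan(math.pi / 4.0 + phi / 2.0))

# Distortion grows with latitude: equal 20-degree steps map to ever larger y gaps.
for lat in (0, 20, 40, 60, 80):
    print(lat, round(mercator(0.0, lat)[1]))
```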
Aspects of design:
Interruptions and arrangements World maps are often designed by cutting the globe into smaller pieces, using a different projection for each piece, and then arranging all those small maps into a single map on one piece of paper, with discontinuities between the small maps.
Perhaps the earliest types of such interrupted arrangements are various maps composed of 2 disks showing 2 hemispheres of Earth, one disk centered on some point selected by the cartographer and the other disk centered on its antipode.
More recently, cartographers have experimented with a wide variety of interrupted arrangements of projections, including homolosine and polyhedral maps.
Aspects of design:
Symbology Cartographic symbology encodes information on the map in ways intended to convey it to the map reader efficiently, taking into consideration the limited space on the map, models of human understanding through visual means, and the likely cultural background and education of the map reader. Symbology may be implicit, using universal elements of design, or may be more specific to cartography or even to the map. National topographic map series, for example, adopt a standardised symbology, which varies from country to country. Jacques Bertin, in Sémiologie Graphique (1967), introduced a system of codifying graphical elements (including map symbols) that has been a part of the canon of cartographic knowledge ever since. He analyzed graphical objects in terms of three aspects (here using current terminology): Dimension: The basic type of geometric shape used to represent a geographic phenomenon, commonly points (marker symbols), lines (stroke symbols), or areas (fill symbols), as well as fields.
Aspects of design:
Level of measurement: the basic type of property being visualized, generally using the classification of Stanley Smith Stevens (nominal, ordinal, interval, ratio), or some extension thereof.
Aspects of design:
Visual variable: the graphical components of a symbol, including shape, size, color, orientation, pattern, transparency, and so on. Thus, a map symbol consists of a number of visual variables, graphically representing the location and spatial form of a geographic phenomenon, as well as zero or more of its properties. For example, a point marker might represent the point location of a facility, with its shape being used to represent that the facility type is "mine" (a nominal property). This symbol would be intuitively understood by many users without any explanation. On a Choropleth map of median income, a dark green fill might represent an area location of a county, with hue and value being used to represent that the income is US$50,000 (a ratio property). This is an example of an ad hoc symbol with no intrinsic meaning, requiring a legend for users to discover the intended meaning.
Aspects of design:
Labeling and typography Text serves a variety of purposes on maps. Most directly, it identifies features on the map by name; in addition, it helps to classify features (as in "Jones Park"); it can explain information; it can help to locate features, in some cases on its own without a geometric map symbol (esp. natural features); it plays a role in the gestalt of the map, especially the visual hierarchy; and it contributes to the aesthetic aspects of the map, including its "look and feel" and its attractiveness. While the cartographer has a great deal of freedom in choosing the style and size of type to accomplish these purposes, two basic goals are seen as crucial: Legibility, the ease with which map users can read a particular piece of text. Map labels introduce unique challenges to legibility, due to their tendency to be small, unfamiliar, irregularly spaced, and placed on top of map symbols.
Aspects of design:
Association, the ease with which map users can recognize which feature a particular piece of text is labeling. This can be especially challenging on general purpose maps containing a large number of varied features and their labels. Most of the elements of labeling design are intended to achieve these two goals, including: the choice of typefaces, type style, size, color, and other visual variables; halos, masks, leader lines, and other additional symbols; decisions about what to label and what not to label; label text content; and label placement. While many of these decisions are specific to the particular map, functional label placement tends to follow a number of rules that have been developed through cartographic research, which has led to algorithms that place labels automatically to a reasonable degree of quality.
Aspects of design:
Placenames One challenge for map labeling is dealing with varying preferences of place names. Although maps are often made in one specific language, place names often differ between languages. So a map made in English may use the name Germany for that country, while a German map would use Deutschland and a French map Allemagne. A non-native term for a place is referred to as an exonym. Sometimes a name may be disputed, such as Myanmar vs. Burma. Further difficulties arise when transliteration or transcription between writing systems is required. Some well-known places have well-established names in other languages and writing systems, such as Russia or Rußland for Росси́я, but in other cases a system of transliteration or transcription is required. Sometimes multiple transliteration systems exist; for example, the Yemeni city of المخا is written variously in English as Mocha, Al Mukha, al-Makhā, al-Makha, Mocca and Moka. Some transliteration systems produce such different place names as to cause confusion, such as the transition of Chinese–English transliteration from Wade–Giles (Peking, Kwangchow) to Pinyin (Beijing, Guangzhou).
Aspects of design:
Composition The term map composition is sometimes used to refer to the composition of the symbols within the map itself, and sometimes to the composition of the map and other elements on the page. Some of the same principles apply to both processes, while others are unique to each. In the former sense of the symbols on the map, as all of the symbols and thematic layers on the map are brought together, their interactions have major effects on map reading.
Aspects of design:
A number of composition principles have been studied in cartography. While some of these ideas were posited by Arthur H. Robinson in The Look of Maps (1952), Borden Dent was likely the first to approach the topic in a systematic way in 1972, firmly within the Cartographic Communication school of thought. Dent's model drew heavily on psychology, especially Gestalt psychology and Perception, to evaluate what made some maps difficult to read as a whole, even when individual symbols were designed well, and to create a model that included most of the list below. Later, artistic composition principles were adopted from graphic design, many of which are similar, having come from similar sources. They all share the same goal: to combine all of the individual symbols into a single whole that achieves the goals above.
Aspects of design:
Contrast is the degree of visual difference between graphic elements (e.g., map symbols). Robinson saw contrast as the fundamental principle of composition, supporting everything else. As suggested by Robinson, and further developed by Jacques Bertin, contrast is created by manipulating the visual variables of map symbols, such as size, shape, and color.
Figure-ground is the ease with which each individual symbol or feature (the figure) can be mentally isolated from the rest of the map (the ground). The rules for establishing figure-ground are largely drawn from the gestalt principle of Prägnanz.
Aspects of design:
Visual hierarchy is the apparent order of items, from those that look most important (i.e., attract the most attention) to those that look least important. Typically, the intent is for the visual hierarchy to match the intellectual hierarchy of what is intended to be more or less important. Bertin suggested that some of the visual variables, especially size and value, naturally contributed to visual hierarchy (which he termed dissociative), while others had differences that were more easily ignored.
Aspects of design:
Grouping (Dent) or Selectivity (Bertin) is the ease with which a reader can isolate all of the symbols of a particular appearance, while ignoring the rest of the map, allowing the reader to identify patterns in that type of feature (e.g., "where are all the blue dots?"). In Bertin's model, size, value, and hue were particularly selective, while others, such as shape, require significant contrast to be useful.
Aspects of design:
Harmony is how well all of the individual elements (map symbols) "look good" together. This generally follows from the above principles, as well as the careful selection of harmonious colors, textures, and typefaces.
Layout A typical map, whether on paper or on a web page, consists of not only the map image, but also other elements that support the map: A title tells the reader what the map is about, including the purpose or theme, and perhaps the region covered.
Aspects of design:
A legend or key explains the meaning of the symbols on the map. A neatline may frame the entire map image, although many maps use negative space to set the map apart. A compass rose or north arrow provides orientation. Inset maps may serve several purposes, such as showing the context of the main map in a larger area, showing more detail for a subset of the main map, showing a separated but related area, or showing related themes for the same region.
Aspects of design:
A bar scale or other indication of scale translates between map measurements and real distances.
Illustrations may be included to help explain the map subject or add aesthetic appeal.
Aspects of design:
Explanatory text may discuss the subject further. Metadata declares the sources, date, authorship, projection, or other information about the construction of the map. Composing and arranging all of the elements on the page involves just as much design skill and knowledge of how readers will use the map as designing the map image itself. Page composition serves several purposes, including directing the reader's attention, establishing a particular aesthetic feel, clearly stating the purpose of the map, and making the map easier to understand and use. Therefore, Page layout follows many of the same principles of Composition above, including figure-ground and Visual hierarchy, as well as aesthetic principles adopted from Graphic design, such as balance and the use of white space. In fact, this aspect of cartographic design has more in common with graphic design than any other part of the craft.
Aspects of design:
Reproduction and distribution At one time, the process of getting a map printed was a major part of the time and effort spent in cartography. While less of a concern with modern technology, it is not insignificant. Professional cartographers are asked to produce maps that will be distributed by a variety of media, and understanding the various reproduction and distribution technologies helps to tailor a design to work best for the intended medium.
Aspects of design:
Inkjet printing; laser printing; offset printing, including prepress preparation; animated mapping; web mapping. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Protein Information Resource**
Protein Information Resource:
The Protein Information Resource (PIR), located at Georgetown University Medical Center, is an integrated public bioinformatics resource that supports genomic and proteomic research and scientific studies. It contains protein sequence databases.
History:
PIR was established in 1984 by the National Biomedical Research Foundation as a resource to assist researchers and customers in the identification and interpretation of protein sequence information. Prior to that, the foundation compiled the first comprehensive collection of macromolecular sequences in the Atlas of Protein Sequence and Structure, published from 1964 to 1974 under the editorship of Margaret Dayhoff. Dayhoff and her research group pioneered the development of computer methods for the comparison of protein sequences, for the detection of distantly related sequences and duplications within sequences, and for the inference of evolutionary histories from alignments of protein sequences. Winona Barker and Robert Ledley assumed leadership of the project after the death of Dayhoff in 1983. In 1999, Cathy H. Wu joined the National Biomedical Research Foundation, and later Georgetown University Medical Center, to head the bioinformatics efforts of PIR, and has served first as Principal Investigator and, since 2001, as Director. For four decades, PIR has provided many protein databases and analysis tools freely accessible to the scientific community, including the Protein Sequence Database, the first international database (see PIR-International), which grew out of the Atlas of Protein Sequence and Structure. In 2002, PIR – along with its international partners, the European Bioinformatics Institute and the Swiss Institute of Bioinformatics – was awarded a grant from the NIH to create UniProt, a single worldwide database of protein sequence and function, by unifying the Protein Information Resource-Protein Sequence Database, Swiss-Prot, and TrEMBL databases. As of 2010, PIR offers a wide variety of resources mainly oriented to assist the propagation and standardization of protein annotation: PIRSF, iProClass, and iProLINK.
History:
The Protein Ontology is another popular database released by the Protein Information Resource. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Double Altar**
Double Altar:
A double altar in Roman Catholicism is an altar that has a double front. It is constructed in this way so that Mass may be celebrated on both sides of it at the same time. These altars were frequently found in churches of religious communities in which the choir is behind the altar, so that whilst one priest is celebrating Mass for the community in choir, another may celebrate for the laity assembled in the church. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alkylphenol**
Alkylphenol:
Alkylphenols are a family of organic compounds obtained by the alkylation of phenols. The term is usually reserved for commercially important propylphenol, butylphenol, amylphenol, heptylphenol, octylphenol, nonylphenol, dodecylphenol and related "long chain alkylphenols" (LCAPs). Methylphenols and ethylphenols are also alkylphenols, but they are more commonly referred to by their specific names, cresols and xylenols.
Production:
The long-chain alkylphenols are prepared by alkylation of phenol with alkenes: C6H5OH + RR'C=CHR" → RR'CH−CHR"−C6H4OH. In this way, about 500M kg/y are produced.
Environmental controversy over nonylphenols:
Alkylphenols are xenoestrogens. Long-chain alkylphenols have the most potent estrogenic activity. The European Union has implemented sales and use restrictions on certain applications in which nonylphenols are used because of their alleged "toxicity, persistence, and the liability to bioaccumulate", but the United States EPA has taken a slower approach to make sure that action is based on sound science.
Uses of long-chain alkylphenols:
Alkylphenols are non-polar raw materials, while their ethoxylated derivatives, the alkylphenol ethoxylates, are water-soluble. The long-chain alkylphenols are used extensively as precursors to detergents, as additives for fuels and lubricants, in polymers, and as components in phenolic resins. These compounds are also used as building-block chemicals in making fragrances, thermoplastic elastomers, antioxidants, oil field chemicals and fire-retardant materials. Through the downstream use in making alkylphenolic resins, alkylphenols are also found in tires, adhesives, coatings, carbonless copy paper and high performance rubber products. They have been used in industry for over 40 years. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Eyewire**
Eyewire:
Eyewire is a citizen science game from Sebastian Seung's Lab at Princeton University. It is a human-based computation game that uses players to map retinal neurons. Eyewire launched on December 10, 2012. The game utilizes data generated by the Max Planck Institute for Medical Research. Eyewire gameplay is used for neuroscience research by enabling the reconstruction of morphological neuron data, which helps researchers model information processing circuits.
Gameplay:
The player is given a cube with a partially reconstructed neuron branch stretching through it. The player completes the reconstruction by coloring a 2D image, with a 3D image generated simultaneously. Reconstructions are compared across players as each cube is submitted, yielding a consensus reconstruction that is later checked by experienced players.
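The consensus step can be pictured as a voxel-wise vote across players' submissions. The sketch below illustrates that general idea only; it is not Eyewire's actual algorithm, and the data are hypothetical:

```python
import numpy as np

def consensus(submissions, threshold=0.5):
    """Voxel-wise majority vote over players' boolean segmentation masks."""
    votes = np.stack(submissions).astype(float)   # shape: (players, x, y, z)
    return votes.mean(axis=0) >= threshold        # boolean consensus volume

# Three hypothetical player submissions for a tiny 2x2x2 cube.
rng = np.random.default_rng(seed=0)
players = [rng.random((2, 2, 2)) > 0.5 for _ in range(3)]
print(consensus(players).astype(int))
```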
Goal:
Eyewire is used to advance the use of artificial intelligence in neuronal reconstruction. The project is also used in research determining how mammals see directional motion.
Methods:
The activity of each neuron in a 350 × 300 × 60 μm³ portion of a retina was determined by two-photon microscopy. Using serial block-face scanning electron microscopy, the same volume was stained to bring out the contrast of the plasma membranes, sliced into layers by a microtome, and imaged using an electron microscope.
A neuron is selected by the researchers. The program chooses a cubic volume associated with that neuron for the player, along with an artificial intelligence's best guess for tracing the neuron through the two-dimensional images.
Publications:
Kim, Jinseop S; Greene, Matthew J; Zlateski, Aleksandar; Lee, Kisuk; Richardson, Mark; Turaga, Srinivas C; Purcaro, Michael; Balkam, Matthew; Robinson, Amy; Behabadi, Bardia F; Campos, Michael; Denk, Winfried; Seung, H Sebastian (2014). "Space–time wiring specificity supports direction selectivity in the retina". Nature. 509 (7500): 331–336. Bibcode:2014Natur.509..331.. doi:10.1038/nature13240. PMC 4074887. PMID 24805243.
Greene, Matthew J; Kim, Jinseop S; Seung, H Sebastian (2016). "Analogous Convergence of Sustained and Transient Inputs in Parallel on and off Pathways for Retinal Motion Computation". Cell Reports. 14 (8): 1892–900. doi:10.1016/j.celrep.2016.02.001. PMC 6404534. PMID 26904938.
Tinati, Ramine; Luczak-Roesch, Markus; Simperl, Elena; Hall, Wendy (2017). "An investigation of player motivations in Eyewire, a gamified citizen science project". Computers in Human Behavior. 73: 527–40. doi:10.1016/j.chb.2016.12.074.
Accomplishments:
Eyewire neurons featured at 2014 TED Conference Virtual Reality Exhibit.
Eyewire neurons featured at US Science and Engineering Expo in Washington, DC.
Eyewire won the National Science Foundation's 2013 International Visualization Challenge in the Games and Apps Category.
An Eyewire image by Alex Norton won MIT's 2014 Koch Image Gallery Competition.
Eyewire named one of Discover Magazine's Top 100 Science Stories of 2013.
Eyewire named top citizen science project of 2013 by SciStarter.
Eyewire won Biovision's World Life Sciences Forum Catalyzer Prize on March 26, 2013.
Eyewire named to top 10 citizen science projects of 2013 by PLoS. Eyewire has been featured by Wired, Nature's blog SpotOn, Forbes, Scientific American, and NPR. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zeta Canis Majoris**
Zeta Canis Majoris:
Zeta Canis Majoris, or ζ Canis Majoris, also named Furud, is a binary star system in the southern constellation of Canis Major. This system has an apparent visual magnitude of +3.0, making it one of the brighter stars in the constellation and hence readily visible to the naked eye. Parallax measurements from the Hipparcos mission yield a distance estimate of around 362 ly (111 pc) from the Sun. It is drifting further away with a radial velocity of +32 km/s.
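The quoted distance follows from the parallax-distance relation d [pc] = 1 / p [arcsec]; a small sketch, assuming a parallax of about 9 milliarcseconds (the value implied by the 111 pc figure, not stated explicitly above):

```python
def parallax_to_distance(parallax_mas: float) -> tuple[float, float]:
    """Convert a parallax in milliarcseconds to parsecs and light-years."""
    parsecs = 1000.0 / parallax_mas        # d [pc] = 1 / p [arcsec]
    return parsecs, parsecs * 3.2616       # 1 pc is about 3.2616 ly

print(parallax_to_distance(9.0))           # roughly (111 pc, 362 ly)
```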
Name:
ζ Canis Majoris, Latinized to Zeta Canis Majoris, is the star's Bayer designation assigned by the German astronomer Johann Bayer in 1603. The traditional name Furud or Phurud derives from the Arabic ألفرود al-furūd "the solitary ones". This was an appellation early Arab poets used for a number of anonymous stars. Later Arabian astronomers attempted to identify the name with particular stars, principally in the modern constellations Centaurus and Columba. The stars of Columba were assigned to Canis Major in the Almagest, leading to the more recent assignment of the name to Zeta Canis Majoris. Al Sufi referred to these stars as ألأغربة al-ʼaghribah "the ravens". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Furud for this star.
Properties:
The binary nature of this system was first noted by G. E. Paddock based on observations made in 1906 from the D. O. Mills Observatory in Chile. It was confirmed in 1909 by S. A. Mitchell, using radial velocity measurements made by F. E. Harpham in 1908. It is a single-lined spectroscopic binary system, which means that the pair have not been individually resolved with a telescope, but the gravitational perturbations of an unseen astrometric companion can be discerned by shifts in the spectrum of the primary caused by the Doppler effect. The pair orbit around their common center of mass once every 675 days with an eccentricity of 0.57. The primary component is a large star with nearly four times the Sun's radius and almost eight times the mass of the Sun. It has a stellar classification of B2.5 V, which means it is a B-type main sequence star that is generating energy through the nuclear fusion of hydrogen at its core. The star is emitting 3,603 times the luminosity of the Sun and is a suspected Beta Cephei variable. This energy is being radiated from its outer envelope at an effective temperature of about 18,700 K, giving it the blue-white hue of a B-type star. It is relatively young for a star, with an estimated age of 32 million years. Zeta Canis Majoris is located close to the solar antapex. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Higher Topos Theory**
Higher Topos Theory:
Higher Topos Theory is a treatise on the theory of ∞-categories written by American mathematician Jacob Lurie. In addition to introducing Lurie's new theory of ∞-topoi, the book is widely considered foundational to higher category theory. Since 2018, Lurie has been transferring the contents of Higher Topos Theory (along with new material) to Kerodon, an "online resource for homotopy-coherent mathematics" inspired by the Stacks Project.
Topics:
Higher Topos Theory covers two related topics: ∞-categories and ∞-topoi (which are a special case of the former). The first five of the book's seven chapters comprise a rigorous development of general ∞-category theory in the language of quasicategories, a special class of simplicial sets that serve as models for ∞-categories. The path of this development largely parallels classical category theory, with the notable exception of the ∞-categorical Grothendieck construction; this correspondence, which Lurie refers to as "straightening and unstraightening", gains considerable importance in his treatment.
Topics:
The last two chapters are devoted to ∞-topoi, Lurie's own invention and the ∞-categorical analogue of topoi in classical category theory. The material of these chapters is original, and is adapted from an earlier preprint of Lurie's. There are also appendices discussing background material on categories, model categories, and simplicial categories.
History:
Higher Topos Theory followed an earlier work by Lurie, On Infinity Topoi, uploaded to the arXiv in 2003. Algebraic topologist Peter May was critical of this preprint, emailing Lurie's then-advisor Mike Hopkins "to say that Lurie’s paper had some interesting ideas, but that it felt preliminary and needed more rigor." Lurie released a draft of Higher Topos Theory on the arXiv in 2006, and the book was finally published in 2009.
History:
Lurie released a second book on higher category theory, Higher Algebra, as a preprint on his website in 2017. This book assumes the content of Higher Topos Theory and uses it to study algebra in the ∞-categorical context. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Normalization (image processing)**
Normalization (image processing):
In image processing, normalization is a process that changes the range of pixel intensity values. Applications include, for example, improving photographs that have poor contrast due to glare. Normalization is sometimes called contrast stretching or histogram stretching. In more general fields of data processing, such as digital signal processing, it is referred to as dynamic range expansion. The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses, hence the term normalization. Often, the motivation is to achieve consistency in dynamic range for a set of data, signals, or images to avoid mental distraction or fatigue. For example, a newspaper will strive to make all of the images in an issue share a similar range of grayscale.
Normalization (image processing):
Normalization transforms an n-dimensional grayscale image I : {X ⊆ R^n} → {Min, ..., Max}, with intensity values in the range (Min, Max), into a new image I_N : {X ⊆ R^n} → {newMin, ..., newMax}, with intensity values in the range (newMin, newMax). The linear normalization of a grayscale digital image is performed according to the formula I_N = (I − Min) · (newMax − newMin) / (Max − Min) + newMin. For example, if the intensity range of the image is 50 to 180 and the desired range is 0 to 255, the process entails subtracting 50 from each pixel intensity, making the range 0 to 130. Then each pixel intensity is multiplied by 255/130, making the range 0 to 255. Normalization might also be non-linear; this happens when there isn't a linear relationship between I and I_N. An example of non-linear normalization is when the normalization follows a sigmoid function; in that case, the normalized image is computed according to the formula I_N = (newMax − newMin) · 1 / (1 + e^(−(I − β)/α)) + newMin, where α defines the width of the input intensity range, and β defines the intensity around which the range is centered. Auto-normalization in image processing software typically normalizes to the full dynamic range of the number system specified in the image file format. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
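A minimal sketch of both formulas (not from the article), assuming NumPy is available; the function and variable names are mine. It applies the linear and sigmoid rules above to the 50–180 example from the text.

```python
import numpy as np

def normalize_linear(img, new_min=0.0, new_max=255.0):
    """Linear contrast stretch of pixel intensities to [new_min, new_max]."""
    img = img.astype(np.float64)
    old_min, old_max = img.min(), img.max()
    return (img - old_min) * (new_max - new_min) / (old_max - old_min) + new_min

def normalize_sigmoid(img, alpha, beta, new_min=0.0, new_max=255.0):
    """Non-linear (sigmoid) normalization centred on beta with width alpha."""
    img = img.astype(np.float64)
    return (new_max - new_min) / (1.0 + np.exp(-(img - beta) / alpha)) + new_min

# The example from the text: intensities spanning 50..180 are stretched to 0..255.
img = np.array([[50, 115, 180]])
print(normalize_linear(img))                        # [[  0.  127.5 255. ]]
print(normalize_sigmoid(img, alpha=25, beta=115))   # midtones expanded, extremes compressed
```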
**Focal adhesion**
Focal adhesion:
In cell biology, focal adhesions (also cell–matrix adhesions or FAs) are large macromolecular assemblies through which mechanical force and regulatory signals are transmitted between the extracellular matrix (ECM) and an interacting cell. More precisely, focal adhesions are the sub-cellular structures that mediate the regulatory effects (i.e., signaling events) of a cell in response to ECM adhesion. Focal adhesions serve as the mechanical linkages to the ECM, and as a biochemical signaling hub to concentrate and direct numerous signaling proteins at sites of integrin binding and clustering.
Structure and function:
Focal adhesions are integrin-containing, multi-protein structures that form mechanical links between intracellular actin bundles and the extracellular substrate in many cell types. Focal adhesions are large, dynamic protein complexes through which the cytoskeleton of a cell connects to the ECM. They are limited to clearly defined ranges of the cell, at which the plasma membrane closes to within 15 nm of the ECM substrate. Focal adhesions are in a state of constant flux: proteins associate and disassociate with it continually as signals are transmitted to other parts of the cell, relating to anything from cell motility to cell cycle. Focal adhesions can contain over 100 different proteins, which suggests a considerable functional diversity. More than anchoring the cell, they function as signal carriers (sensors), which inform the cell about the condition of the ECM and thus affect their behavior. In sessile cells, focal adhesions are quite stable under normal conditions, while in moving cells their stability is diminished: this is because in motile cells, focal adhesions are being constantly assembled and disassembled as the cell establishes new contacts at the leading edge, and breaks old contacts at the trailing edge of the cell. One example of their important role is in the immune system, in which white blood cells migrate along the connective endothelium following cellular signals to damaged biological tissue.
Morphology:
Connection between focal adhesions and proteins of the extracellular matrix generally involves integrins. Integrins bind to extra-cellular proteins via short amino acid sequences, such as the RGD motif (found in proteins such as fibronectin, laminin, or vitronectin), or the DGEA and GFOGER motifs found in collagen. Integrins are heterodimers which are formed from one beta and one alpha subunit. These subunits are present in different forms, their corresponding ligands classify these receptors into four groups: RGD receptors, laminin receptors, leukocyte-specific receptors and collagen receptors. Within the cell, the intracellular domain of integrin binds to the cytoskeleton via adapter proteins such as talin, α-actinin, filamin, vinculin and tensin. Many other intracellular signalling proteins, such as focal adhesion kinase, bind to and associate with this integrin-adapter protein–cytoskeleton complex, and this forms the basis of a focal adhesion.
Adhesion dynamics with migrating cells:
The dynamic assembly and disassembly of focal adhesions plays a central role in cell migration. During cell migration, both the composition and the morphology of the focal adhesion change. Initially, small (0.25μm²) focal adhesions called focal complexes (FXs) are formed at the leading edge of the cell in lamellipodia: they consist of integrin, and some of the adapter proteins, such as talin, paxillin and tensin. Many of these focal complexes fail to mature and are disassembled as the lamellipodia withdraw. However, some focal complexes mature into larger and stable focal adhesions, and recruit many more proteins such as zyxin. Recruitment of components to the focal adhesion occurs in an ordered, sequential manner. Once in place, a focal adhesion remains stationary with respect to the extracellular matrix, and the cell uses this as an anchor on which it can push or pull itself over the ECM. As the cell progresses along its chosen path, a given focal adhesion moves closer and closer to the trailing edge of the cell. At the trailing edge of the cell the focal adhesion must be dissolved. The mechanism of this is poorly understood and is probably instigated by a variety of different methods depending on the circumstances of the cell. One possibility is that the calcium-dependent protease calpain is involved: it has been shown that the inhibition of calpain leads to the inhibition of focal adhesion-ECM separation. Focal adhesion components are amongst the known calpain substrates, and it is possible that calpain degrades these components to aid in focal adhesion disassembly. Actin retrograde flow: The assembly of nascent focal adhesions is highly dependent on the process of retrograde actin flow. This is the phenomenon in a migrating cell where actin filaments polymerize at the leading edge and flow back towards the cell body. This is the source of traction required for migration; the focal adhesion acts as a molecular clutch when it tethers to the ECM and impedes the retrograde movement of actin, thus generating the pulling (traction) force at the site of the adhesion that is necessary for the cell to move forward. This traction can be visualized with traction force microscopy. A common metaphor to explain actin retrograde flow is a large number of people being washed downriver, and as they do so, some of them hang on to rocks and branches along the bank to stop their downriver motion. Thus, a pulling force is generated onto the rock or branch that they are hanging on to. These forces are necessary for the successful assembly, growth, and maturation of focal adhesions.
Adhesion dynamics with migrating cells:
Natural biomechanical sensor Extracellular mechanical forces, which are exerted through focal adhesions, can activate Src kinase and stimulate the growth of the adhesions. This indicates that focal adhesions may function as mechanical sensors, and suggests that force generated from myosin fibers could contribute to maturing the focal complexes.
Adhesion dynamics with migrating cells:
This gains further support from the fact that inhibition of myosin-generated forces leads to slow disassembly of focal adhesions, by changing the turnover kinetics of the focal adhesion proteins. The relationship between forces on focal adhesions and their compositional maturation, however, remains unclear. For instance, preventing focal adhesion maturation by inhibiting myosin activity or stress fiber assembly does not prevent forces sustained by focal adhesions, nor does it prevent cells from migrating. Thus force propagation through focal adhesions may not be sensed directly by cells at all time and force scales.
Adhesion dynamics with migrating cells:
Their role in mechanosensing is important for durotaxis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sullivan conjecture**
Sullivan conjecture:
In mathematics, the Sullivan conjecture or Sullivan's conjecture on maps from classifying spaces can refer to any of several results and conjectures prompted by homotopy theory work of Dennis Sullivan. A basic theme and motivation concerns the fixed point set in group actions of a finite group G. The most elementary formulation, however, is in terms of the classifying space BG of such a group. Roughly speaking, it is difficult to map such a space BG continuously into a finite CW complex X in a non-trivial manner. Such a version of the Sullivan conjecture was first proved by Haynes Miller. Specifically, in 1984, Miller proved that the function space, carrying the compact-open topology, of base point-preserving mappings from BG to X is weakly contractible.
Sullivan conjecture:
This is equivalent to the statement that the map X → F(BG, X) from X to the function space of maps BG → X, not necessarily preserving the base point, given by sending a point x of X to the constant map whose image is x, is a weak equivalence. The mapping space F(BG, X) is an example of a homotopy fixed point set. Specifically, F(BG, X) is the homotopy fixed point set of the group G acting by the trivial action on X. In general, for a group G acting on a space X, the homotopy fixed points are the fixed points F(EG, X)^G of the mapping space F(EG, X) of maps from the universal cover EG of BG to X, under the G-action on F(EG, X) in which g in G acts on a map f in F(EG, X) by sending it to gfg^(-1). The G-equivariant map from EG to a single point ∗ induces a natural map η : X^G = F(∗, X)^G → F(EG, X)^G from the fixed points to the homotopy fixed points of G acting on X. Miller's theorem is that η is a weak equivalence for trivial G-actions on finite-dimensional CW complexes. An important ingredient and motivation for his proof is a result of Gunnar Carlsson on the homology of BZ/2 as an unstable module over the Steenrod algebra. Miller's theorem generalizes to a version of Sullivan's conjecture in which the action on X is allowed to be non-trivial. Sullivan conjectured that η is a weak equivalence, after a certain p-completion procedure due to A. Bousfield and D. Kan, for the group G = Z/2. This conjecture was incorrect as stated, but a correct version was given by Miller, and proven independently by Dwyer-Miller-Neisendorfer, Carlsson, and Jean Lannes, showing that the natural map (X^G)_p → F(EG, (X)_p)^G is a weak equivalence when the order of G is a power of a prime p, and where (X)_p denotes the Bousfield-Kan p-completion of X. Miller's proof involves an unstable Adams spectral sequence, Carlsson's proof uses his affirmative solution of the Segal conjecture and also provides information about the homotopy fixed points F(EG, X)^G before completion, and Lannes's proof involves his T-functor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cantic 5-cube**
Cantic 5-cube:
In geometry of five dimensions or higher, a cantic 5-cube, cantihalf 5-cube, truncated 5-demicube is a uniform 5-polytope, being a truncation of the 5-demicube. It has half the vertices of a cantellated 5-cube.
Cartesian coordinates:
The Cartesian coordinates for the 160 vertices of a cantic 5-cube centered at the origin and edge length 6√2 are coordinate permutations: (±1,±1,±3,±3,±3) with an odd number of plus signs.
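The vertex count can be checked directly from this description. The short sketch below (not part of the source; names are mine) enumerates every sign-and-permutation combination and confirms that exactly 160 distinct points result.

```python
from itertools import permutations, product

# Enumerate every coordinate permutation of (1, 1, 3, 3, 3), signed so that an
# odd number of coordinates carries a plus sign, and count the distinct points.
vertices = set()
for perm in set(permutations((1, 1, 3, 3, 3))):      # 10 distinct orderings
    for signs in product((+1, -1), repeat=5):        # 32 sign patterns each
        if signs.count(+1) % 2 == 1:                 # keep odd numbers of plus signs
            vertices.add(tuple(s * c for s, c in zip(signs, perm)))

print(len(vertices))   # 160, matching the vertex count stated above
```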
Alternate names:
Cantic penteract, truncated demipenteract; truncated hemipenteract (thin) (Jonathan Bowers).
Related polytopes:
It has half the vertices of the cantellated 5-cube, as can be seen by comparing the two in their B5 Coxeter plane projections. This polytope is based on the 5-demicube, part of a dimensional family of uniform polytopes called demihypercubes, formed by alternation of the hypercube family.
There are 23 uniform 5-polytopes that can be constructed from the D5 symmetry of the 5-demicube, of which 8 are unique to this family and 15 are shared within the 5-cube family. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Terotechnology**
Terotechnology:
Terotechnology (from Greek τηρεῖν tērein "to care for" and technology) is the technology of installation, including the efficient use and management of equipment. It also involves the use of technology to carry out maintenance functions in a bid to reduce cost and increase productivity.
Definition:
The term goes back to the 1970s. Terotechnology is a system for the care of equipment. It combines management, engineering, and financial expertise, working together to improve installation and operations.
In practice:
It involves the reliability and maintainability of physical equipment regarding installation, operation, maintenance, or replacement. Decisions are influenced by feedback throughout the life cycle of a project. In 1992 the British Standards Institution published British Standard 3843: Guide to terotechnology. The standard was withdrawn in November 2011. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dibutylhexamethylenediamine**
Dibutylhexamethylenediamine:
N,N’-Dibutylhexamethylenediamine (dibutylhexanediamine) is a chemical compound used in the production of polymers. It is highly toxic upon inhalation, and is listed as an extremely hazardous substance as defined by the U.S. Emergency Planning and Community Right-to-Know Act. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Point out**
Point out:
Point out means to show someone who a person is or where something is. Point out also means to tell someone something that they need to know. It is used in the names of: in sport, the Three-Point Shootout of the National Basketball Association; in music, "Point It Out", a 1969 recording by Motown Records. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Viola**
Viola:
The viola ( vee-OH-lə, Italian: [ˈvjɔːla, viˈɔːla]) is a string instrument that is bowed, plucked, or played with varying techniques. Slightly larger than a violin, it has a lower and deeper sound. Since the 18th century, it has been the middle or alto voice of the violin family, between the violin (which is tuned a perfect fifth above) and the cello (which is tuned an octave below). The strings from low to high are typically tuned to C3, G3, D4, and A4.
Viola:
In the past, the viola varied in size and style, as did its names. The word viola originates from the Italian language. The Italians often used the term viola da braccio meaning literally: 'of the arm'. "Brazzo" was another Italian word for the viola, which the Germans adopted as Bratsche. The French had their own names: cinquiesme was a small viola, haute contre was a large viola, and taile was a tenor. Today, the French use the term alto, a reference to its range.
Viola:
The viola was popular in the heyday of five-part harmony, up until the eighteenth century, taking three lines of the harmony and occasionally playing the melody line. Music for the viola differs from most other instruments in that it primarily uses the alto clef. When viola music has substantial sections in a higher register, it switches to the treble clef to make it easier to read.
Viola:
The viola often plays the "inner voices" in string quartets and symphonic writing, and it is more likely than the first violin to play accompaniment parts. The viola occasionally plays a major, soloistic role in orchestral music. Examples include the symphonic poem Don Quixote, by Richard Strauss, and the symphony/concerto Harold en Italie, by Hector Berlioz. In the earlier part of the 20th century, more composers began to write for the viola, encouraged by the emergence of specialized soloists such as Lionel Tertis and William Primrose. English composers Arthur Bliss, York Bowen, Benjamin Dale, Frank Bridge, Benjamin Britten, Rebecca Clarke and Ralph Vaughan Williams all wrote substantial chamber and concert works. Many of these pieces were commissioned by, or written for, Lionel Tertis. William Walton, Bohuslav Martinů, Tōru Takemitsu, Tibor Serly, Alfred Schnittke, and Béla Bartók have written well-known viola concertos. The concerti by Béla Bartók, Paul Hindemith, Carl Stamitz, Georg Philipp Telemann, and William Walton are considered major works of the viola repertoire. Paul Hindemith, who was a violist, wrote a substantial amount of music for viola, including the concerto Der Schwanendreher.
Form:
The viola is similar in material and construction to the violin. A full-size viola's body is between 25 mm (1 in) and 100 mm (4 in) longer than the body of a full-size violin (i.e., between 38 and 46 cm [15–18 in]), with an average length of 41 cm (16 in). Small violas typically made for children typically start at 30 cm (12 in), which is equivalent to a half-size violin. For a child who needs a smaller size, a fractional-sized violin is often strung with the strings of a viola. Unlike the violin, the viola does not have a standard full size. The body of a viola would need to measure about 51 cm (20 in) long to match the acoustics of a violin, making it impractical to play in the same manner as the violin. For centuries, viola makers have experimented with the size and shape of the viola, often adjusting proportions or shape to make a lighter instrument with shorter string lengths, but with a large enough sound box to retain the viola sound. Prior to the eighteenth century, violas had no uniform size. Large violas (tenors) were designed to play the lower register viola lines or second viola in five part harmony depending on instrumentation. A smaller viola, nearer the size of the violin, was called a vertical viola or an alto viola. It was more suited to higher register writing, as in the viola 1 parts, as their sound was usually richer in the upper register. Its size was not as conducive to a full tone in the lower register.
Form:
Several experiments have intended to increase the size of the viola to improve its sound. Hermann Ritter's viola alta, which measured about 48 cm (19 in), was intended for use in Wagner's operas. The Tertis model viola, which has wider bouts and deeper ribs to promote a better tone, is another slightly "nonstandard" shape that allows the player to use a larger instrument. Many experiments with the acoustics of a viola, particularly increasing the size of the body, have resulted in a much deeper tone, making it resemble the tone of a cello. Since many composers wrote for a traditional-sized viola, particularly in orchestral music, changes in the tone of a viola can have unintended consequences upon the balance in ensembles.
Form:
One of the most notable makers of violas of the twentieth century was Englishman A. E. Smith, whose violas are sought after and highly valued. Many of his violas remain in Australia, his country of residence, where during some decades the violists of the Sydney Symphony Orchestra had a dozen of them in their section.
Form:
More recent (and more radically shaped) innovations have addressed the ergonomic problems associated with playing the viola by making it shorter and lighter, while finding ways to keep the traditional sound. These include the Otto Erdesz "cutaway" viola, which has one shoulder cut out to make shifting easier; the "Oak Leaf" viola, which has two extra bouts; viol-shaped violas such as Joseph Curtin's "Evia" model, which also uses a moveable neck and maple-veneered carbon fibre back, to reduce weight: violas played in the same manner as cellos (see vertical viola); and the eye-catching "Dalí-esque" shapes of both Bernard Sabatier's violas in fractional sizes—which appear to have melted—and David Rivinus' Pellegrina model violas.Other experiments that deal with the "ergonomics vs. sound" problem have appeared. The American composer Harry Partch fitted a viola with a cello neck to allow the use of his 43-tone scale, called the "adapted viola". Luthiers have also created five-stringed violas, which allow a greater playing range.
Method of playing:
A person who plays the viola is called a violist or a viola player. The technique required for playing a viola has certain differences compared with that of a violin, partly because of its larger size: the notes are spread out farther along the fingerboard and often require different fingerings. The viola's less responsive strings and the heavier bow warrant a somewhat different bowing technique, and a violist has to lean more intensely on the strings.
Method of playing:
The viola is held in the same manner as the violin; however, due to its larger size, some adjustments must be made to accommodate. The viola, just like the violin, is placed on top of the left shoulder between the shoulder and the left side of the face (chin). Because of the viola's size, violists with short arms tend to use smaller-sized instruments for easier playing. The most immediately noticeable adjustments that a player accustomed to playing the violin has to make are to use wider-spaced fingerings. It is common for some players to use a wider and more intense vibrato in the left hand, facilitated by employing the fleshier pad of the finger rather than the tip, and to hold the bow and right arm farther away from the player's body. A violist must bring the left elbow farther forward or around, so as to reach the lowest string, which allows the fingers to press firmly and so create a clearer tone. Different positions are often used, including half position.
Method of playing:
The viola is strung with thicker gauge strings than the violin. This, combined with its larger size and lower pitch range, results in a deeper and mellower tone. However, the thicker strings also mean that the viola responds to changes in bowing more slowly. Practically speaking, if a violist and violinist are playing together, the violist must begin moving the bow a fraction of a second sooner than the violinist. The thicker strings also mean that more weight must be applied with the bow to make them vibrate.
Method of playing:
The viola's bow has a wider band of horsehair than a violin's bow, which is particularly noticeable near the frog (or heel in the UK). Viola bows, at 70–74 g (2.5–2.6 oz), are heavier than violin bows (58–61 g [2.0–2.2 oz]). The profile of the rectangular outside corner of a viola bow frog generally is more rounded than on violin bows.
Tuning:
The viola's four strings are normally tuned in fifths: the lowest string is C (an octave below middle C), with G, D, and A above it. This tuning is exactly one fifth below the violin, so that they have three strings in common—G, D, and A—and is one octave above the cello.
Tuning:
Each string of a viola is wrapped around a peg near the scroll and is tuned by turning the peg. Tightening the string raises the pitch; loosening the string lowers the pitch. The A string is normally tuned first, to the pitch of the ensemble: generally 440–442 Hz. The other strings are then tuned to it in intervals of fifths, usually by bowing two strings simultaneously. Most violas also have adjusters (fine tuners), particularly on the A string, that make finer changes. These adjust the tension of the string via rotating a small knob above the tailpiece. Such tuning is generally easier to learn than using the pegs, and adjusters are usually recommended for younger players and put on smaller violas, though pegs and adjusters are usually used together. Some violists reverse the stringing of the C and G pegs, so that the thicker C string does not turn so severe an angle over the nut, although this is uncommon.
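As a rough illustration of tuning in fifths (not from the article), the sketch below derives the lower three string frequencies from an assumed A4 of 440 Hz, both as pure 3:2 fifths (as obtained by ear when bowing two strings together) and as equal-tempered fifths.

```python
# Illustrative only: derive viola string frequencies from an assumed A4 = 440 Hz.
A4 = 440.0
string_names = ["A4", "D4", "G3", "C3"]          # each string a fifth below the previous one

pure = {"A4": A4}       # just (3:2) fifths
tempered = {"A4": A4}   # equal-tempered fifths, a ratio of 2**(7/12)
for prev, name in zip(string_names, string_names[1:]):
    pure[name] = pure[prev] / 1.5
    tempered[name] = tempered[prev] / 2 ** (7 / 12)

for name in reversed(string_names):              # print from the low C string upward
    print(f"{name}: pure {pure[name]:6.2f} Hz   equal-tempered {tempered[name]:6.2f} Hz")
```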
Tuning:
Small, temporary tuning adjustments can also be made by stretching a string with the hand. A string may be tuned down by pulling it above the fingerboard, or tuned up by pressing the part of the string in the pegbox. These techniques may be useful in performance, reducing the ill effects of an out-of-tune string until an opportunity to tune properly.
Tuning:
The tuning C–G–D–A is used for the great majority of all viola music. However, other tunings are occasionally employed, both in classical music, where the technique is known as scordatura, and in some folk styles. Mozart, in his Sinfonia Concertante for Violin, Viola and Orchestra in E♭, wrote the viola part in D major, and specified that the violist raises the strings in pitch by a semitone. He probably intended to give the viola a brighter tone so the rest of the ensemble would not overpower it. Lionel Tertis, in his transcription of the Elgar cello concerto, wrote the slow movement with the C string tuned down to B♭, enabling the viola to play one passage an octave lower.
Organizations and research:
A renewal of interest in the viola by performers and composers in the twentieth century led to increased research devoted to the instrument. Paul Hindemith and Vadim Borisovsky made an early attempt at an organization, in 1927, with the Violists' World Union. But it was not until 1968, with the creation of the Viola-Forschungsgesellschaft, now the International Viola Society (IVS), that a lasting organization took hold. The IVS now consists of twelve chapters around the world, the largest being the American Viola Society (AVS), which publishes the Journal of the American Viola Society. In addition to the journal, the AVS sponsors the David Dalton Research Competition and the Primrose International Viola Competition.
Organizations and research:
The 1960s also saw the beginning of several research publications devoted to the viola, starting with Franz Zeyringer's Literatur für Viola, which has undergone several versions, the most recent being in 1985. In 1980, Maurice Riley produced the first attempt at a comprehensive history of the viola, in his History of the Viola, which was followed with the second volume in 1991. The IVS published the multi-language Viola Yearbook from 1979 to 1994, during which several other national chapters of the IVS published respective newsletters. The Primrose International Viola Archive at Brigham Young University houses the greatest amount of material related to the viola, including scores, recordings, instruments, and archival materials from some of the world's greatest violists.
Music:
Reading music Music that is written for the viola primarily uses the alto clef, which is otherwise rarely used. Viola music employs the treble clef when there are substantial sections of music written in a higher register. The alto clef is defined by the placement of C4 on the middle line of the staff. In treble clef, this note is placed one ledger line below the staff, and in the bass clef (used, notably, by the cello and double bass) it is placed one ledger line above. As the viola is tuned exactly one octave above the cello (meaning that the viola retains the same string notes as the cello, but an octave higher), music that is notated for the cello can be easily transcribed for alto clef without any changes in key. For example, there are numerous editions of Bach's Cello Suites transcribed for viola. The viola also has the advantage of a smaller scale length, which means that the stretches on the cello are easier on the viola. However, due to the cello's larger range, ease of playing in higher positions, and ability to play chords, some works transcribed for viola do require additional modification for ease of playing on the instrument.
Music:
Role in pre-twentieth century works In early orchestral music, the viola part was usually limited to filling in harmonies, with very little melodic material assigned to it. When the viola was given a melodic part, it often duplicated (or was in unison with) the melody played by other strings. The concerti grossi, Brandenburg Concertos, composed by J. S. Bach, were unusual in their use of viola. The third concerto grosso, scored for three violins, three violas, three cellos, and basso continuo, requires virtuosity from the violists. Indeed, Viola I has a solo in the last movement which is commonly found in orchestral auditions. The sixth concerto grosso, Brandenburg Concerto No. 6, which was scored for 2 violas "concertino", cello, 2 violas da gamba, and continuo, had the two violas playing the primary melodic role. He also used this unusual ensemble in his cantata, Gleichwie der Regen und Schnee vom Himmel fällt, BWV 18; and in Mein Herze schwimmt im Blut, BWV 199, the chorale is accompanied by an obbligato viola.
Music:
There are a few Baroque and Classical concerti, such as those by Georg Philipp Telemann (one for solo viola, being one of the earliest viola concertos known, and one for two violas), Alessandro Rolla, Franz Anton Hoffmeister and Carl Stamitz.
Music:
The viola plays an important role in chamber music. Mozart used the viola in more creative ways when he wrote his six string quintets. The viola quintets use two violas, which frees them (especially the first viola) for solo passages and increases the variety of writing that is possible for the ensemble. Mozart also wrote for the viola in his Sinfonia Concertante, a set of two duets for violin and viola, and the Kegelstatt Trio for viola, clarinet, and piano. The young Felix Mendelssohn wrote a little-known Viola Sonata in C minor (without opus number, but dating from 1824). Robert Schumann wrote his Märchenbilder for viola and piano. He also wrote a set of four pieces for clarinet, viola, and piano, Märchenerzählungen.
Music:
Max Bruch wrote a romance for viola and orchestra, his Op. 85, which explores the emotive capabilities of the viola's timbre. In addition, his Eight pieces for clarinet, viola, and piano, Op. 83, features the viola in a very prominent, solo aspect throughout. His Concerto for Clarinet, Viola, and Orchestra, Op. 88 has been quite prominent in the repertoire and has been recorded by prominent violists throughout the 20th century. From his earliest works, Brahms wrote music that prominently featured the viola. Among his first published pieces of chamber music, the sextets for strings Op. 18 and Op. 36 contain what amounts to solo parts for both violas. Late in life, he wrote two greatly admired sonatas for clarinet and piano, his Op. 120 (1894): he later transcribed these works for the viola (the solo part in his Horn Trio is also available in a transcription for viola). Brahms also wrote "Two Songs for Alto with Viola and Piano", Op. 91, "Gestillte Sehnsucht" ("Satisfied Longing") and "Geistliches Wiegenlied" ("Spiritual Lullaby"), as presents for the famous violinist Joseph Joachim and his wife, Amalie. Dvořák played the viola and apparently said that it was his favorite instrument: his chamber music is rich in important parts for the viola. Two Czech composers, Bedřich Smetana and Leoš Janáček, included significant viola parts, originally written for viola d'amore, in their quartets "From My Life" and "Intimate Letters" respectively: the quartets begin with an impassioned statement by the viola. This is similar to Bach, Mozart, and Beethoven, who all occasionally played the viola part in chamber music.
Music:
The viola occasionally has a major role in orchestral music, a prominent example being Richard Strauss' tone poem Don Quixote for solo cello and viola and orchestra. Other examples are the "Ysobel" variation of Edward Elgar's Enigma Variations and the solo in his other work, In the South (Alassio), the pas de deux scene from Act 2 of Adolphe Adam's Giselle and the "La Paix" movement of Léo Delibes's ballet Coppélia, which features a lengthy viola solo.
Music:
Gabriel Fauré's Requiem was originally scored (in 1888) with divided viola sections, lacking the usual violin sections, having only a solo violin for the Sanctus. It was later scored for orchestra with violin sections, and published in 1901. Recordings of the older scoring with violas are available. While the viola repertoire is quite large, the amount written by well-known pre-20th-century composers is relatively small. There are many transcriptions of works for other instruments for the viola, and the large number of 20th-century compositions is very diverse. See "The Viola Project" at the San Francisco Conservatory of Music, where Professor of Viola Jodi Levitz has paired a composer with each of her students, resulting in a recital of brand-new works played for the very first time.
Music:
Twentieth century and beyond In the earlier part of the 20th century, more composers began to write for the viola, encouraged by the emergence of specialized soloists such as Lionel Tertis. Englishmen Arthur Bliss, York Bowen, Benjamin Dale, and Ralph Vaughan Williams all wrote chamber and concert works for Tertis. William Walton, Bohuslav Martinů, and Béla Bartók wrote well-known viola concertos. Paul Hindemith wrote a substantial amount of music for the viola; being himself a violist, he often performed his own works. Claude Debussy's Sonata for flute, viola and harp has inspired a significant number of other composers to write for this combination.
Music:
Charles Wuorinen composed his virtuosic Viola Variations in 2008 for Lois Martin. Elliott Carter also wrote several works for viola including his Elegy (1943) for viola and piano; it was subsequently transcribed for clarinet. Ernest Bloch, a Swiss-born American composer best known for his compositions inspired by Jewish music, wrote two famous works for viola, the Suite 1919 and the Suite Hébraïque for solo viola and orchestra. Rebecca Clarke was a 20th-century composer and violist who also wrote extensively for the viola. Lionel Tertis records that Edward Elgar (whose cello concerto Tertis transcribed for viola, with the slow movement in scordatura), Alexander Glazunov (who wrote an Elegy, Op. 44, for viola and piano), and Maurice Ravel all promised concertos for viola, yet all three died before doing any substantial work on them.
Music:
In the latter part of the 20th century a substantial repertoire was produced for the viola; many composers, including Miklós Rózsa, Revol Bunin, Alfred Schnittke, Sofia Gubaidulina, Giya Kancheli and Krzysztof Penderecki, have written viola concertos. The American composer Morton Feldman wrote a series of works entitled The Viola in My Life, which feature concertante viola parts. In spectral music, the viola has been sought after because of its lower overtone partials, which are more easily heard than on the violin. Spectral composers like Gérard Grisey, Tristan Murail, and Horațiu Rădulescu have written solo works for viola. Neo-Romantic and post-Modern composers have also written significant works for viola, including Robin Holloway (Viola Concerto, Op. 56, and Sonata, Op. 87) and Peter Seabourne (Pietà, a large five-movement work with piano).
Music:
Contemporary pop music The viola is sometimes used in contemporary popular music, mostly in the avant-garde. John Cale of The Velvet Underground used the viola, as do some modern groups such as alternative rock band 10,000 Maniacs, Imagine Dragons, folk duo John & Mary, British Sea Power, The Airborne Toxic Event, Marillion, and others often with instruments in a chamber setting. Jazz music has also seen its share of violists, from those used in string sections in the early 1900s to a handful of quartets and soloists emerging from the 1960s onward. It is quite unusual though, to use individual bowed string instruments in contemporary popular music.
Music:
In folk music Although not as commonly used as the violin in folk music, the viola is nevertheless used by many folk musicians across the world. Extensive research into the historical and current use of the viola in folk music has been carried out by Dr. Lindsay Aitkenhead. Players in this genre include Eliza Carthy, Mary Ramsey, Helen Bell, and Nancy Kerr. Clarence "Gatemouth" Brown was the viola's most prominent exponent in the genre of blues.
Music:
The viola is also an important accompaniment instrument in Slovakian, Hungarian and Romanian folk string band music, especially in Transylvania. Here the instrument has three strings tuned G3–D4–A3 (note that the A is an octave lower than found on the standard instrument), and the bridge is flattened with the instrument playing chords in a strongly rhythmic manner. In this usage, it is called a kontra or brácsa (pronounced "bra-cha", from German Bratsche, "viola").
Performers:
There are few well-known viola virtuoso soloists, perhaps because little virtuoso viola music was written before the twentieth century. Pre-twentieth century viola players of note include Carl Stamitz, Alessandro Rolla, Antonio Rolla, Chrétien Urhan, Casimir Ney, Louis van Waefelghem, and Hermann Ritter. Important viola pioneers from the twentieth century were Lionel Tertis, William Primrose, composer/performer Paul Hindemith, Théophile Laforge, Cecil Aronowitz, Maurice Vieux, Vadim Borisovsky, Lillian Fuchs, Dino Asciolla, Frederick Riddle, Walter Trampler, Ernst Wallfisch, Csaba Erdélyi, the only violist to ever win the Carl Flesch International Violin Competition, and Emanuel Vardi, the first violist to record the 24 Caprices by Paganini on viola. Many noted violinists have publicly performed and recorded on the viola as well, among them Eugène Ysaÿe, Yehudi Menuhin, David Oistrakh, Pinchas Zukerman, Maxim Vengerov, Julian Rachlin, James Ehnes, and Nigel Kennedy.
Performers:
Among the great composers, several preferred the viola to the violin when they were playing in ensembles, the most noted being Ludwig van Beethoven, Johann Sebastian Bach and Wolfgang Amadeus Mozart. Other composers also chose to play the viola in ensembles, including Joseph Haydn, Franz Schubert, Felix Mendelssohn, Antonín Dvořák, and Benjamin Britten. Among those noted both as violists and as composers are Rebecca Clarke and Paul Hindemith. Contemporary composers and violists Kenji Bunch, Scott Slapin, and Lev Zhurbin have written a number of works for viola.
Electric violas:
Amplification of a viola with a pickup, an instrument amplifier (and speaker), and adjusting the tone with a graphic equalizer can make up for the comparatively weaker output of a violin-family instrument string tuned to notes below G3. There are two types of instruments used for electric viola: regular acoustic violas fitted with a piezoelectric pickup and specialized electric violas, which have little or no body. While traditional acoustic violas are typically only available in historically used earth tones (e.g., brown, reddish-brown, blonde), electric violas may be traditional colors or they may use bright colors, such as red, blue or green. Some electric violas are made of materials other than wood.
Electric violas:
Most electric instruments with lower strings are violin-sized, as they use the amp and speaker to create a big sound, so they do not need a large soundbox. Indeed, some electric violas have little or no soundbox, and thus rely entirely on amplification. Fewer electric violas are available than electric violins. It can be hard for violists who prefer a physical size or familiar touch references of a viola-sized instrument, when they must use an electric viola that uses a smaller violin-sized body. Welsh musician John Cale, formerly of The Velvet Underground, is one of the more notable users of such an electric viola and he has used them both for melodies in his solo work and for drones in his work with The Velvet Underground (e.g. "Venus in Furs"). Other notable players of the electric viola are Geoffrey Richardson of Caravan and Mary Ramsey of 10,000 Maniacs.Instruments may be built with an internal preamplifier, or may put out an unbuffered transducer signal. While such signals may be fed directly to an amplifier or mixing board, they often benefit from an external preamp/equalizer on the end of a short cable, before being fed to the sound system. In rock and other loud styles, the electric viola player may use effects units such as reverb or overdrive. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fried Liver Attack**
Fried Liver Attack:
The Fried Liver Attack, also called the Fegatello Attack (named after an Italian dish), is a chess opening. This opening is a variation of the Two Knights Defense in which White sacrifices a knight for an attack on Black's king. The opening begins with the moves: 1. e4 e5 2. Nf3 Nc6 3. Bc4 Nf6 4. Ng5 d5 5. exd5 Nxd5?! This is the Two Knights Defense where White has chosen the offensive line 4.Ng5, but Black's last move is risky (5...Na5, the Polerio Defense, is considered better; other Black choices include 5...b5 and 5...Nd4). Bobby Fischer felt that 6.d4! (the Lolli Attack) was incredibly strong, to the point that 5...Nxd5 is rarely played; however, the Fried Liver Attack involves a knight sacrifice on f7, defined by the move: 6. Nxf7 The opening is popular with younger players who like the name and the aggressive, attacking style. It is classified as code C57 in the Encyclopaedia of Chess Openings.
History:
The Fried Liver Attack has been known for many centuries, the earliest known example being a game played by Giulio Cesare Polerio before 1606.
Considerations:
After 6...Kxf7, play usually continues 7.Qf3+ Ke6 8.Nc3. Black will play 8...Nb4 and follow up with ...c6, bolstering their pinned knight on d5. White can force the b4-knight to abandon protection of the d5-knight with 9.a3, a move Yakov Estrin recommended, but Black is quite strong after 9.a3 Nxc2+ 10.Kd1 Nd4! or 10...Nxa1!? as tried by Šarūnas Šulskis in a 2014 game, so 9.Qe4 or 9.O-O are probably better choices. White has a strong attack, but it has not yet been proven to be decisive. Because defence is harder to play than attack in this variation, the Fried Liver is dangerous for Black, particularly with shorter time controls. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
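The main line quoted above can be replayed programmatically. The sketch below assumes the third-party python-chess package (not something the article mentions) and simply pushes the moves through 8.Nc3.

```python
# A sketch using the third-party `python-chess` package (pip install chess);
# it replays the main line of the Fried Liver Attack up to the position after 8.Nc3.
import chess

main_line = ["e4", "e5", "Nf3", "Nc6", "Bc4", "Nf6", "Ng5", "d5",
             "exd5", "Nxd5", "Nxf7", "Kxf7", "Qf3+", "Ke6", "Nc3"]

board = chess.Board()
for san in main_line:
    board.push_san(san)          # push_san raises ValueError on an illegal move

print(board.fen())               # FEN of the position after 8.Nc3
print(board)                     # ASCII board: Black's king is exposed on e6
```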
**Zsigmondy's theorem**
Zsigmondy's theorem:
In number theory, Zsigmondy's theorem, named after Karl Zsigmondy, states that if a > b > 0 are coprime integers, then for any integer n ≥ 1, there is a prime number p (called a primitive prime divisor) that divides a^n − b^n and does not divide a^k − b^k for any positive integer k < n, with the following exceptions: n = 1, a − b = 1; then a^n − b^n = 1, which has no prime divisors. n = 2, a + b a power of two; then any odd prime factor of a^2 − b^2 = (a + b)(a^1 − b^1) must be contained in a^1 − b^1, which is also even. n = 6, a = 2, b = 1; then 2^6 − 1^6 = 63 = 3^2 × 7 = (a^2 − b^2)^2 (a^3 − b^3). This generalizes Bang's theorem, which states that if n > 1 and n is not equal to 6, then 2^n − 1 has a prime divisor not dividing any 2^k − 1 with k < n. Similarly, a^n + b^n has at least one primitive prime divisor, with the exception 2^3 + 1^3 = 9. Zsigmondy's theorem is often useful, especially in group theory, where it is used to prove that various groups have distinct orders except when they are known to be the same.
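The theorem is easy to check numerically for small parameters. The sketch below (my own helper names, plain trial division) lists the primitive prime divisors of 2^n − 1^n for n up to 20; the empty lists appear exactly at the exceptional cases n = 1 and n = 6.

```python
def prime_factors(m: int) -> set:
    """Set of prime factors of a positive integer, by trial division."""
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def primitive_prime_divisors(a: int, b: int, n: int) -> set:
    """Primes dividing a**n - b**n but no a**k - b**k with 0 < k < n."""
    earlier = set()
    for k in range(1, n):
        earlier |= prime_factors(a**k - b**k)
    return prime_factors(a**n - b**n) - earlier

# For a = 2, b = 1 the only exceptions are n = 1 and n = 6 (Bang's theorem).
for n in range(1, 21):
    print(n, sorted(primitive_prime_divisors(2, 1, n)))
```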
History:
The theorem was discovered by Zsigmondy working in Vienna from 1894 until 1925.
Generalizations:
Let (a_n)_{n≥1} be a sequence of nonzero integers.
The Zsigmondy set associated to the sequence is the set Z(a_n) = { n ≥ 1 : a_n has no primitive prime divisors },
Generalizations:
i.e., the set of indices n such that every prime dividing a_n also divides some a_m for some m < n. Thus Zsigmondy's theorem implies that Z(a^n − b^n) ⊂ {1, 2, 6}, and Carmichael's theorem says that the Zsigmondy set of the Fibonacci sequence is {1, 2, 6, 12}, and that of the Pell sequence is {1}. In 2001 Bilu, Hanrot, and Voutier proved that in general, if (a_n)_{n≥1} is a Lucas sequence or a Lehmer sequence, then Z(a_n) ⊆ {1, ..., 30} (see OEIS: A285314; there are only 13 such n, namely 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 13, 18, 30).
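The Fibonacci case can likewise be checked numerically on an initial stretch of the sequence. The sketch below (helper names are mine) computes the indices whose terms contribute no new prime factor among the first 30 Fibonacci numbers and recovers 1, 2, 6, 12.

```python
def prime_factors(m: int) -> set:
    """Set of prime factors of a positive integer, by trial division."""
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def zsigmondy_set(terms) -> list:
    """1-based indices whose term has no primitive prime divisor."""
    indices, seen = [], set()
    for n, term in enumerate(terms, start=1):
        p = prime_factors(abs(term))
        if not (p - seen):          # no prime that has not already appeared
            indices.append(n)
        seen |= p
    return indices

fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])
print(zsigmondy_set(fib))           # [1, 2, 6, 12] for this initial stretch
```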
Generalizations:
Lucas and Lehmer sequences are examples of divisibility sequences.
Generalizations:
It is also known that if (W_n)_{n≥1} is an elliptic divisibility sequence, then its Zsigmondy set Z(W_n) is finite. However, the result is ineffective in the sense that the proof does not give an explicit upper bound for the largest element in Z(W_n), although it is possible to give an effective upper bound for the number of elements in Z(W_n). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ChipTest**
ChipTest:
ChipTest was a 1985 chess playing computer built by Feng-hsiung Hsu, Thomas Anantharaman and Murray Campbell at Carnegie Mellon University. It is the predecessor of Deep Thought which in turn evolved into Deep Blue.
ChipTest:
ChipTest was based on a special VLSI-technology move generator chip developed by Hsu. ChipTest was controlled by a Sun-3/160 workstation and capable of searching approximately 50,000 moves per second. Hsu and Anantharaman entered ChipTest in the 1986 North American Computer Chess Championship, and it was only partially tested when the tournament began. It lost its first two rounds, but finished with an even score. In August 1987 ChipTest was overhauled and renamed ChipTest-M, M standing for microcode. The new version had eliminated ChipTest's bugs and was ten times faster, searching 500,000 moves per second and running on a Sun-4 workstation. ChipTest-M won the North American Computer Chess Championship in 1987 with a 4–0 sweep. ChipTest was invited to play in the 1987 American Open, but the team did not enter due to an objection by the HiTech team, also from Carnegie Mellon University. HiTech and ChipTest shared some code, and HiTech was already playing in the tournament. The two teams became rivals. Designing and implementing ChipTest revealed many possibilities for improvement, so the designers started on a new machine. Deep Thought 0.01 was created in May 1988 and the version 0.02 in November the same year. This new version had two customized VLSI chess processors and it was able to search 720,000 moves per second. With the "0.02" dropped from its name, Deep Thought won the World Computer Chess Championship with a perfect 5–0 score in 1989. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kennel**
Kennel:
A kennel is a structure or shelter for dogs. Used in the plural, the kennels, the term means any building, collection of buildings or a property in which dogs are housed, maintained, and (though not in all cases) bred. A kennel can be made out of various materials, the most popular being wood and canvas.
Breeding kennels:
This is a formal establishment for the propagation of dogs, whether or not they are actually housed in a separate shed, the garage, a state-of-the-art facility, or the family dwelling. Licensed breeding kennels are heavily regulated and must follow relevant government legislation. Breed club members are expected to comply with the general Code of Ethics and guidelines applicable to the breed concerned. Kennel clubs may also stipulate criteria to be met before issuing registration papers for puppies bred. A kennel name or kennel prefix is a name associated with each breeding kennel: it is the first part of the registered name of a pedigreed dog which was bred there.
Boarding kennels:
This is a place where dogs are housed temporarily for a fee, an alternative to using a pet sitter. Although many people worry about the stress placed on the animal by being put in an unfamiliar and most likely crowded environment, the majority of boarding kennels work to reduce stress. Many kennels offer one-on-one "play times" in order to get the animal out of the kennel environment. Familiar objects, such as blankets and toys from home, are also permitted at many kennels. Many kennels offer grooming and training services in addition to boarding, with the idea being that the kennel can be the owner's "one-stop shop" for all three services.In the United States the term boarding kennel can also be used to refer to boarding catteries and licensing agencies do not always differentiate between commercial boarding kennels for dogs and other animal or cat boarding kennels. In 2007 market surveys showed that $3.0 billion was spent on these services. Annual kennel boarding expenses for dog owners was $225, and for cat owners was $149 according to a 2007–2008 survey. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Online and offline**
Online and offline:
In computer technology and telecommunications, online indicates a state of connectivity and offline indicates a disconnected state. In modern terminology, this usually refers to an Internet connection, but (especially when expressed "on line" or "on the line") could refer to any piece of equipment or functional unit that is connected to a larger system. Being online means that the equipment or subsystem is connected, or that it is ready for use. "Online" has come to describe activities performed on and data available on the Internet, for example: "online identity", "online predator", "online gambling", "online game", "online shopping", "online banking", and "online learning". Similar meaning is also given by the prefixes "cyber" and "e", as in the words "cyberspace", "cybercrime", "email", and "ecommerce". In contrast, "offline" can refer to either computing activities performed while disconnected from the Internet, or alternatives to Internet activities (such as shopping in brick-and-mortar stores). The term "offline" is sometimes used interchangeably with the acronym "IRL", meaning "in real life".
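As a minimal programmatic illustration of the connected/disconnected distinction (not something the article describes), a host can be treated as online if it can open a TCP connection to a reachable server; the host, port, and timeout below are arbitrary example values.

```python
# Minimal sketch: report "online" if a TCP connection to a well-known address
# succeeds, "offline" otherwise. The address and timeout are example values only.
import socket

def is_online(host: str = "8.8.8.8", port: int = 53, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("online" if is_online() else "offline")
```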
History:
During the 19th century, the term on line was commonly used in both the railroad and telegraph industries. For railroads, a signal box would send messages down the line (track), via a telegraph line (cable), indicating the track's status: Train on line or Line clear. Telegraph linemen would refer to sending current through a line as direct on line or battery on line; or they may refer to a problem with the circuit as being on line, as opposed to the power source or end-point equipment.Since at least 1950, in computing, the terms on-line and off-line have been used to refer to whether machines, including computers and peripheral devices, are connected or not. Here is an excerpt from the 1950 book High-Speed Computing Devices: The use of automatic computing equipment for large-scale reduction of data will be strikingly successful only if means are provided for the automatic transcription of these data to a form suitable for automatic entry into the machine. For some applications, of which the most prominent are those in which the reduced data are used to control the process being measured, the input must be developed for on-line operation. In on-line operation the input is communicated directly and without delay to the data-reduction device. For other applications, off-line operation, involving automatic transcription of data in a form suitable for later introduction to the machine, may be tolerated. These requirements may be compared with teleprinter operating requirements. For example, some teletype machines operate on line. Their operators are in instantaneous communication. Other teletype machines are operated off line, through the intervention of punched paper tape. The message is preserved by means of holes punched in the tape and is transmitted later by feeding the tape to another machine.
Examples:
Offline e-mail One example of a common use of these concepts with email is a mail user agent (MUA) that can be instructed to be in either online or offline states. One such MUA is Microsoft Outlook. When online it will attempt to connect to mail servers (to check for new mail at regular intervals, for example), and when offline it will not attempt to make any such connection. The online or offline state of the MUA does not necessarily reflect the connection status between the computer on which it is running and the internet i.e. the computer itself may be online—connected to Internet via a cable modem or other means—while Outlook is kept offline by the user, so that it makes no attempt to send or to receive messages. Similarly, a computer may be configured to employ a dial-up connection on demand (as when an application such as Outlook attempts to make connection to a server), but the user may not wish for Outlook to trigger that call whenever it is configured to check for mail.
Examples:
Offline media playing Another example of the use of these concepts is digital audio technology. A tape recorder, digital audio editor, or other device that is online is one whose clock is under the control of the clock of a synchronization master device. When the sync master commences playback, the online device automatically synchronizes itself to the master and commences playing from the same point in the recording. A device that is offline uses no external clock reference and relies upon its own internal clock. When many devices are connected to a sync master it is often convenient, if one wants to hear just the output of one single device, to take it offline because, if the device is played back online, all synchronized devices have to locate the playback point and wait for each other device to be in synchronization. (For related discussion, see MIDI timecode, Word clock, and recording system synchronization.) Offline browsing A third example of a common use of these concepts is a web browser that can be instructed to be in either online or offline states. The browser attempts to fetch pages from servers while only in the online state. In the offline state, or "offline mode", users can perform offline browsing, where pages can be browsed using local copies of those pages that have previously been downloaded while in the online state. This can be useful when the computer is offline and connection to the Internet is impossible or undesirable. The pages are downloaded either implicitly into the web browser's own cache as a result of prior online browsing by the user or explicitly by a browser configured to keep local copies of certain web pages, which are updated when the browser is in the online state, either by checking that the local copies are up-to-date at regular intervals or by checking that the local copies are up-to-date whenever the browser is switched to the online state. One such web browser is Internet Explorer. When pages are added to the Favourites list, they can be marked to be "available for offline browsing". Internet Explorer will download local copies of both the marked page and, optionally, all of the pages that it links to. In Internet Explorer version 6, the level of direct and indirect links, the maximum amount of local disc space allowed to be consumed, and the schedule on which local copies are checked to see whether they are up-to-date, are configurable for each individual Favourites entry. For communities that lack adequate Internet connectivity—such as developing countries, rural areas, and prisons—offline information stores such as WiderNet's eGranary Digital Library (a collection of approximately thirty million educational resources from more than two thousand web sites and hundreds of CD-ROMs) provide offline access to information. More recently, the Internet Archive announced an offline server project intended to provide access to material on inexpensive servers that can be updated using USB sticks and SD cards.
Examples:
Offline storage Likewise, offline storage is computer data storage that is not "available for immediate use on demand by the system without human intervention". Additionally, an otherwise online system that is powered down may be considered offline.
Examples:
Offline messages With the growth of communication tools and media, the words offline and online are used very frequently. If a person is active on a messaging tool and able to receive messages, a message delivered immediately is termed an online message; if the person is not available and the message is left for them to view when they return, it is termed an offline message. In the same context, the person's availability is termed online and their non-availability offline.
Examples:
File systems In the context of file systems, "online" and "offline" are synonymous with "mounted" and "not mounted". For example, in file systems' resizing capabilities, "online grow" and "online shrink" respectively mean the ability to increase or decrease the space allocated to that file system without needing to unmount it.
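On a Linux host, whether a file system is online in this sense can be checked by looking for its mount point in /proc/mounts. The snippet below is a minimal sketch that assumes a Linux system and ignores edge cases such as escaped characters in mount paths.

```python
def is_mounted(mount_point: str, mounts_file: str = "/proc/mounts") -> bool:
    """Return True if something is currently mounted at mount_point (Linux only)."""
    with open(mounts_file) as f:
        for line in f:
            # Each line: <device> <mount point> <fs type> <options> <dump> <pass>
            fields = line.split()
            if len(fields) >= 2 and fields[1] == mount_point:
                return True
    return False


print(is_mounted("/"))       # the root file system is online (mounted)
print(is_mounted("/mnt/x"))  # likely offline unless something is mounted there
```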
Generalisations:
Online and offline distinctions have been generalised from computing and telecommunication into the field of human interpersonal relationships. The distinction between what is considered online and what is considered offline has become a subject of study in the field of sociology. The distinction between online and offline is conventionally seen as the distinction between computer-mediated communication and face-to-face communication (e.g., face time), respectively. Online is virtuality or cyberspace, and offline is reality (i.e., real life or "meatspace"). Slater states that this distinction is "obviously far too simple". To support his argument that the distinctions in relationships are more complex than a simple dichotomy of online versus offline, he observes that some people draw no distinction between an online relationship, such as indulging in cybersex, and an offline relationship, such as being pen pals. He argues that even the telephone can be regarded as an online experience in some circumstances, and that the blurring of the distinctions between the uses of various technologies (such as PDA versus mobile phone, internet television versus internet, and telephone versus Voice over Internet Protocol) has made it "impossible to use the term online meaningfully in the sense that was employed by the first generation of Internet research". Slater asserts that there are legal and regulatory pressures to reduce the distinction between online and offline, with a "general tendency to assimilate online to offline and erase the distinction," stressing, however, that this does not mean that online relationships are being reduced to pre-existing offline relationships. He conjectures that greater legal status may be assigned to online relationships (pointing out that contractual relationships, such as business transactions, online are already seen as just as "real" as their offline counterparts), although he states it to be hard to imagine courts awarding palimony to people who have had a purely online sexual relationship. He also conjectures that an online/offline distinction may be seen by people as "rather quaint and not quite comprehensible" within 10 years. This distinction between online and offline is sometimes inverted, with online concepts being used to define and to explain offline activities, rather than (as per the conventions of the desktop metaphor with its desktops, trash cans, folders, and so forth) the other way around. Several cartoons appearing in The New Yorker have satirized this. One includes Saint Peter asking for a username and a password before admitting a man into Heaven. Another illustrates "the offline store" where "All items are actual size!", shoppers may "Take it home as soon as you pay for it!", and "Merchandise may be handled prior to purchase!" | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cobordism hypothesis**
Cobordism hypothesis:
In mathematics, the cobordism hypothesis, due to John C. Baez and James Dolan, concerns the classification of extended topological quantum field theories (TQFTs). In 2008, Jacob Lurie outlined a proof of the cobordism hypothesis, though the details of his approach have yet to appear in the literature as of 2022. In 2021, Daniel Grady and Dmitri Pavlov claimed a complete proof of the cobordism hypothesis, as well as a generalization to bordisms with arbitrary geometric structures.
Formulation:
For a symmetric monoidal (∞,n)-category C which is fully dualizable and every k-morphism of which is adjointable for 1 ≤ k ≤ n − 1, there is a bijection between the C-valued symmetric monoidal functors of the cobordism category and the objects of C.
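In the notation commonly used for the framed version of the statement (Lurie's Bord_n^fr for the framed bordism (∞,n)-category and C^fd for the fully dualizable objects, conventions assumed here rather than taken from the text above), the bijection is induced by evaluation at the point:

```latex
% Cobordism hypothesis, framed version (sketch of the usual formulation).
% Evaluation at the point identifies symmetric monoidal functors out of the
% framed bordism (infinity,n)-category with the infinity-groupoid of fully
% dualizable objects of C.
\[
  \operatorname{Fun}^{\otimes}\!\bigl(\mathrm{Bord}^{\mathrm{fr}}_{n},\,\mathcal{C}\bigr)
  \;\simeq\;
  \bigl(\mathcal{C}^{\mathrm{fd}}\bigr)^{\simeq},
  \qquad
  Z \;\longmapsto\; Z(\ast).
\]
```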
Motivation:
Symmetric monoidal functors from the cobordism category correspond to topological quantum field theories. The cobordism hypothesis for topological quantum field theories is the analogue of the Eilenberg–Steenrod axioms for homology theories. The Eilenberg–Steenrod axioms state that a homology theory is uniquely determined by its value for the point, so analogously what the cobordism hypothesis states is that a topological quantum field theory is uniquely determined by its value for the point. In other words, the bijection between C-valued symmetric monoidal functors and the objects of C is given by evaluating each functor at the point. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Antibody-drug conjugate**
Antibody-drug conjugate:
Antibody-drug conjugates or ADCs are a class of biopharmaceutical drugs designed as a targeted therapy for treating cancer. Unlike chemotherapy, ADCs are intended to target and kill tumor cells while sparing healthy cells. As of 2019, some 56 pharmaceutical companies were developing ADCs. ADCs are complex molecules composed of an antibody linked to a biologically active cytotoxic (anticancer) payload or drug. Antibody-drug conjugates are an example of bioconjugates and immunoconjugates.
Antibody-drug conjugate:
ADCs combine the targeting properties of monoclonal antibodies with the cancer-killing capabilities of cytotoxic drugs, designed to discriminate between healthy and diseased tissue.
Mechanism of action:
An anticancer drug is coupled to an antibody that targets a specific tumor antigen (or protein) that, ideally, is only found in or on tumor cells. Antibodies attach themselves to the antigens on the surface of cancerous cells. The biochemical reaction that occurs upon attaching triggers a signal in the tumor cell, which then absorbs, or internalizes, the antibody together with the linked cytotoxin. After the ADC is internalized, the cytotoxin kills the cancer. Their targeting ability was believed to limit side effects for cancer patients and give a wider therapeutic window than other chemotherapeutic agents, although this promise hasn’t yet been realized in the clinic. ADC technologies have been featured in many publications, including scientific journals.
History:
The idea of drugs that would target tumor cells and ignore others was conceived in 1900 by German Nobel laureate Paul Ehrlich; he described the drugs as a "magic bullet" due to their targeting properties. In 2001 Pfizer/Wyeth's drug Gemtuzumab ozogamicin (trade name: Mylotarg) was approved based on a study with a surrogate endpoint, through the accelerated approval process. In June 2010, after evidence accumulated showing no evidence of benefit and significant toxicity, the U.S. Food and Drug Administration (FDA) forced the company to withdraw it. It was reintroduced into the US market in 2017. Brentuximab vedotin (trade name: Adcetris, marketed by Seattle Genetics and Millennium/Takeda) was approved for relapsed HL and relapsed systemic anaplastic large-cell lymphoma (sALCL) by the FDA on August 19, 2011 and received conditional marketing authorization from the European Medicines Agency in October 2012.
History:
Trastuzumab emtansine (ado-trastuzumab emtansine or T-DM1, trade name: Kadcyla, marketed by Genentech and Roche) was approved in February 2013 for the treatment of people with HER2-positive metastatic breast cancer (mBC) who had received prior treatment with trastuzumab and a taxane chemotherapy. The European Commission approved Inotuzumab ozogamicin as a monotherapy for the treatment of adults with relapsed or refractory CD22-positive B-cell precursor acute lymphoblastic leukemia (ALL) on June 30, 2017 under the trade name Besponsa® (Pfizer/Wyeth), followed on August 17, 2017 by the FDA. The first immunology antibody-drug conjugate (iADC), ABBV-3373, showed an improvement in disease activity in a Phase 2a study of patients with rheumatoid arthritis, and a study with the second iADC, ABBV-154, to evaluate adverse events and change in disease activity in participants treated with subcutaneous injection of ABBV-154 is ongoing. In July 2018, Daiichi Sankyo Company, Limited and Glycotope GmbH inked a pact regarding the combination of Glycotope's investigational tumor-associated TA-MUC1 antibody gatipotuzumab and Daiichi Sankyo's proprietary ADC technology for developing a gatipotuzumab antibody-drug conjugate. In 2019 AstraZeneca agreed to pay up to US$6.9 billion to jointly develop DS-8201 with Japan's Daiichi Sankyo. It is intended to replace Herceptin for treating breast cancer. DS-8201 carries eight payloads, compared to the usual four.
Commercial products:
Thirteen ADCs have received market approval by the FDA – all for oncotherapies. Belantamab mafodotin is in the process of being withdrawn from US marketing.
Components of an ADC:
An antibody-drug conjugate consists of 3 components:
Antibody - targets the cancer cell surface and may also elicit a therapeutic response.
Payload - elicits the desired therapeutic response.
Linker - attaches the payload to the antibody and should be stable in circulation only releasing the payload at the desired target. Multiple approaches to conjugation have been developed for attachment to the antibody and reviewed. DAR is the drug to antibody ratio and indicates the level of loading of the payload on the ADC.
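For reference, the DAR mentioned above is usually quoted as a population average; the following one-line formula is a minimal sketch, with the wording of the symbols chosen here for illustration rather than taken from the text.

```latex
% Average drug-to-antibody ratio: payload molecules per antibody molecule,
% averaged over the whole conjugate preparation.
\[
  \mathrm{DAR} \;=\; \frac{\text{moles of conjugated payload}}{\text{moles of antibody}}
\]
```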
Payloads:
Many of the payloads for oncology ADCs (oADC) are natural product based, with some making covalent interactions with their target. Payloads include the microtubulin inhibitors monomethyl auristatin E (MMAE), monomethyl auristatin F (MMAF) and mertansine, the DNA binder calicheamicin, and the topoisomerase 1 inhibitors SN-38 and exatecan, resulting in a renaissance for natural product total synthesis. Glucocorticoid receptor modulators (GRM) represent the most active payload class for iADCs. Approaches releasing marketed GRM molecules like dexamethasone and budesonide have been developed. Modified GRM molecules have also been developed that enable the attachment of the linker, with the term ADCidified describing the medicinal chemistry process of payload optimization to facilitate linker attachment. Alternatives to small molecule payloads have also been investigated, for example, siRNA.
Linkers:
A stable link between the antibody and cytotoxic (anti-cancer) agent is a crucial aspect of an ADC. A stable ADC linker ensures that less of the cytotoxic payload falls off before reaching a tumor cell, improving safety, and limiting dosages.
Linkers:
Linkers are based on chemical motifs including disulfides, hydrazones or peptides (cleavable), or thioethers (noncleavable). Cleavable and noncleavable linkers were proved to be safe in preclinical and clinical trials. Brentuximab vedotin includes an enzyme-sensitive cleavable linker that delivers the antimicrotubule agent monomethyl auristatin E or MMAE, a synthetic antineoplastic agent, to human-specific CD30-positive malignant cells. MMAE inhibits cell division by blocking the polymerization of tubulin. Because of its high toxicity MMAE cannot be used as a single-agent chemotherapeutic drug. However, MMAE linked to an anti-CD30 monoclonal antibody (cAC10, a cell membrane protein of the tumor necrosis factor or TNF receptor) was stable in extracellular fluid. It is cleavable by cathepsin and safe for therapy. Trastuzumab emtansine is a combination of the microtubule-formation inhibitor mertansine (DM-1) and antibody trastuzumab that employs a stable, non-cleavable linker.
Linkers:
The availability of better and more stable linkers has changed the function of the chemical bond. The type of linker, cleavable or noncleavable, lends specific properties to the cytotoxic drug. For example, a non-cleavable linker keeps the drug within the cell. As a result, the entire antibody, linker and cytotoxic (anti-cancer) agent enter the targeted cancer cell where the antibody is degraded into an amino acid. The resulting complex – amino acid, linker and cytotoxic agent – is considered to be the active drug. In contrast, cleavable linkers are detached by enzymes in the cancer cell. The cytotoxic payload can then escape from the targeted cell and, in a process called "bystander killing", attack neighboring cells. Another type of cleavable linker, currently in development, adds an extra molecule between the cytotoxin and the cleavage site. This allows researchers to create ADCs with more flexibility without changing cleavage kinetics. Researchers are developing a new method of peptide cleavage based on Edman degradation, a method of sequencing amino acids in a peptide. Also under development are site-specific conjugation (TDCs) and novel conjugation techniques to further improve stability and therapeutic index, α emitting immunoconjugates, antibody-conjugated nanoparticles and antibody-oligonucleotide conjugates.
Anything Drug Conjugates:
As the antibody-drug conjugate field has matured, a more accurate definition of ADC is now Anything-Drug Conjugate. Alternatives for the antibody targeting component now include multiple smaller antibody fragments like diabodies, Fab, scFV, and bicyclic peptides.
Research:
Non-natural amino acids The first generation uses linking technologies that conjugate drugs non-selectively to cysteine or lysine residues in the antibody, resulting in a heterogeneous mixture. This approach leads to suboptimal safety and efficacy and complicates optimization of the biological, physical and pharmacological properties. Site-specific incorporation of unnatural amino acids generates a site for controlled and stable attachment. This enables the production of homogeneous ADCs with the antibody precisely linked to the drug and controlled ratios of antibody to drug, allowing the selection of a best-in-class ADC. An Escherichia coli-based open cell-free synthesis (OCFS) allows the synthesis of proteins containing site-specifically incorporated non-natural amino acids and has been optimized for predictable high-yield protein synthesis and folding. The absence of a cell wall allows the addition of non-natural factors to the system to manipulate transcription, translation and folding to provide precise protein expression modulation.
Research:
Other disease areas The majority of ADCs under development or in clinical trials are for oncological and hematological indications. This is primarily driven by the inventory of monoclonal antibodies, which target various types of cancer. However, some developers are looking to expand the application to other important disease areas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glycoside hydrolase family 17**
Glycoside hydrolase family 17:
In molecular biology, Glycoside hydrolase family 17 is a family of glycoside hydrolases. It folds into a TIM barrel.
Glycoside hydrolase family 17:
Glycoside hydrolases EC 3.2.1. are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. Glycoside hydrolase family 17 CAZY GH_17 comprises enzymes with several known activities: endo-1,3-beta-glucosidase (EC 3.2.1.39); lichenase (EC 3.2.1.73); exo-1,3-glucanase (EC 3.2.1.58). Currently these enzymes have only been found in plants and in fungi. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Social login**
Social login:
Social login is a form of single sign-on that uses existing information from a social networking service such as Facebook, Twitter or Google to log in to a third-party website, instead of creating a new login account specifically for that website. It is designed to simplify logins for end users and to provide web developers with more, and more reliable, demographic information.
How social login works:
Social login links accounts from one or more social networking services to a website, typically using either a plug-in or a widget. By selecting the desired social networking service, the user simply uses his or her login for that service to sign on to the website. This, in turn, negates the need for the end user to remember login information for multiple electronic commerce and other websites while providing site owners with uniform demographic information as provided by the social networking service. Many sites which offer social login also offer more traditional online registration for those who either desire it or who do not have an account with a compatible social networking service (and therefore would be precluded from creating an account with the website).
Application:
Social login can be implemented strictly as an authentication system using standards such as OpenID or SAML. For consumer websites that offer social functionality to users, social login is often implemented using the OAuth standard. OAuth is a secure authorization protocol which is commonly used in conjunction with authentication to grant third-party applications a "session token" allowing them to make API calls to providers on the user's behalf. Sites using social login in this manner typically offer social features such as commenting, sharing, reactions and gamification.
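To illustrate the shape of such an OAuth-based login, the sketch below walks through the standard authorization-code flow using only the Python standard library. The endpoint URLs, client credentials and scopes are placeholders, not any particular provider's real values, and real deployments add details such as PKCE, token validation and error handling.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen
import json

# Placeholder endpoints and credentials -- substitute a real provider's values.
AUTHORIZE_URL = "https://provider.example/oauth/authorize"
TOKEN_URL = "https://provider.example/oauth/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://relying-party.example/callback"


def build_authorization_url(state: str) -> str:
    """Step 1: send the user's browser to the identity provider to log in."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid email profile",
        "state": state,  # anti-CSRF value the site must verify on return
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"


def exchange_code_for_token(code: str) -> dict:
    """Step 2: the site's backend trades the returned code for an access token."""
    data = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }).encode()
    req = Request(TOKEN_URL, data=data, method="POST")
    with urlopen(req) as resp:
        return json.load(resp)  # typically contains access_token, expires_in, ...
```

The returned token is what the relying site then uses, on the user's behalf, to fetch the profile data that powers the social features described above.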
Application:
While social login can be extended to corporate websites, the majority of social networks and consumer-based identity providers allow self-asserted identities. For this reason, social login is generally not used for strict, highly secure applications such as those in banking or health.
Advantages of social login:
Studies have shown that website registration forms are inefficient as many people provide false data, forget their login information for the site or simply decline to register in the first place. A study conducted in 2011 by Janrain and Blue Research found that 77 percent of consumers favored social login as a means of authentication over more traditional online registration methods. Additional benefits: Targeted Content - Web sites can obtain a profile and social graph data in order to target personalized content to the user. This includes information such as name, email, hometown, interests, activities, and friends. However, this can create issues for privacy, and result in a narrowing of the variety of views and options available on the internet.
Advantages of social login:
Multiple Identities - Users can log into websites with multiple social identities allowing them to better control their online identity.
Registration Data - Many websites use the profile data returned from social login instead of having users manually enter their PII (Personally Identifiable Information) into web forms. This can potentially speed up the registration or sign-up process.
Pre-validated Email - Identity providers who support email such as Google and Yahoo! can return the user's email address to the 3rd party website preventing the user from supplying a fabricated email address during the registration process.
Account linking - Because social login can be used for authentication, many websites allow legacy users to link a pre-existing site account with their social login account without forcing re-registration.
Disadvantages of social login:
Utilizing social login through platforms such as Facebook may unintentionally render third-party websites useless within certain libraries, schools, or workplaces which block social networking services for productivity reasons. It can also cause difficulties in countries with active censorship regimes, such as China and its "Golden Shield Project," where the third-party website may not be actively censored, but is effectively blocked if a user's social login is blocked. There are several other risks that come with using social login tools. These logins are also a new frontier for fraud and account abuse, as attackers use sophisticated means to hack these authentication mechanisms. This can result in an unwanted increase in fraudulent account creations, or worse, attackers successfully stealing social media account credentials from legitimate users. One such way that social media accounts are exploited is when users are enticed to download malicious browser extensions that request read and write permissions on all websites. These users are not aware that later on, typically a week or so after being installed, the extensions will download background JavaScript malware from their command and control site to run on the user's browser. From then on, these malware-infected browsers can effectively be controlled remotely. These extensions will then wait until the user logs into a social media or another online account, and using those tokens or credentials will sign up for other online accounts without the rightful user's express permission.
Aggregating social login:
Social login applications compatible with many social networking services are available to web developers using blogging platforms such as WordPress. Companies such as Gigya, Janrain, Oneall.com, Lanoba.com, and LoginRadius also provide single solution social login services for web developers. These companies can provide social login access to 20 or more social network sites.
Security:
In March 2012, a research paper reported an extensive study on the security of social login mechanisms. The authors found 8 serious logic flaws in high-profile ID providers and relying party websites, such as OpenID (including Google ID and PayPal Access), Facebook, Janrain, Freelancer, FarmVille, Sears.com, etc. Because the researchers informed ID providers and the third party websites that relied on the service prior to public announcement of the discovery of the flaws, the vulnerabilities were corrected, and there have been no security breaches reported. This research concludes that the overall security quality of SSO deployments seems worrisome.
Security:
Moreover, social logins are often implemented in an insecure way. Users, in this case, have to trust every application which implemented this feature to handle their identifier confidentially. Furthermore, by placing reliance on an account which is operable on many websites, social login creates a single point of failure, thus considerably augmenting the damage that would be caused were the account to be hacked.
List of notable providers:
Here is a list of services (commonly social networks) that provide social login features which they encourage other websites to use.
Alipay AOL Apple Facebook Google KakaoTalk Line LinkedIn PayPal QQ Sina Weibo Taobao Vkontakte (ВКонтакте) Twitter WeChat Yahoo! | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Panel switch**
Panel switch:
The Panel Machine Switching System is a type of automatic telephone exchange for urban service that was used in the Bell System in the United States for seven decades. The first semi-mechanical types of this design were installed in 1915 in Newark, New Jersey, and the last were retired in the same city in 1983.
Panel switch:
The Panel switch was named for its tall panels which consisted of layered strips of terminals. Between each strip was placed an insulating layer, which kept each metal strip electrically isolated from the ones above and below. These terminals were arranged in banks, five of which occupied an average selector frame. Each bank contained 100 sets of terminals, for a total of 500 sets of terminals per frame. At the bottom, the frame had two electric motors to drive sixty selectors up and down by electromagnetically controlled clutches. As calls were completed through the system, selectors moved vertically over the sets of terminals until they reached the desired location, at which point the selector stopped its upward travel, and selections progressed to the next frame, until finally, the called subscriber's line was reached.
History:
In c. 1906, AT&T organized two research groups for solving the unique challenges in switching telephone traffic in the large urban centers in the Bell System. Large cities had a complex infrastructure of manual switching that prevented complete ad hoc conversion to mechanical switching, but more favorable economics was anticipated from conversion to mechanical operation. No satisfactory methods existed for interconnecting manual systems with machines for switching. The two groups at the Western Electric Laboratories focussed on different technologies, using a competitive development approach to stimulate invention and increase product quality, a concept that had been successful at AT&T previously in transmitter design. One group continued existing work that yielded the Rotary system, while the second group developed a system that was based on linear movement of switch components, which became known as the panel bank. As work continued, many subassemblies were shared, and the two switches only distinguished themselves in the switching mechanisms.
History:
By 1910, the design of the Rotary system had progressed farther, and internal trials employed it at Western Electric as a private branch exchange (PBX). However, by 1912, the company had decided that the panel system showed better promise to solve the large-city problem, and delegated the Rotary system for use in Europe to satisfy the growing demand and competition from other vendors there, under the management and manufacture of the International Western Electric Company in Belgium. After a trial installation as a PBX within Western Electric in 1913, Panel system planning commenced with design and construction of field trial central offices using a semi-mechanical method of switching, in which subscribers still used telephones without a dial, and operators answered calls and keyed the destination telephone number into the panel switch, which then completed the call automatically.
History:
These first panel-type exchanges were placed in service in Newark, New Jersey, on January 16, 1915 at the Mulberry central office serving 3640 subscribers, and on June 12 in the Waverly central office, which had 6480 lines. Panel development continued throughout the rest of the 1910s and in the 1920s in the United States. A third system in Newark (Branch Brook) followed in April 1917 for testing automatic call distribution. The first fully machine-switching Panel systems using common control principles were the Douglas and Tyler exchanges in Omaha, Nebraska, completed in December 1921. Subscribers were issued new telephones with dials that permitted the subscriber to place local calls without operator assistance. This installation was followed by the first installations in the eastern region in the Sherwood and Syracuse-2 central offices in Paterson, New Jersey, in May and July 1922, respectively. The storied Pennsylvania exchange in New York City was cut over to service in October 1922. Most Panel installations were replaced by modern systems during the 1970s. The last Panel switch, located in the Bigelow central office in Newark, was decommissioned by 1983.
Operational overview:
When a subscriber removes the receiver (earpiece) from the hookswitch of a telephone, the local loop circuit to the central office is closed. This causes the flow of current through the loop and a line relay, which causes the relay to operate, starting a selector in the line finder frame to hunt for the terminal of the subscriber's line. Simultaneously, a sender is selected, which provides dial tone to the caller once the line is found. The line finder then operates a cutoff relay, which prevents that telephone from being called, should another subscriber happen to dial the number.
Operational overview:
Dial tone confirms to the subscriber that the system is ready for dialing. Depending on the local numbering system, the sender required either six or seven digits in order to complete the call. As the subscriber dialed, relays in the sender counted and stored the digits for later usage. As soon as the two or three digits of the office code were dialed and stored, the sender performed a lookup against a translator (early-type) or decoder (later-type). The translator or decoder took the two or three digits as input, and returned data to the sender that contained the parameters for connecting to the called central office. After the sender received the data provided by the translator or decoder, the sender used this information to guide the district selector and office selector to the location of the terminals that would connect the caller to the central office where the terminating line was located. The sender also stored and utilized other information pertaining to the electrical requirements for signaling over the newly established connection, and the rate at which the subscriber should be billed, should the call successfully complete.
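The translator or decoder step can be pictured as a table lookup keyed on the dialed office code, decoupling the code from the physical location of the outgoing trunks. The sketch below is a toy model; the office codes and routing parameters are invented for illustration and do not correspond to any real office.

```python
# Toy decoder table: dialed office code -> routing parameters used to drive
# the district and office selectors toward the outgoing trunk group.
# All codes and parameter values here are invented for illustration.
DECODER_TABLE = {
    "722": {"district_brush": 2, "district_group": 4, "office_brush": 1, "office_group": 3},
    "35":  {"district_brush": 1, "district_group": 0, "office_brush": 4, "office_group": 2},
}


def decode(office_code: str) -> dict:
    """Return the selections a sender would use for this office code."""
    try:
        return DECODER_TABLE[office_code]
    except KeyError:
        raise ValueError(f"no route for office code {office_code!r}") from None


print(decode("722"))
```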
Operational overview:
On the district or office selectors themselves, idle outgoing trunks were picked by the "sleeve test" method. After being directed by the sender to the correct group of terminals corresponding to the outgoing trunks to the called office, the selector continued moving upward through a number of terminals, checking for one with an un-grounded sleeve lead, then selecting and grounding it. If all the trunks were busy, the selector hunted to the end of the group, and finally sent back an "all circuits busy" tone. There was no provision for alternate routing as in earlier manual systems and later more sophisticated mechanical ones.
Operational overview:
Once the connection to the terminating office was established, the sender used the last four (or five) digits of the telephone number to reach the called party. It did so by converting the digits into specific locations on the remaining incoming and final frames. After the connection was established all the way to the final frame, the called party's line was tested for busy. If the line was not busy, the incoming selector circuit sent ringing voltage forward to the called party's line and waited for the called party to answer their telephone. If the called party answered, supervision signals were sent backwards through the sender, and to the district frame, which established a talking path between both subscribers, and charged the calling party for the call. At this time, the sender was released, and could be used again in service of an entirely new call. If the called subscriber's line was busy, the final selector sent a busy signal back to the calling party to alert them that the called party was on the phone and could not accept their call.
Telephone numbering:
As in the Strowger system, each central office could address up to 10,000 numbered lines (0000 to 9999), requiring four digits for each subscriber station.
Telephone numbering:
The panel system was designed to connect calls in a local metropolitan calling area. Each office was assigned a two- or three-digit code, called an office code, which indicated to the system the central office in which the desired party was located. Callers dialed the office code followed by the station number. In larger cities, such as New York City, dialing required a three-digit office code, and in less-populated cities, such as Seattle, WA and Omaha, NE, a two-digit code. The remaining digits of the telephone number corresponded to the station number, which pointed to the physical location of the subscriber's telephone on the final frame of the called office. For instance, a telephone number might be listed as PA2-5678, where PA2 (722) is the office code and 5678 is the station number. In areas that served party lines, the system accepted an additional digit for party identification. This allowed the sender to direct the final selector not only to the correct terminal, but to ring the correct subscriber's line on that terminal. The panel system supported individual, 2-party, and 4-party lines.
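As an aside on how such directory listings map to dialed digits, the sketch below applies the conventional letter groupings of a North American dial (a simplification that ignores party-line digits and local variations):

```python
# Letter-to-digit mapping of a North American rotary dial (no Q or Z on old dials).
DIAL_LETTERS = {
    "A": "2", "B": "2", "C": "2",
    "D": "3", "E": "3", "F": "3",
    "G": "4", "H": "4", "I": "4",
    "J": "5", "K": "5", "L": "5",
    "M": "6", "N": "6", "O": "6",
    "P": "7", "R": "7", "S": "7",
    "T": "8", "U": "8", "V": "8",
    "W": "9", "X": "9", "Y": "9",
}


def to_dialed_digits(listed_number: str) -> str:
    """Convert a directory listing such as 'PA2-5678' to the digits actually dialed."""
    digits = []
    for ch in listed_number.upper():
        if ch.isdigit():
            digits.append(ch)
        elif ch in DIAL_LETTERS:
            digits.append(DIAL_LETTERS[ch])
        # punctuation such as '-' is ignored
    return "".join(digits)


assert to_dialed_digits("PA2-5678") == "7225678"
```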
Circuit features:
Similar to the divided-multiple telephone switchboard, the panel system was divided into an originating section and a terminating section. The subscriber's line had two appearances in a local office: one on the originating side, and one on the terminating side. The line circuit consisted of a line relay on the originating side to indicate that a customer had gone off-hook, and a cutoff relay to keep the line relay from interfering with an established connection. The cutoff relay was controlled by a sleeve lead that, as with the multiple switchboard, could be activated by either the originating section or the terminating. On the terminating end, the line circuit was connected to a final selector, which was used in call completion. Thus, when a call was completed to a subscriber, the final selector circuit connected to the desired line, and then performed a sleeve (busy) test. If the line was not busy, the final selector operated the cut-off relay via the sleeve lead, and proceeded to ring the called subscriber.
Circuit features:
Supervision (line signaling) was supplied by a District circuit, similar to the cord circuit that plugged into a line jack on a switchboard. The District circuit supervised the calling party, and when the calling party went on-hook, it released the ground on the sleeve lead, thus releasing all selectors except the final, which returned down to their start position to make ready for further traffic. The final selector circuit was not supervised by the district circuit, and only returned to normal once the called party hung up. Some District frames were equipped with the more complex supervisory and timing circuits required to generate coin collect and return signals for handling calls from payphones.
Circuit features:
Many of the urban and commercial areas where Panel was first used had message rate service rather than flat rate calling. For this reason the line finder had a fourth wire known as the "M" lead. This enabled the District circuit to send metering pulses to control the subscriber's message register. The introduction of direct distance dialing (DDD) in the 1950s required the addition of automatic number identification equipment for centralized automatic message accounting.
Circuit features:
The terminating section of the office was fixed to the structure of the last four digits of the telephone number, and so had a limit of 10,000 phone numbers. In some of the urban areas where Panel was used, even a single square mile might have three to five times that many telephone subscribers. Thus the incoming selectors of several separate switching entities shared floor space and staff, but required separate incoming trunk groups from distant offices. Sometimes an Office Selector Tandem was used to distribute incoming traffic among the offices. This was a Panel office with no senders or other common control equipment, just one stage of selectors, accepting only the Office Brush and Office Group parameters. Panel Sender Tandems were also used when their greater capabilities were worth their additional cost.
Sender:
While the Strowger (step-by-step) switch moved under direct control of dial pulses that came from the telephone dial, the more sophisticated Panel switch had senders, which registered and stored the digits that the customer dialed, and then translated the received digits into numbers appropriate to drive the selectors to their desired position: District Brush, District Group, Office Brush, Office Group, Incoming Brush, Incoming Group, Final Brush, Final Tens, Final Units.
Sender:
The use of senders provided advantages over the previous direct control systems, because they allowed the office code of the telephone number to be decoupled from the actual location on the switching fabric. Thus, an office code (for example, "722") had no direct relationship to the physical layout of the trunks on the district and office frames. By the usage of translation, the trunks could be located arbitrarily on the physical frames themselves, and the decoder or translator could direct the sender to their location as needed. Additionally, because the sender stored the telephone number dialed by the subscriber, and then controlled the selectors itself, there was no need for the subscriber's dial to have a direct-control relationship to the selectors themselves. This allowed the selectors to hunt at their own speed, over large groups of terminals, and allowed for smooth, motor controlled motion, rather than the staccato, momentary motion of the step-by-step system.
Sender:
The sender also provided fault detection. As it was responsible for driving the selectors to their destinations, it was able to detect errors (known as trouble) and alert central office staff of the problem by lighting a lamp at the appropriate panel. In addition to lighting a lamp, the sender held itself and the selectors that were under its control out of service, which prevented their use by other callers. Upon noting the alarm condition, staff could inspect the sender and its associated selectors, and resolve whatever trouble occurred before returning the sender and selectors back to service.
Sender:
When the sender's job was complete, it connected the talk path from the originating to the terminating side, and dropped out of the call. At this time, the sender was available to handle another subscriber's call. In this way, a comparatively small number of senders could handle a large amount of traffic, as each was only used for a short duration during call setup. This principle became known as common control, and was used in all subsequent switching systems.
Signaling and control:
Revertive Pulsing (RP) was the primary signaling method used within and between panel switches. The selectors, once seized by the sender or another selector, would begin moving upwards under motor power. Each terminal the selector passed would send a pulse of ground potential along the circuit, back to the sender. The sender counted each pulse, and when the correct terminal was reached, the sender then signalled the selector to disengage the upward drive clutch and stop on the appropriate terminal as determined by the sender and decoder. The selector then either began its next selection operation, or extended the circuit to the next selector frame. In the case of the final frame, the last selection would result in connection to an individual's phone line and would begin ringing.
Signaling and control:
As the selectors were driven upwards by the motors, brushes attached to the vertical selector rods wiped over commutators at the top of the frame. These commutators contained alternating segments serving as insulators or conductors. When the brush passed over a conductive segment, it was grounded, thereby generating a pulse which was sent back to the sender for counting. When the sender counted the appropriate number of pulses, it cut the power to the solenoid in the terminating office, and caused the brush to stop at its current position.
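Revertive pulsing can be summarized as: the distant selector moves and pulses, the near-end sender counts and says stop. The following is a simplified simulation sketch with invented names, not a circuit-accurate model.

```python
def revertive_select(target_terminal: int, total_terminals: int = 100) -> int:
    """Simulate a sender counting revertive pulses from a rising selector.

    The selector advances one terminal at a time, sending a ground pulse back
    for each terminal passed; the sender counts pulses and 'cuts the clutch'
    when the count matches the terminal it wants.
    """
    count = 0
    for terminal in range(1, total_terminals + 1):
        count += 1                    # one pulse received per terminal passed
        if count == target_terminal:  # sender reaches its count...
            return terminal           # ...and stops the selector here
    raise RuntimeError("ran off the top of the bank without matching the count")


assert revertive_select(37) == 37
```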
Signaling and control:
Calls from one panel office to another worked very similarly to calls within an office by use of revertive pulse signalling. The originating office used the same protocol, but inserted a compensating resistance during pulsing so its sender encountered the same resistance for all trunks. This is in contrast to more modern forms of forward pulsing, where the originating equipment will directly outpulse to the terminating side the information it needs to connect the call.
Compatibility:
Later systems maintained compatibility with revertive pulsing, even as more advanced signaling methods were developed. The Number One Crossbar, which was the first successor to the Panel system, also used this method of signaling exclusively, until later upgrades introduced newer signaling such as multi-frequency signaling.
Compatibility:
Panel was initially installed in cities where many stations still used manual (non-dial) service. For compatibility with manual offices, two types of signaling were supported. In areas with mostly machine switches and only a few manual switchboards, Panel Call Indicator (PCI) signaling transmitted the called number to the "B" Board Machine Incoming operator, which lit lamps on the operator's desk at the terminating manual office. The lamps illuminated digits on a display panel corresponding to the number dialed. The manual operator connected the call to the appropriate jack, and then repeated the process for the next incoming call. In areas with mostly manual switches, the Call Annunciator signaling system was used to avoid installing lamp panels at every operator station. The Call Annunciator used speech recorded on strips of photographic film to verbally announce the called number to the answering operator.
Compatibility:
PCI signaling continued to be used for tandem purposes, decades after its original need had disappeared. In the 1950s, auxiliary senders were added for storing more than eight digits, and sending by multi-frequency (MF) signaling for direct distance dialing (DDD).
Compatibility:
Calls from manual offices to panel offices required the "A" board, or outgoing operator, to request the number from the caller, connect to an idle trunk to the distant exchange, and relay the desired number to the B Board Manual Incoming Call operator, who keyed it to the Panel machine for setting up the incoming and final frames to the called telephone number.
Motor power:
The panel switch is an example of a power drive system, in that it used 1/16 horsepower motors to drive the selectors vertically to hunt for the desired connection, and back down again when the call was completed. In contrast, Strowger or crossbar systems used individual electromagnets for operation, and in their case the power available from an electromagnet limits the maximum size of the switch element it can move. With Panel having no such restriction, its dimensions were determined solely by the needs of the switch, and the design of the exchange. The driving electric motor can be made as large as is necessary to move the switch elements. Thus, most calls required only about half as many stages as in earlier systems. Motors used on panel frames were capable of operating on alternating (AC) or direct current (DC), however they could only be started with DC. In the event of an AC power failure the motor would switch to its DC windings, and continue running until AC power was restored.
Maintenance and testing:
Because of its relative complexity compared to direct control systems, the Panel system incorporated many new types of testing apparatus. At the time of its design, it was decided that maintenance should be done on a preventative basis, and regular testing of the equipment would be used to identify faults before they became severe enough to affect subscribers. To this end, multiple types of test equipment were provided. Test equipment generally took the form of either a wooden, switchboard-like desk, a wheeled cart, known as a "Tea Wagon", or a small box-type test set that could be carried to the apparatus that required testing. The central test location in the office was known as the "OGT Desk", or "Trouble Desk", and took the form of a large wooden desk with lamps, jacks, keys, cords, and a voltmeter. This desk served as the central point for analysis and trouble resolution.
Maintenance and testing:
Other test apparatus included frame-mounted equipment that was used to routine commonly used circuits within the office. These included an automatic routine sender test frame, and an automatic routine selector test frame. When testing was to be done manually by a switchman, he or she used a Tea Wagon, which was wheeled to the apparatus to be tested, and plugged into jacks that were provided for this purpose.
Upgrades:
Throughout its service time, the Panel system was upgraded as new features became available or necessary. Starting in the mid-1920s, such upgrades improved the initial design. Major attention was initially focused on improving the sender. Early two- and three-digit type senders stored dialed digits on rotary selector switches. The senders employed translators to convert the dialed digits into the appropriate brush and group selections needed to complete the call. As better technology became available, Panel senders were upgraded to the all-relay type. These were more reliable, and in addition, replaced the translator equipment with decoders, which also operated entirely with relays, rather than with motor-driven apparatus, which yielded faster call completion, and required less maintenance.
Upgrades:
Another important improvement involved a fundamental change in the electrical logic of the switching system. The Panel originally shipped in a ground cut-off (GCO) configuration, wherein the cut-off relay had ground potential on one side of its winding at all times. A busy line condition was indicated by -48 volt battery applied to the other side of the cut-off relay winding, and thus at the sleeve lead. This would be detected by the final selector as it hunted over the terminals. Starting in 1929, all newer panel systems were deployed as battery cut-off (BCO) systems. In this revision, the presence of ground and -48V was reversed. Battery was constantly applied to one side of the cut-off relay, and the presence of ground on the other side of the winding indicated the line was busy. This change necessitated a fundamental change in the design of the system, and was undertaken for many reasons. One of the most notable was that GCO offices were more prone to fire. The line finder was also improved during the system's lifetime. Originally, the line finder frames had a capacity of 300 lines each, and used 15 brushes (vertical hunting segments) on each rod. This was intended to reduce hunting time as there were more brushes hunting over a shorter distance. As these line finders went into service, however, it became evident that 15 brushes on each vertical selector rod were quite heavy, and needed springs and pulleys at the top of the frame to compensate for their mass. Later line finders used 10 brushes and rearranged the layout to accommodate 400 lines per line finder frame. This increased capacity while eliminating the need for compensating equipment.
Upgrades:
Western Electric estimated that the design changes of 1925 to 1927 accounted for a 60% reduction in overall costs for the Panel system. The following table presents early major panel system upgrades: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Metal theft**
Metal theft:
Metal theft is "the theft of items for the value of their constituent metals". It usually increases when worldwide prices for scrap metal rise, as has happened dramatically due to rapid industrialization in India and China. Apart from precious metals like gold and silver, the metals most commonly stolen are non-ferrous metals such as copper, aluminium, brass, and bronze. However, even cast iron and steel are seeing higher rates of theft due to increased scrap metal prices. One defining characteristic of metal theft is the motivation. Whereas other items are generally stolen for their extrinsic value, items involved in metal theft are stolen for their intrinsic value as raw material or commodities. Thefts often have negative consequences much greater than the value of the metal stolen, such as the destruction of valuable statues, power interruptions, and the disruption of railway traffic.
Items often stolen:
Anything made of metal has value as scrap metal, and can be stolen:
Manhole covers
Copper wiring, or copper pipes from houses or other buildings
Utility company electrical wiring (especially power cables) and transformers
Aluminium or stainless steel beer kegs
Bronze or brass statues, monuments, and commemorative plaques
Catalytic converters from motor vehicles (they contain precious metals)
Air conditioner units
Rails from train tracks
Metal crosses and other ornaments from cemeteries
A 2007 United States Department of Energy study reported that law enforcement believed many copper thefts from electric utilities in warmer urban locations like San Diego, California, and Tampa, Florida, were committed without the use of vehicles by vagrants.
Motivations for theft:
Scrap metal has drastically increased in price over recent years. In 2001, ferrous scrap sold for $77 a ton, increasing to $300 per ton by 2004. In 2008, it hit nearly $500 per ton. Some elected officials and law enforcement officials have concluded that many metal thefts are by drug addicts stealing metal in order to fund their addictions. Some officials believe that many of these drug-related metal thefts are caused by methamphetamine users; however, this varies by the location of the metal being stolen. Another explanation for the phenomenon is the unusually high price of non-ferrous metals coupled with elevated levels of unemployment. Regardless of the reason, the industrialization of developing nations helps to increase the demand for scrap metal. In the fourth quarter of 2008, world market prices for metals like copper, aluminium, and platinum dropped steeply. Although there is anecdotal evidence that this price decrease has led to fewer metal thefts, strong empirical research on the exact nature of the relationship between commodity prices and metal thefts is still lacking. Some have argued that the "genie is out of the bottle" now and drops in commodity prices will not result in corresponding drops in thefts. In fact, it is possible that thefts may actually increase to compensate for the loss in value. As of December 2014, according to the National Insurance Crime Bureau, the number of insurance claims for metal theft has been decreasing in the U.S., possibly because of dropping scrap metal prices.
Economic impact:
As of 2014, in the United States alone, metal theft costs the economy $1 billion annually, according to Department of Energy estimates. As of 2008, it was estimated that South Africa lost approximately 5 billion Rand annually due to metal theft. As of 2008, metal theft was the fastest growing crime in the UK, with the annual damage to industry estimated at £360m. Thieves often cause damage far in excess of the value they recover by selling stolen metal as scrap. For example, thieves who strip copper plumbing and electrical wiring from houses render the residences uninhabitable without expensive, time-consuming repairs.
Prevention:
Requiring scrap metal buyers to record the photo IDs of scrap metal sellers, and record scrap metal transactions may reduce the rate of metal theft. Paying scrap metal sellers by check rather than cash may reduce the rate of metal theft, and leaves records that can be investigated by police. Scrap merchants may refuse to accept certain commonly-stolen items, such as manhole covers, street signs, air-conditioning units, and railroad track components, unless the seller can prove legitimate ownership. Restrictions on some items have also been codified into law. Utility companies who are often the targets of metal theft can electroplate coding on to copper wire, which can positively identify the wire as stolen even if the insulation is burned off.
Notable metal thefts and law enforcement efforts by country:
Australia In Australia in 2008, 8 tonnes of copper wiring, believed to be stolen from a variety of locations including rail tracks, power stations and scrap metal depots, was seized on its way to the Asian black market.
Notable metal thefts and law enforcement efforts by country:
Austria In November 2011 a person tried to hand-saw a hot electrical line in a subway tunnel in Vienna. A fire arose; train traffic was stopped. The thief was probably hurt. In November 2015 a man burnt to death in Vienna in an empty building, one which had a 100 kV cable that went through the basement. The police found three people alive and assumed that they had been attempting to steal copper.
Notable metal thefts and law enforcement efforts by country:
In May 2013 the Westbahn near Amstetten had to be closed for safety reasons after grounding copper wires had been stolen. The stolen copper was worth €2,000, but total damage to the station cost upwards of €30,000.
In July 2013 in Lower Austria 160 metres (520 ft) (250 kg, 550 lb) of copper wire worth less than €1,000 was stolen from a railway transformer station. The damage to the railways electronics cost €140,000.
In May 2016, police caught several people who had stolen several tons of copper wire from a substation and caused €400,000 worth of damage in Lower Austria.
Notable metal thefts and law enforcement efforts by country:
Canada In Quebec, during May 2006, thieves stole sections of copper roofing, gutters and wiring from four Quebec City churches, two being St. Charles de Limoilou and St. Francois d'Assise. The thieves were discovered in action on their third night, whereupon they fled. High copper prices are believed to be the reason for the thefts. Repairs were expected to cost more than $40,000. In October 2010, a 300-pound (140 kg) bronze bell was stolen in Shelburne County, Nova Scotia. Thieves removed the bell from a monument in Roseway Cemetery. The bell was part of the Roseway United Memorial Church, built in 1912, until it was demolished in 1993. It was recovered in a Halifax-area scrapyard on October 6, 2010. In September 2011, Peterborough, Ontario, experienced a four-hour power outage north of the city when thieves stole power transmission wires.
Notable metal thefts and law enforcement efforts by country:
Czech Republic 327 bronze markers were stolen from the Theresienstadt concentration camp cemetery in mid-April 2008, with 700 more stolen the next week. A scrap metal dealer was arrested on April 18, 2008. He intended to melt them down for their copper. A ten-tonne railway bridge and 200 meters (660 ft) of railway trackage from the town of Horní Slavkov in the Karlovy Vary Region were dismantled and removed by a gang of thieves who presented forged papers saying that the bridge had been condemned. The bridge was erected in 1901.
Notable metal thefts and law enforcement efforts by country:
France The French railway network company RFF face regular thefts of metal that affect the operation of the trains.
Germany In February 2006, near the German city of Weimar, thieves dismantled and carted away some 5 km (3 mi) of disused rail track, causing at least 200,000 euros worth of damage.
In June 2012, a badly burned man was found alive on the side of a road in Wilhelmsburg. The man was believed to be part of a group of suspects stealing overhead contact wire from a nearby railway. Three km (two mi) of copper cable had been torn down before the accident occurred and the theft was aborted.
In April 2016, a cast bronze owl was stolen from the grave of a two-year-old child in Rommerkirchen. The €800 sculpture far exceeds the €20 scrap value.
Haiti In Haiti, after the 2010 Haiti earthquake, some looters were reported to be removing rebar from the concrete of collapsed buildings in order to sell it. Others hacked up downed power lines.
India In the city of Kolkata, India, more than 10,000 manhole covers were taken in two months. These were replaced with concrete covers, but these were also stolen, this time for the rebar inside them.
Notable metal thefts and law enforcement efforts by country:
Indonesia In 2016, the sewers in Medan Merdeka avenue near the National Monument, Central Jakarta, were discovered to have been clogged with 10 truckloads of rubber-PVC cable jacket, causing flooding in the area. It was then discovered that the cables belonged to PT Telkom or PT PLN, the state-owned telecommunication and electricity providers. The cable jackets were left clogging the sewer, while the metal thieves stole the inner copper wires. Several war graves in the Java Sea were discovered to have been removed by metal scavengers. The wrecks of HMS Exeter, HMS Encounter, and USS Perch had been totally removed. A sizable portion of HMS Electra was also scavenged. The wrecks of HNLMS De Ruyter, HNLMS Java, and HNLMS Kortenaer were also missing.
Notable metal thefts and law enforcement efforts by country:
Netherlands On 11 January 2011, the theft of 300 meters (980 ft) of copper cable caused an ICE train to derail near the Dutch city of Zevenaar. Nobody was harmed.
Russia In 2001, thieves in Khabarovsk Krai stole electric and telephone lines leading to military bases there. A small bridge was stolen in Russia in 2007, when a man chopped up its 5-meter span and hauled it away.
Notable metal thefts and law enforcement efforts by country:
South Africa Metal theft in South Africa is rampant, with an estimated R5 billion per annum lost due to the theft. The stolen metal ranges from copper cables, piping and bolts to manhole covers. The theft continuously disrupts and degrades services, such as the power supply provided by Eskom and the telecommunication services provided by Telkom. Eskom estimated that the theft has cost the company about R25 million per annum, with incidents increasing from 446 in 2005 to 1,059 in 2007 and 1,914 in 2008. The theft has cost Telkom R863 million (April 2007 – January 2008 period). Despite South Africa's minimal copper reserves, as much as 3,000 tonnes of copper leave Cape Town harbour every month. Aside from the economic impact, the theft has also affected people's lives, including the deaths of six children due to the theft of manhole covers (2004–2008 figures). The theft of copper cables is a serious problem in Gauteng.
Notable metal thefts and law enforcement efforts by country:
Ukraine In February 2004, thieves in western Ukraine dismantled and stole an 11 m (36 ft) long, one-tonne steel bridge that spanned the river Svalyavka. In September 2009, smugglers attempted to make off with 25 tons of radioactive scrap metal from the Chernobyl Exclusion Zone. The Security Service of Ukraine caught them.
Notable metal thefts and law enforcement efforts by country:
United Kingdom Significant rises in metal theft were observed during 2006–2007 in the UK, especially in North East England, where metal theft was still on the rise as of 2008. In the UK, the British Metals Recycling Association is working with authorities such as the Association of Chief Police Officers and the British Transport Police to halt the problem of metal being stolen from its members' sites and to identify stolen materials. Also see Operation Tremor.
Notable metal thefts and law enforcement efforts by country:
Roofs, manhole covers, statues etc. have all been increasingly targeted recently due to the rising cost of metal. Most of the time metal is sold for scrap, but occasionally it is used by the thieves themselves. There have been many stories of metal theft; a bronze statue of former Olympic champion Steve Ovett disappeared from Preston Park in Brighton and church bells in Devon were stolen by thieves. A statue made by Henry Moore and estimated to be worth £300,000 was stolen from a museum in 2006, and believed to have been melted down for its scrap value of around £5,000. Churches, especially older churches, suffer as 'lead theft' from church (and other) roofs is on the rise. In late 2011 the police began a number of crackdowns on metal theft, the largest in South Yorkshire resulting in at least 22 arrests and the seizure of amateur smelting equipment. In August 2012, thieves stole 26 metal cages from an animal hospital in Kibworth, Leicestershire. Cages containing sick or injured animals were emptied by the thieves, leading to the death of eight animals and the escape of several others. The cages were worth about £30,000. Theft of copper cable by the side of railway tracks has also become an increasing problem. Railway signal control cables are a common target, leading to serious safety issues and significant disruption for rail traffic. Theft of cables used for railway electrification is extremely dangerous to the perpetrator as well as bystanders, as these systems are routinely energised to tens of thousands of volts.
Notable metal thefts and law enforcement efforts by country:
United States In Boston during the summer of 2008, two state employees stole 2,347 feet (715 m) of decorative iron trim that had been removed from the Longfellow Bridge for refurbishment and sold it for scrap. The men, one of whom was a Department of Conservation and Recreation district manager, were charged with receiving $12,147 for the historic original parapet coping. The estimated cost to remake the pieces, scheduled for replication by 2012, was over $500,000. The men were later convicted in September 2009. In New Castle, Pennsylvania, two brothers dismantled a 40-by-15-foot (12.2 by 4.6 m) bridge by using a cutting torch to take it apart. Between September 16 and September 28, 2011, the brothers stole the entire bridge and then sold the steel for $5,000. Cities across the United States have become targets for metal thieves. Manhole cover thefts increased dramatically between 2007 and 2008, with Philadelphia as one of the hardest hit targets. Other cities dealing with this trend include Chicago, Illinois; Greensboro, North Carolina; Long Beach, California; and Palm Beach County, Florida. Copper wire thefts have also become increasingly common in the US. With copper prices at $3.70 a pound as of June 2007, compared to $0.60 a pound in 2002, people have been increasingly stealing copper wire from telephone and power company assets. People have even been injured and killed in power plants while trying to obtain copper wire. Other sources of stolen copper include railroad signal lines, grounding bars at electric substations, and even a 3,000-pound (1,400 kg) bell stolen from a Buddhist temple in Tacoma, Washington, which was later recovered. For example, Georgia, like many other states, has seen enough copper crime that a special task force has been created to fight it. The Metro Atlanta Copper Task Force is led by the Atlanta Police Department and involves police and recyclers from surrounding metro areas, Georgia Power, and the Fulton County District Attorney's office. Many states around the nation have passed – or are exploring – legislation to combat the problem. A new Georgia law took effect in July 2007 making it a crime to knowingly buy stolen metal. It allows prosecutors to prosecute for the actual cost of returning property to original conditions, as many of these thefts dramatically hurt the surrounding property value. On September 1, 2007, Earl Thelander of Onawa, Iowa, became the United States' first innocent copper theft fatality. Thelander sustained second- and third-degree burns over 80% of his body during an August 28, 2007, explosion, after copper thieves stripped propane and water lines from a rural residence and let the home fill with gas. Thelander, who, along with his wife, was preparing the empty home for a new tenant, reported the burglary to the Monona County Sheriff's Office, who investigated the initial crime. Hours after local law enforcement sent the Thelanders home, Thelander returned to the home to see if officials had cleared the home for entry. With no law enforcement nor fire department personnel present, he entered the home, and, smelling no fumes, felt it safe to work. In the basement, he plugged in a fan to help dry water on the basement floor, the electricity sparking an explosion.
Notable metal thefts and law enforcement efforts by country:
In response to the growing concerns and the lack of hard numbers on these crimes in Indianapolis, the Indianapolis Metropolitan Police Department (IMPD) and the University of Indianapolis Community Research Center (CRC) began in 2008 a collaborative effort to collect data on metal thefts. The Indianapolis Metal Theft Project gathers and analyzes a wide variety of data to provide a clearer understanding of the incidence, types, costs, and impacts of metal theft in Indianapolis in order to inform and implement strategies to reduce these crimes and their impacts.
Notable metal thefts and law enforcement efforts by country:
The Department of Justice's Office of Community-Oriented Policing and the Center for Problem-Oriented Policing published its 58th problem-solving guide in 2010 directed towards theft of scrap metal. Brandon Kooi provides a review of the problem in the US and internationally, followed by a number of suggested responses and what to consider in those responses. The Institute of Scrap Recycling Industries is one of the groups backing these educational efforts throughout the country. As the nation's trade association for the scrap recycling industry, ISRI provides members and community leaders with resources that they can use when facing the issue. They have also teamed with the National Crime Prevention Council (known for McGruff the Crime Dog and the "Take a Bite Out of Crime" slogan) in an effort to team with law enforcement and crime prevention organizations to fight and solve this problem, and have established a theft alert system that these groups can use. ISRI and the National Crime Prevention Council offer a number of tips for how to fight and prevent metal theft, including requiring photo ID and license plate information for every transaction, training employees on identifying stolen goods, and keeping good records that might be useful later.
Notable metal thefts and law enforcement efforts by country:
Venezuela In the 2010s, during the crisis in Venezuela, metal theft in Venezuela increased, including the smuggling of metals such as bronze, aluminum and copper. Several groups responsible for the thefts, made up of "experts" in wiring who hired neighbors to carry out the thefts, were identified nationwide. The groups have targeted mostly electrical contractor firms, but also electrical cable from public and private infrastructure, including schools, universities, health centers, charcoal briquette factories, traffic lights, light poles, and in some cases individual homes. The thieves would sell scrap to intermediaries, which in turn would sell the scrap to legal smelters and manufacturers in Venezuela or smuggle it illegally across the border. By 2017, in the Colombian frontier city of Cúcuta, a kilo of copper could be sold for a little over $1, an important income at a time when the minimum wage in Venezuela was $5. Metal and cable theft in the country has left several neighborhoods and universities nationwide without electricity, internet or telephone service, and has led to the deterioration of utilities and infrastructure throughout Venezuela. By 2017, Venezuelan police forces had arrested over 100 people in different operations against metal theft and confiscated 7.5 tons of copper pipes. The copper originated mostly from the capital Caracas and the neighboring states of Aragua and Carabobo, and was destined for other countries in the region. Part of it was found on a ship heading from the Falcón coastal state to the Caribbean. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Steeping**
Steeping:
Steeping is the soaking of an organic solid, such as leaves, in a liquid (usually water) to extract flavours or to soften it. The specific process of teas being prepared for drinking by leaving the leaves in heated water to release the flavour and nutrients is known as steeping. Herbal teas may be prepared by decoction, infusion, or maceration. Some solids are soaked to remove an ingredient, such as salt, where the solute is not the desired product.
Corn:
One example is the steeping of corn (or maize), part of the milling process. As described by the US Corn Refiners Association, harvested kernels of corn are cleaned and then steeped in water at a temperature of 50 °C (120 °F) for 30 to 40 hours. In the process their moisture content rises from 15% to 45% and their volume more than doubles. The gluten bonds in the corn are weakened and starch is released. The corn is then ground to break free the germ and other components, and the water used (steepwater), which has absorbed various nutrients, is recycled for use in animal feeds.
Oaknuts:
Acorns are an edible nut that should not be eaten raw by humans due to toxic levels of tannins. These can be leached out by steeping the nuts in hot water (or cold water, over the course of several months).
Tea:
Dried tea, as loose leaf or in tea bags, is prepared for drinking by steeping the leaves in just-boiled or heated water to release the flavour and nutrients of the dried tea into the water. This is often done in a cup, mug, teapot, pitcher or urn. A tea infuser or a tea strainer may be used to assist in this process. There is a huge variety of teas available on the market, with broad categories such as oolong, green, black and white, and other specialized ones catering to particular regions such as Assam and Darjeeling. Each tea should be prepared according to its type and quality.
Bones:
When bones are soaked in boiling water, their calcium may be extracted. The resulting liquid is known as bone broth. The bones themselves are much softer after this process.
Beer:
Steeping grain is a part of making beer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flag algebra**
Flag algebra:
Flag algebras are an important computational tool in the field of graph theory which have a wide range of applications in homomorphism density and related topics. Roughly, they formalize the notion of adding and multiplying homomorphism densities and set up a framework to solve graph homomorphism inequalities with computers by reducing them to semidefinite programming problems. Originally introduced by Alexander Razborov in a 2007 paper, the method has since been used to solve numerous difficult, previously unresolved graph-theoretic questions. These include the $D_{2,3}$ question regarding the region of feasible (edge density, triangle density) pairs, and the maximum number of pentagons in triangle-free graphs.
Motivation:
The motivation of the theory of flag algebras is credited to John Adrian Bondy and his work on the Caccetta–Häggkvist conjecture, where he illustrated his main ideas via a graph-homomorphism-flavored proof of Mantel's theorem. This proof is an adaptation of the traditional proof of Mantel's theorem via double counting, except phrased in terms of graph homomorphism densities, and it shows how much information can be encoded with just density relationships.
Motivation:
Theorem (Mantel): The edge density in a triangle-free graph $G$ is at most $\tfrac{1}{2}$. In other words, $\operatorname{ex}(K_2,\{K_3\})\le\tfrac{1}{2}$.
As the graph is triangle-free, any 3 vertices in $G$ either form an independent set, a single induced edge $\tilde{P_3}$, or a path of length 2, $P_3$. Denoting by $d(H,G)$ the induced density of a subgraph $H$ in $G$, double counting gives: $\binom{n}{3}\left(d(\tilde{P_3},G)+2d(P_3,G)\right)=(n-2)\binom{n}{2}d(K_2,G)\implies d(\tilde{P_3},G)+2d(P_3,G)=3d(K_2,G)$.
Motivation:
Intuitively, $d(P_3,G)\approx 3\,d(K_2,G)^2$, since a $P_3$ just consists of two $K_2$'s connected together, and there are 3 ways to label the common vertex among a set of 3 points. In fact, this can be rigorously proven by double counting the number of induced $P_3$'s. Letting $|G|$ denote the number of vertices of $G$, we have: $d(P_3,G)=\binom{|G|}{3}^{-1}\sum_{v\in V(G)}\binom{d(v)}{2}=\binom{|G|}{3}^{-1}\binom{|G|-1}{2}\sum_{v\in V(G)}d(P_3^{b},G^{v})=\frac{3}{|G|}\sum_{v\in V(G)}d(P_3^{b},G^{v})$, where $P_3^{b}$ is the path of length 2 with its middle vertex labeled, and $d(P_3^{b},G^{v})$ represents the density of $P_3^{b}$'s subject to the constraint that the labeled vertex is used, so that $P_3^{b}$ is counted as an induced subgraph only when its labeled vertex coincides with $v$. Now, note that $d(P_3^{b},G^{v})\approx d(K_2^{b},G^{v})^{2}$, since the probability of choosing two $K_2^{b}$'s whose unlabeled vertices coincide is small (to be rigorous, a limit as $|G|\to\infty$ should be taken, so $d(H,G)$ acts as a limit function on a sequence of larger and larger graphs $G$; this idea will be important for the actual definition of flag algebras). To finish, apply the Cauchy–Schwarz inequality to get $\sum_{v\in V(G)}d(P_3^{b},G^{v})\approx\sum_{v\in V(G)}d(K_2^{b},G^{v})^{2}\ge\frac{1}{|G|}\left(\sum_{v\in V(G)}d(K_2^{b},G^{v})\right)^{2}=\frac{1}{|G|}\left(\frac{1}{|G|-1}\binom{|G|}{2}\cdot 2\,d(K_2,G)\right)^{2}=|G|\,d(K_2,G)^{2}$.
Motivation:
Plugging this back into our original relation proves what was hypothesized intuitively. Finally, note that $d(\tilde{P_3},G)\ge 0$, so $6\,d(K_2,G)^{2}\le 2\,d(P_3,G)\le 3\,d(K_2,G)\implies d(K_2,G)\le\tfrac{1}{2}$.
The important ideas from this proof, which will be generalized in the theory of flag algebras, are substitutions such as $d(P_3,G)\to 3\,d(K_2,G)^{2}$, the use of labeled graph densities, considering only the "limit case" of the densities, and applying Cauchy–Schwarz at the end to get a meaningful result. A small computational check of the density bookkeeping is sketched below.
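The following minimal sketch (not part of the original argument; it assumes plain Python and uses hypothetical helper names) enumerates all 3-vertex induced subgraphs of a small triangle-free graph and checks the double-counting identity $d(\tilde{P_3},G)+2d(P_3,G)=3d(K_2,G)$ on the complete bipartite graph $K_{2,2}$.

```python
from itertools import combinations

def three_vertex_densities(vertices, edges):
    """Return (d(empty), d(single edge), d(P3), d(K2)) for a triangle-free graph,
    classifying each 3-vertex induced subgraph by its number of edges."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = [0, 0, 0, 0]  # index = number of edges inside the triple
    for triple in combinations(vertices, 3):
        m = sum(1 for a, b in combinations(triple, 2) if b in adj[a])
        counts[m] += 1
    total = sum(counts)
    n = len(vertices)
    d_k2 = len(edges) / (n * (n - 1) / 2)
    return counts[0] / total, counts[1] / total, counts[2] / total, d_k2

# K_{2,2}: triangle-free, edge density 2/3, every vertex triple induces a path P3.
d_empty, d_edge, d_p3, d_k2 = three_vertex_densities(
    [0, 1, 2, 3], [(0, 2), (0, 3), (1, 2), (1, 3)])
# The double-counting identity from the proof above (exact for triangle-free graphs).
assert abs((d_edge + 2 * d_p3) - 3 * d_k2) < 1e-9
print(d_empty, d_edge, d_p3, d_k2)  # 0.0 0.0 1.0 0.666...
```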
Definition:
Fix a collection of forbidden subgraphs $\mathcal{H}$ and let $\mathcal{G}$ be the set of $\mathcal{H}$-free graphs. Now, define a type of size $k$ to be a graph $\sigma\in\mathcal{G}$ with labeled vertices $V(\sigma)=[k]$. The type of size 0 is typically denoted $\emptyset$. First, we define a $\sigma$-flag, a partially labeled graph which will be crucial for the theory of flag algebras: Definition: A $\sigma$-flag is a pair $(F,\theta)$ where $F\in\mathcal{G}$ is an underlying, unlabeled, $\mathcal{H}$-free graph, while $\theta:[k]\to V(F)$ defines a labeled embedding of $\sigma$ onto vertices of $F$. Denote the set of $\sigma$-flags by $\mathcal{F}^{\sigma}$ and the set of $\sigma$-flags of size $n$ by $\mathcal{F}^{\sigma}_{n}$. As an example, $P_3^{b}$ from the proof of Mantel's theorem above is a $\sigma$-flag where $\sigma$ is the type of size 1, corresponding to a single labeled vertex.
Definition:
For $\sigma$-flags $F_1,F_2,\ldots,F_t$ and $(G,\theta)$ satisfying $|G|-|\sigma|\ge\sum_{i=1}^{t}(|F_i|-|\sigma|)$, we can define the density of the $\sigma$-flags in the underlying graph $G$ in the following way: Definition: The density $p(F_1,F_2,\ldots,F_t;G)$ of the $\sigma$-flags $F_1,\ldots,F_t$ in $G$ is defined to be the probability of successfully randomly embedding $F_1,\ldots,F_t$ into $V(G)$ such that they are non-intersecting on $V(G)\setminus\operatorname{im}(\theta)$ and are all labeled in exactly the same way as $G$ on $\operatorname{im}(\theta)$. More precisely, choose pairwise disjoint sets $U_1,U_2,\ldots,U_t\subseteq V(G)\setminus\operatorname{im}(\theta)$ with $|U_i|=|F_i|-|\sigma|$ at random and define $p(F_1,\ldots,F_t;G)$ to be the probability that the $\sigma$-flag $(G[U_i\cup\operatorname{im}(\theta)],\theta)$ is isomorphic to $F_i$ for all $i\in[t]$. Note that, when embedding $F$ into $G$, where $F,G$ are $\sigma$-flags, this can be done by first embedding $F$ into a $\sigma$-flag $F'$ of size $n\in[|F|,|G|]$ and then embedding $F'$ into $G$, which gives the formula $p(F;G)=\sum_{F'\in\mathcal{F}^{\sigma}_{n}}p(F;F')\,p(F';G)$. Extending this to sets of $\sigma$-flags gives the Chain Rule: Theorem (Chain Rule): If $F_1,F_2,\ldots,F_t,G$ are $\sigma$-flags and $s,n$ are naturals such that $F_1,\ldots,F_t$ fit in $G$, $F_1,\ldots,F_s$ fit in a $\sigma$-flag of size $n$, and a $\sigma$-flag of size $n$ combined with $F_{s+1},\ldots,F_t$ fits in $G$, then $p(F_1,F_2,\ldots,F_t;G)=\sum_{F\in\mathcal{F}^{\sigma}_{n}}p(F_1,\ldots,F_s;F)\,p(F,F_{s+1},\ldots,F_t;G)$. Recall that the previous proof of Mantel's theorem involved linear combinations of terms of the form $d(H,G)$. The relevant ideas were slightly imprecise in letting $|G|$ tend to infinity; explicitly, there is a sequence $G_1,G_2,\ldots$ such that $d(H,G_i)$ converges to some $\phi(H)$ for all $H$, where $\phi$ is called a limit functional. Thus, all references to $d(H,G)$ really refer to the limit functional. Now, graph homomorphism inequalities can be written as linear combinations of $\phi$ evaluated at different $H$'s, but it would be convenient to express them as a single term. This motivates defining $\mathbb{R}\mathcal{F}^{\sigma}$, the set of formal linear combinations of $\sigma$-flags over $\mathbb{R}$, and now $\phi$ can be extended to a linear function over $\mathbb{R}\mathcal{F}^{\sigma}$. However, using the full space $\mathbb{R}\mathcal{F}^{\sigma}$ is wasteful when investigating just limit functionals, since there exist nontrivial relations between densities of certain $\sigma$-flags. In particular, the Chain Rule shows that expressions of the form $F-\sum_{F'\in\mathcal{F}^{\sigma}_{n}}p(F;F')\,F'$ always lie in $\ker\phi$. Rather than dealing with all of these elements of the kernel, denote the set of expressions of the above form (i.e. those obtained from the Chain Rule with a single $\sigma$-flag) by $\mathcal{K}^{\sigma}$ and quotient them out in the final analysis. These ideas combine to form the definition of a flag algebra: Definition (Flag Algebras): A flag algebra is defined on the space of linear combinations of $\sigma$-flags $\mathcal{A}^{\sigma}=\mathbb{R}\mathcal{F}^{\sigma}/\mathcal{K}^{\sigma}$, equipped with the bilinear operator $F\cdot G=\sum_{H\in\mathcal{F}^{\sigma}_{n}}p(F,G;H)\,H$ for $F,G\in\mathcal{F}^{\sigma}$ and any natural $n$ such that $F,G$ fit in a $\sigma$-flag of size $n$, extending the operator linearly to $\mathbb{R}\mathcal{F}^{\sigma}$. It remains to check that the choice of $n$ does not matter for a pair $F,G$ provided it is large enough (this can be proven with the Chain Rule), and that if $f\in\mathcal{K}^{\sigma}$ then $f\cdot g\in\mathcal{K}^{\sigma}$, meaning that the operator respects the quotient and thus forms a well-defined algebra on the desired space.
Definition:
One important result of this definition of the operator is that multiplication is respected by limit functionals. In particular, for a limit functional $\phi$, the identity $\phi(f\cdot g)=\phi(f)\,\phi(g)$ holds true. For example, it was shown that $\phi(P_3^{b})=\phi(K_2^{b})^{2}$ in the proof of Mantel's theorem, and this result is just a corollary of this statement. More generally, the fact that $\phi$ is multiplicative means that all limit functionals are algebra homomorphisms between $\mathcal{A}^{\sigma}$ and $\mathbb{R}$.
The downward operator:
The definition above provides a framework for dealing with σ -flags, which are partially labeled graphs. However, most of the time, unlabeled graphs, or ∅ -flags, are of greatest interest. To get from the former to the latter, define the downward operator.
The downward operator:
The downward operator is defined in the most natural way: given a $\sigma$-flag $F$, let $\downarrow\!F$ be the $\emptyset$-flag resulting from forgetting the labels assigned to $\sigma$. Now, to define a natural mapping between $\sigma$-flags and unlabeled graphs, let $q_{\sigma}(F)$ be the probability that an injective map $\theta:[k]\to V(F)$ taken at random has image isomorphic to $\sigma$, and define $[\![F]\!]_{\sigma}=q_{\sigma}(F)\downarrow\!F$. Extending $[\![\cdot]\!]_{\sigma}$ linearly to $\mathbb{R}\mathcal{F}^{\sigma}$ gives a valid linear map which sends combinations of $\sigma$-flags to combinations of unlabeled ones.
The downward operator:
The most important result regarding $[\![\cdot]\!]_{\sigma}$ is its averaging property. In particular, fix a $\sigma$-flag $F$ and an unlabeled graph $G$ with $|G|\ge|F|$; then choosing an embedding $\theta$ of $\sigma$ into $G$ at random defines a random variable $p(F;(G,\theta))$. It can be shown that $\mathbb{E}\left[p(F;(G,\theta))\right]=\frac{q_{\sigma}(F)\,p(\downarrow\!F;G)}{q_{\sigma}(\sigma)\,p(\downarrow\!\sigma;G)}$.
Optimization with flag algebras:
All limit functionals $\phi$ are algebra homomorphisms $\phi:\mathcal{A}^{\sigma}\to\mathbb{R}$. Furthermore, by definition, $\phi(F)\ge 0$ for any $\sigma$-flag $F$, since $\phi(F)$ represents a density limit. Thus, say that a homomorphism $\psi\in\operatorname{Hom}(\mathcal{A}^{\sigma},\mathbb{R})$ is positive if and only if $\psi(F)\ge 0$ for all $F\in\mathcal{F}^{\sigma}$, and let $\operatorname{Hom}^{+}(\mathcal{A}^{\sigma},\mathbb{R})$ be the set of positive homomorphisms. One can show that the set of limit functionals $\Phi$ is exactly the set of positive homomorphisms $\operatorname{Hom}^{+}(\mathcal{A}^{\sigma},\mathbb{R})$, so it suffices to understand the latter.
Optimization with flag algebras:
In order for a linear combination $f\in\mathcal{A}^{\sigma}$ to yield a valid graph homomorphism inequality, it needs to be nonnegative under all possible limit functionals, which will then imply that the inequality is true for all graphs. With this in mind, define the semantic cone of type $\sigma$ to be the set $\mathcal{S}^{\sigma}=\{f\in\mathcal{A}^{\sigma}:\phi(f)\ge 0\ \text{for all}\ \phi\in\operatorname{Hom}^{+}(\mathcal{A}^{\sigma},\mathbb{R})\}$.
Optimization with flag algebras:
Once again, $\sigma=\emptyset$ is the case of most interest, which corresponds to the case of unlabeled graphs. However, the downward operator has the property of mapping $\mathcal{S}^{\sigma}$ to $\mathcal{S}^{\emptyset}$: it can be shown that the image of $\mathcal{S}^{\sigma}$ under $[\![\cdot]\!]_{\sigma}$ is a subset of $\mathcal{S}^{\emptyset}$, meaning that any results on the type-$\sigma$ semantic cone readily generalize to unlabeled graphs as well.
Optimization with flag algebras:
Just by naively manipulating elements of $\mathcal{A}^{\sigma}$, numerous elements of the semantic cone $\mathcal{S}^{\sigma}$ can be generated. For example, since elements of $\operatorname{Hom}^{+}(\mathcal{A}^{\sigma},\mathbb{R})$ are nonnegative on $\sigma$-flags, any conical combination of elements of $\mathcal{F}^{\sigma}$ will yield an element of $\mathcal{S}^{\sigma}$. Perhaps more non-trivially, any conical combination of squares of elements of $\mathcal{A}^{\sigma}$ will also yield an element of the semantic cone.
Optimization with flag algebras:
Though one can find squares of flags which sum to nontrivial results by hand, it is often simpler to automate the process. In particular, it is possible to adapt the ideas of sum-of-squares optimization for polynomials to flag algebras. Define the degree of a vector $f\in\mathbb{R}\mathcal{F}^{\sigma}$ to be the size of the largest flag with nonzero coefficient in the expansion of $f$, and let the degree of $f^{*}\in\mathcal{A}^{\sigma}$ be the minimum degree of a vector over all choices in $f^{*}+\mathcal{K}^{\sigma}$. Also, define $v_{\sigma,n}:\mathcal{F}^{\sigma}_{n}\to\mathcal{A}^{\sigma}$ as the canonical embedding sending $F$ to itself for all $F\in\mathcal{F}^{\sigma}_{n}$, viewed as a vector indexed by $\mathcal{F}^{\sigma}_{n}$. These definitions give rise to the following flag-algebra analogue: Theorem: Given $f\in\mathcal{A}^{\sigma}$ and $n\ge|\sigma|$, there exist $g_1,\ldots,g_t\in\mathcal{A}^{\sigma}$ of degree at most $n$, for some $t\ge 1$, with $f=g_1^{2}+\cdots+g_t^{2}$ if and only if there is a positive semidefinite matrix $Q:\mathcal{F}^{\sigma}_{n}\times\mathcal{F}^{\sigma}_{n}\to\mathbb{R}$ such that $f=v_{\sigma,n}^{\top}Q\,v_{\sigma,n}$. With this theorem, graph homomorphism problems can now be relaxed into semidefinite programming problems which can be solved by computer. For example, Mantel's theorem can be rephrased as finding the smallest $\lambda\in\mathbb{R}$ such that $\lambda\emptyset-K_2\in\mathcal{S}^{\emptyset}$. As $\mathcal{S}^{\emptyset}$ is poorly understood, it is difficult to make progress on the question in this form, but note that conic combinations of $\emptyset$-flags and squares of vectors lie in $\mathcal{S}^{\emptyset}$, so instead take a semidefinite relaxation. In particular, minimize $\lambda$ under the constraint that $\lambda\emptyset-K_2=r+[\![v_{1,2}^{\top}Q\,v_{1,2}]\!]$, where $r$ is a conic combination of $\emptyset$-flags and $Q$ is positive semidefinite. This new optimization problem can be transformed into a semidefinite-programming problem which is then solvable with standard algorithms; a hand-worked instance is sketched below.
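To make the relaxation concrete, the sketch below (not taken from the source) encodes this Mantel instance as a small SDP in Python with cvxpy, assuming an SDP-capable solver is installed. The type is a single labeled vertex, $v_{1,2}$ consists of the two flags "labeled vertex joined to one other vertex" and "labeled vertex not joined", and the numerical coefficients come from expanding their pairwise products into the three triangle-free graphs on three vertices and applying the averaging operator; they were worked out by hand for this illustration and should be re-derived before being relied on.

```python
import cvxpy as cp

lam = cp.Variable()
Q = cp.Variable((2, 2), symmetric=True)  # indexed by the flags (edge, non-edge)

# Coefficients of the empty graph, the single edge and the path P3 in
# lam*1 - K2 - [[v^T Q v]]; each must be nonnegative (the conic term r).
constraints = [
    Q >> 0,
    lam - Q[1, 1] >= 0,                                   # empty graph on 3 vertices
    lam - 1/3 - (2/3) * Q[0, 1] - (1/3) * Q[1, 1] >= 0,   # single edge
    lam - 2/3 - (1/3) * Q[0, 0] - (2/3) * Q[0, 1] >= 0,   # path P3
]

problem = cp.Problem(cp.Minimize(lam), constraints)
problem.solve()
print(round(float(lam.value), 3))  # ~0.5, recovering Mantel's bound
```

A feasible certificate at $\lambda=\tfrac{1}{2}$ is $Q=\begin{pmatrix}1/2 & -1/2\\ -1/2 & 1/2\end{pmatrix}$, which mirrors the Cauchy–Schwarz step in the motivating proof.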
Generalizations:
The method of flag algebras readily generalizes to numerous graph-like constructs. As Razborov wrote in his original paper, flags can be described with finite model theory instead. Instead of graphs, models of some nondegenerate universal first-order theory T with equality in a finite relational signature L with only predicate symbols can be used. A model M , which replaces our previous notion of a graph, has ground set V(M) , whose elements are called vertices.
Generalizations:
Now, defining sub-models and model embeddings in an analogous way to subgraphs and graph embeddings, all of the definitions and theorems above can be nearly directly translated into the language of model theory. The fact that the theory of flag algebras generalizes well means that it can be used not only to solve problems in simple graphs, but also similar constructs such as, but not limited to, directed graphs and hypergraphs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fraunhofer Institute for Systems and Innovation Research ISI**
Fraunhofer Institute for Systems and Innovation Research ISI:
The Fraunhofer Institute for Systems and Innovation Research (Fraunhofer ISI) is part of the Fraunhofer Society for the promotion of Applied Research e.V. (FhG), Europe’s largest application-oriented research organization. The institute is based in Karlsruhe. It conducts applied research and development on innovations in engineering, economics, the natural sciences and social sciences. The Fraunhofer ISI is one of the leading Institutes for innovation research in Europe.
History:
At the beginning of 1972, the innovation researcher Helmar Krupp (de) recommended that the natural science and technology-oriented Fraunhofer Society should found a new institute to research the impacts and potentials of technologies and innovations. This recommendation was followed on 1 April 1972 with the establishment of the Fraunhofer Institute for System Technology and Innovation Research in Karlsruhe – under Helmar Krupp as its director. Frieder Meyer-Krahmer (de) took over as director from 1990. Under his management, the institute became one of the internationally leading innovation research institutes. This resulted in it being renamed “Fraunhofer Institute for Systems and Innovation Research” in 2004 because “System Technology” was no longer considered suitable for the institute that had since grown to well over 100 employees.
History:
In 2005, Meyer-Krahmer was appointed State Secretary in the German Federal Ministry of Education and Research; Marion Weissenberger-Eibl (de) has been the director of Fraunhofer ISI since April 2007. Under her leadership, futures research has become a core competence at the institute. In 2007, the institute was restructured and its individual departments were turned into Competence Centers with specialized Business Units. The energy transition and new mobility and transport concepts have resulted in Fraunhofer ISI's continued growth. Fraunhofer ISI now conducts research in 7 Competence Centers. On 1 October 2018, Fraunhofer ISI expanded its management team with the appointment of Prof. Jakob Edler. As a professor of innovation policy and strategy, he was previously a director at the Manchester Institute of Innovation Research. Prof. Edler joined the management team as executive director, contributing his expertise in the governance and policy of international research and innovation initiatives.
Research interests:
In seven Competence Centers (CC), Fraunhofer ISI researches the origins, applications, opportunities and risks as well as the markets for innovative technical developments. Particular attention is paid to exploring the impacts of these innovations on the economy, the state and society and providing a basis for decision-making in academia, industry and politics.
Research interests:
Energy Policy and Energy Markets The CC Energy Policy and Energy Markets researches the implementation of a political and institutional framework for a sustainable energy system. As renewable energies and climate protection technologies advance, this CC evaluates energy and climate policy measures, strategies and instruments in order to provide decision-makers with a better picture of the future energy market. The Business Units are Renewable Energies, Energy Policy, Climate Policy, Electricity Markets and Infrastructures as well as Global Sustainable Energy Transitions.
Research interests:
Energy Technology and Energy Systems The CC Energy Technology and Energy Systems analyzes emerging technologies that can contribute to a sustainable energy system. Its five Business Units Energy Efficiency, Energy Economy, Demand Analyses and Projections, Demand Response and Smart Grids as well as Actors and Acceptance in the Transformation of the Energy System focus on the efficient and sensible use of energy and analyzing the impacts on the economy and society.
Research interests:
Foresight The CC Foresight uses scientific methods to look ahead into the future. To support industry, society and politics, the CC conducts research on alternative future scenarios, the developments of long-term objectives, future strategies and technology changes in its Business Units Futures and Society, Futures Dialogs, and Foresight for Strategy Development.
Research interests:
Innovation and Knowledge Economy The CC analyses the prerequisites for innovations and their effects from the company level up to national innovation systems. It explores the various institutions, instruments and strategies in the economy and science that generate new knowledge and innovations. This happens in the Business Units Industrial Change and New Business Models, Innovation Trends and Science Studies, and Competitiveness and Innovation Measurement.
Research interests:
Sustainability and Infrastructure Systems Taking into account ecological, political, economic and social aspects, the CC Sustainability and Infrastructure Systems conducts research on innovations that foster the decoupling of economic growth and environmental pollution. The research focus ranges from individual new products up to long-term developments in industrialized and developing countries. The four Business Units are Water Resources Management, Sustainability Innovation and Policy, Raw Materials, and Mobility.
Research interests:
Emerging Technologies The CC Emerging Technologies is concerned with the analysis of new technologies and socio-technical transformations. It examines the changes resulting from the interplay between technologies, innovations and society. Communication and an interdisciplinary perspective are essential for the research conducted. The Business Units Bioeconomy and Life Sciences, Innovations in the Health System, Information and Communication Technologies, and Industrial Technologies contribute to the respective scientific discourses.
Research interests:
Policy and Society Research and innovation are increasingly called upon to contribute to overcoming societal challenges. The CC examines the resulting requirements for research and innovation systems, as well as for the design of a research, technology and innovation policy committed to sustainability and societal well-being and its coordination with other policy fields. This happens in the Business Units Policy for Innovation and Transformation, Societal Change and Innovation, Regional Innovation Dynamics and Knowledge Exchange, and Innovation and Regulation.
Cooperation:
Among others, the institute cooperates with the Karlsruhe Institute of Technology, Leibniz University Hannover and the University of Kassel in Germany and, internationally, with the University of Strasbourg (Bureau d'Economie Théorique et Appliqué), ETH Zurich (Centre for Energy Policy and Economics), the Institute of Policy and Management at the Chinese Academy of Sciences (Beijing), Virginia Tech (Blacksburg), the School of Public Policy at the Georgia Institute of Technology (Atlanta) and the Manchester Institute of Innovation Research (MIoIR). Fraunhofer ISI is also a member of numerous programs, networks and advisory committees.
Infrastructure:
Fraunhofer ISI employs around 270 permanent members of staff, who work on about 400 research projects each year. 60 percent of these employees are scientists. According to the institute's own figures, its annual budget amounts to around 27.4 million euros (in 2020). Approx. 40 percent of its contracts come from the German government, another 30 percent from the European Union. About 20 percent are from industry – both companies and industrial associations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Abhyankar–Moh theorem**
Abhyankar–Moh theorem:
In mathematics, the Abhyankar–Moh theorem states that if L is a complex line in the complex affine plane C2 , then every embedding of L into C2 extends to an automorphism of the plane. It is named after Shreeram Shankar Abhyankar and Tzuong-Tsieng Moh, who published it in 1975. More generally, the same theorem applies to lines and planes over any algebraically closed field of characteristic zero, and to certain well-behaved subsets of higher-dimensional complex affine spaces. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Video decoder**
Video decoder:
A video decoder is an electronic circuit, often contained within a single integrated circuit chip, that converts base-band analog video signals to digital video. Video decoders commonly allow programmable control over video characteristics such as hue, contrast, and saturation. A video decoder performs the inverse function of a video encoder, which converts raw (uncompressed) digital video to analog video. Video decoders are commonly used in video capture devices and frame grabbers.
Signals:
The input signal to a video decoder is analog video that conforms to a standard format. For example, a standard-definition (SD) decoder accepts analog video (composite or S-Video) that conforms to SD formats such as NTSC or PAL. High-definition (HD) decoders accept analog HD formats such as AHD, HD-TVI, or HD-CVI.
Signals:
The output digital video may be formatted in various ways, such as 8-bit or 16-bit 4:2:2, 12-bit 4:1:1, BT.656 (SD) or BT.1120 (HD). Usually, in addition to the digital video output bus, a video decoder will also generate a clock signal and other signals such as:
Sync — indicates the beginning of a video frame.
Blanking — indicates the video blanking interval.
Field — indicates whether the current video field is even or odd (applies to interlaced formats).
Lock — indicates the decoder has detected and is locked (synchronized) to a valid analog input video signal.
Functional blocks:
The main functional blocks of a video decoder typically include these:
Analog processors
Y/C (luminance/chrominance) separation
Chrominance processor
Luminance processor
Clock/timing processor
A/D converters for Y/C
Output formatter
Host communication interface
Process:
Video decoding involves several processing steps. First the analog signal is digitized by an analog-to-digital converter to produce a raw, digital data stream. In the case of composite video, the luminance and chrominance are then separated; this is not necessary for S-Video sources. Next, the chrominance is demodulated to produce color difference video data. At this point, the data may be modified so as to adjust brightness, contrast, saturation and hue. Finally, the data is transformed by a color space converter to generate data in conformance with any of several color space standards, such as RGB and YCbCr. Together, these steps constitute video decoding because they "decode" an analog video format such as NTSC or PAL. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
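As a rough illustration of the final color space conversion step, the sketch below applies one common YCbCr-to-RGB convention (the BT.601 full-range approximation used in JPEG); coefficients and value ranges differ between standards, so the numbers are illustrative rather than a description of any particular decoder chip.

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one 8-bit YCbCr sample to 8-bit RGB using the common
    BT.601 full-range approximation (coefficients vary by standard)."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb(128, 128, 128))  # mid-grey maps to (128, 128, 128)
```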
**NPF (firewall)**
NPF (firewall):
NPF is a BSD licensed stateful packet filter, a central piece of software for firewalling. It is comparable to iptables, ipfw, ipfilter and PF. NPF is developed on NetBSD.
History:
NPF was primarily written by Mindaugas Rasiukevicius. NPF first appeared in the NetBSD 6.0 release in 2012.
Features:
NPF is designed for high performance on SMP systems and for easy extensibility.
It supports various forms of Network Address Translation (NAT), stateful packet inspection, tree and hash tables for IP sets, bytecode (BPF or n-code) for custom filter rules and other features.
NPF has an extension framework for supporting custom modules. Features such as packet logging, traffic normalization, and random blocking are provided as NPF extensions.
Example of npf.conf:
# Assigning IPv4-only addresses of the specified interfaces.
$ext_if = inet4(wm0)
$int_if = inet4(wm1)

# Efficient tables to store IP sets.
table <1> type hash file "/etc/npf_blacklist"
table <2> type tree dynamic

# Variables with the service names.
$services_tcp = { http, https, smtp, domain, 9022 }
$services_udp = { domain, ntp }
$localnet = { 10.1.1.0/24 }

# Different forms of NAT are supported.
map $ext_if dynamic 10.1.1.0/24 -> $ext_if
map $ext_if dynamic 10.1.1.2 port 22 <- $ext_if port 9022

# NPF has various extensions which are supported via custom procedures.
procedure "log" {
  log: npflog0
}

#
# Grouping is mandatory in NPF.
# There must be a default group.
#
group "external" on $ext_if {
  # Stateful passing of all outgoing traffic.
  pass stateful out final all
  block in final from <1>
  pass stateful in final family inet proto tcp to $ext_if port ssh apply "log"
  pass stateful in final proto tcp to $ext_if port $services_tcp
  pass stateful in final proto udp to $ext_if port $services_udp
  # Passive FTP and traceroute
  pass stateful in final proto tcp to $ext_if port 49151-65535
  pass stateful in final proto udp to $ext_if port 33434-33600
}

group "internal" on $int_if {
  # Ingress filtering as per RFC 2827.
  block in all
  pass in final from $localnet
  pass in final from <2>
  pass out final all
}

group default {
  pass final on lo0 all
  block all
} | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vyatta**
Vyatta:
Vyatta is a software-based virtual router, virtual firewall and VPN product for Internet Protocol networks (IPv4 and IPv6). A free download of Vyatta has been available since March 2006. The system is a specialized Debian-based Linux distribution with networking applications such as Quagga, OpenVPN, and many others. A standardized management console, similar to Juniper JUNOS or Cisco IOS, in addition to a web-based GUI and traditional Linux system commands, provides configuration of the system and applications. In recent versions of Vyatta, web-based management interface is supplied only in the subscription edition. However, all functionality is available through KVM, serial console or SSH/telnet protocols. The software runs on standard x86-64 servers.
Vyatta:
Vyatta is also delivered as a virtual machine file and can provide (vRouter, vFirewall, VPN) functionality for Xen, VMware, KVM, Rackspace, SoftLayer, and Amazon EC2 virtual and cloud computing environments. As of October, 2012, Vyatta has also been available through Amazon Marketplace and can be purchased as a service to provide VPN, cloud bridging and other network functions to users of Amazon's AWS services.
Vyatta:
Vyatta sells a subscription edition that includes all the functionality of the open source version as well as a graphical user interface, access to Vyatta's RESTful API's, Serial Support, TACACS+, Config Sync, System Image Cloning, software updates, 24x7 phone and email technical support, and training. Certification as a Vyatta Professional is now available. Vyatta also offers professional services and consulting engagements.
Vyatta:
The Vyatta system is intended as a replacement for Cisco IOS 1800 through ASR 1000 series Integrated Services Routers (ISR) and ASA 5500 security appliances, with a strong emphasis on the cost and flexibility inherent in an open source, Linux-based system running on commodity x86 hardware or in VMware ESXi, Microsoft Hyper-V, Citrix XenServer, Open Source Xen and KVM virtual environments.
Vyatta:
In 2012, Brocade Communications Systems acquired Vyatta. In April, 2013, Brocade renamed the product from the Vyatta Subscription Edition (VSE) to the Brocade Vyatta 5400 vRouter. The latest commercial release of the Brocade vRouter is no longer open-source based.
In June 2017, Brocade sold Vyatta Software Technology to AT&T Communications. In September 2021, AT&T supplier Ciena Corporation announced an agreement to acquire the Vyatta talent and assets.
Vyatta Core:
The free community Vyatta Core software (VC) was an open source network operating system providing advanced IPv4 and IPv6 routing, stateful firewalling, and secure communication through both an IPsec-based VPN and the SSL-based OpenVPN. In October 2013, an independent group started a fork of Vyatta Core under the name VyOS. In March 2018, AT&T released a new open source project based on the proprietary Brocade version of Vyatta under the name DANOS. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Boat hook**
Boat hook:
A boat hook is part of boating equipment. Its most common use is as a docking and undocking aid. It may be similar to a pike pole; however, it commonly has a blunt tip for pushing during undocking, together with a hook for docking. In addition, it may have a line attached to the other end, which may have a ring for this purpose. It may also be used for pulling things out of the water, such as debris or people, as well as for other fetching tasks and for holding off from other boats or landings.
History:
Evidence of boat hooks has been found from ancient Rome, and the painting Christ in the Storm on the Sea of Galilee by Rembrandt van Rijn, painted in 1633, clearly shows one in its familiar form.
Traditional:
A traditional European boat hook pole is around 1.8–2.4 m long and is typically made of ash, one of the best woods for poles such as spears. It would have a brass hook, brass being a non-rusting metal common on traditional boat fittings. The hook end would usually have a hook on one side for pulling and catching things, plus a rounded point for pushing things.
Modern Styles:
Although the traditional boat hook is still available, boat hooks in various other materials, such as aluminium and even rolled-up polymer, are now available.
Although the boat hook is a general purpose reaching and holding-off tool on boats, there are more specialised forms, such as the Recovery Pole designed for length rather than the rigid strength of a boat hook.
Boat hook drill:
In the Royal Navy, on ceremonial occasions the Ceremonial Boat Hook Drill must be performed during berthing and unberthing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Oxiperomide**
Oxiperomide:
Oxiperomide is an antipsychotic. Clinical trials demonstrated that it can reduce dyskinesia in patients with Parkinson's disease who are taking dopamine agonists, without increasing parkinsonian symptoms. It does this by selectively antagonizing dopamine receptors. No further development of the drug has been reported, and it appears never to have been marketed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Proarrhythmia**
Proarrhythmia:
Proarrhythmia is a new or more frequent occurrence of pre-existing arrhythmias, paradoxically precipitated by antiarrhythmic therapy, which means it is a side effect associated with the administration of some existing antiarrhythmic drugs, as well as drugs for other indications. In other words, it is a tendency of antiarrhythmic drugs to facilitate emergence of new arrhythmias.
Types of proarrhythmia:
According to the Vaughan Williams classification (VW) of antiarrhythmic drugs, there are 3 main types of proarrhythmia during treatment with various antiarrhythmic drugs for atrial fibrillation or atrial flutter:
Ventricular proarrhythmia
Torsades de pointes (VW type IA and type III drugs)
Sustained monomorphic ventricular tachycardia (usually VW type IC drugs)
Sustained polymorphic ventricular tachycardia/ventricular fibrillation without long QT (VW types IA, IC, and III drugs)
Atrial proarrhythmia
Conversion of atrial fibrillation to flutter (usually VW type IC drugs or amiodarone). May be a desired effect.
Types of proarrhythmia:
Increase of defibrillation threshold (a potential problem with VW type IC drugs)
Provocation of recurrence (probably VW types IA, IC and III drugs); this is rare.
Abnormalities of conduction or impulse formation
Sinus node dysfunction, atrioventricular block (almost all drugs)
Accelerated conduction over an accessory pathway (digoxin, intravenous verapamil, or diltiazem)
Acceleration of ventricular rate during atrial fibrillation (VW type IA and type IC drugs).
Increased risk:
Presence of structural heart disease, especially LV systolic dysfunction.
Class IC agents.
Increased age.
Females.
Clinical pointers:
Class IA drugs: dose independent, occurring at normal levels.
Follow QT interval, keep ms.
Class IC drugs: may be provoked by increased heart rate.
Exercise stress tests after loading.
Class III drugs: dose dependent.
Follow bradycardia, prolonged QT closely. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Shoe-fitting fluoroscope**
Shoe-fitting fluoroscope:
Shoe-fitting fluoroscopes, also sold under the names X-ray Shoe Fitter, Pedoscope and Foot-o-scope, were X-ray fluoroscope machines installed in shoe stores from the 1920s until about the 1970s in the United States, Canada, United Kingdom, Australia, South Africa, Germany and Switzerland. In the UK, they were known as Pedoscopes, after the company based in St. Albans that manufactured them. An example can be seen at the Science Museum, London. At the beginning of the 1930s, Bally was the first company to import pedoscopes into Switzerland from the UK. In the second half of the 20th century, growing awareness of radiation hazards and increasingly stringent regulations forced their gradual phasing out. They were widely used particularly when buying shoes for children, whose shoe size continually changed until adulthood.
Shoe-fitting fluoroscope:
A shoe-fitting fluoroscope was a metal construction covered in finished wood, approximately 4 feet (1.2 m) high in the shape of short column, with a ledge with an opening through which the standing customer (adult or child) would put their feet and look through a viewing porthole at the top of the fluoroscope down at the X-ray view of the feet and shoes. Two other viewing portholes on either side enabled the parent and a sales assistant to observe the toes being wiggled to show how much room for the toes there was inside the shoe. The bones of the feet were clearly visible, as was the outline of the shoe, including the stitching around the edges.
Invention:
There are multiple claims for the invention of the shoe-fitting fluoroscope. The most likely is Jacob Lowe, who demonstrated a modified medical device at shoe retailer conventions in 1920 in Boston and in 1921 in Milwaukee. Lowe filed a US patent application in 1919, granted in 1927, and assigned it to the Adrian Company of Milwaukee for US$15,000. Syl Adrian claims that his brother, Matthew Adrian, invented and built the first machine in Milwaukee; his name is featured in a 1922 advertisement for an X-ray shoe fitter. Clarence Karrer, the son of an X-ray equipment distributor, claims to have built the first unit in 1924 in Milwaukee, but had his idea stolen and patented by one of his father's employees. In the meantime, the British company Pedoscope filed a British patent application in 1924, granted in 1926, and claimed to have been building these machines since 1920. The X-ray Shoe Fitter Corporation of Milwaukee and the Pedoscope Company became the largest manufacturers of shoe-fitting fluoroscopes in the world.
Health concerns:
The risk of radiation burns to extremities was known since Wilhelm Röntgen's 1895 experiment, but this was a short-term effect with early warning from reddening of the skin (erythema). The long-term risks from chronic exposure to radiation began to emerge with Hermann Joseph Muller's 1927 paper showing genetic effects, and the incidence of bone cancer in radium dial painters of the same time period. However, there was not enough data to quantify the level of risk until atomic bomb survivors began to experience the long-term effects of radiation in the late 1940s. The first scientific evaluations of these machines in 1948 immediately sparked concern for radiation protection and electrical safety reasons, and found them ineffective at shoe fitting. Large variations in dose were possible depending on the machine design, displacement of the shielding materials, and the duration and frequency of use. Radiation surveys showed that American machines delivered an average of 13 roentgen (r) (roughly 0.13 sievert (Sv) of equivalent dose in modern units) to the customer's feet during a typical 20-second viewing, with one capable of delivering 116 r (~1 Sv) in 20 seconds. British Pedoscopes produced about ten times less radiation. A customer might try several shoes in a day, or return several times in a year, and radiation dose effects may be cumulative. A dose of 300 r can cause growth disturbance in a child, and 600 r can cause erythema in an adult. Hands and feet are relatively resistant to other forms of radiation damage, such as carcinogenesis.
Health concerns:
Although most of the dose was directed at the feet, a substantial amount would scatter or leak in all directions. Shielding materials were sometimes displaced to improve image quality, to make the machine lighter, or out of carelessness, and this aggravated the leakage. The resulting whole-body dose may have been hazardous to the salesmen, who were chronically exposed, and to children, who are about twice as radiosensitive as adults. Monitoring of American salespersons found dose rates at pelvis height of up to 95 mr/week, with an average of 7.1 mr/week (up to ~50 mSv/a, avg ~3.7 mSv/a effective dose). A 2007 paper suggested that even higher doses of 0.5 Sv/a were plausible. The most widely accepted model of radiation-induced cancer posits that the incidence of cancers due to ionizing radiation increases linearly with effective (i.e., whole-body) dose. Years or decades may elapse between radiation exposure and a related occurrence of cancer, and no follow-up studies of customers can be performed for lack of records. According to a 1950 medical article on the machines: "Present evidence indicates that at least some radiation injuries are statistical processes that do not have a threshold. If this evidence is valid, there is no exposure which is absolutely safe and which produces no effect." Three shoe salespersons were identified with rare conditions that might have been associated with their chronic occupational exposure: a severe radiation burn requiring amputation in 1950, a case of dermatitis with ulceration in 1957, and a case of basal-cell carcinoma of the sole in 2004.
Health concerns:
Shoe industry response Representatives of the shoe retail industry denied claims of potential harm in newspaper articles and opinion pieces. They argued that use of the devices prevented harm to customers' feet from poorly-fitted shoes.
Regulation:
There were no applicable regulations when shoe-fitting fluoroscopes were introduced. An estimated 10,000 machines were sold in the US, 3,000 in the UK, 1,500 in Switzerland, and 1,000 in Canada before authorities began discouraging their use. As understanding grew of the long-term health effects of radiation, a variety of bodies began speaking out and regulating the machines.
In popular culture:
Young Eddie Kaspbrak uses a shoe-fitting fluoroscope in a memory near the beginning of the novel It by Stephen King.
In the novel The House Without a Christmas Tree, the young protagonist Addie Mills describes these machines.
In 1999, Time placed Shoe-Store X Rays on a list of the 100 worst ideas of the 20th century.
A shoe-fitting fluoroscope appeared on a 2011 episode of the History series American Restoration. Its radionuclide source was found to be so dangerous that it was removed and replaced with a static X-ray.
A shoe-fitting fluoroscope can be seen near the beginning of the film Billion Dollar Brain starring Michael Caine, when his character uses it to establish the contents of a flask. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Circulating capital**
Circulating capital:
Circulating capital includes intermediate goods and operating expenses, i.e., short-lived items that are used in production and used up in the process of creating other goods or services. This is roughly equal to intermediate consumption. Finer distinctions include raw materials, intermediate goods, inventories, ancillary operating expenses and working capital. It is contrasted with fixed capital. The term was used in more specialized ways by classical economists such as Adam Smith, David Ricardo and Karl Marx.
Circulating capital:
Where the distinction is used, circulating capital is a component of (total) capital, which also includes fixed capital; circulating capital is the part used up in a single cycle of production. In contrast to fixed capital, it is consumed in every cycle (raw materials, basic and intermediate materials, fuel, energy…). In accounting, circulating capital comes under the heading of current assets.
Circulating capital:
Building on the work of Quesnay and Turgot, Adam Smith (1776) made the first explicit distinction between fixed and circulating capital. In his usage, circulating capital includes wages and labour maintenance, money, and inputs from land, mines, and fisheries associated with production.According to Karl Marx (second volume of Das Kapital, end of chapter 7) the turnover of capital influences "the processes of production and self-expansion", the two new forms of capital, circulating and fixed, "accrue to capital from the process of circulation and affect the form of its turnover". In the following chapter Marx defines fixed capital and circulating capital. In chapter 9 he claims: "We have here not alone quantitative but also qualitative difference." Conventionally, (physical) capital assets held by a business for more than one year are regarded in annual accounting statements as "fixed", the rest as "circulating". In modern economies such as the United States, roughly half of the intermediate inputs bought or used by businesses are in fact services, and not goods. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stop-and-wait ARQ**
Stop-and-wait ARQ:
Stop-and-wait ARQ, also referred to as alternating bit protocol, is a method in telecommunications to send information between two connected devices. It ensures that information is not lost due to dropped packets and that packets are received in the correct order. It is the simplest automatic repeat-request (ARQ) mechanism. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the general sliding window protocol with transmit and receive window sizes equal to one in both cases. After sending each frame, the sender doesn't send any further frames until it receives an acknowledgement (ACK) signal. After receiving a valid frame, the receiver sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the sender sends the same frame again. The timeout countdown is reset after each frame transmission. The above behavior is a basic example of Stop-and-Wait. However, real-life implementations vary to address certain issues of design.
Stop-and-wait ARQ:
Typically the transmitter adds a redundancy check number to the end of each frame. The receiver uses the redundancy check number to check for possible damage. If the receiver sees that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged, the receiver discards it and does not send an ACK—pretending that the frame was completely lost, not merely damaged.
Stop-and-wait ARQ:
One problem is when the ACK sent by the receiver is damaged or lost. In this case, the sender doesn't receive the ACK, times out, and sends the frame again. Now the receiver has two copies of the same frame, and doesn't know if the second one is a duplicate frame or the next frame of the sequence carrying identical DATA.
Stop-and-wait ARQ:
Another problem is when the transmission medium has such a long latency that the sender's timeout runs out before the frame reaches the receiver. In this case the sender resends the same packet. Eventually the receiver gets two copies of the same frame, and sends an ACK for each one. The sender, waiting for a single ACK, receives two ACKs, which may cause problems if it assumes that the second ACK is for the next frame in the sequence.
Stop-and-wait ARQ:
To avoid these problems, the most common solution is to define a 1-bit sequence number in the header of the frame. This sequence number alternates (between 0 and 1) in subsequent frames. When the receiver sends an ACK, it includes the sequence number of the next packet it expects. This way, the receiver can detect duplicated frames by checking whether the frame sequence numbers alternate. If two subsequent frames have the same sequence number, they are duplicates, and the second frame is discarded. Similarly, if two subsequent ACKs reference the same sequence number, they are acknowledging the same frame.
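A minimal simulation of this alternating-bit scheme is sketched below. All names, the loss probability, and the in-process "channel" are assumptions made for illustration; a real implementation would use timers and a network socket rather than direct function calls.

```python
import random

LOSS_RATE = 0.3  # illustrative probability that a frame or an ACK is lost

def unreliable(item):
    """Simulated channel: return the item, or None if it was 'lost'."""
    return None if random.random() < LOSS_RATE else item

def receive(frame, expected_seq):
    """Receiver: deliver the frame only if its 1-bit sequence number matches
    the expected one; either way, ACK the sequence number expected next."""
    seq, data = frame
    if seq == expected_seq:
        print(f"  receiver: delivered {data!r} (seq {seq})")
        expected_seq ^= 1          # flip the expected bit
    else:
        print(f"  receiver: duplicate seq {seq} discarded")
    return expected_seq, ("ACK", expected_seq)

def send_all(messages):
    seq = 0          # sender's current sequence bit
    expected = 0     # receiver's state, held here only because this is a simulation
    for data in messages:
        while True:  # 'timeout and resend' until the matching ACK arrives
            frame = unreliable((seq, data))
            if frame is None:
                print(f"sender: frame {data!r} lost, timeout, resending")
                continue
            expected, ack = receive(frame, expected)
            ack = unreliable(ack)
            if ack is None:
                print(f"sender: ACK lost, timeout, resending {data!r}")
                continue
            if ack[1] == seq ^ 1:  # the ACK names the next expected frame
                seq ^= 1           # alternate the bit and move to the next frame
                break

if __name__ == "__main__":
    random.seed(7)
    send_all(["frame-A", "frame-B", "frame-C"])
```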
Stop-and-wait ARQ:
Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between packets, if the ACK and the data are received successfully, is twice the transit time (assuming the turnaround time can be zero). The throughput on the channel is a fraction of what it could be. To solve this problem, one can send more than one packet at a time, with a larger sequence number space, and use one ACK for a set. This is what is done in Go-Back-N ARQ and Selective Repeat ARQ. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
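As a rough illustration of that inefficiency, the sketch below computes the link utilization of stop-and-wait for an assumed frame size, bit rate, and propagation delay (all numbers are invented for the example), and the window size a sliding-window protocol would need to keep the link busy.

```python
frame_bits   = 12_000         # assumed 1500-byte frame
bitrate_bps  = 10_000_000     # assumed 10 Mbit/s link
prop_delay_s = 0.010          # assumed 10 ms one-way propagation delay

t_frame = frame_bits / bitrate_bps      # time to transmit one frame
cycle   = t_frame + 2 * prop_delay_s    # frame out, ACK back (turnaround ~ 0)

utilization = t_frame / cycle
print(f"utilization: {utilization:.1%}")                          # ~5.7% for these numbers
print(f"throughput : {utilization * bitrate_bps / 1e6:.2f} Mbit/s")

# A window of W outstanding frames fills the pipe once W * t_frame >= cycle,
# which is the motivation behind Go-Back-N and Selective Repeat.
window = -(-cycle // t_frame)           # ceiling division
print(f"window needed to keep the link busy: {int(window)} frames")
```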
**Products of conception**
Products of conception:
Products of conception, abbreviated POC, is a medical term used for the tissue derived from the union of an egg and a sperm. It encompasses anembryonic gestation (blighted ovum) which does not have a viable embryo.
In the context of tissue from a dilation and curettage, the presence of POC essentially excludes an ectopic pregnancy.
Retained products of conception:
Retained products of conception refers to products of conception remaining in the uterus after childbirth, medical abortion, or miscarriage (also known as spontaneous abortion). Miscarriage with retained products of conception is termed delayed when no or very few products of conception have been passed, and incomplete when some products have been passed but some still remain in utero.
Diagnosis:
The diagnosis is based on clinical presentation, quantitative HCG, ultrasound, and pathologic evaluation. A solid, heterogeneous, echogenic mass has a positive predictive value of 80%, but is present in only a minority of cases. A thickened endometrium of >10 mm is usually considered abnormal, though no consensus exists on the appropriate cutoff. A cutoff of 8 mm or more has a 34% positive rate, while a cutoff of 14 mm or more has 85% sensitivity and 64% specificity for the diagnosis. Color Doppler flow in the endometrial canal can increase confidence in the diagnosis, though its absence does not exclude it, as 40% of cases of retained products have little or no flow. The differential in suspected cases includes uterine atony, blood clot, gestational trophoblastic disease, and the normal postpartum appearance of the uterus. Postpartum blood clot is more common, reported in up to 24% of postpartum patients, and tends to be more hypoechoic than retained products, with absent color flow on Doppler, and to resolve spontaneously on follow-up scans. The presence of gas raises the possibility of postpartum endometritis, though this can also be seen in up to 21% of normal post-pregnancy states. The normal postpartum uterus is usually less than 2 cm in thickness, and continues to involute on follow-up scans to 7 mm or less over time. Retained products are not uncommon, occurring in approximately 1% of all pregnancies, though they are more common following abortions, either elective or spontaneous. There is significant overlap between the appearance of a normal postpartum uterus and retained products. If there is no endometrial canal mass or fluid, and endometrial thickness is less than 10 mm without increased flow, retained products are statistically unlikely.
Diagnosis:
Infections Recent studies indicate that the products of conception may be susceptible to pathogenic infections, including viral infections. Indeed, footprints of JC polyomavirus and Merkel cell polyomavirus have been detected in chorionic villi from women affected by spontaneous abortion as well as from pregnant women. Another virus, BK polyomavirus, has been detected in the same tissues, but to a lesser extent.
Treatment:
After medical abortion According to the 2006 WHO Frequently asked clinical questions about medical abortion, the presence of remaining products of conception in the uterus (as detected by obstetric ultrasonography) after a medical abortion is not an indication for surgical intervention (that is, vacuum aspiration or dilation and curettage). Remaining products of conception will be expelled during subsequent vaginal bleeding. Still, surgical intervention may be carried out on the woman's request, if the bleeding is heavy or prolonged, or causes anemia, or if there is evidence of endometritis.
Treatment:
In delayed miscarriage In delayed miscarriage (also called missed abortion), the Royal Women's Hospital recommendations of management depend on the findings in ultrasonography: Gestational sac greater than 30-35mm, embryo larger than ~25mm (corresponding to 9+0 weeks of gestational age): Surgery is recommended. It poses a high risk of pain and bleeding with passage of products of conception. Alternative methods may still be considered.
Treatment:
Gestational sac 15-35mm, embryo smaller than 25mm (corresponding to between 7 and 9+0 weeks of gestational age): Medication is recommended. Surgery or expectant management may be considered.
Gestational sac smaller than 15-20mm, corresponding to a gestational age of less than 7 weeks: Expectant management or medication is preferable. The products of conception may be difficult to find surgically with a considerable risk of failed surgical procedure.
In incomplete miscarriage In incomplete miscarriage, the Royal Women's Hospital recommendations of management depend on the findings in ultrasonography: Retained products of conception smaller than 15mm: Expectant management is generally preferable. There is a high chance of spontaneous expulsion.
Retained products of conception measuring between 15 and 20mm: Medical or expectant management are recommended. Surgery should only be considered upon specific indication.
For retained products of conception measuring over 35 to 50 mm, the following measures are recommended: Administration of misoprostol to hasten passage of products of conception.
Admission to inpatient care for observation for a few hours or overnight until the majority of the products of conception has passed and bleeding subsided.
After apparent failure of misoprostol, a gynecologic examination should be done prior to considering surgical evacuation of the uterus or the patient leaving the hospital. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Metacarpal bones**
Metacarpal bones:
In human anatomy, the metacarpal bones or metacarpus, also known as the "palm bones", are the appendicular bones that form the intermediate part of the hand's skeleton between the phalanges (finger bones) and the carpal bones (wrist bones, which articulate with the forearm). The metacarpal bones are homologous to the metatarsal bones in the foot.
Structure:
The metacarpals form a transverse arch to which the rigid row of distal carpal bones is fixed. The peripheral metacarpals (those of the thumb and little finger) form the sides of the cup of the palmar gutter, and as they are brought together they deepen this concavity. The index metacarpal is the most firmly fixed, while the thumb metacarpal articulates with the trapezium and acts independently from the others. The middle metacarpals are tightly united to the carpus by intrinsic interlocking bone elements at their bases. The ring metacarpal is somewhat more mobile, while the fifth metacarpal is semi-independent. Each metacarpal bone consists of a body or shaft, and two extremities: the head at the distal or digital end (near the fingers), and the base at the proximal or carpal end (close to the wrist).
Structure:
Body The body (shaft) is prismoid in form, and curved, so as to be convex in the longitudinal direction behind, concave in front. It presents three surfaces: medial, lateral, and dorsal.
The medial and lateral surfaces are concave, for the attachment of the interosseus muscles, and separated from one another by a prominent anterior ridge.
Structure:
The dorsal surface presents in its distal two-thirds a smooth, triangular, flattened area which is covered by the tendons of the extensor muscles. This surface is bounded by two lines, which commence in small tubercles situated on either side of the digital extremity, and, passing upward, converge and meet some distance above the center of the bone to form a ridge which runs along the rest of the dorsal surface to the carpal extremity. This ridge separates two sloping surfaces for the attachment of the interossei dorsales.
Structure:
To the tubercles on the digital extremities are attached the collateral ligaments of the metacarpophalangeal joints.
Base The base (basis) or carpal extremity is of a cuboidal form, and broader behind than in front: it articulates with the carpal bones and with the adjoining metacarpal bones; its dorsal and volar surfaces are rough, for the attachment of ligaments.
Structure:
Head The head (caput) or digital extremity presents an oblong surface markedly convex from before backward, less so transversely, and flattened from side to side; it articulates with the proximal phalanx. It is broader, and extends farther upward, on the volar than on the dorsal aspect, and is longer in the antero-posterior than in the transverse diameter. On either side of the head is a tubercle for the attachment of the collateral ligament of the metacarpophalangeal joint.
Structure:
The dorsal surface, broad and flat, supports the tendons of the extensor muscles.
The volar surface is grooved in the middle line for the passage of the flexor tendons, and marked on either side by an articular eminence continuous with the terminal articular surface.
Neck The neck, or subcapital segment, is the transition zone between the body and the head.
Structure:
Articulations Besides the metacarpophalangeal joints, the metacarpal bones articulate by carpometacarpal joints as follows: the first with the trapezium; the second with the trapezium, trapezoid, capitate and third metacarpal; the third with the capitate and second and fourth metacarpals; the fourth with the capitate, hamate, and third and fifth metacarpals; and the fifth with the hamate and fourth metacarpal.
Insertions
Extensor carpi radialis longus/brevis: both insert on the base of metacarpal II; assist with wrist extension and radial flexion of the wrist.
Extensor carpi ulnaris: inserts on the base of metacarpal V; extends and fixes the wrist when the digits are being flexed; assists with ulnar flexion of the wrist.
Abductor pollicis longus: inserts on the trapezium and base of metacarpal I; abducts the thumb in the frontal plane; extends the thumb at the carpometacarpal joint.
Opponens pollicis: inserts on metacarpal I; flexes metacarpal I to oppose the thumb to the fingertips.
Opponens digiti minimi: inserts on the medial surface of metacarpal V; flexes metacarpal V at the carpometacarpal joint when the little finger is moved into opposition with the tip of the thumb; deepens the palm of the hand.
Clinical significance:
Congenital disorders The fourth and fifth metacarpal bones are commonly "blunted" or shortened, in pseudohypoparathyroidism and pseudopseudohypoparathyroidism.
A blunted fourth metacarpal, with normal fifth metacarpal, can signify Turner syndrome.
Blunted metacarpals (particularly the fourth metacarpal) are a symptom of nevoid basal-cell carcinoma syndrome.
Clinical significance:
Fracture The neck of a metacarpal is a common location for a boxer's fracture, but all parts of the metacarpal bone (including head, body and base) are susceptible to fracture. During their lifetime, 2.5% of individuals will experience at least one metacarpal fracture. Bennett's fracture (base of the thumb) is the most common. Several types of treatment exist ranging from non-operative techniques, with or without immobilization, to operative techniques using closed or open reduction and internal fixation (ORIF). Generally, most fractures showing little or no displacement can be treated successfully without surgery. Intraarticular fracture-dislocations of the metacarpal head or base may require surgical fixation, as fragment displacement affecting the joint surface is rarely tolerated well.
Other animals:
In four-legged animals, the metacarpals form part of the forefeet, and are frequently reduced in number, appropriate to the number of toes. In digitigrade and unguligrade animals, the metacarpals are greatly extended and strengthened, forming an additional segment to the limb, a feature that typically enhances the animal's speed. In both birds and bats, the metacarpals form part of the wing.
History:
Etymology The Greek physician Galen referred to the metacarpus as μετακάρπιον. The Latin form metacarpium more closely resembles its Ancient Greek predecessor μετακάρπιον than metacarpus does. Meta- is Greek for "beyond", and carpal derives from Ancient Greek καρπός (karpós, "wrist").
History:
In anatomic Latin, adjectives like metacarpius, metacarpicus, metacarpiaeus, metacarpeus, metacarpianus and metacarpalis can be found. The form metacarpius is more true to the later Greek form μετακάρπιος. Metacarpalis, as in ossa metacarpalia in the current official Latin nomenclature, Terminologia Anatomica is a compound consisting of Latin and Greek parts. The usage of such hybrids in anatomic Latin is disapproved by some. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fagottini**
Fagottini:
Fagottini [faɡotˈtiːni] (Italian: little bundles) is a kind of filled pasta. It is usually filled with vegetables, typically steamed carrots and green beans, ricotta, onion and olive oil. Fagottini are made by cutting sheets of pasta dough into squares, placing the filling on the square, and folding the corners to meet in a point. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**M+ FONTS**
M+ FONTS:
M+ FONTS is a series of Japanese fonts designed by Coji Morishita. The "M" stands for "minimum", while the plus sign means "above minimum".
Fonts:
Vector The "M+ OUTLINE FONTS" are of a Gothic sans-serif style, with proportional and monospaced fonts and many different weights, ranging from thin to black. The fonts support the following character sets: C0 controls and basic Latin, Latin-1 Supplement, Latin Extended-A, Japanese kana, and Japanese kanji. The fonts are developed using FontForge. The current version contains over 4600 glyphs.
Nomenclature M+ vector fonts are named as follows: M+ followed by 1 or 2, and then optionally P (proportional), C (optimized for typesetting), M (monospaced), or MN (a monospaced high-visibility variant for programming use). The numbers denote glyph design styles, while the letters denote Latin glyph configurations.
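As a small illustration of this naming scheme, the sketch below decodes an outline-font name into its design style and Latin configuration. The function, the regular expression, and the example names are hypothetical, not part of the M+ project.

```python
import re

LATIN_STYLES = {
    "P":  "proportional",
    "C":  "optimized for typesetting",
    "M":  "monospaced",
    "MN": "monospaced, high-visibility (for programming)",
    "":   "default Latin configuration",
}

def describe_mplus_name(name: str) -> str:
    """Decode a name such as 'M+ 1P' or 'M+ 2MN' (hypothetical helper)."""
    m = re.fullmatch(r"M\+\s*([12])\s*(MN|P|C|M)?", name)
    if not m:
        raise ValueError(f"not an M+ outline font name: {name!r}")
    design, latin = m.group(1), m.group(2) or ""
    return f"design style {design}, {LATIN_STYLES[latin]}"

print(describe_mplus_name("M+ 1P"))    # design style 1, proportional
print(describe_mplus_name("M+ 2MN"))   # design style 2, monospaced, high-visibility (for programming)
```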
Fonts:
Each Type 2 font has several glyphs that differ from its respective Type 1 font, such as the kana and the Latin-style numerals. Japanese glyphs are fullwidth, and kanji glyphs are identical between variants of the same weight. Proportional Latin fonts are available in thin, light, regular, medium, bold, heavy, and black weights, and fixed halfwidth Latin fonts are available in thin, light, regular, medium, and bold weights.
Fonts:
Raster The "M+ BITMAP FONTS" are raster fonts originally developed in 2002.
Japanese and Latin: All Japanese glyphs occupy full-width cells. Fonts are made in heights of 10 and 12 pixels in regular and bold weights.
M+ gothic: Japanese with half-width Latin glyphs.
M+ goth_p: Japanese with proportional Latin glyphs.
Latin only M+ fxd: consists of fixed-width glyphs. Has a height of 10 and 12 pixels in regular and bold weights.
M+ hlv: a replacement of Helvetica. Has a height of 10 and 12 pixels in regular and bold weights.
M+ sys: designed for user interfaces. Has a height of 10 pixels in regular and bold weights.
M+ qub: a regular-weight miniature font with a height of 6 pixels.
Accolades:
The M+ font family was selected as one of the "free fonts of the month" in Smashing Magazine and as a SourceForge "Project of the Month". It has also been selected as one of eight "excellent" fonts for print and screen.
License:
Early versions of M+ used a pseudo-license disclaimer that effectively disowned any copyright: These fonts are free software.
Unlimited permission is granted to use, copy, and distribute them, with or without modification, either commercially or noncommercially.
THESE FONTS ARE PROVIDED "AS IS" WITHOUT WARRANTY.
The version released in 2019 under cooperation with Google Fonts uses the Open Font License. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Piercing migration**
Piercing migration:
Piercing migration is the process that occurs when a body piercing moves from its initial location. This process can be painful or go unnoticed, until it has progressed. Given enough time, a ring may migrate entirely outside of the skin, although it may only migrate a small amount and come to rest.
Potential causes and effects of migration:
The effects of migration can vary widely. The most common form of migration is the way that heavy small gauge earrings will migrate downwards out of the earlobe, as is common in older women who have worn earrings most of their lives. This is known as the "cheesecutter effect", as its action is easily compared to the method of cutting cheese with a fine wire. Contemporary body and ear piercing jewelry is much more balanced in its weight to gauge ratio, although migration is still possible with heavy jewelry, even if it is of large gauge.
Potential causes and effects of migration:
Play or movement of the area pierced or implanted can also lead to migration, but it's not likely. Sometimes this can occur without an open wound being created, as the fistula stretches in one direction, and tissue fills in behind it. This is not uncommon with tongue piercings, although the migration usually stops before the jewelry would exit the body.
Potential causes and effects of migration:
Damage to the tissue surrounding the piercing can also cause migration. A damaged piercing, much like a fresh piercing, must heal the fistula that it passes through, and the jewelry may start migrating in the direction of the wound, further damaging the fistula as it moves. Should the fistula heal, the migration may stop, although it may be inclined to continue migrating, as the re-healed area of tissue may not be as strong as the original fistula was.
Potential causes and effects of migration:
Migration may also be caused by the body rejecting the material that the jewelry is made of. Like a case of a splinter or other foreign object, the body will try to push out foreign material, especially if it irritates the surrounding tissue. Contemporary body jewelry is made from surgical grade implant materials, so with proper aftercare during the healing phase and good hygiene, this is rare.
Potential causes and effects of migration:
Pressure, especially the pressure caused by improperly performed surface, navel, and eyebrow piercings, often leads to migration. Proper, custom-made jewelry can reduce the risk of migration associated with these piercings, although it cannot eliminate it. This type of migration is sometimes accompanied by rejection caused by improper drainage owing to the length of the piercing, as dead tissue builds up in the healing fistula.
Rejection rate:
Rejection rate is a term used by the piercing industry. It applies to the chance of a piercing being forced out by the body. This is a body's natural reaction to a foreign object being inserted into the skin. This behavior can be witnessed with other objects such as splinters, road rash, or infections. With surface piercings being closer to the surface of the skin, the tendency to reject is higher, as it is easier for the body to force the jewelry out.
Rejection rate:
Surface piercing rejection rates Surface piercings, such as a navel piercing, Christina piercing, eyebrow piercing, or a nape piercing, tend to have a higher rejection rate than piercings that pass through a deeper area of flesh or have holes on the opposite side of each other. Thus surface piercings stand in contrast to piercings such as tongue piercing, earrings, or nose piercings. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Undecylenic acid**
Undecylenic acid:
Undecylenic acid is an organic compound with the formula CH2=CH(CH2)8CO2H. It is an unsaturated fatty acid. It is a colorless oil. Undecylenic acid is mainly used for the production of Nylon-11 and in the treatment of fungal infections of the skin, but it is also a precursor in the manufacture of many pharmaceuticals, personal hygiene products, cosmetics, and perfumes. Salts and esters of undecylenic acid are known as undecylenates.
Preparation:
Undecylenic acid is prepared by pyrolysis of ricinoleic acid, which is derived from castor oil. Specifically, the methyl ester of ricinoleic acid is cracked to yield both undecylenic acid and heptanal. The process is conducted at 500–600 °C in the presence of steam. The methyl ester is then hydrolyzed.
General commercial uses:
Undecylenic acid is converted to 11-aminoundecanoic acid on an industrial scale. This aminocarboxylic acid is the precursor to Nylon-11. Undecylenic acid is also reduced to undecylene aldehyde, which is valued in perfumery. The acid is first converted to the acid chloride, which allows selective reduction.
General commercial uses:
Medical uses Undecylenic acid is an active ingredient in medications for skin infections, and to relieve itching, burning, and irritation associated with skin problems. For example, it is used against fungal skin infections, such as athlete's foot, ringworm, tinea cruris, or other generalized infections by Candida albicans. When used for tinea cruris, it can result in extreme burning. In some case studies of tinea versicolor, pain and burning resulted from fungicide application. In a review of placebo-controlled trials, undecenoic acid was deemed efficacious, alongside prescription azoles (e.g., clotrimazole) and allylamines (e.g., terbinafine). Undecylenic acid is also a precursor to antidandruff shampoos and antimicrobial powders. In terms of the mechanism underlying its antifungal effects against Candida albicans, undecylenic acid inhibits morphogenesis. In a study on denture liners, undecylenic acid in the liners was found to inhibit conversion of the yeast to the hyphal form (which is associated with active infection), via inhibition of fatty acid biosynthesis. The mechanism of action and effectiveness of fatty acid-type antifungals depend on the number of carbon atoms in the chain, with efficacy increasing with the length of the chain.
General commercial uses:
U.S. FDA approval Undecylenic acid is approved by the U.S. FDA for topical route and is listed in the Code of Federal Regulations.
Research uses:
Undecylenic acid has been used as a linking molecule, because it is a bifunctional compound. Specifically it is an α,ω- (terminally functionalized) bifunctional agent. For instance, the title compound has been used to prepare silicon-based biosensors, linking silicon transducer surfaces to the terminal double bond of undecylenic acid (forming an Si-C bond), leaving the carboxylic acid groups available for conjugation of biomolecules (e.g., proteins). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multi-Scale Multidisciplinary Modeling of Electronic Materials Collaborative Research Alliance**
Multi-Scale Multidisciplinary Modeling of Electronic Materials Collaborative Research Alliance:
Multi-Scale Multidisciplinary Modeling of Electronic Materials (MSME) Collaborative Research Alliance (CRA) was a research program in the United States that was initiated and sponsored by the US Army Research Laboratory (ARL). The objective of the program was “to develop quantitative understanding of materials from the smallest to the largest relevant scales to advance the state of the art in electronic, optoelectronic and electrochemical materials and devices.” Collaborative Technology and Research Alliances is a term for partnerships between Army laboratories and centers, private industry and academia for performing research and technology development intended to benefit the US Army. The partnerships are funded by the US Army. MSME was awarded in 2012. The program was completed in 2016.
Objectives:
The objective of this Alliance was to conduct research supporting efforts in future electronic materials and devices for the Army. MSME achieved this through development of fundamental models in electronic materials research. The multiscale models were assembled by the MSME team and the experimentation for validation and verification for these models was performed by ARL scientists in each research area, which was part of a continual process to develop new models through collaboration between ARL and its partners.
Research Thrusts:
The MSME program was organized around several research thrusts, which included the following: Electrochemical Energy Devices - Focus on interfacial physics and chemistry, ion transport, nanostructures, solid-liquid interface: to include fuel cells, capacitors, etc.
Hybrid Photonic Devices - Study the interaction of photons, electrons, and phonons, to include photonics, spintronics, plasmonics, etc.
Heterogeneous Metamorphic Electronics - Examine mixed materials with partial ordering, e.g., graphene, metamaterials, nanoelectronic structures, etc.
Participants:
The research under this program was performed collaboratively by the US Army Research Laboratory and by scientists and engineers of the following institutions: the Army Research Laboratory, the University of Utah, Boston University, and Rensselaer Polytechnic Institute. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gamma-synuclein**
Gamma-synuclein:
Gamma-synuclein is a protein that in humans is encoded by the SNCG gene. Synuclein-gamma is a member of the synuclein family of proteins, which are believed to be involved in the pathogenesis of neurodegenerative diseases. High levels of SNCG have been identified in advanced breast carcinomas, suggesting a correlation between overexpression of SNCG and breast tumor development.
Gamma-synuclein:
Gamma-synuclein is a synuclein protein found primarily in the peripheral nervous system (in primary sensory neurons, sympathetic neurons, and motor neurons) and retina. It is also detected in the brain, in ovarian tumors, and in the olfactory epithelium. Gamma-synuclein is the least conserved of the synuclein proteins. Gamma-synuclein expression in breast tumors is a marker for tumor progression, as mammalian gamma-synuclein was first identified as breast cancer-specific gene 1 (BCSG1). A change in the expression of gamma-synuclein has been observed in the retina of patients with Alzheimer's disease. The normal cellular function of gamma-synuclein remains unknown.
Interactions:
Gamma-synuclein has been shown to interact with BUB1B. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Porphyrin**
Porphyrin:
Porphyrins ( POR-fər-in) are a group of heterocyclic macrocycle organic compounds, composed of four modified pyrrole subunits interconnected at their α carbon atoms via methine bridges (=CH−). In vertebrates, an essential member of the porphyrin group is heme, which is a component of hemoproteins, whose functions include carrying oxygen in the bloodstream. In plants, an essential porphyrin derivative is chlorophyll, which is involved in light-harvesting and electron transfer in photosynthesis.
Porphyrin:
The parent of porphyrins is porphine, a rare chemical compound of exclusively theoretical interest. Substituted porphines are called porphyrins. With a total of 26 π-electrons, of which 18 π-electrons form a planar, continuous cycle, the porphyrin ring structure is often described as aromatic. One result of the large conjugated system is that porphyrins typically absorb strongly in the visible region of the electromagnetic spectrum, i.e. they are deeply colored. The name "porphyrin" derives from the Greek word πορφύρα (porphyra), meaning purple.
Structure:
Porphyrin complexes consist of a square planar MN4 core. The periphery of the porphyrins, consisting of sp2-hybridized carbons, generally displays small deviations from planarity. "Ruffled" or saddle-shaped distortion of porphyrins is attributed to interactions of the ring system with its environment. Additionally, the metal is often not centered in the N4 plane. For free porphyrins, the two pyrrole protons are mutually trans and project out of the N4 plane. These nonplanar distortions are associated with altered chemical and physical properties. Chlorophyll rings are more distinctly nonplanar, but they are also more saturated than porphyrins.
Complexes of porphyrins:
Concomitant with the displacement of the two N-H protons, porphyrins bind metal ions in the N4 "pocket". The metal ion usually has a charge of 2+ or 3+. A schematic equation for these syntheses is: H2porphyrin + [MLn]2+ → M(porphyrinate)Ln−4 + 4 L + 2 H+, where M = metal ion and L = a ligand. The insertion of the metal center is slow in the absence of catalysts. In nature, these catalysts (enzymes) are called chelatases. When there is no metal ion (or atom) bound to the nitrogens in the center, the compound is called a free porphine or free porphyrin; if a metal is bound in the center, the compound is a metal complex. A porphyrin with an iron atom of the type found in myoglobin, hemoglobin, or certain cytochromes is called heme. Metal complexes derived from porphyrins, often called metalloporphyrins, occur naturally. One of the best-known families of porphyrin complexes is heme, the pigment in red blood cells, a cofactor of the protein hemoglobin. Porphine is the simplest porphyrin, a rare compound of theoretical interest.
Ancient porphyrins:
A geoporphyrin, also known as a petroporphyrin, is a porphyrin of geologic origin. They can occur in crude oil, oil shale, coal, or sedimentary rocks. Abelsonite is possibly the only geoporphyrin mineral, as it is rare for porphyrins to occur in isolation and form crystals. The field of organic geochemistry had its origins in the isolation of porphyrins from petroleum. This finding helped establish the biological origins of petroleum. Petroleum is sometimes "fingerprinted" by analysis of trace amounts of nickel and vanadyl porphyrins.
Biosynthesis:
In non-photosynthetic eukaryotes such as animals, insects, fungi, and protozoa, as well as the α-proteobacteria group of bacteria, the committed step for porphyrin biosynthesis is the formation of δ-aminolevulinic acid (δ-ALA, 5-ALA or dALA) by the reaction of the amino acid glycine with succinyl-CoA from the citric acid cycle. In plants, algae, bacteria (except for the α-proteobacteria group) and archaea, it is produced from glutamic acid via glutamyl-tRNA and glutamate-1-semialdehyde. The enzymes involved in this pathway are glutamyl-tRNA synthetase, glutamyl-tRNA reductase, and glutamate-1-semialdehyde 2,1-aminomutase. This pathway is known as the C5 or Beale pathway.
Biosynthesis:
Two molecules of dALA are then combined by porphobilinogen synthase to give porphobilinogen (PBG), which contains a pyrrole ring. Four PBGs are then combined through deamination into hydroxymethyl bilane (HMB), which is hydrolysed to form the circular tetrapyrrole uroporphyrinogen III. This molecule undergoes a number of further modifications. Intermediates are used in different species to form particular substances, but, in humans, the main end-product protoporphyrin IX is combined with iron to form heme. Bile pigments are the breakdown products of heme.
Biosynthesis:
The biosynthesis of porphyrins thus proceeds through a sequence of enzymes, catalogued by EC number and in the OMIM database; a deficiency of each enzyme is associated with a particular porphyria.
Laboratory synthesis:
A common synthesis for porphyrins is the Rothemund reaction, first reported in 1936, which is also the basis for more recent methods described by Adler and Longo. The general scheme is a condensation and oxidation process starting with pyrrole and an aldehyde.
Applications:
Photodynamic therapy Porphyrins have been evaluated in the context of photodynamic therapy (PDT) since they strongly absorb light, which is then converted to heat in the illuminated areas. This technique has been applied in macular degeneration using verteporfin. PDT is considered a noninvasive cancer treatment, involving the interaction between light of a determined frequency, a photosensitizer, and oxygen. This interaction produces highly reactive oxygen species (ROS), usually singlet oxygen, as well as superoxide anion, free hydroxyl radical, or hydrogen peroxide. These highly reactive oxygen species react with susceptible cellular organic biomolecules such as lipids, aromatic amino acids, and nucleic acid heterocyclic bases to produce oxidative radicals that damage the cell, possibly inducing apoptosis or even necrosis.
Applications:
Toxicology Heme biosynthesis is used as a biomarker in environmental toxicology studies. While excess production of porphyrins indicates organochlorine exposure, lead inhibits the ALA dehydratase enzyme.
Biological applications Porphyrins have been investigated as possible anti-inflammatory agents and evaluated on their anti-cancer and anti-oxidant activity. Several porphyrin-peptide conjugates were found to have antiviral activity against HIV in vitro.
Synthetic applications Complexes of cobalt(II) porphyrins have been extensively utilized as catalysts in organic synthesis. Due to their distinctive biomimetic radical mechanisms that involve metal-stabilized radical intermediates, the Co(II)–porphyrin-based catalysis system addresses some long-standing challenges in organic transformations.
Potential applications:
Biomimetic catalysis Although not commercialized, metalloporphyrin complexes are widely studied as catalysts for the oxidation of organic compounds. Particularly popular for such laboratory research are complexes of meso-tetraphenylporphyrin and octaethylporphyrin. Complexes with Mn, Fe, and Co catalyze a variety of reactions of potential interest in organic synthesis. Some complexes emulate the action of various heme enzymes such as cytochrome P450, lignin peroxidase. Metalloporphyrins are also studied as catalysts for water splitting, with the purpose of generating molecular hydrogen and oxygen for fuel cells.
Potential applications:
Molecular electronics and sensors Porphyrin-based compounds are of interest as possible components of molecular electronics and photonics. Synthetic porphyrin dyes have been incorporated in prototype dye-sensitized solar cells. Metalloporphyrins have been investigated as sensors. Phthalocyanines, which are structurally related to porphyrins, are used in commerce as dyes and catalysts, but porphyrins are not.
Supramolecular chemistry Porphyrins are often used to construct structures in supramolecular chemistry. These systems take advantage of the Lewis acidity of the metal, typically zinc. An example is a host–guest complex constructed from a macrocycle composed of four porphyrins, in which a guest free-base porphyrin is bound at the center by coordination through its four pyridine substituents.
Theoretical interest in aromaticity Porphyrinoid macrocycles can show variable aromaticity. An example of a Hückel-aromatic porphyrinoid is porphycene. Antiaromatic, Möbius-aromatic, and non-aromatic porphyrinoid macrocycles are also known.
Related species:
In nature Several heterocycles related to porphyrins are found in nature, almost always bound to metal ions.
Synthetic A benzoporphyrin is a porphyrin with a benzene ring fused to one of the pyrrole units; e.g., verteporfin is a benzoporphyrin derivative.
Related species:
Non-natural porphyrin isomers The first synthetic porphyrin isomer was reported by Emanuel Vogel and coworkers in 1986. This isomer, [18]porphyrin-(2.0.2.0), is named porphycene, and its central N4 cavity forms a rectangular shape. Porphycenes show interesting photophysical behavior and have been found to be versatile compounds for photodynamic therapy. This inspired Vogel and Sessler to take up the challenge of preparing [18]porphyrin-(2.1.0.1), named corrphycene or porphycerin. The third isomer, [18]porphyrin-(2.1.1.0), was reported by Callot and Vogel-Sessler. Vogel and coworkers also reported the successful isolation of [18]porphyrin-(3.0.1.0), or isoporphycene. The Japanese scientist Furuta and the Polish scientist Latos-Grażyński almost simultaneously reported the N-confused porphyrins, in which inversion of one of the pyrrolic subunits in the macrocyclic ring results in one of the nitrogen atoms facing outwards from the core of the macrocycle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**English punctuation**
English punctuation:
Punctuation in the English language helps the reader to understand a sentence through visual means other than just the letters of the alphabet. English punctuation has always had two complementary aspects: on the one hand, phonological punctuation linked to how the sentence can be read aloud, particularly to pausing; and on the other hand, grammatical punctuation linked to the structure of the sentence. In popular discussion of language, incorrect punctuation is often seen as an indication of lack of education and of a decline of standards.
Variants:
British and American styles The two broad styles of punctuation in English are often called British (typically used in the UK, Ireland, and most of the Commonwealth of Nations) and American (also common in Canada and places with a strong American influence on local English, as in the Philippines). These two styles differ mainly in the way in which they handle quotation marks with adjacent punctuation, and the use or omission of the full point (period) with contraction abbreviations. (See subsections below on Quotation marks and Full stop, full point or period.) Open and closed punctuation The terms open and closed punctuation have been applied to minimizing versus comprehensively including punctuation, respectively, aside from any dialectal trends. Closed punctuation is used in scholarly, literary, general business, and "everyday" writing. Open style dominates in text messaging and other short-form online communication, where more formal or "closed" punctuation can be misinterpreted as aloofness or even hostility.
Variants:
Open punctuation Open punctuation eliminates the need for a period at the end of a stand-alone statement, in an abbreviation or acronym (including personal initials and post-nominal letters, and time-of-day abbreviations), as well as in components of postal addresses. This style also eschews optional commas in sentences, including the serial comma. Open punctuation also frequently drops apostrophes. Open punctuation is used primarily in certain forms of business writing, such as letterhead and envelope addressing, some business letters, and résumés and their cover letters.
Variants:
Closed punctuation In contrast, closed punctuation uses commas and periods in a strict manner. Closed style is common in presentations, especially in bulleted and numbered lists. It is also frequently used in advertising, marketing materials, news headlines, and signage.
Usage of different punctuation marks or symbols:
Frequency One analysis, based on 723,000 words of assorted texts (published in 2013, but with some text corpora dating to 1998 and 1987), measured the average frequencies of the English punctuation marks.
Apostrophe The apostrophe ( ’, ' ), sometimes called an inverted comma in British English, is used to mark possession, as in "John's book", and to mark letters omitted in contractions, such as you're for you are.
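Such frequency figures can be reproduced on any corpus with a few lines of code. The sketch below counts a chosen set of marks per 1,000 words; the mark set and the sample string are assumptions for illustration, not the corpus used in the cited analysis.

```python
from collections import Counter

MARKS = set(".,;:!?'\"-()[]")   # illustrative set of marks to count

def punctuation_per_1000_words(text: str) -> dict:
    """Occurrences of each punctuation mark per 1,000 words of text."""
    words = max(len(text.split()), 1)
    counts = Counter(ch for ch in text if ch in MARKS)
    return {mark: round(n * 1000 / words, 1) for mark, n in counts.most_common()}

sample = "Punctuation, it seems, matters: it guides the reader; doesn't it?"
print(punctuation_per_1000_words(sample))
```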
Usage of different punctuation marks or symbols:
Brackets Brackets ( [...], (...), {...}, ⟨...⟩ ) are used for parenthesis, explanation, or comment, such as "John Smith (the elder, not his son)..."
Colon The colon ( : ) is used to start an enumeration, as in Her apartment needed a few things: a toaster, a new lamp, and a nice rug. It is used between two clauses when the second clause clarifies the first, as in I can barely keep my eyes open: I hardly got a wink of sleep.
Usage of different punctuation marks or symbols:
Comma The comma ( , ) is used to disambiguate the meaning of sentences, by providing boundaries between clauses and phrases. For example, "Man, without his cell phone, is nothing" (emphasizing the importance of the cell phone) and "Man: without, his cell phone is nothing" (emphasizing the importance of man) have greatly different meanings, as do "eats shoots and leaves" (meaning "consumes plant growths") and "eats, shoots and leaves" (meaning "eats firstly, fires a weapon secondly, and leaves the scene thirdly"). The comma is also used to group digits in numerals and dates: “2,000” and “January 7, 1985”. In many other languages, the comma is used as the decimal separator.
Usage of different punctuation marks or symbols:
Dash and hyphen The dash ( ‒, –, —, ― ) and the hyphen or hyphen-minus ( ‐ ) are used: as a line continuation when a word is broken across two lines; to apply a prefix to a word for which there is no canonical compound word; and as a replacement for a comma, when the subsequent clause significantly shifts the primary focus of the preceding text.
Usage of different punctuation marks or symbols:
Ellipsis An ellipsis ( ..., …, . . . ) is used to mark omitted text or when a sentence trails off.
Exclamation mark The exclamation mark ( ! ) is used to mark an exclamation.
Usage of different punctuation marks or symbols:
Full point, full stop, or period The character known as the full point or full stop in British and Commonwealth English and as the period in North American English ( . ) serves multiple purposes. As the full stop, it is used to mark the end of a sentence. It is also used, as the full point, to indicate abbreviation, including of names as initials: Dwight D. Eisenhower's home in Gettysburg, Pa., was not very far from Washington, D.C.
Usage of different punctuation marks or symbols:
The frequency and specifics of the latter use vary widely, over time and regionally. For example, these marks are usually left out of acronyms and initialisms today, and in many British publications they are omitted from contractions such as Dr for Doctor, where the abbreviation begins and ends with the same letters as the full word.
Usage of different punctuation marks or symbols:
Another use of this character, as the decimal point, is found in mathematics and computing (where it is often nicknamed the "dot"), dividing whole numbers from decimal fractions, as in 2,398.45. In many languages, the roles of the comma and decimal point are reversed, with the comma serving as the decimal separator and the dot used as a thousands separator (though a thin space is sometimes used for the latter purpose, especially in technical writing, regardless of what the decimal separator is). In computing, the dot is used as a delimiter more broadly, as in site and file names ("wikipedia.org", "192.168.0.1", "document.txt"), and serves special functions in various programming and scripting languages.
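The two conventions are easy to demonstrate in code. The snippet below formats the same number first with the Anglophone grouping (comma for thousands, dot for decimals) and then with the separators swapped, as used in many other languages; it is a sketch rather than proper locale-aware formatting.

```python
value = 2398.45

anglophone = f"{value:,.2f}"   # comma groups thousands, dot is the decimal point
swapped = anglophone.translate(str.maketrans({",": ".", ".": ","}))

print(anglophone)   # 2,398.45
print(swapped)      # 2.398,45
```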
Usage of different punctuation marks or symbols:
Question marks The question mark ( ? ) is used to mark the end of a sentence which is a question.
Usage of different punctuation marks or symbols:
Quotation marks Quotation marks ( ‘...’, “...”, '...', "..." ) are used in pairs to set off quotation, with two levels for distinguishing nested quotations: single and double. North American publishers of English texts tend to favour double quotation marks for the primary quotation, switching to single for any quote-within-a-quote, while British and Commonwealth publishers may use either single or double for primary quotation, also switching to the alternative for any nested. Further nesting (quote-within-a-quote-within-a-quote) reverts to the primary marks, and so forth.
Usage of different punctuation marks or symbols:
Question marks, exclamation points, semicolons and colons are placed inside the quotation marks when they apply only to the quoted material; if they syntactically apply to the sentence containing or introducing the material, they are placed outside the marks. In British publications (and those throughout the Commonwealth of Nations more broadly), periods and commas are most often treated the same way, but usage varies widely. In American publications, periods and commas are usually placed inside the quotation marks regardless. The American system, also known as typographer's quotation, is also common in Canadian English, and in fiction broadly. A third system, known as logical quotation, is strict about not including terminal punctuation within the quotation marks unless it was also found in the quoted material. Some writers conflate logical quotation and the common British style (which actually permits some variation, such as replacement of an original full stop with a comma or vice versa, to suit the needs of the quoting sentence, rather than moving the non-original punctuation outside the quotation marks). For example, The Chicago Manual of Style, 14th ed.: "The British style is strongly advocated by some American language experts. Whereas there clearly is some risk with question marks and exclamation points, there seems little likelihood that readers will be misled concerning the period or comma." It goes on to recommend "British" or logical quotation for fields such as linguistics, literary criticism, and technical writing, and also notes its use in philosophy texts.
Usage of different punctuation marks or symbols:
Semicolon The semicolon ( ; ) is used to separate two independent but related clauses: My wife would like tea; I would prefer coffee. The semicolon is also used to separate list items when the list items themselves contain commas: "She saw three men: Jamie, who came from New Zealand; John, the milkman's son; and George, a gaunt kind of man."
Slash The slash, stroke, or solidus ( /, ⁄ ) is often used to indicate alternatives, such as "his/her", or two equivalent meanings or spellings, such as "grey/gray". The slash is used in certain set phrases, such as the conjunction "and/or". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glucocorticoid**
Glucocorticoid:
Glucocorticoids (or, less commonly, glucocorticosteroids) are a class of corticosteroids, which are a class of steroid hormones. Glucocorticoids are corticosteroids that bind to the glucocorticoid receptor that is present in almost every vertebrate animal cell. The name "glucocorticoid" is a portmanteau (glucose + cortex + steroid) and is composed from its role in regulation of glucose metabolism, synthesis in the adrenal cortex, and its steroidal structure (see structure below).
Glucocorticoid:
Glucocorticoids are part of the feedback mechanism in the immune system, which reduces certain aspects of immune function, such as inflammation. They are therefore used in medicine to treat diseases caused by an overactive immune system, such as allergies, asthma, autoimmune diseases, and sepsis. Glucocorticoids have many diverse (pleiotropic) effects, including potentially harmful side effects. They also interfere with some of the abnormal mechanisms in cancer cells, so they are used in high doses to treat cancer. This includes inhibitory effects on lymphocyte proliferation, as in the treatment of lymphomas and leukemias, and the mitigation of side effects of anticancer drugs.
Glucocorticoid:
Glucocorticoids affect cells by binding to the glucocorticoid receptor. The activated glucocorticoid receptor-glucocorticoid complex up-regulates the expression of anti-inflammatory proteins in the nucleus (a process known as transactivation) and represses the expression of proinflammatory proteins in the cytosol by preventing the translocation of other transcription factors from the cytosol into the nucleus (transrepression). Glucocorticoids are distinguished from mineralocorticoids and sex steroids by their specific receptors, target cells, and effects. In technical terms, "corticosteroid" refers to both glucocorticoids and mineralocorticoids (as both are mimics of hormones produced by the adrenal cortex), but is often used as a synonym for "glucocorticoid". Glucocorticoids are chiefly produced in the zona fasciculata of the adrenal cortex, whereas mineralocorticoids are synthesized in the zona glomerulosa.
Glucocorticoid:
Cortisol (or hydrocortisone) is the most important human glucocorticoid. It is essential for life, and it regulates or supports a variety of important cardiovascular, metabolic, immunologic, and homeostatic functions. Various synthetic glucocorticoids are available; these are widely utilized in general medical practice and numerous specialties, either as replacement therapy in glucocorticoid deficiency or to suppress the body's immune system.
Effects:
Glucocorticoid effects may be broadly classified into two major categories: immunological and metabolic. In addition, glucocorticoids play important roles in fetal development and body fluid homeostasis.
Immune Glucocorticoids function via interaction with the glucocorticoid receptor (see details below): Upregulate the expression of anti-inflammatory proteins.
Downregulate the expression of proinflammatory proteins. Glucocorticoids have also been shown to play a role in the development and homeostasis of T lymphocytes. This has been shown in transgenic mice with either increased or decreased sensitivity of the T cell lineage to glucocorticoids.
Metabolic The name "glucocorticoid" derives from early observations that these hormones were involved in glucose metabolism. In the fasted state, cortisol stimulates several processes that collectively serve to increase and maintain normal concentrations of glucose in the blood.
Metabolic effects: Stimulation of gluconeogenesis, in particular, in the liver: This pathway results in the synthesis of glucose from non-hexose substrates, such as amino acids and glycerol from triglyceride breakdown, and is particularly important in carnivores and certain herbivores. Enhancing the expression of enzymes involved in gluconeogenesis is probably the best-known metabolic function of glucocorticoids.
Mobilization of amino acids from extrahepatic tissues: These serve as substrates for gluconeogenesis.
Inhibition of glucose uptake in muscle and adipose tissue: a mechanism to conserve glucose.
Stimulation of fat breakdown in adipose tissue: the fatty acids released by lipolysis are used for production of energy in tissues like muscle, and the released glycerol provides another substrate for gluconeogenesis.
Increase in sodium retention and potassium excretion, which leads to hypernatremia and hypokalemia.
Increase in hemoglobin concentration, likely due to hindrance of the ingestion of red blood cells by macrophages or other phagocytes.
Effects:
Increased urinary uric acid.
Increased urinary calcium and hypocalcemia.
Alkalosis.
Leukocytosis.
Excessive glucocorticoid levels resulting from administration as a drug or from hyperadrenocorticism have effects on many systems. Some examples include inhibition of bone formation, suppression of calcium absorption (both of which can lead to osteoporosis), delayed wound healing, muscle weakness, and increased risk of infection. These observations suggest a multitude of less-dramatic physiologic roles for glucocorticoids.
Effects:
Developmental Glucocorticoids have multiple effects on fetal development. An important example is their role in promoting maturation of the lung and production of the surfactant necessary for extrauterine lung function. Mice with homozygous disruptions in the corticotropin-releasing hormone gene (see below) die at birth due to pulmonary immaturity. In addition, glucocorticoids are necessary for normal brain development, by initiating terminal maturation, remodeling axons and dendrites, and affecting cell survival and may also play a role in hippocampal development. Glucocorticoids stimulate the maturation of the Na+/K+/ATPase, nutrient transporters, and digestion enzymes, promoting the development of a functioning gastro-intestinal system. Glucocorticoids also support the development of the neonate's renal system by increasing glomerular filtration.
Effects:
Arousal and cognition Glucocorticoids act on the hippocampus, amygdala, and frontal lobes. Along with adrenaline, these enhance the formation of flashbulb memories of events associated with strong emotions, both positive and negative. This has been confirmed in studies, whereby blockade of either glucocorticoids or noradrenaline activity impaired the recall of emotionally relevant information. Additional sources have shown subjects whose fear learning was accompanied by high cortisol levels had better consolidation of this memory (this effect was more important in men). The effect that glucocorticoids have on memory may be due to damage specifically to the CA1 area of the hippocampal formation.
Effects:
In multiple animal studies, prolonged stress (causing prolonged increases in glucocorticoid levels) has been shown to destroy neurons in the hippocampus area of the brain, which has been connected to lower memory performance. Glucocorticoids have also been shown to have a significant impact on vigilance (attention deficit disorder) and cognition (memory). This appears to follow the Yerkes-Dodson curve, as studies have shown that circulating levels of glucocorticoids plotted against memory performance follow an upside-down U pattern, much like the Yerkes-Dodson curve. For example, long-term potentiation (LTP; the process of forming long-term memories) is optimal when glucocorticoid levels are mildly elevated, whereas significant decreases of LTP are observed after adrenalectomy (low-glucocorticoid state) or after exogenous glucocorticoid administration (high-glucocorticoid state). Elevated levels of glucocorticoids enhance memory for emotionally arousing events, but lead more often than not to poor memory for material unrelated to the source of stress/emotional arousal. In contrast to the dose-dependent enhancing effects of glucocorticoids on memory consolidation, these stress hormones have been shown to inhibit the retrieval of already stored information. Long-term exposure to glucocorticoid medications, such as asthma and anti-inflammatory medication, has been shown to create deficits in memory and attention both during and, to a lesser extent, after treatment, a condition known as "steroid dementia".
Effects:
Body fluid homeostasis Glucocorticoids can act centrally, as well as peripherally, to assist in the normalization of extracellular fluid volume by regulating the body's response to atrial natriuretic peptide (ANP). Centrally, glucocorticoids can inhibit dehydration-induced water intake; peripherally, glucocorticoids can induce a potent diuresis.
Mechanism of action:
Transactivation Glucocorticoids bind to the cytosolic glucocorticoid receptor, a type of nuclear receptor that is activated by ligand binding. After a hormone binds to the corresponding receptor, the newly formed complex translocates into the cell nucleus, where it binds to glucocorticoid response elements in the promoter region of the target genes, resulting in the regulation of gene expression. This process is commonly referred to as transcriptional activation, or transactivation. The proteins encoded by these up-regulated genes have a wide range of effects, including, for example:
Anti-inflammatory – lipocortin I, p11/calpactin binding protein, secretory leukocyte protease inhibitor 1 (SLPI), and mitogen-activated protein kinase phosphatase (MAPK phosphatase).
Increased gluconeogenesis – glucose 6-phosphatase and tyrosine aminotransferase.
Transrepression The opposite mechanism is called transcriptional repression, or transrepression. The classical understanding of this mechanism is that the activated glucocorticoid receptor binds to DNA at the same site where another transcription factor would bind, which prevents the transcription of genes that are transcribed via the activity of that factor. While this does occur, the results are not consistent for all cell types and conditions; there is no generally accepted, general mechanism for transrepression. New mechanisms are being discovered in which transcription is repressed but the activated glucocorticoid receptor does not interact with DNA, instead interacting directly with another transcription factor, thus interfering with it, or with other proteins that interfere with the function of other transcription factors. The latter mechanism appears to be the most likely way that the activated glucocorticoid receptor interferes with NF-κB, namely by recruiting histone deacetylase, which deacetylates histones in the promoter region, leading to closing of the chromatin structure where NF-κB needs to bind.
Mechanism of action:
Nongenomic effects The activated glucocorticoid receptor has effects that have been experimentally shown to be independent of any effects on transcription and that can only be due to direct binding of the activated glucocorticoid receptor to other proteins or to mRNA. For example, Src kinase, which binds to the inactive glucocorticoid receptor, is released when a glucocorticoid binds to the glucocorticoid receptor, and phosphorylates a protein that in turn displaces an adaptor protein from a receptor important in inflammation, the epidermal growth factor receptor, reducing its activity, which in turn results in reduced production of arachidonic acid - a key proinflammatory molecule. This is one mechanism by which glucocorticoids have an anti-inflammatory effect.
Pharmacology:
A variety of synthetic glucocorticoids, some far more potent than cortisol, have been created for therapeutic use. They differ in both pharmacokinetics (absorption factor, half-life, volume of distribution, clearance) and pharmacodynamics (for example the capacity of mineralocorticoid activity: retention of sodium (Na+) and water; renal physiology). Because they permeate the intestines easily, they are administered primarily per os (by mouth), but also by other routes, such as topically on skin. More than 90% of them bind to various plasma proteins, though with different binding specificities. Endogenous glucocorticoids and some synthetic corticoids have high affinity for the protein transcortin (also called corticosteroid-binding globulin), whereas all of them bind albumin. In the liver, they are quickly metabolized by conjugation with sulfate or glucuronic acid, and are excreted in the urine.
Pharmacology:
Glucocorticoid potency, duration of effect, and the overlapping mineralocorticoid potency vary. Cortisol is the standard of comparison for glucocorticoid potency. Hydrocortisone is the name used for pharmaceutical preparations of cortisol.
Pharmacology:
The data below refer to oral administration. Oral potency may be less than parenteral potency because significant amounts (up to 50% in some cases) may not reach the circulation. Fludrocortisone acetate and deoxycorticosterone acetate are, by definition, mineralocorticoids rather than glucocorticoids, but they do have minor glucocorticoid potency and are included in this table to provide perspective on mineralocorticoid potency.
Therapeutic use:
Glucocorticoids may be used in low doses in adrenal insufficiency. In much higher doses, oral or inhaled glucocorticoids are used to suppress various allergic, inflammatory, and autoimmune disorders. Inhaled glucocorticoids are the second-line treatment for asthma. They are also administered as post-transplant immunosuppressants to prevent acute transplant rejection and graft-versus-host disease. Nevertheless, they do not prevent infection and also inhibit later reparative processes. Newly emerging evidence has shown that glucocorticoids could be used in the treatment of heart failure to increase renal responsiveness to diuretics and natriuretic peptides. Glucocorticoids have historically been used for pain relief in inflammatory conditions. However, corticosteroids show limited efficacy for pain relief, and carry a risk of adverse events, when used for tendinopathies.
Therapeutic use:
Replacement Any glucocorticoid can be given in a dose that provides approximately the same glucocorticoid effects as normal cortisol production; this is referred to as physiologic, replacement, or maintenance dosing. This is approximately 6–12 mg/m2/day of hydrocortisone (m2 refers to body surface area (BSA), and is a measure of body size; an average man's BSA is 1.9 m2).
Therapeutic immunosuppression Glucocorticoids cause immunosuppression, and the therapeutic component of this effect is mainly the decreases in the function and numbers of lymphocytes, including both B cells and T cells.
Therapeutic use:
The major mechanism for this immunosuppression is inhibition of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB). NF-κB is a critical transcription factor involved in the synthesis of many mediators (i.e., cytokines) and proteins (i.e., adhesion proteins) that promote the immune response. Inhibition of this transcription factor therefore blunts the capacity of the immune system to mount a response. Glucocorticoids suppress cell-mediated immunity by inhibiting genes that code for the cytokines IL-1, IL-2, IL-3, IL-4, IL-5, IL-6, IL-8 and IFN-γ, the most important of which is IL-2. Reduced cytokine production in turn reduces T cell proliferation. Glucocorticoids, however, not only reduce T cell proliferation, but also lead to another well known effect - glucocorticoid-induced apoptosis. The effect is more prominent in immature T cells still in the thymus, but peripheral T cells are also affected. The exact mechanism regulating this glucocorticoid sensitivity lies in the Bcl-2 gene. Glucocorticoids also suppress humoral immunity, thereby causing a humoral immune deficiency. Glucocorticoids cause B cells to express smaller amounts of IL-2 and of IL-2 receptors. This diminishes both B cell clone expansion and antibody synthesis. The diminished amounts of IL-2 also cause fewer T lymphocyte cells to be activated.
Therapeutic use:
The effect of glucocorticoids on Fc receptor expression in immune cells is complicated. Dexamethasone decreases IFN-gamma-stimulated Fc gamma RI expression in neutrophils while conversely causing an increase in monocytes. Glucocorticoids may also decrease the expression of Fc receptors in macrophages, but the evidence supporting this regulation in earlier studies has been questioned. The effect on Fc receptor expression in macrophages is important since Fc receptors are necessary for the phagocytosis of opsonised cells: they bind antibodies attached to cells targeted for destruction by macrophages.
Therapeutic use:
Anti-inflammatory Glucocorticoids are potent anti-inflammatories, regardless of the inflammation's cause; their primary anti-inflammatory mechanism is lipocortin-1 (annexin-1) synthesis. Lipocortin-1 both suppresses phospholipase A2, thereby blocking eicosanoid production, and inhibits various leukocyte inflammatory events (epithelial adhesion, emigration, chemotaxis, phagocytosis, respiratory burst, etc.). In other words, glucocorticoids not only suppress immune response, but also inhibit the two main products of inflammation, prostaglandins and leukotrienes. They inhibit prostaglandin synthesis at the level of phospholipase A2 as well as at the level of cyclooxygenase/PGE isomerase (COX-1 and COX-2), the latter effect being much like that of NSAIDs, thus potentiating the anti-inflammatory effect.
Therapeutic use:
In addition, glucocorticoids also suppress cyclooxygenase expression. Glucocorticoids marketed as anti-inflammatories are often topical formulations, such as nasal sprays for rhinitis or inhalers for asthma. These preparations have the advantage of only affecting the targeted area, thereby reducing side effects or potential interactions. In this case, the main compounds used are beclometasone, budesonide, fluticasone, mometasone and ciclesonide. In rhinitis, sprays are used. For asthma, glucocorticoids are administered as inhalants with a metered-dose or dry powder inhaler. In rare cases, symptoms of radiation-induced thyroiditis have been treated with oral glucocorticoids.
Therapeutic use:
Hyperaldosteronism Glucocorticoids can be used in the management of familial hyperaldosteronism type 1. They are not effective, however, for use in the type 2 condition.
Heart failure Glucocorticoids could be used in the treatment of decompensated heart failure to potentiate renal responsiveness to diuretics, especially in heart failure patients with diuretic resistance refractory to large doses of loop diuretics.
Resistance:
Resistance to the therapeutic uses of glucocorticoids can present difficulty; for instance, 25% of cases of severe asthma may be unresponsive to steroids. This may be the result of genetic predisposition, ongoing exposure to the cause of the inflammation (such as allergens), immunological phenomena that bypass glucocorticoids, pharmacokinetic disturbances (incomplete absorption or accelerated excretion or metabolism) and viral and/or bacterial respiratory infections.
Side effects:
Glucocorticoid drugs currently in use act nonselectively, so in the long run they may impair many healthy anabolic processes. To prevent this, much recent research has focused on the development of selectively acting glucocorticoid drugs. Side effects include:
Immunodeficiency (see section below)
Hyperglycemia due to increased gluconeogenesis, insulin resistance, and impaired glucose tolerance ("steroid diabetes"); caution in those with diabetes mellitus
Increased skin fragility, easy bruising
Negative calcium balance due to reduced intestinal calcium absorption
Steroid-induced osteoporosis: reduced bone density (osteoporosis, osteonecrosis, higher fracture risk, slower fracture repair)
Weight gain due to increased visceral and truncal fat deposition (central obesity) and appetite stimulation; see corticosteroid-induced lipodystrophy
Hypercortisolemia with prolonged or excessive use (also known as exogenous Cushing's syndrome)
Impaired memory and attention deficits (see steroid dementia syndrome)
Side effects:
Adrenal insufficiency (if used for a long time and stopped suddenly without a taper)
Muscle and tendon breakdown (proteolysis), weakness, reduced muscle mass and repair
Expansion of malar fat pads and dilation of small blood vessels in skin
Lipomatosis within the epidural space
Excitatory effect on the central nervous system (euphoria, psychosis)
Anovulation, irregularity of menstrual periods
Growth failure, delayed puberty
Increased plasma amino acids, increased urea formation, negative nitrogen balance
Glaucoma due to increased ocular pressure
Cataracts
Topical steroid withdrawal
In high doses, hydrocortisone (cortisol) and those glucocorticoids with appreciable mineralocorticoid potency can exert a mineralocorticoid effect as well, although in physiologic doses this is prevented by rapid degradation of cortisol by 11β-hydroxysteroid dehydrogenase isoenzyme 2 (11β-HSD2) in mineralocorticoid target tissues. Mineralocorticoid effects can include salt and water retention, extracellular fluid volume expansion, hypertension, potassium depletion, and metabolic alkalosis.
Side effects:
Immunodeficiency Glucocorticoids cause immunosuppression, decreasing the function and/or numbers of neutrophils, lymphocytes (including both B cells and T cells), monocytes, macrophages, and the anatomical barrier function of the skin. This suppression, if large enough, can cause manifestations of immunodeficiency, including T cell deficiency, humoral immune deficiency and neutropenia.
Side effects:
Withdrawal In addition to the effects listed above, use of high-dose glucocorticoids for only a few days begins to produce suppression of the patient's adrenal glands: the exogenous glucocorticoid suppresses hypothalamic corticotropin-releasing hormone (CRH), leading to suppressed production of adrenocorticotropic hormone (ACTH) by the anterior pituitary. With prolonged suppression, the adrenal glands atrophy (physically shrink), and can take months to recover full function after discontinuation of the exogenous glucocorticoid.
Side effects:
During this recovery time, the patient is vulnerable to adrenal insufficiency during times of stress, such as illness. While suppressive dose and time for adrenal recovery vary widely, clinical guidelines have been devised to estimate potential adrenal suppression and recovery, to reduce risk to the patient. The following is one example: If patients have been receiving daily high doses for five days or less, they can be abruptly stopped (or reduced to physiologic replacement if patients are adrenal-deficient). Full adrenal recovery can be assumed to occur by a week afterward.
Side effects:
If high doses were used for six to 10 days, reduce to replacement dose immediately and taper over four more days. Adrenal recovery can be assumed to occur within two to four weeks of completion of steroids.
If high doses were used for 11–30 days, cut immediately to twice replacement, and then by 25% every four days. Stop entirely when dose is less than half of replacement. Full adrenal recovery should occur within one to three months of completion of withdrawal.
Side effects:
If high doses were used more than 30 days, cut dose immediately to twice replacement, and reduce by 25% each week until replacement is reached. Then change to oral hydrocortisone or cortisone as a single morning dose, and gradually decrease by 2.5 mg each week. When the morning dose is less than replacement, the return of normal basal adrenal function may be documented by checking 0800 cortisol levels prior to the morning dose; stop drugs when 0800 cortisol is 10 μg/dl. Predicting the time to full adrenal recovery after prolonged suppressive exogenous steroids is difficult; some people may take nearly a year.
Side effects:
Flare-up of the underlying condition for which steroids are given may require a more gradual taper than outlined above. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Simmondsia chinensis (jojoba) seed powder**
Simmondsia chinensis (jojoba) seed powder:
Simmondsia chinensis (jojoba) seed powder is a powder made from the ground seeds of the jojoba plant, Simmondsia chinensis. Simmondsia chinensis (jojoba) seed powder is commonly used in cosmetic formulations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Maritime mobile service**
Maritime mobile service:
A maritime mobile service (also MMS or maritime mobile radiocommunication service) is a mobile service between coast stations and ship stations, or between ship stations, or between associated on-board communication stations. The service may also be used by survival craft stations and emergency position-indicating radiobeacon stations.
Classification:
This radiocommunication service is classified in accordance with the ITU Radio Regulations (article 1) as follows:
Maritime mobile service
Maritime mobile-satellite service (article 1.29)
Port operations service (article 1.30)
Ship movement service (article 1.31)
Frequency allocation:
The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, or shared.
Frequency allocation:
primary allocation: is indicated by writing in capital letters (see example below)
secondary allocation: is indicated by small letters
exclusive or shared utilization: is within the responsibility of administrations
However, military usage, in bands where there is civil usage, will be in accordance with the ITU Radio Regulations. In NATO countries military utilizations will be in accordance with the NATO Joint Civil/Military Frequency Agreement (NJFA).
Frequency allocation:
Frequency ranges (kHz): 415–495, 505–526.5, 1606.5–1625, 1635–1800, 2045–2160, 2170–2173.5, 2190.5–2194, 2625–2650, 4000–4438, 6200–6525, 8100–8815, 12230–13200, 16360–17410, 18780–18900, 19680–19800, 22000–22855, 25070–25210, 26100–26175 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Simple Function Point method**
The Simple Function Point method:
The Simple Function Point (SFP) method is a lightweight Functional Measurement Method.
The Simple Function Point method:
The Simple Function Point method was designed by Roberto Meli in 2010 to be compliant with the ISO14143-1 standard and compatible with the International Function Point Users Group (IFPUG) Function Point Analysis (FPA) method. The original method (SiFP) was presented for the first time at a public conference in Rome (SMEF2011). The method was subsequently described in a manual produced by the Simple Function Point Association: the Simple Function Point Functional Size Measurement Method Reference Manual, available under the Creative Commons Attribution-NoDerivatives 4.0 International Public License.
Adoption by IFPUG:
In 2019, the Simple Function Points Method was acquired by the IFPUG, to provide its user community with a simplified Function Point counting method, to make functional size measurement easier yet reliable in the early stages of software projects. The short name became SFP. The SPM (Simple Function Point Practices Manual) was published by IFPUG in late 2021.
Basic concept:
When the SFP method was proposed, the most widely used software functional size measurement method was IFPUG FPA. However, IFPUG FPA had (and still has) a few shortcomings: It is not easy to apply. It requires certified personnel, and the productivity of measurement is relatively low (between 400 and 600 Function Points per day according to Capers Jones, and between 200 and 300 Function Points per day according to experts from Total Metrics).
Basic concept:
The measurement is partly subjective, since some of its measurement rules have to be suitably interpreted by the person who performs the measurement.
Basic concept:
The diffusion of the method in the software development community is quite limited. To overcome at least some of these problems, the SFP method was defined to provide the following characteristics:
Easy to apply.
Less subject to interpretation, being based on quite straightforward definitions.
Easy to learn: specifically, people familiar with IFPUG FPA could learn SFP very quickly with very little effort.
Compatible with IFPUG FPA; specifically Size[UFP] = Size[SiFP], that is, a measure of size expressed in UFP should be equal to the measure expressed in SiFP. (In this article we use "UFP" for unadjusted Function Point to designate the unit of measure defined by IFPUG FPA, and SiFP for the unit of measure defined by SFP.)
The sought characteristics were achieved as follows. IFPUG FPA requires that:
1) logical data files and transactions are identified;
2) logical data files are classified into Internal Logical Files (ILF) and External Interface Files (EIF);
3) every transaction is classified as External Input (EI), External Output (EO), or External Query (EQ);
4) every ILF and EIF is weighted, based on its Record Element Types (RET) and Data Element Types (DET);
5) every EI, EO and EQ is weighted, based on its File Types Referenced (FTR) and the DET exchanged through the borders of the application being measured.
Of these activities, SFP requires only the first two, i.e., the identification of logical data files and transactions. Activities 4) and 5) are the most time consuming, since they require that every data file and transaction is examined in detail: skipping these phases makes the SFP method both quicker and easier to apply than IFPUG FPA. In addition, most of the subjective interpretation is due to activities 4) and 5), and partly also to activity 3): skipping these activities makes the SFP method also less prone to subjective interpretation.
Basic concept:
The concepts used in the definition of SFP are a small subset of those used in the definition of IFPUG FPA, therefore learning SFP is easier than learning IFPUG FPA, and it is immediate for those who already know IFPUG FPA. In practice, only the concepts of logical data file and transaction have to be known.
Finally, the weights assigned to data files and transactions make the size in SFP very close to the size expressed in Function Points, on average.
Definition:
The logical data files are named Logical Files (LF) in the SFP method. Similarly, transactions are named Elementary Process (EP). Unlike in IFPUG FPA, there is no classification or weighting of the Base Functional Components (BFC as defined in ISO14143-1 standard).
The size of an EP is 4.6 SFP, while the size of an LF is 7.0 SFP. Therefore, the size expressed in SFP is based on the number of data files (#LF) and the number of transactions (#EP) belonging to the software application being measured: Size[SFP] = 4.6 × #EP + 7.0 × #LF.
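Because SFP sizing reduces to a weighted count of the two BFC types, it can be computed mechanically. The following is a minimal Python sketch of the formula above; the function and variable names are illustrative and are not part of the SFP manual.

```python
# Minimal sketch of the SFP sizing formula: 4.6 SFP per Elementary Process (EP)
# and 7.0 SFP per Logical File (LF). Names are illustrative, not from the SFP manual.

EP_WEIGHT = 4.6  # size contribution of one Elementary Process (transaction)
LF_WEIGHT = 7.0  # size contribution of one Logical File (logical data file)

def sfp_size(num_elementary_processes: int, num_logical_files: int) -> float:
    """Return the functional size in SFP given counts of EPs and LFs."""
    return EP_WEIGHT * num_elementary_processes + LF_WEIGHT * num_logical_files

# Example: an application with 30 transactions and 10 logical data files
print(sfp_size(30, 10))  # 30*4.6 + 10*7.0 = 208.0
```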
Empirical evaluation of the SFP method:
Empirical studies have been carried out aiming at evaluating the convertibility of SFP and UFP measures, and at comparing the SFP and UFP measures in supporting the estimation of software development effort.
Convertibility between SFP and FPA measures In the original proposal of the SiFP method, a dataset from the ISBSG, including data from 768 projects, was used to evaluate the convertibility between UFP and SiFP measures. This study showed an average conversion factor of 1.0005 between the two measures. Another study also used an ISBSG dataset to evaluate the convertibility between UFP and SiFP measures; the dataset included data from 766 software applications. Via ordinary least squares regression, a conversion factor of 0.998 was found. Based on these empirical studies, it seems that Size[SiFP] ≈ Size[UFP] (note that this approximate equivalence holds on average: in both studies an average relative error of around 12% was observed).
Empirical evaluation of the SFP method:
However, a third study found a conversion factor of 0.815. This study used data from only 25 Web applications, so it is possible that the conversion rate is affected by the specific application type or by the relatively small size of the dataset.
In 2017, a study evaluated the convertibility between UFP and SiFP measures using seven different datasets. Every dataset was characterized by a specific conversion rate. Specifically, it was found that Size[SiFP] = k · Size[UFP], with k ranging between 0.957 and 1.221. Noticeably, for one dataset no linear model could be found; instead, a statistically significant non-linear model (with parameter 1.033) was found.
In conclusion, available evidence shows that one SiFP is approximately equivalent to one UFP, but this equivalence depends on the data being considered, besides being true only on average.
Considering that the IFPUG SFP basic elements (EP, LF) are totally equivalent to the original SiFP elements (UGEP, UGDG), the previous results hold for the IFPUG SFP method as well.
Empirical evaluation of the SFP method:
Using SFP for software development effort estimation IFPUG FPA is mainly used for estimating software development effort. Therefore, any alternative method that aims at measuring the functional size of software should support effort estimation with the same level of accuracy as IFPUG FPA. In other words, it is necessary to verify that effort estimates based on SFP are at least as good as the estimates based on UFP.
Empirical evaluation of the SFP method:
To perform this verification, an ISBSG dataset was analyzed, and models of effort vs. size were derived, using ordinary least squares regression, after log-log transformations. The effort estimation errors were then compared. It turned out that the two models yielded extremely similar estimation accuracy.
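As a concrete illustration of the modelling approach just described (ordinary least squares regression after a log-log transformation), the sketch below fits a power-law effort model to made-up data points; the numbers are purely illustrative and are not taken from the ISBSG dataset.

```python
# Sketch of fitting effort = a * size^b by ordinary least squares on log-transformed
# data, as in the studies described above. The data points are invented for illustration.
import numpy as np

size_sfp = np.array([100, 250, 400, 800, 1500], dtype=float)       # hypothetical sizes
effort_ph = np.array([900, 2600, 4100, 9500, 19000], dtype=float)  # hypothetical person-hours

# The log-log transformation turns the power law into a line: log(E) = log(a) + b*log(S)
b, log_a = np.polyfit(np.log(size_sfp), np.log(effort_ph), 1)
a = np.exp(log_a)
print(f"effort ~= {a:.2f} * size^{b:.2f}")

# Estimate the effort of a new project of 600 SFP with the fitted model
print(a * 600 ** b)
```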
A following study analyzed a dataset containing data from 25 Web applications. Ordinary least squares regression was used to derive UFP-based and SiFP-based effort models. Also in this case, no statistically significant estimation differences could be observed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Brachydactyly-preaxial hallux varus syndrome**
Brachydactyly-preaxial hallux varus syndrome:
Brachydactyly-preaxial hallux varus syndrome, also known as Christian brachydactyly, is a rare congenital and genetic limb malformation syndrome which is characterized by hallux varus, brachydactyly type D and Morton's toe, alongside the adduction of the affected digits. Intellectual disabilities have also been reported. Ten cases have been described in the medical literature. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Something Else (book)**
Something Else (book):
Something Else is a 1994 children's picture book written by Kathryn Cave and illustrated by Chris Riddell.
Cave and Riddell were awarded the very first international UNESCO prize for Children's and Young People's Literature in the Service of Tolerance for Something Else. The book was later made into an animated TV series by TV Loonland, beginning in 2001.
Plot:
Something Else (the name of the protagonist and Something's best friend) is excluded from everything because he looks different. He does not play the same games, eat the same food or draw the same pictures.
Plot:
Then one day Something turns up and wants to be friends. However, Something Else does not want to be friends with this creature as he believes that they are not the same and he refuses to eat sandwiches with 'Urgy stuff' in them. He sends Something away and then suddenly realizes that he acts like all the other people who always sent him away.
Plot:
Eventually Something Else and Something become best friends.
Translations:
German: Irgendwie Anders
Greek: Το Κάτι Άλλο
Italian: "Qualcos'altro"
Hebrew: "משהו אחר"
Slovenian: "Drugačen"
Finnish: "Karvaiset Kaverit"
Spanish: "Lets Anders"
Dutch: "Andertje"
Swedish: "Hårigt grabbar"
Persian
Theatrical adaptation:
The book was adapted for the stage by Tall Stories Theatre Company, touring between 2002 and 2010. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Interlobular veins**
Interlobular veins:
The stellate veins join to form the interlobular veins, which pass inward between the rays, receive branches from the plexuses around the convoluted tubules, and, having arrived at the bases of the renal pyramids, join with the venae rectae. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reverberation mapping**
Reverberation mapping:
Reverberation mapping (or Echo mapping) is an astrophysical technique for measuring the structure of the broad-line region (BLR) around a supermassive black hole at the center of an active galaxy, and thus estimating the hole's mass. It is considered a "primary" mass estimation technique, i.e., the mass is measured directly from the motion that its gravitational force induces in the nearby gas. Newton's law of gravity defines a direct relation between the mass of a central object and the speed of a smaller object in orbit around the central mass. Thus, for matter orbiting a black hole, the black-hole mass M∙ is related by the formula M∙ = f·RBLR·(ΔV)²/G to the RMS velocity ΔV of gas moving near the black hole in the broad emission-line region, measured from the Doppler broadening of the gaseous emission lines. In this formula, RBLR is the radius of the broad-line region; G is the constant of gravitation; and f is a poorly known "form factor" that depends on the shape of the BLR.
Reverberation mapping:
While ΔV can be measured directly using spectroscopy, the necessary determination of RBLR is much less straightforward. This is where reverberation mapping comes into play. It utilizes the fact that the emission-line fluxes vary strongly in response to changes in the continuum, i.e., the light from the accretion disk near the black hole. Put simply, if the brightness of the accretion disk varies, the emission lines, which are excited in response to the accretion disk's light, will "reverberate", that is, vary in response. But it will take some time for light from the accretion disk to reach the broad-line region. Thus, the emission-line response is delayed with respect to changes in the continuum. Assuming that this delay is solely due to light travel times, the distance traveled by the light, corresponding to the radius of the broad emission-line region, can be measured.
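To make the relation concrete, the sketch below combines the virial formula above with RBLR = c·τ, where τ is the measured emission-line lag. The input values and the adopted form factor f are illustrative assumptions, not measurements.

```python
# Rough sketch of the virial mass estimate used in reverberation mapping:
# M = f * R_BLR * (dV)^2 / G, with R_BLR = c * tau taken from the measured lag.
# The example inputs and the value of f are assumptions for illustration only.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
DAY = 86400.0        # seconds per day

def virial_mass(lag_days: float, delta_v_kms: float, f: float = 5.5) -> float:
    """Black-hole mass (in solar masses) from the emission-line lag and RMS line width."""
    r_blr = C * lag_days * DAY           # radius of the broad-line region, m
    delta_v = delta_v_kms * 1e3          # RMS velocity, m/s
    return f * r_blr * delta_v**2 / G / M_SUN

# Example: a 20-day lag and a 3000 km/s RMS line width give roughly 2e8 solar masses
print(f"{virial_mass(20, 3000):.2e} M_sun")
```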
Reverberation mapping:
Only a small handful (less than 40) of active galactic nuclei have been accurately "mapped" in this way. An alternative approach is to use an empirical correlation between RBLR and the continuum luminosity. Another uncertainty is the value of f. In principle, the response of the BLR to variations in the continuum could be used to map out the three-dimensional structure of the BLR. In practice, the amount and quality of data required to carry out such a deconvolution is prohibitive. Until about 2004, f was estimated ab initio based on simple models for the structure of the BLR. More recently, the value of f has been determined so as to bring the M–sigma relation for active galaxies into the best possible agreement with the M–sigma relation for quiescent galaxies. When f is determined in this way, reverberation mapping becomes a "secondary", rather than "primary", mass estimation technique. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Legendre's formula**
Legendre's formula:
In mathematics, Legendre's formula gives an expression for the exponent of the largest power of a prime p that divides the factorial n!. It is named after Adrien-Marie Legendre. It is also sometimes known as de Polignac's formula, after Alphonse de Polignac.
Statement:
For any prime number p and any positive integer n, let ν_p(n) be the exponent of the largest power of p that divides n (that is, the p-adic valuation of n). Then
ν_p(n!) = ∑_{i=1}^{∞} ⌊n/p^i⌋,
where ⌊x⌋ is the floor function. While the sum on the right side is an infinite sum, for any particular values of n and p it has only finitely many nonzero terms: for every i large enough that p^i > n, one has ⌊n/p^i⌋ = 0. This reduces the infinite sum above to
ν_p(n!) = ∑_{i=1}^{L} ⌊n/p^i⌋, where L = ⌊log_p n⌋.
Example For n = 6, one has 6! = 720 = 2^4 · 3^2 · 5^1. The exponents ν_2(6!) = 4, ν_3(6!) = 2 and ν_5(6!) = 1 can be computed by Legendre's formula as follows:
ν_2(6!) = ⌊6/2⌋ + ⌊6/4⌋ = 3 + 1 = 4,
ν_3(6!) = ⌊6/3⌋ = 2,
ν_5(6!) = ⌊6/5⌋ = 1.
Statement:
Proof Since n! is the product of the integers 1 through n, we obtain at least one factor of p in n! for each multiple of p in {1, 2, …, n}, of which there are ⌊n/p⌋. Each multiple of p^2 contributes an additional factor of p, each multiple of p^3 contributes yet another factor of p, etc. Adding up the number of these factors gives the infinite sum for ν_p(n!).
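The sum can be evaluated mechanically; the following minimal Python sketch reproduces the n = 6 example above (the function name is illustrative).

```python
# Minimal sketch of Legendre's formula: nu_p(n!) = sum over i of floor(n / p^i).
def legendre_nu(n: int, p: int) -> int:
    """Exponent of the prime p in the factorization of n!."""
    total = 0
    power = p
    while power <= n:          # terms with p^i > n contribute 0
        total += n // power
        power *= p
    return total

# Reproduces the n = 6 example: 6! = 720 = 2^4 * 3^2 * 5
print(legendre_nu(6, 2), legendre_nu(6, 3), legendre_nu(6, 5))  # 4 2 1
```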
Alternate form:
One may also reformulate Legendre's formula in terms of the base-p expansion of n. Let s_p(n) denote the sum of the digits in the base-p expansion of n; then
ν_p(n!) = (n − s_p(n)) / (p − 1).
For example, writing n = 6 in binary as 110₂, we have that s_2(6) = 1 + 1 + 0 = 2 and so ν_2(6!) = (6 − 2)/(2 − 1) = 4.
Similarly, writing 6 in ternary as 20₃, we have that s_3(6) = 2 + 0 = 2 and so ν_3(6!) = (6 − 2)/(3 − 1) = 2.
Proof Write n = n_ℓ p^ℓ + ⋯ + n_1 p + n_0 in base p. Then ⌊n/p^i⌋ = n_ℓ p^(ℓ−i) + ⋯ + n_(i+1) p + n_i, and therefore
ν_p(n!) = ∑_{i=1}^{ℓ} ⌊n/p^i⌋
= ∑_{i=1}^{ℓ} (n_ℓ p^(ℓ−i) + ⋯ + n_(i+1) p + n_i)
= ∑_{i=1}^{ℓ} ∑_{j=i}^{ℓ} n_j p^(j−i)
= ∑_{j=1}^{ℓ} ∑_{i=1}^{j} n_j p^(j−i)
= ∑_{j=1}^{ℓ} n_j · (p^j − 1)/(p − 1)
= ∑_{j=0}^{ℓ} n_j · (p^j − 1)/(p − 1)
= (1/(p − 1)) ∑_{j=0}^{ℓ} (n_j p^j − n_j)
= (1/(p − 1)) (n − s_p(n)).
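The alternate form can be cross-checked against the direct sum computed earlier; a small Python sketch (helper names are illustrative):

```python
# Sketch of the alternate form nu_p(n!) = (n - s_p(n)) / (p - 1),
# where s_p(n) is the digit sum of n written in base p.
def digit_sum(n: int, p: int) -> int:
    """Sum of the digits of n in base p."""
    s = 0
    while n:
        s += n % p
        n //= p
    return s

def legendre_nu_alt(n: int, p: int) -> int:
    """Legendre's formula via the base-p digit sum."""
    return (n - digit_sum(n, p)) // (p - 1)

# Agrees with the direct computation for n = 6: exponents 4, 2 and 1
print(legendre_nu_alt(6, 2), legendre_nu_alt(6, 3), legendre_nu_alt(6, 5))  # 4 2 1
```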
Applications:
Legendre's formula can be used to prove Kummer's theorem. As one special case, it can be used to prove that if n is a positive integer then 4 divides the central binomial coefficient (2n choose n) if and only if n is not a power of 2.
It follows from Legendre's formula that the p-adic exponential function has radius of convergence p^(−1/(p−1)). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TRIZ**
TRIZ:
TRIZ (Russian: теория решения изобретательских задач, teoriya resheniya izobretatelskikh zadach, literally "theory of inventive problem solving") is an approach that combines an organized and systematic method for problem solving with analysis and forecasting techniques derived from the study of patterns of invention in the global patent literature. The development and improvement of products and technologies in accordance with TRIZ are guided by the objective laws of technical systems evolution, forming the basis for TRIZ problem solving tools and methods. It was developed by Genrich Altshuller, a Soviet inventor and science-fiction author, along with his colleagues, starting in 1946. In English the name is typically rendered as the theory of inventive problem solving, and occasionally goes by the English acronym TIPS. Following Altshuller's insight, the theory developed on a foundation of extensive research covering hundreds of thousands of inventions across many different fields to produce an approach that defines generalizable patterns in inventive solutions and the distinguishing characteristics of the problems these inventions have overcome. The research has produced three primary findings:
Problems and solutions are repeated across industries and sciences.
Patterns of technical evolution are also replicated across industries and sciences.
The innovations used scientific effects outside the field in which they were developed.
TRIZ practitioners apply all these findings to create and improve products, services, and systems.
History:
TRIZ in its classical form, was developed by the Soviet inventor and science fiction writer Genrich Altshuller and his associates. He started developing TRIZ in 1946 while working in the "Inventions Inspection" department of the Caspian Sea flotilla of the Soviet Navy. His job was to help with the initiation of invention proposals, to rectify and document them, and to prepare applications to the patent office. During this time, he realized that a problem requires an inventive solution if there is an unresolved contradiction in the sense that improving one parameter negatively impacts another. He later called these "technical contradictions".
History:
His work on what later resulted in TRIZ was interrupted in 1950 by his arrest and sentencing to 25 years in the Vorkuta Gulag labor camps. The arrest was partially triggered by letters that he and Raphael Shapiro sent to Stalin, ministers, and newspapers about certain decisions made by the Soviet Government, which they believed were erroneous. Altshuller and Shapiro were freed during the Khrushchev Thaw following Stalin's death in 1953 and returned to Baku.
History:
The first paper on TRIZ, titled "On the psychology of inventive creation", was published in 1956 in the "Issues in Psychology" (Voprosi Psichologii) journal. Altshuller also observed clever and creative people at work: he uncovered patterns in their thinking and developed thinking tools and techniques to model this "talented thinking". These tools include Smart Little People and Thinking in Time and Scale (or the Screens of Talented Thought). In 1986, Altshuller switched his attention away from technical TRIZ, and started investigating the development of individual creativity. He also developed a version of TRIZ for children, which was trialed in various schools. In 1989 the TRIZ Association was formed, with Altshuller chosen as president.
History:
Following the end of the Cold War, the waves of emigrants from the former Soviet Union brought TRIZ to other countries. They drew attention to it overseas. In 1995 the Altshuller Institute for TRIZ Studies was established in Boston.
Basic principles:
One of the tools which evolved as an extension of the 40 principles was a contradiction matrix.
Basic terms Ideal final result (IFR) - the ultimate romantic solution of a problem when the desired result is achieved by itself.
Inventive principles and the matrix of contradictions Altshuller screened patents to discover what kind of contradictions were resolved or dissolved by the invention and how this had been achieved. From this, he developed a set of 40 inventive principles and later a matrix of contradictions.
Basic principles:
Use of TRIZ on management problems Although TRIZ was developed from the analysis of technical systems, it has been used widely to understand and solve complex management problems. Examples include finding additional cost savings for the legal department of a local government body: the inventive solution generated was to create additional revenue instead. The results of the TRIZ work are expected to generate £1.7m in profit in the first 5 years.
Use of TRIZ methods in industry:
Case studies on the use of TRIZ are challenging to acquire as many companies believe TRIZ gives them a competitive advantage and are reluctant to publicize their adoption of the method. However, some examples are available: Samsung is the most famous success story and has invested heavily in embedding TRIZ use throughout the company, right up to and including the CEO; "In 2003 TRIZ led to 50 new patents for Samsung and in 2004 one project alone, a DVD pick-up innovation, saved Samsung over $100 million. TRIZ is now an obligatory skill set if you want to advance within Samsung". Rolls-Royce, BAE Systems and GE are all documented users of TRIZ; Mars has documented how applying TRIZ led to a new patent for chocolate packaging. TRIZ has also been used successfully by Leafield Engineering, Smart Stabilizer Systems, and Buro Happold to solve problems and generate new patents. Various promoters of TRIZ reported that car companies Rolls-Royce, Ford, and Daimler-Chrysler, Johnson & Johnson, aeronautics companies Boeing, NASA, technology companies Hewlett-Packard, Motorola, General Electric, Xerox, IBM, LG, Samsung, Intel, Procter & Gamble, Expedia and Kodak have used TRIZ methods in some projects.
European TRIZ Association:
The European TRIZ Association is a nonprofit association based in Germany, founded in 2000. It holds conferences with associated publications.
European TRIZ Association:
Modifications and derivatives
SIT (systematic inventive thinking) & SIT Company - a company developed based on this method
USIT (unified structured inventive thinking)
TOP-TRIZ (a modern version of further developed and integrated TRIZ methods): "TOP-TRIZ includes further development of problem formulation and problem modeling, development of Standard Solutions into Standard Techniques, further development of ARIZ and Technology Forecasting. TOP-TRIZ has integrated its methods into a universal and user-friendly system for innovation."
In 1992, several TRIZ practitioners fleeing the collapsing Soviet Union relocated and formed a company named Ideation International, Inc. Under the Ideation banner, they continued to develop their version of TRIZ and named it I-TRIZ. I-TRIZ consists of four methodologies: Inventive Problem Solving (IPS), Anticipatory Failure Determination (AFD), Intellectual Property (IP), and Directed Evolution (DE), as well as a knowledge base of over 400 "operators" where each operator is an innovative concept gleaned from the study of international patents stemming from Altshuller's original work.
Books on TRIZ:
Altshuller, Genrich (1999). The Innovation Algorithm: TRIZ, systematic innovation, and technical creativity. Worcester, MA: Technical Innovation Center. ISBN 978-0-9640740-4-0.
Altshuller, Genrich (1984). Creativity as an Exact Science. New York, NY: Gordon & Breach. ISBN 978-0-677-21230-2.
Altshuller, Genrich (1994). And Suddenly the Inventor Appeared. translated by Lev Shulyak. Worcester, MA: Technical Innovation Center. ISBN 978-0-9640740-2-6.
Altshuller, Genrich (2005). 40 Principles:Extended Edition. translated by Lev Shulyak with additions by Dana Clarke, Sr. Worcester, MA: Technical Innovation Center. ISBN 978-0-9640740-5-7.
Gadd, Karen (2011). TRIZ for Engineers: Enabling Inventive Problem Solving. UK: John Wiley & Sons. ISBN 978-0-4707418-8-7.
Haines-Gadd, Lilly (2016). TRIZ for Dummies. UK: John Wiley & Sons. ISBN 978-1-1191074-7-7.
Royzen, Zinovy (2009), Designing and Manufacturing Better Products Faster Using TRIZ, TRIZ Consulting, Inc.
Books on TRIZ:
Royzen, Zinovy (2020). Systematic engineering innovation. Seattle, WA. ISBN 978-0-9728543-4-4. OCLC 1297849736.
Karasik, Yevgeny B. (2021). Duality revolution: discovery of new types and mechanisms of duality that are revolutionizing science and technology as well as our ability to solve problems. ISBN 979-8-5044-3426-1. OCLC 1363847265. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**On the fly**
On the fly:
On the fly is a phrase used to describe something that is being changed while the process that the change affects is ongoing. It is used in the automotive, computer, and culinary industries. In cars, on the fly can be used to describe changing the car's configuration while it is still being driven. Processes that can occur while the car is still driving include switching between two-wheel drive and four-wheel drive on some cars and opening and closing the roof on some convertible cars. In computing, on-the-fly CD writers can read from one CD and write the data to another without saving it in a computer's memory. Switching programs or applications on the fly in multi-tasking operating systems means the ability to switch between native and/or emulated programs or applications that are running in parallel while performing their tasks, without pausing, freezing, delaying any of them, or causing other unwanted events. Switching computer parts on the fly means computer parts are replaced while the computer is still running. It can also be used in programming to describe changing a program while it is still running. In restaurants and other places involved in the preparation of food, the term is used to indicate that an order needs to be made right away.
Colloquial usage:
In colloquial use, "on the fly" means something created when needed. The phrase is used to mean: something that was not planned ahead changes that are made during the execution of same activity: ex tempore, impromptu.
Automotive usage:
In the automotive industry, the term refers to the circumstance of performing certain operations while a vehicle is driven by the engine and moving. In reference to four-wheel drive vehicles, this term describes the ability to change from two to four-wheel drive while the car is in gear and moving. In some convertible models, the roof can be folded electrically on the fly, whereas in other cases the car must be stopped.
Automotive usage:
In harvesting machines, newer monitoring systems let the driver track the quality of the grain, while enabling them to adjust the rotor speed on the fly as harvesting progresses.
Computer usage:
In multitasking computing, an operating system can handle several programs, both native applications and emulated software, running independently and in parallel at the same time on the same device, using separate or shared resources and/or data and executing their tasks separately or together, while a user can switch on the fly between them, or between groups of them, to use their results or to supervise them, without any waste of time or performance. In operating systems with a GUI this is very often done by switching from an active window (or an object playing a similar role) of one piece of software to that of another. A computer can compute results on the fly, or retrieve a previously stored result.
Computer usage:
It can mean making a copy of removable media (CD-ROM, DVD, etc.) directly, without first saving the source on an intermediate medium (a hard disk); for example, copying a CD-ROM from a CD-ROM drive to a CD writer drive. The copy process requires each block of data to be retrieved and immediately written to the destination, so that there is room in the working memory to retrieve the next block of data. When used for encrypted data storage, the data stream is automatically encrypted on the fly as it is written and decrypted when read back again, transparently to software. The acronym OTFE is typically used. On-the-fly programming is the technique of modifying a program without stopping it. A similar concept, hot swapping, refers to on-the-fly replacement of computer hardware.
On-the-fly computing:
On-the-fly computing (OTF computing) is about automating and customizing software tailored to the needs of a user. According to a requirement specification, this software is composed of basic components, so-called basic services, and a user-specific configuration of these basic components is made. Accordingly, the requested services are compiled only at the request of the user and are then run in a specially designed data center to make the functions of the (on-the-fly) created service accessible to the user.
Restaurant usage:
In restaurants, cafes, banquet halls, and other places involved in the preparation of food, the term is used to indicate that an order needs to be made right away. This is often because a previously-served dish is inedible, because a waiter has made a mistake or delayed, or because a guest has to leave promptly.
Usage in sports:
In ice hockey, it is both legal and common for teams to make line changes (player substitutions) when the puck is in play. Such line changes are referred to as being done "on the fly". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Propositional calculus**
Propositional calculus:
Propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions (which can be true or false) and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives. Propositions that contain no logical connectives are called atomic propositions.
Propositional calculus:
Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
Explanation:
Logical connectives are found in natural languages. In English for example, some examples are "and" (conjunction), "or" (disjunction), "not" (negation) and "if" (but only when used to denote material conditional).
The following is an example of a very simple inference within the scope of propositional logic: Premise 1: If it's raining then it's cloudy.
Premise 2: It's raining.
Conclusion: It's cloudy.Both premises and the conclusion are propositions. The premises are taken for granted, and with the application of modus ponens (an inference rule), the conclusion follows.
Explanation:
As propositional logic is not concerned with the structure of propositions beyond the point where they can't be decomposed any more by logical connectives, this inference can be restated replacing those atomic statements with statement letters, which are interpreted as variables representing statements:
Premise 1: P → Q
Premise 2: P
Conclusion: Q
The same can be stated succinctly in the following way:
P → Q, P ∴ Q
When P is interpreted as "It's raining" and Q as "it's cloudy" the above symbolic expressions can be seen to correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference of this form, which will be valid on the same basis this inference is.
Explanation:
Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted to represent propositions. A system of axioms and inference rules allows certain formulas to be derived. These derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such formulas is known as a derivation or proof and the last formula of the sequence is the theorem. The derivation may be interpreted as proof of the proposition represented by the theorem.
Explanation:
When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as P , Q and R ) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself.
Explanation:
In classical truth-functional propositional logic, formulas are interpreted as having precisely one of two possible truth values, the truth value of true or the truth value of false. The principle of bivalence and the law of excluded middle are upheld. Truth-functional propositional logic defined as such and systems isomorphic to it are considered to be zeroth-order logic. However, alternative propositional logics are also possible. For more, see Other logical calculi below.
History:
Although propositional logic (which is interchangeable with propositional calculus) had been hinted at by earlier philosophers, it was developed into a formal logic (Stoic logic) by Chrysippus in the 3rd century BC and expanded by his successor Stoics. The logic was focused on propositions. This advancement was different from the traditional syllogistic logic, which was focused on terms. However, most of the original writings were lost and the propositional logic developed by the Stoics was no longer understood later in antiquity. Consequently, the system was essentially reinvented by Peter Abelard in the 12th century. Propositional logic was eventually refined using symbolic logic. The 17th/18th-century mathematician Gottfried Leibniz has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his work was the first of its kind, it was unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan—completely independent of Leibniz. Just as propositional logic can be considered an advancement from the earlier syllogistic logic, Gottlob Frege's predicate logic can also be considered an advancement from the earlier propositional logic. One author describes predicate logic as combining "the distinctive features of syllogistic logic and propositional logic." Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Stanisław Jaśkowski. Truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of uncertain attribution.
History:
Within works by Frege and Bertrand Russell, are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently). Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis. Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables.".
Terminology:
In general terms, a calculus is a formal system that consists of a set of syntactic expressions (well-formed formulas), a distinguished subset of these expressions (axioms), plus a set of formal rules that define a specific binary relation, intended to be interpreted as logical equivalence, on the space of expressions.
When the formal system is intended to be a logical system, the expressions are meant to be interpreted as statements, and the rules, known as inference rules, are typically intended to be truth-preserving. In this setting, the rules, which may include axioms, can then be used to derive ("infer") formulas representing true statements—from given formulas representing true statements.
The set of axioms may be empty, a nonempty finite set, or a countably infinite set (see axiom schema). A formal grammar recursively defines the expressions and well-formed formulas of the language. In addition a semantics may be given which defines truth and valuations (or interpretations).
Terminology:
The language of a propositional calculus consists of a set of primitive symbols, variously referred to as atomic formulas, placeholders, proposition letters, or variables, and a set of operator symbols, variously interpreted as logical operators or logical connectives. A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar.
Terminology:
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition, while propositional variables range over the set of all atomic propositions. Schemata, however, range over all propositions. It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ.
Basic concepts:
The following outlines a standard propositional calculus. Many different formulations exist which are all more or less equivalent, but differ in the details of: their language (i.e., the particular collection of primitive symbols and operator symbols), the set of axioms, or distinguished formulas, and the set of inference rules. Any given proposition may be represented with a letter called a 'propositional constant', analogous to representing a number by a letter in mathematics (e.g., a = 5). All propositions require exactly one of two truth-values: true or false. For example, let P be the proposition that it is raining outside. This will be true (P) if it is raining outside, and false otherwise (¬P).
Basic concepts:
We then define truth-functional operators, beginning with negation. ¬P represents the negation of P, which can be thought of as the denial of P. In the example above, ¬P expresses that it is not raining outside, or by a more standard reading: "It is not the case that it is raining outside." When P is true, ¬P is false; and when P is false, ¬P is true. As a result, ¬ ¬P always has the same truth-value as P.
Basic concepts:
Conjunction is a truth-functional connective which forms a proposition out of two simpler propositions, for example, P and Q. The conjunction of P and Q is written P ∧ Q, and expresses that both are true. We read P ∧ Q as "P and Q". For any two propositions, there are four possible assignments of truth values:
1. P is true and Q is true
2. P is true and Q is false
3. P is false and Q is true
4. P is false and Q is false
The conjunction of P and Q is true in case 1, and is false otherwise. Where P is the proposition that it is raining outside and Q is the proposition that a cold-front is over Kansas, P ∧ Q is true when it is raining outside and there is a cold-front over Kansas. If it is not raining outside, then P ∧ Q is false; and if there is no cold-front over Kansas, then P ∧ Q is also false.
Disjunction resembles conjunction in that it forms a proposition out of two simpler propositions. We write it P ∨ Q, and it is read "P or Q". It expresses that either P or Q is true. Thus, in the cases listed above, the disjunction of P with Q is true in all cases—except case 4. Using the example above, the disjunction expresses that it is either raining outside, or there is a cold front over Kansas. (Note, this use of disjunction is supposed to resemble the use of the English word "or". However, it is most like the English inclusive "or", which can be used to express the truth of at least one of two propositions. It is not like the English exclusive "or", which expresses the truth of exactly one of two propositions. In other words, the exclusive "or" is false when both P and Q are true (case 1), and similarly is false when both P and Q are false (case 4). An example of the exclusive or is: You may have a bagel or a pastry, but not both. Often in natural language, given the appropriate context, the addendum "but not both" is omitted—but implied. In mathematics, however, "or" is always inclusive or; if exclusive or is meant it will be specified, possibly by "xor".)
Material conditional also joins two simpler propositions, and we write P → Q, which is read "if P then Q". The proposition to the left of the arrow is called the antecedent, and the proposition to the right is called the consequent. (There is no such designation for conjunction or disjunction, since they are commutative operations.) It expresses that Q is true whenever P is true. Thus P → Q is true in every case above except case 2, because this is the only case when P is true but Q is not. Using the example, if P then Q expresses that if it is raining outside, then there is a cold-front over Kansas. The material conditional is often confused with physical causation. The material conditional, however, only relates two propositions by their truth-values—which is not the relation of cause and effect. It is contentious in the literature whether the material implication represents logical causation.
Basic concepts:
Biconditional joins two simpler propositions, and we write P ↔ Q, which is read "P if and only if Q". It expresses that P and Q have the same truth-value; thus 'P if and only if Q' is true in cases 1 and 4, and is false otherwise. It is very helpful to look at the truth tables for these different operators, as well as the method of analytic tableaux.
Basic concepts:
Closure under operations Propositional logic is closed under truth-functional connectives. That is to say, for any proposition φ, ¬φ is also a proposition. Likewise, for any propositions φ and ψ, φ ∧ ψ is a proposition, and similarly for disjunction, conditional, and biconditional. This implies that, for instance, φ ∧ ψ is a proposition, and so it can be conjoined with another proposition. In order to represent this, we need to use parentheses to indicate which proposition is conjoined with which. For instance, P ∧ Q ∧ R is not a well-formed formula, because we do not know if we are conjoining P ∧ Q with R or if we are conjoining P with Q ∧ R. Thus we must write either (P ∧ Q) ∧ R to represent the former, or P ∧ (Q ∧ R) to represent the latter. By evaluating the truth conditions, we see that both expressions have the same truth conditions (will be true in the same cases), and moreover that any proposition formed by arbitrary conjunctions will have the same truth conditions, regardless of the location of the parentheses. This means that conjunction is associative; however, one should not assume that parentheses never serve a purpose. For instance, the sentence P ∧ (Q ∨ R) does not have the same truth conditions as (P ∧ Q) ∨ R, so they are different sentences distinguished only by the parentheses. One can verify this by the truth-table method referenced above.
Basic concepts:
Note: For any arbitrary number of propositional constants, we can form a finite number of cases which list their possible truth-values. A simple way to generate this is by truth-tables, in which one writes P, Q, ..., Z, for any list of k propositional constants—that is to say, any list of propositional constants with k entries. Below this list, one writes 2^k rows, and below P one fills in the first half of the rows with true (or T) and the second half with false (or F). Below Q one fills in one-quarter of the rows with T, then one-quarter with F, then one-quarter with T and the last quarter with F. The next column alternates between true and false for each eighth of the rows, then sixteenths, and so on, until the last propositional constant varies between T and F for each row. This will give a complete listing of cases or truth-value assignments possible for those propositional constants.
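A minimal Python sketch of this enumeration (the function name is illustrative) generates the same listing with itertools; the first constant is held true for the first half of the rows, the second alternates by quarters, and so on.

```python
# Enumerate all 2^k truth-value assignments for k propositional constants.
from itertools import product

def assignments(constants):
    """Yield dicts mapping each constant to True/False, covering all 2^k cases."""
    for values in product([True, False], repeat=len(constants)):
        yield dict(zip(constants, values))

for row in assignments(["P", "Q", "R"]):   # 2^3 = 8 rows
    print(row)
```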
Basic concepts:
Argument The propositional calculus then defines an argument to be a list of propositions. A valid argument is a list of propositions, the last of which follows from, or is implied by, the rest. All other arguments are invalid. The simplest valid argument is modus ponens, one instance of which is the following list of propositions:
1. P → Q
2. P
∴ Q
This is a list of three propositions, each line is a proposition, and the last follows from the rest. The first two lines are called premises, and the last line the conclusion. We say that any proposition C follows from any set of propositions (P1,...,Pn), if C must be true whenever every member of the set (P1,...,Pn) is true. In the argument above, for any P and Q, whenever P → Q and P are true, necessarily Q is true. Notice that, when P is true, we cannot consider cases 3 and 4 (from the truth table). When P → Q is true, we cannot consider case 2. This leaves only case 1, in which Q is also true. Thus Q is implied by the premises.
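This claim can also be checked by brute force over the four assignments; the short Python sketch below is illustrative only.

```python
# Whenever both premises of modus ponens (P → Q and P) are true, so is Q.
from itertools import product

def implies(p, q):
    return (not p) or q

for p, q in product([True, False], repeat=2):
    if implies(p, q) and p:      # both premises hold in this case
        assert q                 # ...so the conclusion holds as well
print("Q holds in every assignment that satisfies P → Q and P.")
```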
Basic concepts:
This generalizes schematically. Thus, where φ and ψ may be any propositions at all,
1. φ → ψ
2. φ
∴ ψ
Other argument forms are convenient, but not necessary. Given a complete set of axioms (see below for one such set), modus ponens is sufficient to prove all other argument forms in propositional logic, so they may be considered derivative. Note, this is not true of the extension of propositional logic to other logics like first-order logic. First-order logic requires at least one additional rule of inference in order to obtain completeness.
Basic concepts:
The significance of argument in formal logic is that one may obtain new truths from established truths. In the first example above, given the two premises, the truth of Q is not yet known or stated. After the argument is made, Q is deduced. In this way, we define a deduction system to be a set of all propositions that may be deduced from another set of propositions. For instance, given the set of propositions A={P∨Q, ¬Q∧R, (P∨Q)→R}, we can define a deduction system, Γ, which is the set of all propositions which follow from A. Reiteration is always assumed, so P∨Q, ¬Q∧R, (P∨Q)→R ∈ Γ. Also, from the first and last elements of A, together with modus ponens, R is a consequence, and so R ∈ Γ. Because we have not included sufficiently complete axioms, though, nothing else may be deduced. Thus, even though most deduction systems studied in propositional logic are able to deduce (P∨Q)↔(¬P→Q), this one is too weak to prove such a proposition.
Generic description of a propositional calculus:
A propositional calculus is a formal system L = L(A, Ω, Z, I), where A is the alpha set of propositional symbols, Ω is the omega set of operator symbols, Z is the zeta set of transformation rules (rules of inference), and I is the iota set of initial points (axioms). The language of L, also known as its set of formulas or well-formed formulas, is inductively defined by the following rules:
Base: Any element of the alpha set A is a formula of L.
If p1, p2, …, pj are formulas and f is in Ωj, then (f p1 p2 … pj) is a formula.
Closed: Nothing else is a formula of L. Repeated applications of these rules permit the construction of complex formulas. For example: By rule 1, p is a formula.
By rule 2, ¬p is a formula.
By rule 1, q is a formula.
By rule 2, (¬p∨q) is a formula.
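The inductive definition can be mirrored directly in code. In the illustrative Python sketch below, formulas are nested tuples written operator-first, in the spirit of (f p1 p2 … pj); the atom and operator sets are assumptions made for the example.

```python
# Rule 1: atoms are formulas. Rule 2: an arity-j operator applied to j formulas
# is a formula. Nothing else is a formula.
ATOMS = {"p", "q", "r"}
OPERATORS = {"¬": 1, "∧": 2, "∨": 2, "→": 2, "↔": 2}   # symbol -> arity

def is_formula(expr):
    """expr is an atom (a string) or a tuple (operator, operand1, ..., operandj)."""
    if isinstance(expr, str):
        return expr in ATOMS                              # base case (rule 1)
    op, *args = expr
    return OPERATORS.get(op) == len(args) and all(is_formula(a) for a in args)

# (¬p ∨ q), built exactly as in the derivation above:
example = ("∨", ("¬", "p"), "q")
print(is_formula(example))   # True
```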
Example 1. Simple axiom system:
Let L1=L(A,Ω,Z,I) , where A , Ω , Z , I are defined as follows: The set A , the countably infinite set of symbols that serve to represent logical propositions: A={p,q,r,s,t,u,p2,…}.
Example 1. Simple axiom system:
The functionally complete set Ω of logical operators (logical connectives and negation) is as follows. Of the three connectives for conjunction, disjunction, and implication ( ∧,∨ , and →), one can be taken as primitive and the other two can be defined in terms of it and negation (¬). Alternatively, all of the logical operators may be defined in terms of a sole sufficient operator, such as the Sheffer stroke (nand). The biconditional ( a↔b ) can of course be defined in terms of conjunction and implication as (a→b)∧(b→a) . Adopting negation and implication as the two primitive operations of a propositional calculus is tantamount to having the omega set Ω=Ω1∪Ω2 partition as follows: Ω1={¬}, Ω2={→}.
Example 1. Simple axiom system:
Then a∨b is defined as ¬a→b, and a∧b is defined as ¬(a→¬b). The set I (the set of initial points of logical deduction, i.e., logical axioms) is the axiom system proposed by Jan Łukasiewicz, and used as the propositional-calculus part of a Hilbert system. The axioms are all substitution instances of:
(A1) p→(q→p)
(A2) (p→(q→r))→((p→q)→(p→r))
(A3) (¬p→¬q)→(q→p)
The set Z of transformation rules (rules of inference) is the sole rule modus ponens (i.e., from any formulas of the form φ and (φ→ψ), infer ψ). This system is used in the Metamath set.mm formal proof database.
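A quick illustrative check, by truth table, that these two definitions behave as the usual disjunction and conjunction:

```python
# a∨b := ¬a→b and a∧b := ¬(a→¬b) reproduce the standard truth tables.
from itertools import product

def imp(a, b): return (not a) or b

for a, b in product([True, False], repeat=2):
    assert imp(not a, b) == (a or b)            # ¬a → b behaves as a ∨ b
    assert (not imp(a, not b)) == (a and b)     # ¬(a → ¬b) behaves as a ∧ b
print("Both defined connectives match the standard truth tables.")
```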
Example 2. Natural deduction system:
Let L2=L(A,Ω,Z,I) , where A , Ω , Z , I are defined as follows: The alpha set A , is a countably infinite set of symbols, for example: A={p,q,r,s,t,u,p2,…}.
The omega set Ω=Ω1∪Ω2 partitions as follows: Ω1={¬}, Ω2={∧,∨,→,↔}.
In the following example of a propositional calculus, the transformation rules are intended to be interpreted as the inference rules of a so-called natural deduction system. The particular system presented here has no initial points, which means that its interpretation for logical applications derives its theorems from an empty axiom set.
Example 2. Natural deduction system:
The set of initial points is empty, that is, I = ∅. The set of transformation rules, Z, is described as follows: Our propositional calculus has eleven inference rules. These rules allow us to derive other true formulas given a set of formulas that are assumed to be true. The first ten simply state that we can infer certain well-formed formulas from other well-formed formulas. The last rule, however, uses hypothetical reasoning in the sense that in the premise of the rule we temporarily assume an (unproven) hypothesis to be part of the set of inferred formulas to see if we can infer a certain other formula. Since the first ten rules don't do this, they are usually described as non-hypothetical rules, and the last one as a hypothetical rule.
Example 2. Natural deduction system:
In describing the transformation rules, we may introduce a metalanguage symbol ⊢. It is basically a convenient shorthand for saying "infer that". The format is Γ⊢ψ, in which Γ is a (possibly empty) set of formulas called premises, and ψ is a formula called the conclusion. The transformation rule Γ⊢ψ means that if every proposition in Γ is a theorem (or has the same truth value as the axioms), then ψ is also a theorem. Note that, in view of the rule of Conjunction introduction below, whenever Γ has more than one formula we can always safely reduce it to a single formula using conjunction. So, for brevity, from here on we may represent Γ as one formula instead of a set. Another omission for convenience is when Γ is an empty set, in which case Γ may simply be left out.
Example 2. Natural deduction system:
Negation introduction: From (p→q) and (p→¬q), infer ¬p. That is, {(p→q),(p→¬q)}⊢¬p.
Negation elimination: From ¬p, infer (p→r). That is, {¬p}⊢(p→r).
Double negation elimination: From ¬¬p, infer p. That is, ¬¬p⊢p.
Conjunction introduction: From p and q, infer (p∧q). That is, {p,q}⊢(p∧q).
Conjunction elimination: From (p∧q), infer p. From (p∧q), infer q. That is, (p∧q)⊢p and (p∧q)⊢q.
Disjunction introduction: From p, infer (p∨q). From q, infer (p∨q). That is, p⊢(p∨q) and q⊢(p∨q).
Disjunction elimination: From (p∨q) and (p→r) and (q→r), infer r. That is, {p∨q, p→r, q→r}⊢r.
Biconditional introduction: From (p→q) and (q→p), infer (p↔q). That is, {p→q, q→p}⊢(p↔q).
Biconditional elimination: From (p↔q), infer (p→q). From (p↔q), infer (q→p). That is, (p↔q)⊢(p→q) and (p↔q)⊢(q→p).
Modus ponens (conditional elimination): From p and (p→q), infer q. That is, {p, p→q}⊢q.
Conditional proof (conditional introduction): From [accepting p allows a proof of q], infer (p→q). That is, (p⊢q)⊢(p→q).
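Each of these rules preserves truth under every assignment; the following illustrative Python sketch spot-checks two of them by brute force.

```python
# Semantic spot-check of two inference rules over all truth assignments.
from itertools import product

def imp(a, b): return (not a) or b

# Negation introduction: {(p→q), (p→¬q)} ⊢ ¬p
for p, q in product([True, False], repeat=2):
    if imp(p, q) and imp(p, not q):
        assert not p

# Disjunction elimination: {p∨q, p→r, q→r} ⊢ r
for p, q, r in product([True, False], repeat=3):
    if (p or q) and imp(p, r) and imp(q, r):
        assert r

print("Both rules preserve truth in every assignment checked.")
```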
Proofs in propositional calculus:
One of the main uses of a propositional calculus, when interpreted for logical applications, is to determine relations of logical equivalence between propositional formulas. These relationships are determined by means of the available transformation rules, sequences of which are called derivations or proofs.
Proofs in propositional calculus:
In the discussion to follow, a proof is presented as a sequence of numbered lines, with each line consisting of a single formula followed by a reason or justification for introducing that formula. Each premise of the argument, that is, an assumption introduced as an hypothesis of the argument, is listed at the beginning of the sequence and is marked as a "premise" in lieu of other justification. The conclusion is listed on the last line. A proof is complete if every line follows from the previous ones by the correct application of a transformation rule. (For a contrasting approach, see proof-trees).
Proofs in propositional calculus:
Example of a proof in natural deduction system To be shown that A → A.
One possible proof of this (which, though valid, happens to contain more steps than are necessary) may be arranged as follows. Interpret A⊢A as "Assuming A, infer A". Read ⊢A→A as "Assuming nothing, infer that A implies A", or "It is a tautology that A implies A", or "It is always true that A implies A".
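For illustration, one derivation of this shape, assembled from the rules listed above and given here only as an example of the format, is:
1. A (hypothesis)
2. A ∧ A (Conjunction introduction, from 1 and 1)
3. A (Conjunction elimination, from 2)
4. A → A (Conditional proof from 1–3, discharging the hypothesis)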
Example of a proof in a classical propositional calculus system We now prove the same theorem A→A in the axiomatic system by Jan Łukasiewicz described above, which is an example of a Hilbert-style deductive system for the classical propositional calculus.
The axioms are:
(A1) (p→(q→p))
(A2) ((p→(q→r))→((p→q)→(p→r)))
(A3) ((¬p→¬q)→(q→p))
And the proof is as follows:
1. A→((B→A)→A) (instance of (A1))
2. (A→((B→A)→A))→((A→(B→A))→(A→A)) (instance of (A2))
3. (A→(B→A))→(A→A) (from (1) and (2) by modus ponens)
4. A→(B→A) (instance of (A1))
5. A→A (from (4) and (3) by modus ponens)
Soundness and completeness of the rules:
The crucial properties of this set of rules are that they are sound and complete. Informally this means that the rules are correct and that no other rules are required. These claims can be made more formal as follows.
Note that the proofs for the soundness and completeness of the propositional logic are not themselves proofs in propositional logic; these are theorems in ZFC used as a metatheory to prove properties of propositional logic.
Soundness and completeness of the rules:
We define a truth assignment as a function that maps propositional variables to true or false. Informally such a truth assignment can be understood as the description of a possible state of affairs (or possible world) where certain statements are true and others are not. The semantics of formulas can then be formalized by defining for which "state of affairs" they are considered to be true, which is what is done by the following definition.
Soundness and completeness of the rules:
We define when such a truth assignment A satisfies a certain well-formed formula with the following rules:
A satisfies the propositional variable P if and only if A(P) = true
A satisfies ¬φ if and only if A does not satisfy φ
A satisfies (φ ∧ ψ) if and only if A satisfies both φ and ψ
A satisfies (φ ∨ ψ) if and only if A satisfies at least one of either φ or ψ
A satisfies (φ → ψ) if and only if it is not the case that A satisfies φ but not ψ
A satisfies (φ ↔ ψ) if and only if A satisfies both φ and ψ or satisfies neither one of them
With this definition we can now formalize what it means for a formula φ to be implied by a certain set S of formulas. Informally this is true if in all worlds that are possible given the set of formulas S the formula φ also holds. This leads to the following formal definition: We say that a set S of well-formed formulas semantically entails (or implies) a certain well-formed formula φ if all truth assignments that satisfy all the formulas in S also satisfy φ.
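These clauses translate almost directly into code. The illustrative Python sketch below (the tuple encoding of formulas is an assumption made for the example) implements the satisfaction relation and a brute-force test of semantic entailment, reusing the set A from the deduction-system example above.

```python
# Satisfaction of a formula by a truth assignment, and semantic entailment
# checked by enumerating all assignments of the variables that occur.
from itertools import product

def satisfies(A, formula):
    """A maps variable names to True/False; formulas are variable names or
    nested tuples such as ("→", "P", ("∨", "P", "Q"))."""
    if isinstance(formula, str):
        return A[formula]
    op, *args = formula
    if op == "¬": return not satisfies(A, args[0])
    if op == "∧": return satisfies(A, args[0]) and satisfies(A, args[1])
    if op == "∨": return satisfies(A, args[0]) or satisfies(A, args[1])
    if op == "→": return (not satisfies(A, args[0])) or satisfies(A, args[1])
    if op == "↔": return satisfies(A, args[0]) == satisfies(A, args[1])
    raise ValueError(op)

def variables(formula):
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(a) for a in formula[1:]))

def entails(S, phi):
    """True iff every assignment satisfying all formulas in S also satisfies phi."""
    vs = sorted(set().union(variables(phi), *(variables(f) for f in S)))
    for values in product([True, False], repeat=len(vs)):
        A = dict(zip(vs, values))
        if all(satisfies(A, f) for f in S) and not satisfies(A, phi):
            return False
    return True

# {P∨Q, ¬Q∧R, (P∨Q)→R} semantically entails R:
S = [("∨", "P", "Q"), ("∧", ("¬", "Q"), "R"), ("→", ("∨", "P", "Q"), "R")]
print(entails(S, "R"))   # True
```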
Soundness and completeness of the rules:
Finally we define syntactical entailment such that φ is syntactically entailed by S if and only if we can derive it with the inference rules that were presented above in a finite number of steps. This allows us to formulate exactly what it means for the set of inference rules to be sound and complete: Soundness: If the set of well-formed formulas S syntactically entails the well-formed formula φ then S semantically entails φ.
Soundness and completeness of the rules:
Completeness: If the set of well-formed formulas S semantically entails the well-formed formula φ then S syntactically entails φ.
For the above set of rules this is indeed the case.
Sketch of a soundness proof (For most logical systems, this is the comparatively "simple" direction of proof) Notational conventions: Let G be a variable ranging over sets of sentences. Let A, B and C range over sentences. For "G syntactically entails A" we write "G proves A". For "G semantically entails A" we write "G implies A".
We want to show: for every A and every G, if G proves A, then G implies A.
We note that "G proves A" has an inductive definition, and that gives us the immediate resources for demonstrating claims of the form "If G proves A, then ...". So our proof proceeds by induction.
Notice that Basis Step II can be omitted for natural deduction systems because they have no axioms. When used, Step II involves showing that each of the axioms is a (semantic) logical truth.
Soundness and completeness of the rules:
The Basis steps demonstrate that the simplest provable sentences from G are also implied by G, for any G. (The proof is simple, since the semantic fact that a set implies any of its members, is also trivial.) The Inductive step will systematically cover all the further sentences that might be provable—by considering each case where we might reach a logical conclusion using an inference rule—and shows that if a new sentence is provable, it is also logically implied. (For example, we might have a rule telling us that from "A" we can derive "A or B". In III.a We assume that if A is provable it is implied. We also know that if A is provable then "A or B" is provable. We have to show that then "A or B" too is implied. We do so by appeal to the semantic definition and the assumption we just made. A is provable from G, we assume. So it is also implied by G. So any semantic valuation making all of G true makes A true. But any valuation making A true makes "A or B" true, by the defined semantics for "or". So any valuation which makes all of G true makes "A or B" true. So "A or B" is implied.) Generally, the Inductive step will consist of a lengthy but simple case-by-case analysis of all the rules of inference, showing that each "preserves" semantic implication.
Soundness and completeness of the rules:
By the definition of provability, there are no sentences provable other than by being a member of G, an axiom, or following by a rule; so if all of those are semantically implied, the deduction calculus is sound.
Sketch of completeness proof (This is usually the much harder direction of proof.) We adopt the same notational conventions as above.
Soundness and completeness of the rules:
We want to show: If G implies A, then G proves A. We proceed by contraposition: We show instead that if G does not prove A then G does not imply A. If we show that there is a model where A does not hold despite G being true, then obviously G does not imply A. The idea is to build such a model out of our very assumption that G does not prove A.
Soundness and completeness of the rules:
Thus every system that has modus ponens as an inference rule, and proves the following theorems (including substitutions thereof), is complete:
1. p→(¬p→q)
2. (p→q)→((¬p→q)→q)
3. p→(q→(p→q))
4. p→(¬q→¬(p→q))
5. ¬p→(p→q)
6. p→p
7. p→(q→p)
8. (p→(q→r))→((p→q)→(p→r))
The first five are used for the satisfaction of the five conditions in stage III above, and the last three for proving the deduction theorem.
Soundness and completeness of the rules:
Example As an example, it can be shown that, like any other tautology, the three axioms of the classical propositional calculus system described earlier can be proven in any system that satisfies the above, namely any system that has modus ponens as an inference rule and proves the above eight theorems (including substitutions thereof). Out of the eight theorems, the last two are two of the three axioms; the third axiom, (¬q→¬p)→(p→q), can be proven as well, as we now show.
Soundness and completeness of the rules:
For the proof we may use the hypothetical syllogism theorem (in the form relevant for this axiomatic system), since it only relies on the two axioms that are already in the above set of eight theorems.
Soundness and completeness of the rules:
The proof then is as follows:
1. q→(p→q) (instance of the 7th theorem)
2. (q→(p→q))→((¬q→¬p)→(q→(p→q))) (instance of the 7th theorem)
3. (¬q→¬p)→(q→(p→q)) (from (1) and (2) by modus ponens)
4. (¬p→(p→q))→((¬q→¬p)→(¬q→(p→q))) (instance of the hypothetical syllogism theorem)
5. (¬p→(p→q)) (instance of the 5th theorem)
6. (¬q→¬p)→(¬q→(p→q)) (from (5) and (4) by modus ponens)
7. (q→(p→q))→((¬q→(p→q))→(p→q)) (instance of the 2nd theorem)
8. ((q→(p→q))→((¬q→(p→q))→(p→q)))→((¬q→¬p)→((q→(p→q))→((¬q→(p→q))→(p→q)))) (instance of the 7th theorem)
9. (¬q→¬p)→((q→(p→q))→((¬q→(p→q))→(p→q))) (from (7) and (8) by modus ponens)
10. ((¬q→¬p)→((q→(p→q))→((¬q→(p→q))→(p→q))))→(((¬q→¬p)→(q→(p→q)))→((¬q→¬p)→((¬q→(p→q))→(p→q)))) (instance of the 8th theorem)
11. ((¬q→¬p)→(q→(p→q)))→((¬q→¬p)→((¬q→(p→q))→(p→q))) (from (9) and (10) by modus ponens)
12. (¬q→¬p)→((¬q→(p→q))→(p→q)) (from (3) and (11) by modus ponens)
13. ((¬q→¬p)→((¬q→(p→q))→(p→q)))→(((¬q→¬p)→(¬q→(p→q)))→((¬q→¬p)→(p→q))) (instance of the 8th theorem)
14. ((¬q→¬p)→(¬q→(p→q)))→((¬q→¬p)→(p→q)) (from (12) and (13) by modus ponens)
15. (¬q→¬p)→(p→q) (from (6) and (14) by modus ponens)
Verifying completeness for the classical propositional calculus system
We now verify that the classical propositional calculus system described earlier can indeed prove the required eight theorems mentioned above. We use several lemmas proven here:
(DN1) ¬¬p→p - Double negation (one direction)
(DN2) p→¬¬p - Double negation (another direction)
(HS1) (q→r)→((p→q)→(p→r)) - one form of Hypothetical syllogism
(HS2) (p→q)→((q→r)→(p→r)) - another form of Hypothetical syllogism
(TR1) (p→q)→(¬q→¬p) - Transposition
(TR2) (¬p→q)→(¬q→p) - another form of transposition.
Soundness and completeness of the rules:
(L1) p→((p→q)→q)
(L3) (¬p→p)→p
We also use the method of the hypothetical syllogism metatheorem as a shorthand for several proof steps.
Soundness and completeness of the rules:
p→(¬p→q) - proof:
1. p→(¬q→p) (instance of (A1))
2. (¬q→p)→(¬p→¬¬q) (instance of (TR1))
3. p→(¬p→¬¬q) (from (1) and (2) using the hypothetical syllogism metatheorem)
4. ¬¬q→q (instance of (DN1))
5. (¬¬q→q)→((¬p→¬¬q)→(¬p→q)) (instance of (HS1))
6. (¬p→¬¬q)→(¬p→q) (from (4) and (5) using modus ponens)
7. p→(¬p→q) (from (3) and (6) using the hypothetical syllogism metatheorem)
(p→q)→((¬p→q)→q) - proof:
1. (p→q)→((¬q→p)→(¬q→q)) (instance of (HS1))
2. (¬q→q)→q (instance of (L3))
3. ((¬q→q)→q)→(((¬q→p)→(¬q→q))→((¬q→p)→q)) (instance of (HS1))
4. ((¬q→p)→(¬q→q))→((¬q→p)→q) (from (2) and (3) by modus ponens)
5. (p→q)→((¬q→p)→q) (from (1) and (4) using the hypothetical syllogism metatheorem)
6. (¬p→q)→(¬q→p) (instance of (TR2))
7. ((¬p→q)→(¬q→p))→(((¬q→p)→q)→((¬p→q)→q)) (instance of (HS2))
8. ((¬q→p)→q)→((¬p→q)→q) (from (6) and (7) using modus ponens)
9. (p→q)→((¬p→q)→q) (from (5) and (8) using the hypothetical syllogism metatheorem)
p→(q→(p→q)) - proof:
1. q→(p→q) (instance of (A1))
2. (q→(p→q))→(p→(q→(p→q))) (instance of (A1))
3. p→(q→(p→q)) (from (1) and (2) using modus ponens)
p→(¬q→¬(p→q)) - proof:
1. p→((p→q)→q) (instance of (L1))
2. ((p→q)→q)→(¬q→¬(p→q)) (instance of (TR1))
3. p→(¬q→¬(p→q)) (from (1) and (2) using the hypothetical syllogism metatheorem)
¬p→(p→q) - proof:
1. ¬p→(¬q→¬p) (instance of (A1))
2. (¬q→¬p)→(p→q) (instance of (A3))
3. ¬p→(p→q) (from (1) and (2) using the hypothetical syllogism metatheorem)
p→p - proof given in the proof example above
p→(q→p) - axiom (A1)
(p→(q→r))→((p→q)→(p→r)) - axiom (A2)
Another outline for a completeness proof
If a formula is a tautology, then there is a truth table for it which shows that each valuation yields the value true for the formula. Consider such a valuation. By mathematical induction on the length of the subformulas, show that the truth or falsity of the subformula follows from the truth or falsity (as appropriate for the valuation) of each propositional variable in the subformula. Then combine the lines of the truth table together two at a time by using "(P is true implies S) implies ((P is false implies S) implies S)". Keep repeating this until all dependencies on propositional variables have been eliminated. The result is that we have proved the given tautology. Since every tautology is provable, the logic is complete.
Interpretation of a truth-functional propositional calculus:
An interpretation of a truth-functional propositional calculus P is an assignment to each propositional symbol of P of one or the other (but not both) of the truth values truth (T) and falsity (F), and an assignment to the connective symbols of P of their usual truth-functional meanings. An interpretation of a truth-functional propositional calculus may also be expressed in terms of truth tables. For n distinct propositional symbols there are 2^n distinct possible interpretations. For any particular symbol a, for example, there are 2^1 = 2 possible interpretations: a is assigned T, or a is assigned F. For the pair a, b there are 2^2 = 4 possible interpretations: both are assigned T, both are assigned F, a is assigned T and b is assigned F, or a is assigned F and b is assigned T. Since P has ℵ0, that is, denumerably many propositional symbols, there are 2^ℵ0 = c, and therefore uncountably many, distinct possible interpretations of P.
Interpretation of a sentence of truth-functional propositional logic
If φ and ψ are formulas of P and I is an interpretation of P then the following definitions apply: A sentence of propositional logic is true under an interpretation I if I assigns the truth value T to that sentence. If a sentence is true under an interpretation, then that interpretation is called a model of that sentence.
Interpretation of a truth-functional propositional calculus:
φ is false under an interpretation I if φ is not true under I A sentence of propositional logic is logically valid if it is true under every interpretation.
⊨ φ means that φ is logically valid.
A sentence ψ of propositional logic is a semantic consequence of a sentence φ if there is no interpretation under which φ is true and ψ is false.
A sentence of propositional logic is consistent if it is true under at least one interpretation. It is inconsistent if it is not consistent. Some consequences of these definitions: For any given interpretation a given formula is either true or false.
No formula is both true and false under the same interpretation.
φ is false for a given interpretation iff ¬ϕ is true for that interpretation; and φ is true under an interpretation iff ¬ϕ is false under that interpretation.
If φ and (ϕ→ψ) are both true under a given interpretation, then ψ is true under that interpretation.
Interpretation of a truth-functional propositional calculus:
If ⊨P ϕ and ⊨P (ϕ→ψ), then ⊨P ψ.
¬ϕ is true under I iff ϕ is not true under I.
(ϕ→ψ) is true under I iff either ϕ is not true under I or ψ is true under I.
A sentence ψ of propositional logic is a semantic consequence of a sentence ϕ iff (ϕ→ψ) is logically valid, that is, ϕ ⊨P ψ iff ⊨P (ϕ→ψ).
Alternative calculus:
It is possible to define another version of propositional calculus, which defines most of the syntax of the logical operators by means of axioms, and which uses only one inference rule.
Alternative calculus:
Axioms Let φ, χ, and ψ stand for well-formed formulas. (The well-formed formulas themselves would not contain any Greek letters, but only capital Roman letters, connective operators, and parentheses.) Then the axioms are as follows:
THEN-1: φ→(χ→φ)
THEN-2: (φ→(χ→ψ))→((φ→χ)→(φ→ψ))
AND-1: φ∧χ→φ
AND-2: φ∧χ→χ
AND-3: φ→(χ→(φ∧χ))
OR-1: φ→φ∨χ
OR-2: χ→φ∨χ
OR-3: (φ→ψ)→((χ→ψ)→(φ∨χ→ψ))
NOT-1: (φ→χ)→((φ→¬χ)→¬φ)
NOT-2: φ→(¬φ→χ)
NOT-3: φ∨¬φ
Axiom THEN-2 may be considered to be a "distributive property of implication with respect to implication." Axioms AND-1 and AND-2 correspond to "conjunction elimination". The relation between AND-1 and AND-2 reflects the commutativity of the conjunction operator.
Alternative calculus:
Axiom AND-3 corresponds to "conjunction introduction." Axioms OR-1 and OR-2 correspond to "disjunction introduction." The relation between OR-1 and OR-2 reflects the commutativity of the disjunction operator.
Alternative calculus:
Axiom NOT-1 corresponds to "reductio ad absurdum." Axiom NOT-2 says that "anything can be deduced from a contradiction." Axiom NOT-3 is called "tertium non-datur" (Latin: "a third is not given") and reflects the semantic valuation of propositional formulas: a formula can have a truth-value of either true or false. There is no third truth-value, at least not in classical logic. Intuitionistic logicians do not accept the axiom NOT-3.
Alternative calculus:
Inference rule The inference rule is modus ponens: from ϕ and ϕ→χ, infer χ.
Meta-inference rule Let a demonstration be represented by a sequence, with hypotheses to the left of the turnstile and the conclusion to the right of the turnstile. Then the deduction theorem can be stated as follows: If the sequence ϕ1,ϕ2,...,ϕn,χ ⊢ ψ has been demonstrated, then it is also possible to demonstrate the sequence ϕ1,ϕ2,...,ϕn ⊢ χ→ψ. This deduction theorem (DT) is not itself formulated with propositional calculus: it is not a theorem of propositional calculus, but a theorem about propositional calculus. In this sense, it is a meta-theorem, comparable to theorems about the soundness or completeness of propositional calculus.
Alternative calculus:
On the other hand, DT is so useful for simplifying the syntactical proof process that it can be considered and used as another inference rule, accompanying modus ponens. In this sense, DT corresponds to the natural conditional proof inference rule which is part of the first version of propositional calculus introduced in this article.
Alternative calculus:
The converse of DT is also valid: If the sequence ϕ1,ϕ2,...,ϕn ⊢ χ→ψ has been demonstrated, then it is also possible to demonstrate the sequence ϕ1,ϕ2,...,ϕn,χ ⊢ ψ. In fact, the validity of the converse of DT is almost trivial compared to that of DT: If ϕ1,...,ϕn ⊢ χ→ψ then
1: ϕ1,...,ϕn,χ ⊢ χ→ψ
2: ϕ1,...,ϕn,χ ⊢ χ
and from (1) and (2) can be deduced
3: ϕ1,...,ϕn,χ ⊢ ψ
by means of modus ponens, Q.E.D. The converse of DT has powerful implications: it can be used to convert an axiom into an inference rule. For example, by axiom AND-1 we have ⊢ ϕ∧χ→ϕ, which can be transformed by means of the converse of the deduction theorem into ϕ∧χ ⊢ ϕ, which tells us that the inference rule "from ϕ∧χ, infer ϕ" is admissible. This inference rule is conjunction elimination, one of the ten inference rules used in the first version (in this article) of the propositional calculus.
Alternative calculus:
Example of a proof The following is an example of a (syntactical) demonstration, involving only axioms THEN-1 and THEN-2: Prove: A→A (Reflexivity of implication).
Proof:
1. (A→((B→A)→A))→((A→(B→A))→(A→A)) (Axiom THEN-2 with ϕ=A, χ=B→A, ψ=A)
2. A→((B→A)→A) (Axiom THEN-1 with ϕ=A, χ=B→A)
3. (A→(B→A))→(A→A) (from (1) and (2) by modus ponens)
4. A→(B→A) (Axiom THEN-1 with ϕ=A, χ=B)
5. A→A (from (3) and (4) by modus ponens)
Equivalence to equational logics:
The preceding alternative calculus is an example of a Hilbert-style deduction system. In the case of propositional systems the axioms are terms built with logical connectives and the only inference rule is modus ponens. Equational logic as standardly used informally in high school algebra is a different kind of calculus from Hilbert systems. Its theorems are equations and its inference rules express the properties of equality, namely that it is a congruence on terms that admits substitution.
Equivalence to equational logics:
Classical propositional calculus as described above is equivalent to Boolean algebra, while intuitionistic propositional calculus is equivalent to Heyting algebra. The equivalence is shown by translation in each direction of the theorems of the respective systems. Theorems ϕ of classical or intuitionistic propositional calculus are translated as equations ϕ=1 of Boolean or Heyting algebra respectively. Conversely theorems x=y of Boolean or Heyting algebra are translated as theorems (x→y)∧(y→x) of classical or intuitionistic calculus respectively, for which x≡y is a standard abbreviation. In the case of Boolean algebra x=y can also be translated as (x∧y)∨(¬x∧¬y) , but this translation is incorrect intuitionistically.
Equivalence to equational logics:
In both Boolean and Heyting algebra, inequality x≤y can be used in place of equality. The equality x=y is expressible as a pair of inequalities x≤y and y≤x. Conversely the inequality x≤y is expressible as the equality x∧y=x, or as x∨y=y. The significance of inequality for Hilbert-style systems is that it corresponds to the latter's deduction or entailment symbol ⊢. An entailment ϕ1,ϕ2,…,ϕn⊢ψ is translated in the inequality version of the algebraic framework as ϕ1∧ϕ2∧…∧ϕn≤ψ. Conversely the algebraic inequality x≤y is translated as the entailment x⊢y. The difference between implication x→y and inequality or entailment x≤y or x⊢y is that the former is internal to the logic while the latter is external. Internal implication between two terms is another term of the same kind. Entailment as external implication between two terms expresses a metatruth outside the language of the logic, and is considered part of the metalanguage. Even when the logic under study is intuitionistic, entailment is ordinarily understood classically as two-valued: either the left side entails, or is less-or-equal to, the right side, or it is not.
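Over the two-element Boolean algebra this correspondence can be checked exhaustively; the tiny sketch below is illustrative.

```python
# x ≤ y, x∧y = x and x∨y = y say the same thing over the Boolean values {0, 1}.
from itertools import product

for x, y in product([0, 1], repeat=2):
    le = x <= y
    assert le == ((x & y) == x)
    assert le == ((x | y) == y)
print("x ≤ y, x∧y = x and x∨y = y agree on all pairs.")
```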
Equivalence to equational logics:
Similar but more complex translations to and from algebraic logics are possible for natural deduction systems as described above and for the sequent calculus. The entailments of the latter can be interpreted as two-valued, but a more insightful interpretation is as a set, the elements of which can be understood as abstract proofs organized as the morphisms of a category. In this interpretation the cut rule of the sequent calculus corresponds to composition in the category. Boolean and Heyting algebras enter this picture as special categories having at most one morphism per homset, i.e., one proof per entailment, corresponding to the idea that existence of proofs is all that matters: any proof will do and there is no point in distinguishing them.
Graphical calculi:
It is possible to generalize the definition of a formal language from a set of finite sequences over a finite basis to include many other sets of mathematical structures, so long as they are built up by finitary means from finite materials. What's more, many of these families of formal structures are especially well-suited for use in logic.
Graphical calculi:
For example, there are many families of graphs that are close enough analogues of formal languages that the concept of a calculus is quite easily and naturally extended to them. Many species of graphs arise as parse graphs in the syntactic analysis of the corresponding families of text structures. The exigencies of practical computation on formal languages frequently demand that text strings be converted into pointer structure renditions of parse graphs, simply as a matter of checking whether strings are well-formed formulas or not. Once this is done, there are many advantages to be gained from developing the graphical analogue of the calculus on strings. The mapping from strings to parse graphs is called parsing and the inverse mapping from parse graphs to strings is achieved by an operation that is called traversing the graph.
Other logical calculi:
Propositional calculus is about the simplest kind of logical calculus in current use. It can be extended in several ways. (Aristotelian "syllogistic" calculus, which is largely supplanted in modern logic, is in some ways simpler – but in other ways more complex – than propositional calculus.) The most immediate way to develop a more complex logical calculus is to introduce rules that are sensitive to more fine-grained details of the sentences being used.
Other logical calculi:
First-order logic (a.k.a. first-order predicate logic) results when the "atomic sentences" of propositional logic are broken up into terms, variables, predicates, and quantifiers, all keeping the rules of propositional logic with some new ones introduced. (For example, from "All dogs are mammals" we may infer "If Rover is a dog then Rover is a mammal".) With the tools of first-order logic it is possible to formulate a number of theories, either with explicit axioms or by rules of inference, that can themselves be treated as logical calculi. Arithmetic is the best known of these; others include set theory and mereology. Second-order logic and other higher-order logics are formal extensions of first-order logic. Thus, it makes sense to refer to propositional logic as "zeroth-order logic", when comparing it with these logics.
Other logical calculi:
Modal logic also offers a variety of inferences that cannot be captured in propositional calculus. For example, from "Necessarily p" we may infer that p. From p we may infer "It is possible that p". The translation between modal logics and algebraic logics concerns classical and intuitionistic logics but with the introduction of a unary operator on Boolean or Heyting algebras, different from the Boolean operations, interpreting the possibility modality, and in the case of Heyting algebra a second operator interpreting necessity (for Boolean algebra this is redundant since necessity is the De Morgan dual of possibility). The first operator preserves 0 and disjunction while the second preserves 1 and conjunction.
Other logical calculi:
Many-valued logics are those allowing sentences to have values other than true and false. (For example, neither and both are standard "extra values"; "continuum logic" allows each sentence to have any of an infinite number of "degrees of truth" between true and false.) These logics often require calculational devices quite distinct from propositional calculus. When the values form a Boolean algebra (which may have more than two or even infinitely many values), many-valued logic reduces to classical logic; many-valued logics are therefore only of independent interest when the values form an algebra that is not Boolean.
Solvers:
One notable difference between propositional calculus and predicate calculus is that satisfiability of a propositional formula is decidable. Deciding satisfiability of propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers.
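For illustration, the splitting-plus-unit-propagation core of a DPLL-style check can be sketched in a few lines of Python; the signed-integer clause encoding below is the usual CNF convention, and the sketch omits refinements such as pure-literal elimination.

```python
# Minimal DPLL-style satisfiability check for CNF formulas. Clauses are lists of
# non-zero integers; literal k means variable k is true, -k means it is false.
# Returns a satisfying (possibly partial) assignment, or None if unsatisfiable.
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                                # unit propagation to a fixed point
        changed = False
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                          # clause already satisfied
            rest = [l for l in clause if abs(l) not in assignment]
            if not rest:
                return None                       # clause falsified: backtrack
            if len(rest) == 1:                    # unit clause forces a value
                assignment[abs(rest[0])] = rest[0] > 0
                changed = True
            simplified.append(rest)
        clauses = simplified
    if not clauses:
        return assignment                         # every clause satisfied
    var = abs(clauses[0][0])                      # split on an unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (P ∨ Q) ∧ (¬Q ∨ R) ∧ (¬P ∨ R), with P=1, Q=2, R=3:
print(dpll([[1, 2], [-2, 3], [-1, 3]]))
```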
**Hypnozygote**
Hypnozygote:
A hypnozygote is a resting cyst resulting from sexual fusion; it is commonly thick-walled. It is a synonym of zygotic cyst.
**Image fusion**
Image fusion:
The image fusion process is defined as gathering all the important information from multiple images and including it in fewer images, usually a single one. This single image is more informative and accurate than any single source image, and it contains all the necessary information. The purpose of image fusion is not only to reduce the amount of data but also to construct images that are more appropriate and understandable for human and machine perception. In computer vision, multisensor image fusion is the process of combining relevant information from two or more images into a single image. The resulting image will be more informative than any of the input images. In remote sensing applications, the increasing availability of spaceborne sensors gives a motivation for different image fusion algorithms. Several situations in image processing require high spatial and high spectral resolution in a single image. Most of the available equipment is not capable of providing such data convincingly. Image fusion techniques allow the integration of different information sources. The fused image can have complementary spatial and spectral resolution characteristics. However, the standard image fusion techniques can distort the spectral information of the multispectral data while merging.
Image fusion:
In satellite imaging, two types of images are available. The panchromatic image acquired by satellites is transmitted with the maximum resolution available and the multispectral data are transmitted with coarser resolution. This will usually be two or four times lower. At the receiver station, the panchromatic image is merged with the multispectral data to convey more information.
Many methods exist to perform image fusion. The very basic one is the high-pass filtering technique. Later techniques are based on Discrete Wavelet Transform, uniform rational filter bank, and Laplacian pyramid.
Multi-focus image fusion:
Multi-focus image fusion is used to collect useful and necessary information from input images with different focus depths in order to create an output image that ideally contains all the information from the input images. In a visual sensor network (VSN), the sensors are cameras which record images and video sequences. In many applications of VSNs, a camera cannot give a perfect illustration of the scene including all of its details. This is because of the limited depth of focus of the camera's optical lens. Therefore, only the objects located at the focal distance of the camera are in focus and clear, and the other parts of the image are blurred. A VSN is able to capture images with different depths of focus in the scene using several cameras. Due to the large amount of data generated by cameras compared to other sensors such as pressure and temperature sensors, and due to limitations such as limited bandwidth, energy consumption, and processing time, it is essential to process the local input images to decrease the amount of transmitted data. The aforementioned reasons emphasize the necessity of multi-focus image fusion. Multi-focus image fusion is a process which combines the input multi-focus images into a single image containing all the important information of the input images; such an image is a more accurate description of the scene than any single input image.
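A minimal sketch of one simple spatial-domain decision rule, taking the locally sharper source at each pixel, is shown below; the sharpness measure, the window size, and the use of NumPy/SciPy are illustrative assumptions rather than a description of any particular published method.

```python
# Per-pixel multi-focus fusion: keep the source whose locally averaged squared
# Laplacian (a crude sharpness measure) is larger. Inputs are float arrays of
# identical shape.
import numpy as np
from scipy import ndimage

def fuse_multifocus(img_a, img_b, window=9):
    sharp_a = ndimage.uniform_filter(ndimage.laplace(img_a) ** 2, size=window)
    sharp_b = ndimage.uniform_filter(ndimage.laplace(img_b) ** 2, size=window)
    return np.where(sharp_a >= sharp_b, img_a, img_b)

# Toy example: the same random "scene", blurred on opposite halves.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
blurred = ndimage.gaussian_filter(scene, 3)
left_focused = scene.copy()
left_focused[:, 64:] = blurred[:, 64:]      # right half out of focus
right_focused = scene.copy()
right_focused[:, :64] = blurred[:, :64]     # left half out of focus
fused = fuse_multifocus(left_focused, right_focused)
print(float(np.abs(fused - scene).mean()))  # much smaller than for either blurred input
```

Transform-domain methods (wavelets, Laplacian pyramids) typically apply an analogous selection rule to the decomposition coefficients instead of working directly on pixel values.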
Why image fusion:
Multi-sensor data fusion has become a discipline which demands more general formal solutions to a number of application cases. Several situations in image processing require both high spatial and high spectral information in a single image. This is important in remote sensing. However, the instruments are not capable of providing such information either by design or because of observational constraints. One possible solution for this is data fusion.
Standard image fusion methods:
Image fusion methods can be broadly classified into two groups – spatial domain fusion and transform domain fusion.
Standard image fusion methods:
Fusion methods such as averaging, the Brovey method, principal component analysis (PCA) and IHS-based methods fall under spatial domain approaches. Another important spatial domain fusion method is the high-pass filtering based technique, in which the high-frequency details are injected into an upsampled version of the multispectral (MS) images. The disadvantage of spatial domain approaches is that they produce spatial distortion in the fused image. Spectral distortion becomes a negative factor for further processing, such as classification. Spatial distortion can be handled well by frequency-domain approaches to image fusion. Multiresolution analysis has become a very useful tool for analysing remote sensing images, and the discrete wavelet transform in particular has become a very useful tool for fusion. Other fusion methods also exist, such as those based on the Laplacian pyramid or the curvelet transform. These methods show better performance in the spatial and spectral quality of the fused image compared to other spatial methods of fusion.
Standard image fusion methods:
The images used in image fusion should already be registered. Misregistration is a major source of error in image fusion. Some well-known image fusion methods are:
High-pass filtering technique
IHS transform based image fusion
PCA-based image fusion
Wavelet transform image fusion
Pair-wise spatial frequency matching
Remote sensing image fusion:
Image fusion in remote sensing has several application domains. An important domain is multi-resolution image fusion (commonly referred to as pan-sharpening). In satellite imagery we can have two types of images: Panchromatic images – An image collected in the broad visual wavelength range but rendered in black and white.
Remote sensing image fusion:
Multispectral images – Images optically acquired in more than one spectral or wavelength interval. Each individual image is usually of the same physical area and scale but of a different spectral band. The SPOT PAN satellite provides high-resolution (10 m pixel) panchromatic data, while the LANDSAT TM satellite provides low-resolution (30 m pixel) multispectral images. Image fusion attempts to merge these images and produce a single high-resolution multispectral image.
Remote sensing image fusion:
The standard merging methods of image fusion are based on Red–Green–Blue (RGB) to Intensity–Hue–Saturation (IHS) transformation. The usual steps involved in satellite image fusion are as follows: Resize the low resolution multispectral images to the same size as the panchromatic image.
Transform the R, G and B bands of the multispectral image into IHS components.
Modify the panchromatic image with respect to the multispectral image. This is usually performed by histogram matching of the panchromatic image with Intensity component of the multispectral images as reference.
Replace the intensity component by the panchromatic image and perform the inverse transformation to obtain a high-resolution multispectral image. Pan-sharpening can be done with Photoshop. Other applications of image fusion in remote sensing are also available.
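A simplified code sketch of these steps follows; it uses the band mean as the intensity component and a mean/variance adjustment in place of full histogram matching, so it approximates the IHS procedure rather than implementing it faithfully.

```python
# Simplified IHS-style pan-sharpening: compute an intensity component, match the
# panchromatic band to it, and inject the difference back into every band.
# `ms` is a (bands, H, W) float array already resampled to the pan grid (step 1);
# `pan` is an (H, W) float array.
import numpy as np

def ihs_like_fusion(ms, pan):
    intensity = ms.mean(axis=0)                               # stand-in for the I component
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12)    # crude match of pan to I
    pan_matched = pan_matched * intensity.std() + intensity.mean()
    return ms + (pan_matched - intensity)                     # substitute I and "invert"

rng = np.random.default_rng(1)
ms = rng.random((3, 64, 64))     # toy multispectral bands
pan = rng.random((64, 64))       # toy panchromatic band
print(ihs_like_fusion(ms, pan).shape)    # (3, 64, 64)
```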
Medical image fusion:
Image fusion has become a common term used within medical diagnostics and treatment. The term is used when multiple images of a patient are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality, or by combining information from multiple modalities, such as magnetic resonance image (MRI), computed tomography (CT), positron emission tomography (PET), and single-photon emission computed tomography (SPECT). In radiology and radiation oncology, these images serve different purposes. For example, CT images are used more often to ascertain differences in tissue density while MRI images are typically used to diagnose brain tumors.
Medical image fusion:
For accurate diagnosis, radiologists must integrate information from multiple image formats. Fused, anatomically consistent images are especially beneficial in diagnosing and treating cancer. With the advent of these new technologies, radiation oncologists can take full advantage of intensity modulated radiation therapy (IMRT). Being able to overlay diagnostic images into radiation planning images results in more accurate IMRT target tumor volumes.
Image fusion metrics:
Comparative analysis of image fusion methods demonstrates that different metrics support different user needs, are sensitive to different image fusion methods, and need to be tailored to the application. Categories of image fusion metrics are based on information theory features, structural similarity, or human perception.
**Pseudo-spectral method**
Pseudo-spectral method:
Pseudo-spectral methods, also known as discrete variable representation (DVR) methods, are a class of numerical methods used in applied mathematics and scientific computing for the solution of partial differential equations. They are closely related to spectral methods, but complement the basis by an additional pseudo-spectral basis, which allows representation of functions on a quadrature grid. This simplifies the evaluation of certain operators, and can considerably speed up the calculation when using fast algorithms such as the fast Fourier transform.
Motivation with a concrete example:
Take the initial-value problem i ∂/∂t ψ(x,t) = [−∂²/∂x² + V(x)] ψ(x,t), ψ(t0) = ψ0, with periodic conditions ψ(x+1,t) = ψ(x,t). This specific example is the Schrödinger equation for a particle in a potential V(x), but the structure is more general. In many practical partial differential equations, one has a term that involves derivatives (such as a kinetic energy contribution), and a multiplication with a function (for example, a potential).
Motivation with a concrete example:
In the spectral method, the solution ψ is expanded in a suitable set of basis functions, for example plane waves,
ψ(x,t) = (1/√(2π)) ∑_n c_n(t) e^(2πinx).
Insertion and equating identical coefficients yields a set of ordinary differential equations for the coefficients,
i d/dt c_n(t) = (2πn)² c_n + ∑_k V_(n−k) c_k,
where the elements V_(n−k) are calculated through the explicit Fourier transform
V_(n−k) = ∫₀¹ V(x) e^(2πi(k−n)x) dx.
Motivation with a concrete example:
The solution would then be obtained by truncating the expansion to N basis functions, and finding a solution for the c_n(t). In general, this is done by numerical methods, such as Runge–Kutta methods. For the numerical solutions, the right-hand side of the ordinary differential equation has to be evaluated repeatedly at different time steps. At this point, the spectral method has a major problem with the potential term V(x). In the spectral representation, the multiplication with the function V(x) transforms into a vector-matrix multiplication, which scales as N². Also, the matrix elements V_(n−k) need to be evaluated explicitly before the differential equation for the coefficients can be solved, which requires an additional step.
Motivation with a concrete example:
In the pseudo-spectral method, this term is evaluated differently. Given the coefficients c_n(t), an inverse discrete Fourier transform yields the value of the function ψ at discrete grid points x_j = 2πj/N. At these grid points, the function is then multiplied, ψ′(x_i,t) = V(x_i)ψ(x_i,t), and the result is Fourier-transformed back. This yields a new set of coefficients c′_n(t) that are used instead of the matrix product ∑_k V_(n−k) c_k(t). It can be shown that both methods have similar accuracy. However, the pseudo-spectral method allows the use of a fast Fourier transform, which scales as N ln N, and is therefore significantly more efficient than the matrix multiplication. Also, the function V(x) can be used directly without evaluating any additional integrals.
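A minimal NumPy sketch of this grid-based evaluation, compared against the convolution (matrix) form of the same term, might look as follows; the choice of potential, the coefficient vector, and the FFT normalisation are assumptions made for the example.

```python
# Pseudo-spectral evaluation of V(x)*psi(x): coefficients -> grid -> multiply -> back.
# Periodic domain [0, 1) with N equally spaced points; numpy FFT conventions.
import numpy as np

N = 64
x = np.arange(N) / N                        # grid points x_j
V = np.cos(2 * np.pi * x)                   # an example potential V(x)
c = np.zeros(N, dtype=complex)
c[1] = 1.0                                  # psi(x) = e^(2*pi*i*x) in coefficient form

psi_grid = np.fft.ifft(c) * N               # inverse DFT: values psi(x_j)
c_new = np.fft.fft(V * psi_grid) / N        # pointwise product, transformed back

# Spectral (convolution) evaluation of the same term, for comparison:
V_hat = np.fft.fft(V) / N                   # Fourier coefficients of V
conv = np.array([sum(V_hat[(n - k) % N] * c[k] for k in range(N)) for n in range(N)])
print(np.allclose(c_new, conv))             # True: both routes give the same coefficients
```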
Technical discussion:
In a more abstract way, the pseudo-spectral method deals with the multiplication of two functions V(x) and f(x) as part of a partial differential equation. To simplify the notation, the time-dependence is dropped. Conceptually, it consists of three steps: f(x),f~(x)=V(x)f(x) are expanded in a finite set of basis functions (this is the spectral method).
For a given set of basis functions, a quadrature is sought that converts scalar products of these basis functions into a weighted sum over grid points.
The product is calculated by multiplying V,f at each grid point.
Technical discussion:
Expansion in a basis The functions f, f~ can be expanded in a finite basis {ϕ_n}, n = 0,…,N, as
f(x) = ∑_{n=0}^{N} c_n ϕ_n(x)
f~(x) = ∑_{n=0}^{N} c~_n ϕ_n(x)
For simplicity, let the basis be orthogonal and normalized, ⟨ϕ_n, ϕ_m⟩ = δ_nm, using the inner product ⟨f,g⟩ = ∫_a^b f(x) g(x)* dx (where * denotes complex conjugation) with appropriate boundaries a, b. The coefficients are then obtained by
c_n = ⟨f, ϕ_n⟩
c~_n = ⟨f~, ϕ_n⟩
A bit of calculus then yields
c~_n = ∑_{m=0}^{N} V_(n−m) c_m with V_(n−m) = ⟨V ϕ_m, ϕ_n⟩.
This forms the basis of the spectral method. To distinguish the basis of the ϕ_n from the quadrature basis, the expansion is sometimes called the Finite Basis Representation (FBR).
Technical discussion:
Quadrature For a given basis {ϕ_n} and a number of N+1 basis functions, one can try to find a quadrature, i.e., a set of N+1 points and weights such that
⟨ϕ_n, ϕ_m⟩ = ∑_{i=0}^{N} w_i ϕ_n(x_i) ϕ_m(x_i)*   for n, m = 0,…,N.
Special examples are the Gaussian quadrature for polynomials and the Discrete Fourier Transform for plane waves. It should be stressed that the grid points and weights, x_i, w_i, are a function of the basis and the number N. The quadrature allows an alternative numerical representation of the functions f(x), f~(x) through their values at the grid points. This representation is sometimes denoted the Discrete Variable Representation (DVR), and is completely equivalent to the expansion in the basis.
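As a concrete instance, the illustrative NumPy sketch below checks the exactness of Gauss–Legendre quadrature (weight function w(x) = 1 on [−1, 1]): with N+1 nodes, polynomials up to degree 2N+1 are integrated exactly.

```python
# Gauss–Legendre quadrature with N+1 points integrates a random polynomial of
# degree 2N+1 exactly on [-1, 1].
import numpy as np

N = 4
points, weights = np.polynomial.legendre.leggauss(N + 1)   # nodes x_i and weights w_i

rng = np.random.default_rng(2)
coeffs = rng.random(2 * N + 2)                  # random polynomial of degree 2N+1
p = np.polynomial.Polynomial(coeffs)

quad = np.sum(weights * p(points))              # quadrature sum
exact = p.integ()(1.0) - p.integ()(-1.0)        # exact integral over [-1, 1]
print(np.isclose(quad, exact))                  # True
```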
Technical discussion:
f(x_i) = ∑_{n=0}^{N} c_n ϕ_n(x_i)
c_n = ⟨f, ϕ_n⟩ = ∑_{i=0}^{N} w_i f(x_i) ϕ_n(x_i)*
Multiplication The multiplication with the function V(x) is then done at each grid point,
f~(x_i) = V(x_i) f(x_i).
This generally introduces an additional approximation. To see this, we can calculate one of the coefficients c~_n:
c~_n = ⟨f~, ϕ_n⟩ = ∑_i w_i f~(x_i) ϕ_n(x_i)* = ∑_i w_i V(x_i) f(x_i) ϕ_n(x_i)*
However, using the spectral method, the same coefficient would be c~_n = ⟨Vf, ϕ_n⟩. The pseudo-spectral method thus introduces the additional approximation
⟨Vf, ϕ_n⟩ ≈ ∑_i w_i V(x_i) f(x_i) ϕ_n(x_i)*.
If the product Vf can be represented with the given finite set of basis functions, the above equation is exact due to the chosen quadrature.
Special pseudospectral schemes:
The Fourier method If periodic boundary conditions with period [0,L] are imposed on the system, the basis functions can be generated by plane waves,
ϕ_n(x) = (1/√L) e^(−i k_n x) with k_n = (−1)^n ⌈n/2⌉ 2π/L,
where ⌈⋅⌉ is the ceiling function.
Special pseudospectral schemes:
The quadrature for a cut-off at n_max = N is given by the discrete Fourier transformation. The grid points are equally spaced, x_i = iΔx, with spacing Δx = L/(N+1), and the constant weights are w_i = Δx. For the discussion of the error, note that the product of two plane waves is again a plane wave, ϕ_a ϕ_b = ϕ_c with c ≤ a+b. Thus, qualitatively, if the functions f(x), V(x) can be represented sufficiently accurately with N_f, N_V basis functions, the pseudo-spectral method gives accurate results if N_f + N_V basis functions are used.
Special pseudospectral schemes:
An expansion in plane waves often has poor quality and needs many basis functions to converge. However, the transformation between the basis expansion and the grid representation can be done using a fast Fourier transform, which scales favorably as N ln N. As a consequence, plane waves are one of the most common expansions encountered with pseudo-spectral methods.
Special pseudospectral schemes:
Polynomials Another common expansion is into classical polynomials. Here, the Gaussian quadrature is used, which states that one can always find weights w_i and points x_i such that
∫_a^b w(x) p(x) dx = ∑_{i=0}^{N} w_i p(x_i)
holds for any polynomial p(x) of degree 2N+1 or less. Typically, the weight function w(x) and ranges a, b are chosen for a specific problem, and this leads to one of the different forms of the quadrature. To apply this to the pseudo-spectral method, we choose basis functions ϕ_n(x) = √(w(x)) P_n(x), with P_n being a polynomial of degree n with the property
∫_a^b w(x) P_n(x) P_m(x) dx = δ_mn.
Special pseudospectral schemes:
Under these conditions, the ϕ_n form an orthonormal basis with respect to the scalar product ⟨f,g⟩ = ∫_a^b f(x) g(x)* dx. This basis, together with the quadrature points, can then be used for the pseudo-spectral method.
For the discussion of the error, note that if f is well represented by Nf basis functions and V is well represented by a polynomial of degree NV , their product can be expanded in the first Nf+NV basis functions, and the pseudo-spectral method will give accurate results for that many basis functions.
Such polynomials occur naturally in several standard problems. For example, the quantum harmonic oscillator is ideally expanded in Hermite polynomials, and Jacobi polynomials can be used to define the associated Legendre functions typically appearing in rotational problems.
**Artificial intelligence in fraud detection**
Artificial intelligence in fraud detection:
Artificial intelligence is used by many different businesses and organizations. It is widely used in the financial sector, especially by accounting firms, to help detect fraud.
Artificial intelligence in fraud detection:
In 2022, PricewaterhouseCoopers reported that fraud has impacted 46% of all businesses in the world. The shift from working in person to working from home has brought increased access to data. According to an FTC (Federal Trade Commission) study from 2022, customers reported fraud of approximately $5.8 billion in 2021, an increase of 70% from the year before. The majority of these scams were imposter scams and online shopping frauds.
Tools:
Expert systems Expert systems were first designed in the 1970s as an expansion of artificial intelligence technologies. Their design is based on the premise of decreasing potential user error in decision-making and emulating the mental reasoning used by experts in a particular field. They differentiate themselves from traditional linear reasoning models by separating identified points in data and processing them individually at the same time. These systems do not, however, rely purely on machine-learned intelligence. Information regarding rules, practices, and procedures, in the form of "if-then" statements, is implemented into the programming of the system. Users interact with the system by feeding information into it either through direct entry or import of external data. An inference system compares the information provided by the user with the corresponding rules that are believed to apply specifically to the situation. This information and the corresponding rules are then used to create a solution to the user's query. Because of the need for well-defined rules, expert systems will generally not operate properly when the common procedures for a specified situation are ambiguous. Implementation of expert systems in accounting procedures is feasible in areas where professional judgment is required. Situations where expert systems are applicable include investigations into transactions that involve potentially fraudulent entries, instances of going concern, and the evaluation of risk in the planning stages of an audit.
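The "if-then" structure described above can be sketched in a few lines of Python; the rules, thresholds, and field names below are purely illustrative and are not drawn from any actual expert system.

```python
# A toy rule base and a minimal "inference" step: report every rule an entry triggers.
rules = [
    ("posted outside business hours", lambda e: e["hour"] < 6 or e["hour"] > 20),
    ("round amount at or above approval limit",
     lambda e: e["amount"] >= 10_000 and e["amount"] % 1_000 == 0),
    ("posted by the same user who approved it", lambda e: e["posted_by"] == e["approved_by"]),
]

def evaluate(entry):
    """Return the descriptions of all rules the journal entry triggers."""
    return [name for name, condition in rules if condition(entry)]

entry = {"hour": 23, "amount": 12_000, "posted_by": "jdoe", "approved_by": "jdoe"}
print(evaluate(entry))   # all three illustrative rules fire for this entry
```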
Tools:
Continuous auditing Continuous auditing is a set of processes that assess various aspects of information gathered in an audit to classify areas of risk and potential weaknesses in financial internal controls at a more frequent rate than traditional methods. Instead of analyzing recorded transactions and journal entries periodically, continuous auditing interprets the character of these actions more frequently. How often the processes are run, and which areas are highlighted as important, is at the discretion of the implementer, who commonly bases such decisions on the level of risk in the accounts being evaluated and the goals of implementing the system. The processes can run as frequently as nearly instantaneously with an entry being posted. Analyzing financial data in continuous auditing can include creating spreadsheets for interactive information gathering, calculating financial ratios for comparison with previously created models, and detecting errors in entered figures. A primary goal of the practice is quicker and easier detection of faulty controls, errors, and instances of fraud.
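As a sketch of the ratio-comparison step described above, a continuous check might recompute a ratio after each batch of postings and flag deviations from a previously modeled baseline. The figures, ratio, and tolerance below are illustrative assumptions:

```python
# Sketch of a continuous-auditing style check: recompute a financial ratio
# after each batch of postings and flag it when it drifts too far from a
# previously modeled baseline. Figures and tolerance are illustrative.

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    return current_assets / current_liabilities

def flag_deviation(observed: float, baseline: float, tolerance: float = 0.15) -> bool:
    """Flag when the observed ratio deviates more than `tolerance` (relative) from baseline."""
    return abs(observed - baseline) / baseline > tolerance

if __name__ == "__main__":
    baseline = 1.8                                # ratio expected from the prior model
    observed = current_ratio(420_000, 310_000)    # about 1.35
    if flag_deviation(observed, baseline):
        print(f"Review required: current ratio {observed:.2f} vs baseline {baseline:.2f}")
```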
Tools:
Machine learning and deep learning The ability of machine learning and deep learning to sort swiftly and effectively through vast volumes of data, in the form of company documents and documents under audit, makes them applicable to audit and fraud detection. Examples include recognizing key language in contracts, identifying the level of fraud risk in transactions, and assessing journal entries for misstatement.
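A minimal sketch of the transaction-risk use case is shown below. The features, synthetic labels, and model choice (scikit-learn's random forest) are illustrative assumptions and do not represent any specific firm's approach:

```python
# Sketch of a supervised fraud-risk scorer on synthetic transaction features.
# Features, labels, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: [amount, hour_of_day, is_new_vendor]
X = np.column_stack([
    rng.lognormal(mean=7, sigma=1, size=1_000),   # transaction amount
    rng.integers(0, 24, size=1_000),              # posting hour
    rng.integers(0, 2, size=1_000),               # new-vendor flag
])
# Synthetic labels: mark large, off-hours, new-vendor entries as "fraud-like".
y = ((X[:, 0] > 3_000) & ((X[:, 1] < 6) | (X[:, 1] > 20)) & (X[:, 2] == 1)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_entries = np.array([[12_000, 23, 1], [80, 14, 0]])
print(model.predict_proba(new_entries)[:, 1])  # estimated fraud-risk scores
```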
Applications:
'Big 4' Accounting Firms Deloitte created an AI-enabled document-review system in 2014. The system automates the process of reviewing and extracting relevant information from different business documents. Deloitte claims that this innovation has reduced the time spent reviewing legal contract documents, invoices, financial statements, and board minutes by up to 50%. Working with IBM's Watson, Deloitte is developing cognitive-technology-enhanced business solutions for its clients. LeasePoint is powered by IBM TRIRIGA and uses Deloitte's industry knowledge to create an end-to-end leasing portfolio. Automated Cognitive Resource Assessment employs IBM's Maximo technology to improve the efficiency of asset inspection. Ernst & Young (EY) has applied AI to the analysis of lease contracts. EY (Australia) has also adopted AI-enabled auditing technology. Collaborating with H2O.ai, PwC developed an AI-enabled framework (GL.ai) capable of analyzing documents and preparing reports. PwC claims to have made a significant investment in natural language processing (NLP), an AI-enabled technology for processing unstructured information efficiently. KPMG has built a portfolio of AI tools, called KPMG Ignite, to improve business decisions and processes. Working with Microsoft and IBM Watson, KPMG is creating tools that integrate AI, data analytics, cognitive technologies, and robotic process automation (RPA).
Advantages:
Efficiency The process of auditing an entity in an attempt to detect fraudulent activity requires repeating investigatory processes until an error or misstatement is identified. Under traditional methods, these processes would be carried out by a human being. Proponents of artificial intelligence in fraud detection argue that these traditional methods are inefficient and that the work can be completed more quickly with the aid of an intelligent computing system. A 2016 KPMG survey of 400 chief executive officers found that approximately 58% believed that artificial intelligence would play a key role in making audits more efficient in the future.
Advantages:
Data interpretation Higher levels of fraud detection entail the use of professional judgment to interpret data. Supporters of the use of artificial intelligence in financial audits claim that the increased risk that comes with this higher level of data interpretation can be minimized through such technologies. One necessary element of an audit of financial statements that requires professional judgment is the implementation of materiality thresholds. Materiality concerns the distinction of errors and transactions in financial statements that would impact decisions made by users of those statements. The threshold for materiality in an audit is set by the auditor based on various factors. Artificial intelligence has been used to interpret data and suggest materiality thresholds through the use of expert systems.
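As a sketch of how a system might suggest a materiality threshold, the following computes the threshold as a percentage of a benchmark figure and flags entries above it. The benchmarks and percentages are illustrative assumptions, not authoritative audit guidance:

```python
# Sketch: suggest a materiality threshold as a percentage of a benchmark and
# flag entries above it. Benchmark choices and percentages are illustrative
# assumptions, not authoritative audit guidance.

BENCHMARK_RATES = {           # benchmark -> assumed rule-of-thumb percentage
    "pre_tax_income": 0.05,
    "total_revenue": 0.01,
    "total_assets": 0.01,
}

def suggest_materiality(benchmark: str, benchmark_value: float) -> float:
    return BENCHMARK_RATES[benchmark] * benchmark_value

def flag_material(entries, threshold):
    return [entry for entry in entries if abs(entry) >= threshold]

if __name__ == "__main__":
    threshold = suggest_materiality("pre_tax_income", 2_000_000)   # 100000.0
    print(threshold, flag_material([12_500, -150_000, 98_000, 240_000], threshold))
```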
Advantages:
Decreased costs Those in favor of using artificial intelligence to investigate fraud state that such technologies decrease the amount of time required to complete repetitive tasks. They further claim that these efficiencies lower resource requirements, freeing resources for tasks that have not been fully automated. The audit firm Ernst & Young has supported these claims, stating that its deep-learning systems have been used to reduce time spent on administrative tasks by analyzing relevant audit documents. According to the firm, this has allowed its employees to focus more on judgment and analysis.
Disadvantages:
Job Displacement The widespread adoption of artificial intelligence and automation technologies may lead to significant job displacement across various industries. As artificial intelligence systems become more capable of performing tasks traditionally carried out by humans, there is a concern that certain job roles could become obsolete, leading to unemployment and economic inequality.
Disadvantages:
Initial investment requirement Building an AI fraud-detection system requires knowledge of coding and systems design, and because these systems are relatively new, a large up-front investment is needed. A firm planning to implement such a system typically has to hire a team of data scientists and upgrade its cloud infrastructure and data storage. The system must also be consistently monitored and updated to remain effective; otherwise, the likelihood of fraudulent transactions going undetected increases. Under-investing in the system while relying on it to catch most fraud exposes the firm to losses from fraudulent transactions as well as chargeback fees. Although the initial investment is large, a well-built system lowers detection costs over time, since the main expense lies in building and maintaining the system rather than in operating it.
Disadvantages:
Technical expertise Data analytics is a young discipline at many companies, and firms are researching it heavily to analyze their business as a whole and find where they can improve. Data analytics tells the story of a business through numbers. While many people are experienced at reading data, many more are not, and because the discipline is expanding so rapidly, it is challenging to develop and retain genuine expertise in it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Non-structured programming**
Non-structured programming:
Non-structured programming is the historically earliest programming paradigm capable of creating Turing-complete algorithms. It is often contrasted with the structured programming paradigm, in particular in its use of unstructured control flow via goto statements or their equivalents. The distinction was particularly stressed by the publication of the influential "Go To Statement Considered Harmful" open letter in 1968 by Dutch computer scientist Edsger W. Dijkstra, who coined the term "structured programming". Unstructured programming has been heavily criticized for producing barely readable ("spaghetti") code. There are both high- and low-level programming languages that use non-structured programming. Some languages commonly cited as being non-structured include JOSS, FOCAL, TELCOMP, assembly languages, MS-DOS batch files, and early versions of BASIC, Fortran, COBOL, and MUMPS.
Features and typical concepts:
Basic concepts A program in a non-structured language uses unstructured jumps to labels or instruction addresses. The lines are usually numbered or may have labels: this allows the flow of execution to jump to any line in the program. This is in contrast to structured programming, which uses sequential constructs of statements, selection (if/then/else) and repetition (while and for). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
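Python has no goto, so the following sketch (an illustration assumed for this rewrite, not drawn from the article) simulates a tiny numbered-line program with unstructured jumps by using an explicit program counter that each "line" overwrites, in the style of an early BASIC listing:

```python
# Sketch: simulate an unstructured, line-numbered program (in the style of
# early BASIC) with a program counter that any "line" may overwrite.
# The program below counts down from 3 and then halts.

def run(program, start=10):
    pc = start
    state = {}
    while pc is not None:
        pc = program[pc](state)       # each line returns the next line number

program = {
    10: lambda s: (s.update(n=3), 20)[1],            # 10 LET N = 3
    20: lambda s: (print(s["n"]), 30)[1],            # 20 PRINT N
    30: lambda s: (s.update(n=s["n"] - 1), 40)[1],   # 30 LET N = N - 1
    40: lambda s: 20 if s["n"] > 0 else None,        # 40 IF N > 0 GOTO 20
}

run(program)   # prints 3, 2, 1
```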
**ADP/ATP translocase 4**
ADP/ATP translocase 4:
ADP/ATP translocase 4 (ANT4) is an enzyme that in humans is encoded by the SLC25A31 gene on chromosome 4. This enzyme inhibits apoptosis by catalyzing ADP/ATP exchange across the mitochondrial membranes and regulating membrane potential. In particular, ANT4 is essential to spermatogenesis, as it imports ATP into sperm mitochondria to support their development and survival. Outside this role, the SLC25A31 gene has not been implicated in any human disease.
Structure:
The ANT4 protein contains six transmembrane helices and forms a homodimeric functional unit, which serves as an ADP/ATP channel protein. Unlike the other three ANT isoforms, ANT4 has additional amino acids at its N- and C-termini. These amino acid sequences may interact with different factors for specialized functions such as localization to sperm flagella. The SLC25A31 gene is composed of 6 exons over a stretch of 44 kbp of DNA.
Function:
The ANT4 protein is a mitochondrial ADP/ATP carrier that catalyzes the exchange of ADP and ATP between the mitochondrial matrix and the cytoplasm during ATP synthesis. In addition, ANT4 stabilizes the mitochondrial membrane potential and decreases the opening of the permeability transition pore complex (PTPC) in order to prevent nuclear chromatin fragmentation and the resulting cell death. In humans, the protein localizes to the liver, brain and testis, though in adult males it is expressed primarily in the testis. Studies on Ant4-deficient mice reveal increased apoptosis in the testis leading to infertility, thus indicating that Ant4 is required for spermatogenesis. In this case, the anti-apoptotic function of ANT4 is attributed to its importing of cytosolic ATP into the mitochondria. In other cells, the isoform ANT2 carries out this role; however, since sperm lack the X chromosome on which the ANT2 gene resides, survival of the sperm is dependent on ANT4.
Clinical significance:
The SLC25A31 enzyme is an important constituent in apoptotic signaling and oxidative stress, most notably as part of the mitochondrial death pathway and cardiac myocyte apoptosis signaling. Programmed cell death is a distinct genetic and biochemical pathway essential to metazoans. An intact death pathway is required for successful embryonic development and the maintenance of normal tissue homeostasis. Apoptosis has proven to be tightly interwoven with other essential cell pathways. The identification of critical control points in the cell death pathway has yielded fundamental insights for basic biology and provided rational targets for new therapeutics. During normal embryologic processes, during cell injury (such as ischemia-reperfusion injury in heart attacks and strokes), or during the development and progression of cancer, an apoptotic cell undergoes structural changes including cell shrinkage, plasma membrane blebbing, nuclear condensation, and fragmentation of the DNA and nucleus. This is followed by fragmentation into apoptotic bodies that are quickly removed by phagocytes, thereby preventing an inflammatory response. It is a mode of cell death defined by characteristic morphological, biochemical and molecular changes. It was first described as "shrinkage necrosis", a term later replaced by apoptosis to emphasize its role opposite mitosis in tissue kinetics. In later stages of apoptosis the entire cell becomes fragmented, forming a number of plasma membrane-bound apoptotic bodies which contain nuclear and/or cytoplasmic elements. The ultrastructural appearance of necrosis is quite different, the main features being mitochondrial swelling, plasma membrane breakdown and cellular disintegration. Apoptosis occurs in many physiological and pathological processes. It plays an important role during embryonal development as programmed cell death and accompanies a variety of normal involutional processes in which it serves as a mechanism to remove "unwanted" cells.
Clinical significance:
The SLC25A31 gene codes for Ancp, the most abundant mitochondrial protein, which represents 10% of the proteins of the inner membrane of bovine heart mitochondria. Ancp is encoded by four different genes: SLC25A4 (also known as ANC1 or ANT1), SLC25A5 (ANC3 or ANT2), SLC25A6 (ANC2 or ANT3) and SLC25A31 (ANC4 or ANT4). Their expression is tissue specific, highly regulated, and adapted to particular cellular energetic demands. Indeed, human ANC expression patterns depend on the tissue and cell type, the developmental stage and the status of cell proliferation. Furthermore, expression of the genes is modulated by different transcriptional elements in the promoter regions. Therefore, Ancp emerges as a logical candidate to regulate the cellular dependence on oxidative energy metabolism. To date, there is no evidence of SLC25A31 gene mutations associated with human disease, though they have been associated with male infertility in mice. In addition, ANT4 overexpression has been observed to protect cancer cells from apoptosis induced by anti-cancer drugs such as lonidamine and staurosporine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**EIF4E2**
EIF4E2:
Eukaryotic translation initiation factor 4E type 2 is a protein that in humans is encoded by the EIF4E2 gene. It belongs to the eukaryotic translation initiation factor 4E family.
Interactions:
EIF4E2 has been shown to interact with ARIH1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**JOVE**
JOVE:
JOVE (Jonathan's Own Version of Emacs) is an open-source, Emacs-like text editor, primarily intended for Unix-like operating systems. It also supports MS-DOS and Microsoft Windows. JOVE was inspired by Gosling Emacs but is much smaller and simpler, lacking Mocklisp. It was originally created in 1983 by Jonathan Payne, while at Lincoln-Sudbury Regional High School in Massachusetts, United States, on a PDP-11 minicomputer.
JOVE:
JOVE was distributed with several releases of BSD Unix, including 2.9BSD, 4.3BSD-Reno and 4.4BSD-Lite2.
As of 2022, the latest development release of JOVE is version 4.17.4.4; the stable version is 4.16. Unlike GNU Emacs, JOVE does not support UTF-8. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |