**Enterprise output management** Enterprise output management (EOM) is an information technology practice that deals with the organization, formatting, management and distribution of data created by enterprise applications such as banking information systems, insurance information systems, ERP (enterprise resource planning) systems, CRM (customer relationship management) systems, retail systems and many others. In 2006, Gartner research estimated the market for EOM solutions at $441 million, with a 5% growth rate between 2006 and 2010. Gartner defined distributed output management as middleware that drives the output process and supports the automated creation and delivery of business process and ad hoc documents. Middleware is software that bridges between different software applications, translating between data formats, languages, communication protocols, etc.
**Green card (IBM/360)** Green card was the abbreviated name given to the IBM/360 Reference data card that served as the shorthand "bible" for programmers during the late 1960s and 1970s. It rapidly became an icon of the 360 era of programming and was later replaced by the "yellow card" for the IBM/370 product line. The same concept was also later used for an "orange card" for CICS application programming, which showed some internal CICS data structures and their relationships. The card was published by IBM and was available by mail order directly from IBM, from university book stores associated with IBM 360 systems, some technical book stores, and other sellers of published technical material. Page 8 of the card provides both the then-current mailing address to contact for pricing and the part number, GX20-1703. Card contents: The reference card contained details of all assembler instructions and other 360 "essential facts" condensed to a very convenient fold-up, pocket-sized format:
- IBM/360 instructions (e.g. LR, ZAP, CLC)
- Assembler directives (e.g. START, CSECT, DC, LTORG, EQU, AIF, END)
- EBCDIC codes
- Condition code summary
- I/O "channel commands" for various devices
- Hexadecimal conversion
**UK Archaeological Sciences Conference** The United Kingdom Archaeological Sciences Conference is a biennial conference established in 1987 at the University of Glasgow. From 1987 to 1999 the conference proceedings were published. Major topics discussed at the conference include stable isotope analysis, proteomics, ancient genetics and material analysis. The 2017 conference at UCL was attended by 190 delegates from 20 countries.
**Digital permanence** Digital permanence addresses the history and development of digital storage techniques, specifically quantifying the expected lifetime of data stored on various digital media and the factors which influence the permanence of digital data. It is often a mix of ensuring the data itself can be retained on a particular form of media and that the technology remains viable. Where possible, as well as describing expected lifetimes, factors affecting data retention will be detailed, including potential technology issues. Since the inception of computers, a key concept differentiating computers from other calculating machines has been their ability to store information. Over the years, various hardware devices have been designed to store ever larger quantities of data. With the development of the Internet, the quantity of information available appears to continue to grow at an ever-increasing rate, often characterised as an information explosion. As information stored on traditional media such as hand-written documents, printed books, photographic images and the like is replaced by digital files, humanity's social and cultural legacy to future generations will depend more and more on the permanence of digital information. However, not all this information is worth saving for any length of time; sometimes its value can be very short-lived. Other data, such as legal contracts, literature and scientific studies, are frequently expected to last for centuries. This article describes how reliable different types of storage media are at storing data over time and the factors affecting this reliability. Librarians and archivists responsible for large repositories of information take a deeper view of electronic archives:
- Data format: Data must be stored in a format which can be meaningfully accessed now and in the future.
- Technology reliance: If data requires a special program to view it, say, as an image, then software must also be available to both interpret the basic data file and render it appropriately. In some cases, this might also require special hardware.
- Archival strategy: Data must remain available in the long term.
At present, a growing problem is the time taken to reproduce an archive, for instance following a hardware or system upgrade. Since the sheer volume of archive data continues to grow, new hardware is always required to maintain the archive, and so migration of data to a new system must be performed on a regular basis. The time taken to migrate data is starting to approach the frequency of system upgrades, such that archive transfer will become a continuous, never-ending process. Digital rights management: Maintaining digital information in an accurate and accessible format over an extended retention period must also address the requirements of the authors' digital rights. In many cases, the data may include proprietary information that should not be accessible to all, but only to a defined group of users who understand or have legally agreed to utilize the information only in limited ways, so as to protect the proprietary rights of the original authoring team. Maintaining this requirement over decades can be a challenge that requires processes and tools to ensure total compliance. Reproducibility: Digital information must be able to be reproduced as originally intended or available.
This is significant especially where the original data was produced on technology at a lower level than currently possible. For example, archivists try to maintain the distinction between listening to a gramophone record played on a gramophone as opposed to a digitally cleaned version of the same recording through a modern hi-fi system. Given that individuals' personal data has been growing at a rapid rate in the 21st century, these archiving issues affecting professional repositories will soon be manifest in small organisations and even the home. Types of storage: Solid-state memory devices Digital computers, in particular, make use of two forms of memory known as RAM and ROM, and although the most common form today is RAM, designed to retain data while the computer is powered on, this was not always the case. Nor is active memory the only form used; passive memory devices are now in common use in digital cameras. Magnetic, or ferrite core, data retention is dependent on the magnetic properties of iron and its compounds. PROM, or programmable read-only memory, stores data in a fixed form during the manufacturing process, with data retention dependent on the life expectancy of the device itself. EPROM, or erasable programmable read-only memory, is similar to PROM but can be cleared by exposure to ultraviolet light. EEPROM, or electrically erasable programmable read-only memory, is the format used by flash memory devices and can be erased and rewritten electronically. These devices tend to be extraordinarily resilient; in a 2005 destructive test, a USB key survived boiling in a custard pie, being run over by a truck and fired from a mortar at a brick wall. Although physically damaged after the final test, some deft soldering restored the device and data was successfully retrieved. Types of storage: Magnetic media Magnetic tapes consist of narrow bands of a magnetic medium bonded in paper or plastic. The magnetic medium passes across a semi-fixed head which reads or writes data. Typically, magnetic media have a maximum lifetime of about 50 years, although this assumes optimal storage conditions; life expectancy can decrease rapidly depending on storage conditions and the resilience and reliability of hardware components. Magnetic tape formats include:
- magnetic tape reels
- magnetic stripe cards
- magnetic cards
- cassette tapes
- video cassette tapes
Magnetic disks and drums include a rotating magnetic medium combined with a movable read/write head:
- floppy disks
- zip drives
- hard disks and drums
Non-magnetic media include:
- punched paper-tape
- punched cards
- optical media (rotating media combined with a moveable read/write head comprising a laser), such as pressed CD-ROMs and DVD-ROMs; write once read many (WORM) media such as CD-R, DVD±R, BD-R; and rewriteable media such as CD-RW, DVD±RW, BD-RE. Some disc types can have multiple data layers for greater storage capacity.
Types of storage: Printing technology Although not a digital storage medium in itself, printing hard-copies of documents and images remains a popular means of representing digital data and possibly acquires the qualities associated with original documents, especially their potential for endurance. More recent advances in printer technology have raised the quality of photographic images in particular. Unfortunately the permanence of printed documents cannot be easily discerned from the documents themselves.
Printing technologies include:
- wet-ribbon inked printers
- heat-sensitive papers, such as FAX rolls
- NCR and other carbon technologies
- ink-jet printers: wax-based inks (e.g. DataProducts SI810), water-based inks, and other bases
- mono laser printers
- colour laser printers
Financial driven resources: A way of preserving digital content through financial trusts. The data is backed by financial investments, typically assigned to a trust company, which pays traditional storage providers to house data for long periods of time with the interest gained on the principal. In 2008 a series of companies such as LivingStory.com and Orysa.com started offering these services to store point-in-time accounting data and provide consumer archive services. Types of storage: Soft storage technology The shortcomings of some storage media are already well recognised, and various attempts have been made to supplement the permanence of an underlying technology. These "soft storage technologies" enhance their base technology by applying software or system techniques, often within quite narrow fields of data storage and not always with the explicit intention of improving digital permanence. Examples include:
- RAID systems
- distributed systems, such as BitTorrent
- networked backup services
- public archive repositories
- web-site archives
- financial trust resources
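Regular migration of the kind described above is only trustworthy if the copied data can be verified against the original. A minimal Python sketch of such a fixity check (the directory names and the choice of SHA-256 are illustrative assumptions, not details taken from the article):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_migration(src_dir: str, dst_dir: str) -> list[str]:
    """Return the relative paths that are missing or corrupted in the new archive."""
    src, dst = Path(src_dir), Path(dst_dir)
    problems = []
    for src_file in src.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(src)
        dst_file = dst / rel
        if not dst_file.is_file() or sha256_of(src_file) != sha256_of(dst_file):
            problems.append(str(rel))
    return problems

if __name__ == "__main__":
    # Hypothetical archive locations; replace with real mount points.
    bad = verify_migration("old_archive", "new_archive")
    print("migration verified" if not bad else f"re-copy needed for: {bad}")
```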
**False precision** False precision (also called overprecision, fake precision, misplaced precision and spurious precision) occurs when numerical data are presented in a manner that implies better precision than is justified; since precision is a limit to accuracy (in the ISO definition of accuracy), this often leads to overconfidence in the accuracy, named precision bias. Overview: Madsen Pirie defines the term "false precision" in a more general way: when exact numbers are used for notions that cannot be expressed in exact terms. For example, "We know that 90% of the difficulty in writing is getting started." Often false precision is abused to produce an unwarranted confidence in the claim: "our mouthwash is twice as good as our competitor's". In science and engineering, convention dictates that unless a margin of error is explicitly stated, the number of significant figures used in the presentation of data should be limited to what is warranted by the precision of those data. For example, if an instrument can be read to tenths of a unit of measurement, results of calculations using data obtained from that instrument can only be confidently stated to the tenths place, regardless of what the raw calculation returns or whether other data used in the calculation are more accurate. Even outside these disciplines, there is a tendency to assume that all the non-zero digits of a number are meaningful; thus, providing excessive figures may lead the viewer to expect better precision than exists. However, in contrast, it is good practice to retain more significant figures than this in the intermediate stages of a calculation, in order to avoid accumulated rounding errors. False precision commonly arises when high-precision and low-precision data are combined, when using an electronic calculator, and in conversion of units. Examples: False precision is the gist of numerous variations of a joke which can be summarized as follows: a tour guide at a museum says a dinosaur skeleton is 100,000,005 years old, because an expert told him that it was 100 million years old when he started working there 5 years ago. If a car's speedometer indicates the vehicle is travelling at 60 mph and that is converted to km/h, it would equal 96.5606 km/h. The conversion from the whole number in one system to the precise result in another makes it seem like the measurement was very precise, when in fact it was not. Measures that rely on statistical sampling, such as IQ tests, are often reported with false precision.
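A short Python sketch of the speedometer example; the helper round_sig is our own illustration (not a standard-library function) for keeping only the two significant figures that the original reading justifies:

```python
from math import floor, log10

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

MPH_TO_KMH = 1.609344  # exact, by definition of the international mile

raw = 60 * MPH_TO_KMH       # 96.56064 km/h -- far more digits than "60 mph" warrants
print(raw)                  # 96.56064
print(round_sig(raw, 2))    # 97.0 -- the honest, 2-significant-figure answer
```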
**Emulsion test** The emulsion test is a method to determine the presence of lipids using wet chemistry. The procedure is for the sample to be suspended in ethanol, allowing any lipids present to dissolve (lipids are soluble in alcohols). The liquid (alcohol with dissolved fat) is then decanted into water. Since lipids do not dissolve in water while ethanol does, when the ethanol is diluted the lipid falls out of solution to give a cloudy white emulsion.
**Fisher's exact test** Fisher's exact test is a statistical significance test used in the analysis of contingency tables. Although in practice it is employed when sample sizes are small, it is valid for all sample sizes. It is named after its inventor, Ronald Fisher, and is one of a class of exact tests, so called because the significance of the deviation from a null hypothesis (e.g., p-value) can be calculated exactly, rather than relying on an approximation that becomes exact in the limit as the sample size grows to infinity, as with many statistical tests. Fisher is said to have devised the test following a comment from Muriel Bristol, who claimed to be able to detect whether the tea or the milk was added first to her cup. He tested her claim in the "lady tasting tea" experiment. Purpose and scope: The test is useful for categorical data that result from classifying objects in two different ways; it is used to examine the significance of the association (contingency) between the two kinds of classification. So in Fisher's original example, one criterion of classification could be whether milk or tea was put in the cup first; the other could be whether Bristol thinks that the milk or tea was put in first. We want to know whether these two classifications are associated—that is, whether Bristol really can tell whether milk or tea was poured in first. Most uses of the Fisher test involve, like this example, a 2 × 2 contingency table (discussed below). The p-value from the test is computed as if the margins of the table are fixed, i.e. as if, in the tea-tasting example, Bristol knows the number of cups with each treatment (milk or tea first) and will therefore provide guesses with the correct number in each category. As pointed out by Fisher, this leads under a null hypothesis of independence to a hypergeometric distribution of the numbers in the cells of the table. With large samples, a chi-squared test (or better yet, a G-test) can be used in this situation. However, the significance value it provides is only an approximation, because the sampling distribution of the test statistic that is calculated is only approximately equal to the theoretical chi-squared distribution. The approximation is inadequate when sample sizes are small, or the data are very unequally distributed among the cells of the table, resulting in the cell counts predicted on the null hypothesis (the "expected values") being low. The usual rule for deciding whether the chi-squared approximation is good enough is that the chi-squared test is not suitable when the expected values in any of the cells of a contingency table are below 5, or below 10 when there is only one degree of freedom (this rule is now known to be overly conservative). In fact, for small, sparse, or unbalanced data, the exact and asymptotic p-values can be quite different and may lead to opposite conclusions concerning the hypothesis of interest. In contrast, the Fisher exact test is, as its name states, exact as long as the experimental procedure keeps the row and column totals fixed, and it can therefore be used regardless of the sample characteristics. It becomes difficult to calculate with large samples or well-balanced tables, but fortunately these are exactly the conditions where the chi-squared test is appropriate. For hand calculations, the test is feasible only in the case of a 2 × 2 contingency table.
However, the principle of the test can be extended to the general case of an m × n table, and some statistical packages provide a calculation (sometimes using a Monte Carlo method to obtain an approximation) for the more general case. The test can also be used to quantify the overlap between two sets. For example, in enrichment analyses in statistical genetics one set of genes may be annotated for a given phenotype and the user may be interested in testing the overlap of their own set with those. In this case a 2 × 2 contingency table may be generated and Fisher's exact test applied by identifying:
- genes that are provided in both lists
- genes that are provided in the first list and not the second
- genes that are provided in the second list and not the first
- genes that are not provided in either list
The test assumes genes in either list are taken from a broader set of genes (e.g. all remaining genes). A p-value may then be calculated, summarizing the significance of the overlap between the two lists. Example: For example, a sample of teenagers might be divided into male and female on one hand and those who are and are not currently studying for a statistics exam on the other. We hypothesize that the proportion of studying students is higher among the women than among the men, and we want to test whether any difference in proportions that we observe is significant. The data might look like this:

|              | Men | Women | Row total |
|--------------|-----|-------|-----------|
| Studying     | 1   | 9     | 10        |
| Not studying | 11  | 3     | 14        |
| Column total | 12  | 12    | 24        |

The question we ask about these data is: Knowing that 10 of these 24 teenagers are studying and that 12 of the 24 are female, and assuming the null hypothesis that men and women are equally likely to study, what is the probability that these 10 teenagers who are studying would be so unevenly distributed between the women and the men? If we were to choose 10 of the teenagers at random, what is the probability that 9 or more of them would be among the 12 women and only 1 or fewer from among the 12 men? Before we proceed with the Fisher test, we first introduce some notation. We represent the cells by the letters a, b, c and d, call the totals across rows and columns marginal totals, and represent the grand total by n. So the table now looks like this:

|              | Men   | Women | Row total |
|--------------|-------|-------|-----------|
| Studying     | a     | b     | a + b     |
| Not studying | c     | d     | c + d     |
| Column total | a + c | b + d | n         |

Fisher showed that conditional on the margins of the table, a is distributed as a hypergeometric distribution with a+c draws from a population with a+b successes and c+d failures. The probability of obtaining such a set of values is given by

$$p = \frac{\binom{a+b}{a}\binom{c+d}{c}}{\binom{n}{a+c}} = \frac{(a+b)!\,(c+d)!\,(a+c)!\,(b+d)!}{a!\,b!\,c!\,d!\,n!}$$

where $\binom{n}{k}$ is the binomial coefficient and the symbol ! indicates the factorial operator. This can be seen as follows. If the marginal totals (i.e. a+b, c+d, a+c, and b+d) are known, only a single degree of freedom is left: the value of a, for example, suffices to deduce the other values. Now, p = p(a) is the probability that a elements are positive in a random selection (without replacement) of a+c elements from a larger set containing n elements in total out of which a+b are positive, which is precisely the definition of the hypergeometric distribution. With the data above (using the first of the equivalent forms), this gives

$$p = \frac{\binom{10}{1}\binom{14}{11}}{\binom{24}{12}} \approx 0.001346076$$

The formula above gives the exact hypergeometric probability of observing this particular arrangement of the data, assuming the given marginal totals, on the null hypothesis that men and women are equally likely to be studiers.
To put it another way, if we assume that the probability that a man is a studier is p, the probability that a woman is a studier is also p, and we assume that both men and women enter our sample independently of whether or not they are studiers, then this hypergeometric formula gives the conditional probability of observing the values a, b, c, d in the four cells, conditionally on the observed marginals (i.e., assuming the row and column totals shown in the margins of the table are given). This remains true even if men enter our sample with different probabilities than women. The requirement is merely that the two classification characteristics—gender, and studier (or not)—are not associated. For example, suppose we knew probabilities P, Q, p, q with P+Q = p+q = 1 such that (male studier, male non-studier, female studier, female non-studier) had respective probabilities (Pp, Pq, Qp, Qq) for each individual encountered under our sampling procedure. Then still, were we to calculate the distribution of cell entries conditional on the given marginals, we would obtain the above formula in which neither p nor P occurs. Thus, we can calculate the exact probability of any arrangement of the 24 teenagers into the four cells of the table, but Fisher showed that to generate a significance level, we need consider only the cases where the marginal totals are the same as in the observed table, and among those, only the cases where the arrangement is as extreme as the observed arrangement, or more so. (Barnard's test relaxes this constraint on one set of the marginal totals.) In the example, there are 11 such cases. Of these only one is more extreme in the same direction as our data; it looks like this:

|              | Men | Women | Row total |
|--------------|-----|-------|-----------|
| Studying     | 0   | 10    | 10        |
| Not studying | 12  | 2     | 14        |
| Column total | 12  | 12    | 24        |

For this table (with extremely unequal studying proportions) the probability is

$$p = \frac{\binom{10}{0}\binom{14}{12}}{\binom{24}{12}} \approx 0.000033652$$

In order to calculate the significance of the observed data, i.e. the total probability of observing data as extreme or more extreme if the null hypothesis is true, we have to calculate the values of p for both these tables, and add them together. This gives a one-tailed test, with p approximately 0.001346076 + 0.000033652 = 0.001379728. For example, in the R statistical computing environment, this value can be obtained as fisher.test(rbind(c(1,9),c(11,3)), alternative="less")$p.value, or in Python, using scipy.stats.fisher_exact(table=[[1,9],[11,3]], alternative="less") (where one receives both the odds ratio and the p-value). This value can be interpreted as the sum of evidence provided by the observed data—or any more extreme table—for the null hypothesis (that there is no difference in the proportions of studiers between men and women). The smaller the value of p, the greater the evidence for rejecting the null hypothesis; so here the evidence is strong that men and women are not equally likely to be studiers. For a two-tailed test we must also consider tables that are equally extreme, but in the opposite direction. Unfortunately, classification of the tables according to whether or not they are 'as extreme' is problematic. An approach used by the fisher.test function in R is to compute the p-value by summing the probabilities for all tables with probabilities less than or equal to that of the observed table. In the example here, the 2-sided p-value is twice the 1-sided value—but in general these can differ substantially for tables with small counts, unlike the case with test statistics that have a symmetric sampling distribution.
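A short Python sketch (standard library plus SciPy; the helper hypergeom_prob is our own naming) that reproduces the numbers above:

```python
from math import comb
from scipy.stats import fisher_exact

def hypergeom_prob(a: int, b: int, c: int, d: int) -> float:
    """Exact probability of a 2x2 table with the given cells, margins held fixed."""
    return comb(a + b, a) * comb(c + d, c) / comb(a + b + c + d, a + c)

# Observed table: rows = (studying, not studying), columns = (men, women).
p_observed = hypergeom_prob(1, 9, 11, 3)    # ~0.001346076
p_extreme = hypergeom_prob(0, 10, 12, 2)    # ~0.000033652, the only more extreme table
print(p_observed + p_extreme)               # one-tailed p ~ 0.001379728

# The same one-tailed p-value via SciPy's implementation of Fisher's exact test.
odds_ratio, p_value = fisher_exact([[1, 9], [11, 3]], alternative="less")
print(p_value)
```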
As noted above, most modern statistical packages will calculate the significance of Fisher tests, in some cases even where the chi-squared approximation would also be acceptable. The actual computations as performed by statistical software packages will as a rule differ from those described above, because numerical difficulties may result from the large values taken by the factorials. A simple, somewhat better computational approach relies on a gamma function or log-gamma function, but methods for accurate computation of hypergeometric and binomial probabilities remain an active research area. Controversies: Despite the fact that Fisher's test gives exact p-values, some authors have argued that it is conservative, i.e. that its actual rejection rate is below the nominal significance level. The apparent contradiction stems from the combination of a discrete statistic with fixed significance levels. To be more precise, consider the following proposal for a significance test at the 5% level: reject the null hypothesis for each table to which Fisher's test assigns a p-value equal to or smaller than 5%. Because the set of all tables is discrete, there may not be a table for which equality is achieved. If $\alpha_e$ is the largest p-value smaller than 5% which can actually occur for some table, then the proposed test effectively tests at the $\alpha_e$-level. For small sample sizes, $\alpha_e$ might be significantly lower than 5%. While this effect occurs for any discrete statistic (not just in contingency tables, or for Fisher's test), it has been argued that the problem is compounded by the fact that Fisher's test conditions on the marginals. To avoid the problem, many authors discourage the use of fixed significance levels when dealing with discrete problems. The decision to condition on the margins of the table is also controversial. The p-values derived from Fisher's test come from the distribution that conditions on the margin totals. In this sense, the test is exact only for the conditional distribution and not the original table, where the margin totals may change from experiment to experiment. It is possible to obtain an exact p-value for the 2×2 table when the margins are not held fixed. Barnard's test, for example, allows for random margins. However, some authors (including, later, Barnard himself) have criticized Barnard's test based on this property. They argue that the marginal success total is an (almost) ancillary statistic, containing (almost) no information about the tested property. The act of conditioning on the marginal success rate from a 2×2 table can be shown to ignore some information in the data about the unknown odds ratio. The argument that the marginal totals are (almost) ancillary implies that the appropriate likelihood function for making inferences about this odds ratio should be conditioned on the marginal success rate. Whether this lost information is important for inferential purposes is the essence of the controversy. Alternatives: An alternative exact test, Barnard's exact test, has been developed and proponents of it suggest that this method is more powerful, particularly in 2×2 tables. Furthermore, Boschloo's test is an exact test that is uniformly more powerful than Fisher's exact test by construction.
Another alternative is to use maximum likelihood estimates to calculate a p-value from the exact binomial or multinomial distributions and reject or fail to reject based on the p-value. For stratified categorical data the Cochran–Mantel–Haenszel test must be used instead of Fisher's test. Choi et al. propose a p-value derived from the likelihood ratio test based on the conditional distribution of the odds ratio given the marginal success rate. This p-value is inferentially consistent with classical tests of normally distributed data as well as with likelihood ratios and support intervals based on this conditional likelihood function. It is also readily computable.
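SciPy (version 1.7 and later) ships implementations of Fisher's, Barnard's and Boschloo's tests, so the alternatives discussed above can be compared directly; a quick sketch on the studying-by-gender table from the example:

```python
from scipy.stats import fisher_exact, barnard_exact, boschloo_exact

table = [[1, 9], [11, 3]]  # rows = (studying, not studying), columns = (men, women)

# One-sided tests, matching the alternative="less" examples above.
_, p_fisher = fisher_exact(table, alternative="less")
p_barnard = barnard_exact(table, alternative="less").pvalue
p_boschloo = boschloo_exact(table, alternative="less").pvalue

# Barnard's and Boschloo's tests do not condition on both margins; Boschloo's
# is uniformly at least as powerful as Fisher's by construction.
print(f"Fisher:   {p_fisher:.6f}")
print(f"Barnard:  {p_barnard:.6f}")
print(f"Boschloo: {p_boschloo:.6f}")
```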
**Fountain** A fountain, from the Latin "fons" (genitive "fontis"), meaning source or spring, is a decorative reservoir used for discharging water. It is also a structure that jets water into the air for a decorative or dramatic effect. Fountains were originally purely functional, connected to springs or aqueducts and used to provide drinking water and water for bathing and washing to the residents of cities, towns and villages. Until the late 19th century most fountains operated by gravity, and needed a source of water higher than the fountain, such as a reservoir or aqueduct, to make the water flow or jet into the air. In addition to providing drinking water, fountains were used for decoration and to celebrate their builders. Roman fountains were decorated with bronze or stone masks of animals or heroes. In the Middle Ages, Moorish and Muslim garden designers used fountains to create miniature versions of the gardens of paradise. King Louis XIV of France used fountains in the Gardens of Versailles to illustrate his power over nature. The baroque decorative fountains of Rome in the 17th and 18th centuries marked the arrival point of restored Roman aqueducts and glorified the Popes who built them. By the end of the 19th century, as indoor plumbing became the main source of drinking water, urban fountains became purely decorative. Mechanical pumps replaced gravity and allowed fountains to recycle water and to force it high into the air. The Jet d'Eau in Lake Geneva, built in 1951, shoots water 140 metres (460 ft) in the air. The highest such fountain in the world is King Fahd's Fountain in Jeddah, Saudi Arabia, which spouts water 260 metres (850 ft) above the Red Sea. Fountains are used today to decorate city parks and squares; to honor individuals or events; for recreation and for entertainment. A splash pad or spray pool allows city residents to enter, get wet and cool off in summer. The musical fountain combines moving jets of water, colored lights and recorded music, controlled by a computer, for dramatic effects. Fountains can themselves also be musical instruments played by obstruction of one or more of their water jets. Drinking fountains provide clean drinking water in public buildings, parks and public spaces. History: Ancient fountains Ancient civilizations built stone basins to capture and hold precious drinking water. A carved stone basin, dating to around 2000 BC, was discovered in the ruins of the ancient Sumerian city of Lagash in modern Iraq. The ancient Assyrians constructed a series of basins in the gorge of the Comel River, carved in solid rock, connected by small channels, descending to a stream. The lowest basin was decorated with carved reliefs of two lions. The ancient Egyptians had ingenious systems for hoisting water up from the Nile for drinking and irrigation, but without a higher source of water it was not possible to make water flow by gravity. There are lion-shaped fountains in the Temple of Dendera in Qena. The ancient Greeks used aqueducts and gravity-powered fountains to distribute water. According to ancient historians, fountains existed in Athens, Corinth, and other ancient Greek cities in the 6th century BC as the terminating points of aqueducts which brought water from springs and rivers into the cities. In the 6th century BC, the Athenian ruler Peisistratos built the main fountain of Athens, the Enneacrounos, in the Agora, or main square.
It had nine large cannons, or spouts, which supplied drinking water to local residents. Greek fountains were made of stone or marble, with water flowing through bronze pipes and emerging from the mouth of a sculpted mask that represented the head of a lion or the muzzle of an animal. Most Greek fountains flowed by simple gravity, but they also discovered how to use the principle of a siphon to make water spout, as seen in pictures on Greek vases. History: Ancient Roman fountains The Ancient Romans built an extensive system of aqueducts from mountain rivers and lakes to provide water for the fountains and baths of Rome. The Roman engineers used lead pipes instead of bronze to distribute the water throughout the city. The excavations at Pompeii, which revealed the city as it was when it was destroyed by Mount Vesuvius in 79 AD, uncovered free-standing fountains and basins placed at intervals along city streets, fed by siphoning water upwards from lead pipes under the street. The excavations of Pompeii also showed that the homes of wealthy Romans often had a small fountain in the atrium, or interior courtyard, with water coming from the city water supply and spouting into a small bowl or basin. Ancient Rome was a city of fountains. According to Sextus Julius Frontinus, the Roman consul who was named curator aquarum or guardian of the water of Rome in 98 AD, Rome had nine aqueducts which fed 39 monumental fountains and 591 public basins, not counting the water supplied to the Imperial household, baths and owners of private villas. Each of the major fountains was connected to two different aqueducts, in case one was shut down for service. The Romans were able to make fountains jet water into the air by using the pressure of water flowing from a distant and higher source of water to create hydraulic head, or force. Illustrations of fountains in gardens spouting water are found on wall paintings in Rome from the 1st century BC, and in the villas of Pompeii. The Villa of Hadrian in Tivoli featured a large swimming basin with jets of water. Pliny the Younger described the banquet room of a Roman villa where a fountain began to jet water when visitors sat on a marble seat. The water flowed into a basin, where the courses of a banquet were served in floating dishes shaped like boats. Roman engineers built aqueducts and fountains throughout the Roman Empire. Examples can be found today in the ruins of Roman towns in Vaison-la-Romaine and Glanum in France, in Augst, Switzerland, and other sites. History: Medieval fountains In Nepal there were public drinking fountains at least as early as 550 AD. They are called dhunge dharas or hitis. They consist of intricately carved stone spouts through which water flows uninterrupted from underground water sources. They are found extensively in Nepal and some of them are still operational. Construction of water conduits like hitis and dug wells is considered a pious act in Nepal. During the Middle Ages, Roman aqueducts were wrecked or fell into decay, and many fountains throughout Europe stopped working, so fountains existed mainly in art and literature, or in secluded monasteries or palace gardens. Fountains in the Middle Ages were associated with the source of life, purity, wisdom, innocence, and the Garden of Eden. In illuminated manuscripts like the Très Riches Heures du Duc de Berry (1411–1416), the Garden of Eden was shown with a graceful gothic fountain in the center.
The Ghent Altarpiece by Jan van Eyck, finished in 1432, also shows a fountain as a feature of the adoration of the mystic lamb, a scene apparently set in Paradise. The cloister of a monastery was supposed to be a replica of the Garden of Eden, protected from the outside world. Simple fountains, called lavabos, were placed inside medieval monasteries such as Le Thoronet Abbey in Provence and were used for ritual washing before religious services. Fountains were also found in the enclosed medieval jardins d'amour, "gardens of courtly love" – ornamental gardens used for courtship and relaxation. The medieval romance The Roman de la Rose describes a fountain in the center of an enclosed garden, feeding small streams bordered by flowers and fresh herbs. Some medieval fountains, like the cathedrals of their time, illustrated biblical stories, local history and the virtues of their time. The Fontana Maggiore in Perugia, dedicated in 1278, is decorated with stone carvings representing prophets and saints, allegories of the arts, labors of the months, the signs of the zodiac, and scenes from Genesis and Roman history. Medieval fountains could also provide amusement. The gardens of the Counts of Artois at the Château de Hesdin, built in 1295, contained famous fountains, called Les Merveilles de Hesdin ("The Wonders of Hesdin"), which could be triggered to drench surprised visitors. History: Fountains of the Islamic World Shortly after the spread of Islam, the Arabs incorporated the famous Islamic gardens into their city planning. Islamic gardens after the 7th century were traditionally enclosed by walls and were designed to represent paradise. The paradise gardens were laid out in the form of a cross, with four channels representing the rivers of Paradise, dividing the four parts of the world. Water sometimes spouted from a fountain in the center of the cross, representing the spring or fountain, Salsabil, described in the Qur'an as the source of the rivers of Paradise. In the 9th century, the Banū Mūsā brothers, a trio of Persian inventors, were commissioned by the Caliph of Baghdad to summarize the engineering knowledge of the ancient Greek and Roman world. They wrote a book entitled the Book of Ingenious Devices, describing the works of the 1st-century Greek engineer Hero of Alexandria and other engineers, plus many of their own inventions. They described fountains which formed water into different shapes and a wind-powered water pump, but it is not known if any of their fountains were ever actually built. The Persian rulers of the Middle Ages had elaborate water distribution systems and fountains in their palaces and gardens. Water was carried by a pipe into the palace from a source at a higher elevation. Once inside the palace or garden it came up through a small hole in a marble or stone ornament and poured into a basin or garden channels. The gardens of Pasargadae had a system of canals which flowed from basin to basin, both watering the garden and making a pleasant sound. The Persian engineers also used the principle of the siphon (called shotor-gelu in Persian, literally 'neck of the camel') to create fountains which spouted water or made it resemble a bubbling spring.
The garden of Fin, near Kashan, used 171 spouts connected to pipes to create a fountain called the Howz-e jush, or "boiling basin". The 11th-century Persian poet Azraqi described a Persian fountain:

> From a marvelous faucet of gold pours a wave whose clarity is more pure than a soul; The turquoise and silver form ribbons in the basin coming from this faucet of gold ...

Reciprocating motion was first described in 1206 by the Arab Muslim engineer and inventor al-Jazari when the kings of the Artuqid dynasty in Turkey commissioned him to manufacture a machine to raise water for their palaces. The finest result was a machine called the double-acting reciprocating piston pump, which translated rotary motion to reciprocating motion via the crankshaft-connecting rod mechanism. The palaces of Moorish Spain, particularly the Alhambra in Granada, had famous fountains. The patio of the Sultan in the gardens of the Generalife in Granada (1319) featured spouts of water pouring into a basin, with channels which irrigated orange and myrtle trees. The garden was modified over the centuries – the jets of water which cross the canal today were added in the 19th century. The fountain in the Court of the Lions of the Alhambra, built from 1362 to 1391, is a large vasque mounted on twelve stone statues of lions. Water spouts upward in the vasque and pours from the mouths of the lions, filling four channels dividing the courtyard into quadrants. The basin dates to the 14th century, but the lions spouting water are believed to be older, dating to the 11th century. The design of the Islamic garden spread throughout the Islamic world, from Moorish Spain to the Mughal Empire in the Indian subcontinent. The Shalimar Gardens, built by Emperor Shah Jahan in 1641, were said to be ornamented with 410 fountains, which fed into a large basin, canal and marble pools. In the Ottoman Empire, rulers often built fountains next to mosques so worshippers could do their ritual washing. Examples include the Fountain of Qasim Pasha (1527), Temple Mount, Jerusalem, an ablution and drinking fountain built during the Ottoman reign of Suleiman the Magnificent; the Fountain of Ahmed III (1728) at the Topkapı Palace, Istanbul; another Fountain of Ahmed III in Üsküdar (1729); and the Tophane Fountain (1732). Palaces themselves often had small decorated fountains, which provided drinking water, cooled the air, and made a pleasant splashing sound. One surviving example is the Fountain of Tears (1764) at the Bakhchisarai Palace in Crimea, which was made famous by a poem of Alexander Pushkin. The sebil was a decorated fountain that was often the only source of water for the surrounding neighborhood. It was often commissioned as an act of Islamic piety by a rich person. History: Renaissance fountains (15th–17th centuries) In the 14th century, Italian humanist scholars began to rediscover and translate forgotten Roman texts on architecture by Vitruvius, on hydraulics by Hero of Alexandria, and descriptions of Roman gardens and fountains by Pliny the Younger, Pliny the Elder, and Varro. The treatise on architecture, De re aedificatoria, by Leon Battista Alberti, which described in detail Roman villas, gardens and fountains, became the guidebook for Renaissance builders. In Rome, Pope Nicholas V (1397–1455), himself a scholar who commissioned hundreds of translations of ancient Greek classics into Latin, decided to embellish the city and make it a worthy capital of the Christian world.
In 1453, he began to rebuild the Acqua Vergine, the ruined Roman aqueduct which had brought clean drinking water to the city from eight miles (13 km) away. He also decided to revive the Roman custom of marking the arrival point of an aqueduct with a mostra, a grand commemorative fountain. He commissioned the architect Leon Battista Alberti to build a wall fountain where the Trevi Fountain is now located. The aqueduct he restored, with modifications and extensions, eventually supplied water to the Trevi Fountain and the famous baroque fountains in the Piazza del Popolo and Piazza Navona. One of the first new fountains to be built in Rome during the Renaissance was the fountain in the piazza in front of the church of Santa Maria in Trastevere (1472), which was placed on the site of an earlier Roman fountain. Its design, based on an earlier Roman model, with a circular vasque on a pedestal pouring water into a basin below, became the model for many other fountains in Rome, and eventually for fountains in other cities, from Paris to London. In 1503, Pope Julius II decided to recreate a classical pleasure garden in the same place. The new garden, called the Cortile del Belvedere, was designed by Donato Bramante. The garden was decorated with the Pope's famous collection of classical statues, and with fountains. The Venetian Ambassador wrote in 1523, "... On one side of the garden is a most beautiful loggia, at one end of which is a lovely fountain that irrigates the orange trees and the rest of the garden by a little canal in the center of the loggia ..." The original garden was split in two by the construction of the Vatican Library in the 16th century, but a new fountain by Carlo Maderno was built in the Cortile del Belvedere, with a jet of water shooting up from a circular stone bowl on an octagonal pedestal in a large basin. In 1537, in Florence, Cosimo I de' Medici, who had become ruler of the city at the age of only 17, also decided to launch a program of aqueduct and fountain building. The city had previously gotten all its drinking water from wells and reservoirs of rain water, which meant that there was little water or water pressure to run fountains. Cosimo built an aqueduct large enough for the first continually running fountain in Florence, the Fountain of Neptune in the Piazza della Signoria (1560–1567). This fountain featured an enormous white marble statue of Neptune, resembling Cosimo, by the sculptor Bartolomeo Ammannati. Under the Medicis, fountains were not just sources of water, but advertisements of the power and benevolence of the city's rulers. They became central elements not only of city squares, but of the new Italian Renaissance garden. The great Medici Villa at Castello, built for Cosimo by Benedetto Varchi, featured two monumental fountains on its central axis: one with two bronze figures representing Hercules slaying Antaeus, symbolizing the victory of Cosimo over his enemies; and a second, in the middle of a circular labyrinth of cypresses, laurel, myrtle and roses, with a bronze statue by Giambologna which showed the goddess Venus wringing her hair. The planet Venus was governed by Capricorn, which was the emblem of Cosimo; the fountain symbolized that he was the absolute master of Florence. By the middle Renaissance, fountains had become a form of theater, with cascades and jets of water coming from marble statues of animals and mythological figures.
The most famous fountains of this kind were found in the Villa d'Este (1550–1572), at Tivoli near Rome, which featured a hillside of basins, fountains and jets of water, as well as a fountain which produced music by pouring water into a chamber, forcing air into a series of flute-like pipes. The gardens also featured giochi d'acqua, water jokes, hidden fountains which suddenly soaked visitors. Between 1546 and 1549, the merchants of Paris built the first Renaissance-style fountain in Paris, the Fontaine des Innocents, to commemorate the ceremonial entry of the King into the city. The fountain, which originally stood against the wall of the church of the Holy Innocents, was rebuilt several times and now stands in a square near Les Halles. It is the oldest fountain in Paris. Henry II constructed an Italian-style garden with a fountain shooting a vertical jet of water for his favorite mistress, Diane de Poitiers, next to the Château de Chenonceau (1556–1559). At the royal Château de Fontainebleau, he built another fountain with a bronze statue of Diane, goddess of the hunt, modeled after Diane de Poitiers. Later, after the death of Henry II, his widow, Catherine de Medici, expelled Diane de Poitiers from Chenonceau and built her own fountain and garden there. King Henry IV of France made an important contribution to French fountains by inviting an Italian hydraulic engineer, Tommaso Francini, who had worked on the fountains of the villa at Pratolino, to make fountains in France. Francini became a French citizen in 1600, built the Medici Fountain, and during the rule of the young King Louis XIII he was raised to the position of Intendant général des Eaux et Fontaines of the king, a position which was hereditary. His descendants became the royal fountain designers for Louis XIII and for Louis XIV at Versailles. In 1630, another Medici, Marie de Medici, the widow of Henry IV, built her own monumental fountain in Paris, the Medici Fountain, in the garden of the Palais du Luxembourg. That fountain still exists today, with a long basin of water and statues added in 1866. History: Baroque fountains (17th–18th century) Baroque fountains of Rome The 17th and 18th centuries were a golden age for fountains in Rome, which began with the reconstruction of ruined Roman aqueducts and the construction by the Popes of mostra, or display fountains, to mark their termini. The new fountains were expressions of the new Baroque art, which was officially promoted by the Catholic Church as a way to win popular support against the Protestant Reformation; the Council of Trent had declared in the 16th century that the Church should counter austere Protestantism with art that was lavish, animated and emotional. The fountains of Rome, like the paintings of Rubens, were examples of the principles of Baroque art. They were crowded with allegorical figures, and filled with emotion and movement. In these fountains, sculpture became the principal element, and the water was used simply to animate and decorate the sculptures. They, like baroque gardens, were "a visual representation of confidence and power." The first of the Fountains of St. Peter's Square, by Carlo Maderno (1614), was one of the earliest Baroque fountains in Rome, made to complement the lavish Baroque façade he designed for St. Peter's Basilica behind it. It was fed by water from the Paola aqueduct, restored in 1612, whose source was 266 feet (81 m) above sea level, which meant it could shoot water twenty feet up from the fountain.
Its form, with a large circular vasque on a pedestal pouring water into a basin and an inverted vasque above it spouting water, was imitated two centuries later in the Fountains of the Place de la Concorde in Paris. The Triton Fountain in the Piazza Barberini (1642), by Gian Lorenzo Bernini, is a masterpiece of Baroque sculpture, representing Triton, half-man and half-fish, blowing his horn to calm the waters, following a text by the Roman poet Ovid in the Metamorphoses. The Triton Fountain benefited from its location in a valley, and from the fact that it was fed by the Acqua Felice aqueduct, restored in 1587, which arrived in Rome at an elevation of 194 feet (59 m) above sea level, a difference of 130 feet (40 m) in elevation between the source and the fountain, which meant that the water from this fountain jetted sixteen feet straight up into the air from the conch shell of the Triton. The Piazza Navona became a grand theater of water, with three fountains, built in a line on the site of the Stadium of Domitian. The fountains at either end are by Giacomo della Porta; the Neptune fountain to the north (1572) shows the god of the sea spearing an octopus, surrounded by tritons, sea horses and mermaids. At the southern end is Il Moro, possibly also a figure of Neptune, riding a fish in a conch shell. In the center is the Fontana dei Quattro Fiumi (The Fountain of the Four Rivers) (1648–51), a highly theatrical fountain by Bernini, with statues representing rivers from the four continents: the Nile, Danube, Plate River and Ganges. Over the whole structure is a 54-foot (16 m) Egyptian obelisk, crowned by a cross with the emblem of the Pamphili family, representing Pope Innocent X, whose family palace was on the piazza. The theme of a fountain with statues symbolizing great rivers was later used in the Place de la Concorde (1836–40) and in the Fountain of Neptune in the Alexanderplatz in Berlin (1891). The fountains of the Piazza Navona had one drawback: their water came from the Acqua Vergine, which had only a 23-foot (7.0 m) drop from the source to the fountains, which meant the water could only fall or trickle downwards, not jet very high upwards. The Trevi Fountain is the largest and most spectacular of Rome's fountains, designed to glorify the three different Popes who created it. It was built beginning in 1730 at the terminus of the reconstructed Acqua Vergine aqueduct, on the site of a Renaissance fountain by Leon Battista Alberti. It was the work of the architect Nicola Salvi and the successive project of Pope Clement XII, Pope Benedict XIV and Pope Clement XIII, whose emblems and inscriptions are carried on the attic story, entablature and central niche. The central figure is Oceanus, the personification of all the seas and oceans, in an oyster-shell chariot, surrounded by Tritons and sea nymphs. In fact, the fountain had very little water pressure, because the source of water was, like the source for the Piazza Navona fountains, the Acqua Vergine, with a 23-foot (7.0 m) drop. Salvi compensated for this problem by sinking the fountain down into the ground, and by carefully designing the cascade so that the water churned and tumbled, adding movement and drama. Wrote historians Maria Ann Conelli and Marilyn Symmes, "On many levels the Trevi altered the appearance, function and intent of fountains and was a watershed for future designs."
Baroque fountains of Versailles Beginning in 1662, King Louis XIV of France began to build a new kind of garden, the Garden à la française, or French formal garden, at the Palace of Versailles. In this garden, the fountain played a central role. He used fountains to demonstrate the power of man over nature, and to illustrate the grandeur of his rule. In the Gardens of Versailles, instead of falling naturally into a basin, water was shot into the sky, or formed into the shape of a fan or bouquet. Dancing water was combined with music and fireworks to form a grand spectacle. These fountains were the work of the descendants of Tommaso Francini, the Italian hydraulic engineer who had come to France during the time of Henry IV and built the Medici Fountain and the Fountain of Diana at Fontainebleau. Two fountains were the centerpieces of the Gardens of Versailles, both taken from the myths about Apollo, the sun god, the emblem of Louis XIV, and both symbolizing his power. The Fontaine Latone (1668–70), designed by André Le Nôtre and sculpted by Gaspard and Balthazar Marsy, represents the story of how the peasants of Lycia tormented Latona and her children, Diana and Apollo, and were punished by being turned into frogs. This was a reminder of how French peasants had abused Louis's mother, Anne of Austria, during the uprising called the Fronde in the 1650s. When the fountain is turned on, sprays of water pour down on the peasants, who are frenzied as they are transformed into creatures. The other centerpiece of the Gardens, at the intersection of the main axes of the Gardens of Versailles, is the Bassin d'Apollon (1668–71), designed by Charles Le Brun and sculpted by Jean Baptiste Tuby. This statue shows a theme also depicted in the painted decoration in the Hall of Mirrors of the Palace of Versailles: Apollo in his chariot about to rise from the water, announced by Tritons with seashell trumpets. Historians Maria Ann Conelli and Marilyn Symmes wrote, "Designed for dramatic effect and to flatter the king, the fountain is oriented so that the Sun God rises from the west and travels east toward the chateau, in contradiction to nature." Besides these two monumental fountains, the Gardens over the years contained dozens of other fountains, including thirty-nine animal fountains in the labyrinth depicting the fables of Jean de La Fontaine. There were so many fountains at Versailles that it was impossible to have them all running at once; when Louis XIV made his promenades, his fountain-tenders turned on the fountains ahead of him and turned off those behind him. Louis built an enormous pumping station, the Machine de Marly, with fourteen water wheels and 253 pumps to raise the water three hundred feet from the River Seine, and even attempted to divert the River Eure to provide water for his fountains, but the water supply was never enough. History: Baroque fountains of Peterhof In Russia, Peter the Great founded a new capital at St. Petersburg in 1703 and built a small Summer Palace and gardens there beside the Neva River. The gardens featured a fountain of two sea monsters spouting water, among the earliest fountains in Russia. In 1709, he began constructing a larger palace, Peterhof Palace, alongside the Gulf of Finland. Peter visited France in 1717 and saw the gardens and fountains of Louis XIV at Versailles, Marly and Fontainebleau. When he returned he began building a vast Garden à la française with fountains at Peterhof.
The central feature of the garden was a water cascade, modeled after the cascade at the Château de Marly of Louis XIV, built in 1684. The gardens included trick fountains designed to drench unsuspecting visitors, a popular feature of the Italian Renaissance garden. In 1800–1802 the Emperor Paul I of Russia and his successor, Alexander I of Russia, built a new fountain at the foot of the cascade depicting Samson prying open the mouth of a lion, representing Peter's victory over Sweden in the Great Northern War in 1721. The fountains were fed by reservoirs in the upper garden, while the Samson fountain was fed by a specially constructed aqueduct four kilometers in length. History: 19th century fountains In the early 19th century, London and Paris built aqueducts and new fountains to supply clean drinking water to their exploding populations. Napoleon Bonaparte started construction on the first canals bringing drinking water to Paris and built fifteen new fountains, the most famous being the Fontaine du Palmier in the Place du Châtelet (1806–1808), celebrating his military victories. He also restored and put back into service some of the city's oldest fountains, such as the Medici Fountain. Two of Napoleon's fountains, the Château d'Eau and the fountain in the Place des Vosges, were the first purely decorative fountains in Paris, without water taps for drinking water. Louis-Philippe (1830–1848) continued Napoleon's work, and added some of Paris's most famous fountains, notably the Fontaines de la Concorde (1836–1840) and the fountains in the Place des Vosges. Following a deadly cholera epidemic in 1849, Louis Napoleon decided to completely rebuild the Paris water supply system, separating the water supply for fountains from the water supply for drinking. The most famous fountain built by Louis Napoleon was the Fontaine Saint-Michel, part of his grand reconstruction of Paris boulevards. Louis Napoleon relocated and rebuilt several earlier fountains, such as the Medici Fountain and the Fontaine de Leda, when their original sites were destroyed by his construction projects. In the mid-nineteenth century the first fountains were built in the United States, connected to the first aqueducts bringing drinking water from outside the city. The first fountain in Philadelphia, at Centre Square, opened in 1809, and featured a statue by the sculptor William Rush. The first fountain in New York City, in City Hall Park, opened in 1842, and the first fountain in Boston was turned on in 1848. The first famous American decorative fountain was the Bethesda Fountain in Central Park in New York City, opened in 1873. The 19th century also saw the introduction of new materials in fountain construction: cast iron (the Fontaines de la Concorde); glass (the Crystal Fountain in London (1851)); and even aluminium (the Shaftesbury Memorial Fountain in Piccadilly Circus, London (1897)). The invention of steam pumps meant that water could be supplied directly to homes, and pumped upward from fountains. The new fountains in Trafalgar Square (1845) used steam pumps from an artesian well. By the end of the 19th century fountains in big cities were no longer used to supply drinking water, and were simply a form of art and urban decoration. Another fountain innovation of the 19th century was the illuminated fountain: the Bartholdi Fountain at the Philadelphia Exposition of 1876 was illuminated by gas lamps. In 1884 a fountain in Britain featured electric lights shining upward through the water.
The Exposition Universelle (1889), which celebrated the 100th anniversary of the French Revolution, featured a fountain illuminated by electric lights shining up through the columns of water. The fountains, located in a basin forty meters in diameter, were given color by plates of colored glass inserted over the lamps. The Fountain of Progress gave its show three times each evening, for twenty minutes, with a series of different colors. History: 20th century fountains: Paris fountains in the 20th century no longer had to supply drinking water - they were purely decorative; and, since their water usually came from the river and not from the city aqueducts, their water was no longer drinkable. Twenty-eight new fountains were built in Paris between 1900 and 1940: nine new fountains between 1900 and 1910; four between 1920 and 1930; and fifteen between 1930 and 1940. The biggest fountains of the period were those built for the International Expositions of 1900, 1925 and 1937, and for the Colonial Exposition of 1931. Of those, only the fountains from the 1937 exposition at the Palais de Chaillot still exist. (See Fountains of International Expositions). History: Only a handful of fountains were built in Paris between 1940 and 1980. The most important ones built during that period were on the edges of the city, on the west, just outside the city limits, at La Défense, and to the east at the Bois de Vincennes. History: Between 1981 and 1995, during the terms of President François Mitterrand and Culture Minister Jack Lang, and of Mitterrand's bitter political rival, Paris Mayor Jacques Chirac (Mayor from 1977 until 1995), the city experienced a program of monumental fountain building that exceeded that of Napoleon Bonaparte or Louis Philippe. More than one hundred fountains were built in Paris in the 1980s, mostly in the neighborhoods outside the center of Paris, where there had been few fountains before. These included the Fontaine Cristaux, an homage to Béla Bartók by Jean-Yves Lechevallier (1980); the Stravinsky Fountain next to the Pompidou Center, by sculptors Niki de Saint Phalle and Jean Tinguely (1983); the fountain of the Pyramid of the Louvre by I.M. Pei (1989); the Buren Fountain by sculptor Daniel Buren and Les Sphérades fountain, both in the Palais-Royal; and the fountains of Parc André-Citroën. The Mitterrand-Chirac fountains had no single style or theme. Many of the fountains were designed by famous sculptors or architects, such as Jean Tinguely, I.M. Pei, Claes Oldenburg and Daniel Buren, who had radically different ideas of what a fountain should be. Some were solemn, and others were whimsical. Most made little effort to blend with their surroundings - they were designed to attract attention. History: Fountains built in the United States between 1900 and 1950 mostly followed European models and classical styles. The Samuel Francis Dupont Memorial Fountain, in Dupont Circle, Washington D.C., was designed and created by Henry Bacon and Daniel Chester French, the architect and sculptor of the Lincoln Memorial, in 1921, in a pure neoclassical style. The Buckingham Fountain in Grant Park in Chicago was one of the first American fountains to use powerful modern pumps to shoot water as high as 150 feet (46 meters) into the air. The Fountain of Prometheus, built at the Rockefeller Center in New York City in 1933, was the first American fountain in the Art-Deco style. History: After World War II, fountains in the United States became more varied in form. 
Some, like Ruth Asawa's Andrea (1968) and the Vaillancourt Fountain (1971), both located in San Francisco, were pure works of sculpture. Other fountains, like the Franklin Roosevelt Memorial Waterfall (1997), by architect Lawrence Halprin, were designed as landscapes to illustrate themes. This fountain is part of the Franklin Delano Roosevelt Memorial in Washington D.C., which has four outdoor "rooms" illustrating his presidency. Each "room" contains a cascade or waterfall; the cascade in the third room illustrates the turbulence of the years of World War II. Halprin wrote at an early stage of the design: "the whole environment of the memorial becomes sculpture: to touch, feel, hear and contact - with all the senses." The end of the 20th century saw the development of high-shooting fountains, beginning with the Jet d'eau in Geneva in 1951, followed by taller and taller fountains in the United States and the Middle East. The highest fountain today is King Fahd's Fountain in Jeddah, Saudi Arabia. History: It also saw the increasing popularity of the musical fountain, which combined water, music and light, choreographed by computers. (See Musical fountain below). History: Contemporary fountains (2001–present): The fountain called 'Bit.Fall' by German artist Julius Popp (2005) uses digital technologies to spell out words with water. The fountain is run by a statistical program which selects words at random from news stories on the Internet. It then recodes these words into pictures. The water is then released through 320 nozzles controlled by electromagnetic valves. The program uses rasterization and bitmap technologies to synchronize the valves so drops of water form an image of the words as they fall. According to Popp, the sheet of water is "a metaphor for the constant flow of information from which we cannot escape." Crown Fountain is an interactive fountain and video sculpture feature in Chicago's Millennium Park. Designed by Catalan artist Jaume Plensa, it opened in July 2004. The fountain is composed of a black granite reflecting pool placed between a pair of glass brick towers. The towers are 50 feet (15 m) tall, and they use light-emitting diodes (LEDs) to display digital videos on their inward faces. Construction and design of the Crown Fountain cost US$17 million. Weather permitting, the water operates from May to October, intermittently cascading down the two towers and spouting through a nozzle on each tower's front face. History: Few new fountains have been built in Paris since 2000. The most notable is La Danse de la fontaine emergente (2008), located on Place Augusta-Holmes, rue Paul Klee, in the 13th arrondissement. It was designed by the French-Chinese sculptor Chen Zhen (1955-2000) shortly before his death in 2000, and finished through the efforts of his spouse and collaborator. It shows a dragon, in stainless steel, glass and plastic, emerging and submerging from the pavement of the square. The fountain is in three parts. A bas-relief of the dragon is fixed on the wall of the structure of the water-supply plant, and the dragon seems to be emerging from the wall and plunging underground. This part of the dragon is opaque. The second and third parts depict the arch of the dragon's back coming out of the pavement. These parts of the dragon are transparent, and water under pressure flows visibly within, and is illuminated at night. 
Musical fountains: Musical fountains create a theatrical spectacle with music, light and water, usually employing a variety of programmable spouts and water jets controlled by a computer. Musical fountains: Musical fountains were first described in the 1st century AD by the Greek scientist and engineer Hero of Alexandria in his book Pneumatics. Hero described and provided drawings of "A bird made to whistle by flowing water," "A Trumpet sounded by flowing water," and "Birds made to sing and be silent alternately by flowing water." In Hero's descriptions, water pushed air through musical instruments to make sounds. It is not known if Hero made working models of any of his designs. During the Italian Renaissance, the most famous musical fountains were located in the gardens of the Villa d'Este in Tivoli, which were created between 1550 and 1572. Following the ideas of Hero of Alexandria, the Fountain of the Owl used a series of bronze pipes like flutes to make the sound of birds. The most famous feature of the garden was the great Organ Fountain. It was described by the French philosopher Michel de Montaigne, who visited the garden in 1580: "The music of the Organ Fountain is true music, naturally created ... made by water which falls with great violence into a cave, rounded and vaulted, and agitates the air, which is forced to exit through the pipes of an organ. Other water, passing through a wheel, strikes in a certain order the keyboard of the organ. The organ also imitates the sound of trumpets, the sound of cannon, and the sound of muskets, made by the sudden fall of water ..." The Organ Fountain fell into ruins, but it was recently restored and plays music again. Musical fountains: Louis XIV created the idea of the modern musical fountain by staging spectacles in the Gardens of Versailles, using music and fireworks to accompany the flow of the fountains. Musical fountains: The great international expositions held in Philadelphia, London and Paris featured the ancestors of the modern musical fountain. They introduced the first fountains illuminated by gas lights (Philadelphia in 1876) and the first fountains illuminated by electric lights (London in 1884 and Paris in 1889). The Exposition Universelle (1900) in Paris featured fountains illuminated by colored lights controlled by a keyboard. The Paris Colonial Exposition of 1931 presented the Théâtre d'eau, or water theater, located in a lake, with performances of dancing water. The Exposition Internationale des Arts et Techniques dans la Vie Moderne (1937) combined arches and columns of water from fountains in the Seine with light, and with music from loudspeakers on eleven rafts anchored in the river, playing the music of the leading composers of the time. (See International Exposition Fountains, above.) Today some of the best-known musical fountains in the world are at the Bellagio Hotel & Casino in Las Vegas (1998); the Dubai Fountain in the United Arab Emirates (2009); the World of Color at Disney California Adventure Park (2010); and Aquanura at the Efteling in the Netherlands (2012). Splash fountains: A splash fountain or bathing fountain is intended for people to come in and cool off on hot summer days. These fountains are also referred to as interactive fountains. They are designed to allow easy access, feature nonslip surfaces, and have no standing water, eliminating possible drowning hazards so that no lifeguards or supervision are required. 
These splash pads are often located in public pools, public parks, or public playgrounds (known as "spraygrounds"). In some splash fountains, such as Dundas Square in Toronto, Canada, the water is heated by solar energy captured by the special dark-colored granite slabs. The fountain at Dundas Square features 600 ground nozzles arranged in groups of 30 (3 rows of 10 nozzles). Each group of 30 nozzles is located beneath a stainless steel grille. Twenty such grilles are arranged in two rows of 10, in the middle of the main walkway through Dundas Square. Drinking fountain: A water fountain or drinking fountain is designed to provide drinking water and has a basin arrangement with either continuously running water or a tap. The drinker bends down to the stream of water and swallows water directly from the stream. Modern indoor drinking fountains may incorporate filters to remove impurities from the water and chillers to reduce its temperature. In some regional dialects, water fountains are called bubblers. Water fountains are usually found in public places, like schools, rest areas, libraries, and grocery stores. Many jurisdictions require water fountains to be wheelchair accessible (by sticking out horizontally from the wall), and to include an additional unit of a lower height for children and short adults. The design that this replaced often had one spout atop a refrigeration unit. Drinking fountain: In 1859, The Metropolitan Drinking Fountain and Cattle Trough Association was established to promote the provision of drinking water for people and animals in the United Kingdom and overseas. More recently, in 2010, the FindaFountain campaign was launched in the UK to encourage people to use drinking fountains instead of environmentally damaging bottled water. A map showing the location of UK drinking water fountains is published on the FindaFountain website. How fountains work: From Roman times until the end of the 19th century, fountains operated by gravity, requiring a source of water higher than the fountain itself to make the water flow. The greater the difference between the elevation of the source of water and the fountain, the higher the water would go upwards from the fountain. In Roman cities, water for fountains came from lakes, rivers and springs in the hills, brought into the city by aqueducts and then distributed to fountains through a system of lead pipes. How fountains work: From the Middle Ages onwards, fountains in villages or towns were connected to springs, or to channels which brought water from lakes or rivers. In Provence, a typical village fountain consisted of a pipe or underground duct from a spring at a higher elevation than the fountain. The water from the spring flowed down to the fountain, then up a tube into a bulb-shaped stone vessel, like a large vase with a cover on top. The inside of the vase, called the bassin de répartition, was filled with water up to a level just above the mouths of the canons, or spouts, which slanted downwards. The water poured down through the canons, creating a siphon, so that the fountain ran continually. How fountains work: In cities and towns, residents filled vessels or jars with water from the jets of the fountain's canons, or paid a water porter to bring the water to their home. Horses and domestic animals could drink the water in the basin below the fountain. The water not used often flowed into a separate series of basins, a lavoir, used for washing and rinsing clothes. 
After being used for washing, the same water then ran through a channel to the town's kitchen garden. In Provence, since clothes were washed with ashes, the water that flowed into the garden contained potassium, and was valuable as fertilizer. The most famous fountains of the Renaissance, at the Villa d'Este in Tivoli, were located on a steep slope near a river; the builders ran a channel from the river to a large fountain at the top of the garden, which then fed other fountains and basins on the levels below. The fountains of Rome, built from the Renaissance through the 18th century, took their water from rebuilt Roman aqueducts which brought water from lakes and rivers at a higher elevation than the fountains. Those fountains with a high source of water, such as the Triton Fountain, could shoot water 16 feet (4.9 m) into the air. Fountains with a lower source, such as the Trevi Fountain, could only have water pour downwards. The architect of the Trevi Fountain placed it below street level to make the flow of water seem more dramatic. How fountains work: The fountains of Versailles depended upon water from reservoirs just above the fountains. As King Louis XIV built more fountains, he was forced to construct an enormous complex of pumps, called the Machine de Marly, with fourteen water wheels and 220 pumps, to raise water 162 meters above the Seine River to the reservoirs to keep his fountains flowing. Even with the Machine de Marly, the fountains used so much water that they could not all be turned on at the same time. Fontainiers watched the progress of the King when he toured the gardens and turned on each fountain just before he arrived. The architects of the fountains at Versailles designed specially-shaped nozzles, or tuyaux, to form the water into different shapes, such as fans, bouquets, and umbrellas. How fountains work: In Germany, some courts and palace gardens were situated in flat areas, so fountains depending on pumped, pressurized water were developed at a fairly early point in history. The Great Fountain in Herrenhausen Gardens at Hanover was based on ideas of Gottfried Leibniz conceived in 1694 and was inaugurated in 1719 during the visit of George I. After some improvements, it reached a height of some 35 m in 1721, which made it the highest fountain in European courts. The fountains at the Nymphenburg Palace initially were fed by water pumped to water towers, but from 1803 were operated by the water-powered Nymphenburg Pumping Stations, which are still working. How fountains work: Beginning in the 19th century, fountains ceased to be used for drinking water and became purely ornamental. By the beginning of the 20th century, cities began using steam pumps and later electric pumps to send water to the city fountains. Later in the 20th century, urban fountains began to recycle their water through a closed recirculating system. An electric pump, often placed under the water, pushes the water through the pipes. The water must be regularly topped up to offset water lost to evaporation, and allowance must be made to handle overflow after heavy rain. How fountains work: In modern fountains a water filter, typically a media filter, removes particles from the water; this filter requires its own pump to force water through it, and plumbing to carry the water from the pool to the filter and back to the pool. The water may need chlorination or anti-algal treatment, or may use biological methods to filter and clean water. 
How fountains work: The pumps, filter, electrical switch box and plumbing controls are often housed in a "plant room". How fountains work: Low-voltage lighting, typically 12 volt direct current, is used to minimise electrical hazards. Lighting is often submerged and must be suitably designed. High-wattage lighting (incandescent and halogen), either as submerged lighting or accent lighting on waterwall fountains, has been implicated in every documented Legionnaires' disease outbreak associated with fountains. This is detailed in the "Guidelines for Control of Legionella in Ornamental Features". How fountains work: Floating fountains are also popular for ponds and lakes; they consist of a float, a pump, a nozzle and a water chamber. The tallest fountains in the world:
King Fahd's Fountain (1985) in Jeddah, Saudi Arabia. The fountain jets water 260 meters (853 feet) above the Red Sea and is currently the tallest fountain in the world.
The World Cup Fountain in the Han-gang River in Seoul, Korea (2002), advertises a height of 202 meters (663 feet).
The Gateway Geyser (1995), next to the Mississippi River in St. Louis, Missouri, shoots water 192 meters (630 feet) in the air. It is the tallest fountain in the United States.
Port Fountain (2006) in Karachi, Pakistan, rises to a height of 190 meters (620 feet), making it the fourth tallest fountain.
Fountain Park, Fountain Hills, Arizona (1970), can reach 171 meters (561 feet) when all three pumps are operating, but normally runs at 91 meters (300 feet).
The Dubai Fountain, opened in 2009 next to Burj Khalifa, the world's tallest building. The fountain performs once every half-hour to recorded music, and shoots water to a height of 73 meters (240 feet). The fountain also has extreme shooters, not used in every show, which can reach 150 meters (490 feet).
The Captain James Cook Memorial Jet in Canberra (1970), 147 meters (482 feet).
The Jet d'eau, in Geneva (1951), 140 meters (460 feet).
The Magic Fountain of Montjuïc (1929), Barcelona, Catalonia, Spain, 52 meters (170 feet), created by Carles Buïgas.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heat kernel** Heat kernel: In the mathematical study of heat conduction and diffusion, a heat kernel is the fundamental solution to the heat equation on a specified domain with appropriate boundary conditions. It is also one of the main tools in the study of the spectrum of the Laplace operator, and is thus of some auxiliary importance throughout mathematical physics. The heat kernel represents the evolution of temperature in a region whose boundary is held fixed at a particular temperature (typically zero), such that an initial unit of heat energy is placed at a point at time t = 0. Heat kernel: The most well-known heat kernel is the heat kernel of d-dimensional Euclidean space $\mathbb{R}^d$, which has the form of a time-varying Gaussian function,

$K(t,x,y) = \exp(t\Delta)(x,y) = \frac{1}{(4\pi t)^{d/2}}\, e^{-\|x-y\|^2/(4t)} \qquad (x,y \in \mathbb{R}^d,\ t > 0).$

This solves the heat equation

$\frac{\partial K}{\partial t}(t,x,y) = \Delta_x K(t,x,y)$

for all t > 0 and $x,y \in \mathbb{R}^d$, where Δ is the Laplace operator, with the initial condition

$\lim_{t \to 0} K(t,x,y) = \delta(x-y) = \delta_x(y),$

where δ is a Dirac delta distribution and the limit is taken in the sense of distributions. To wit, for every smooth function φ of compact support,

$\lim_{t \to 0} \int_{\mathbb{R}^d} K(t,x,y)\,\phi(y)\,dy = \phi(x).$

Heat kernel: On a more general domain Ω in $\mathbb{R}^d$, such an explicit formula is not generally possible. The next simplest cases of a disc or square involve, respectively, Bessel functions and Jacobi theta functions. Nevertheless, the heat kernel (for, say, the Dirichlet problem) still exists and is smooth for t > 0 on arbitrary domains and indeed on any Riemannian manifold with boundary, provided the boundary is sufficiently regular. More precisely, in these more general domains, the heat kernel for the Dirichlet problem is the solution of the initial boundary value problem

$\frac{\partial K}{\partial t}(t,x,y) = \Delta_x K(t,x,y)$ for all t > 0 and $x,y \in \Omega$,

$\lim_{t \to 0} K(t,x,y) = \delta_x(y)$ for all $x,y \in \Omega$,

$K(t,x,y) = 0$ whenever $x \in \partial\Omega$ or $y \in \partial\Omega$.

Heat kernel: It is not difficult to derive a formal expression for the heat kernel on an arbitrary domain. Consider the Dirichlet problem in a connected domain (or manifold with boundary) U. Let $\lambda_n$ be the eigenvalues for the Dirichlet problem of the Laplacian, $\Delta\varphi + \lambda\varphi = 0$ in U, $\varphi = 0$ on ∂U. Let $\varphi_n$ denote the associated eigenfunctions, normalized to be orthonormal in L2(U). The inverse Dirichlet Laplacian $\Delta^{-1}$ is a compact and selfadjoint operator, and so the spectral theorem implies that the eigenvalues satisfy $0 < \lambda_1 < \lambda_2 \le \lambda_3 \le \cdots,\ \lambda_n \to \infty$. The heat kernel has the following expression:

$K(t,x,y) = \sum_{n=1}^{\infty} e^{-\lambda_n t}\,\varphi_n(x)\,\varphi_n(y).$

Formally differentiating the series under the sign of the summation shows that this should satisfy the heat equation. However, convergence and regularity of the series are quite delicate. The heat kernel is also sometimes identified with the associated integral transform, defined for compactly supported smooth φ by

$T\phi = \int_{\Omega} K(t,x,y)\,\phi(y)\,dy.$

The spectral mapping theorem gives a representation of T in the form $T = e^{t\Delta}$. There are several geometric results on heat kernels on manifolds; say, short-time asymptotics, long-time asymptotics, and upper/lower bounds of Gaussian type.
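As an illustrative aside (a sketch added here, not part of the original article), the Gaussian formula above is easy to check numerically in one dimension: for fixed t and x the kernel should integrate to 1 over y, and its time derivative should match its spatial Laplacian up to finite-difference error. The grid width and step sizes below are arbitrary choices.

```python
import numpy as np

def heat_kernel(t, x, y, d=1):
    """Euclidean heat kernel K(t,x,y) = (4*pi*t)^(-d/2) * exp(-|x-y|^2 / (4t))."""
    r2 = (x - y) ** 2
    return (4 * np.pi * t) ** (-d / 2) * np.exp(-r2 / (4 * t))

t, x = 0.1, 0.3

# Check 1: K(t, x, .) integrates to 1 in y (grid wide enough for the Gaussian mass).
ys = np.linspace(-10, 10, 20001)
print("integral over y:", np.trapz(heat_kernel(t, x, ys), ys))  # ~ 1.0

# Check 2: dK/dt equals the 1-D spatial Laplacian d^2K/dx^2, via central differences.
y0, h = 0.8, 1e-3
dK_dt = (heat_kernel(t + h, x, y0) - heat_kernel(t - h, x, y0)) / (2 * h)
lap_x = (heat_kernel(t, x + h, y0) - 2 * heat_kernel(t, x, y0)
         + heat_kernel(t, x - h, y0)) / h ** 2
print("dK/dt =", dK_dt, " Laplacian =", lap_x)  # agree to finite-difference accuracy
```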
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Detoxification** Detoxification: Detoxification or detoxication (detox for short) is the physiological or medicinal removal of toxic substances from a living organism, including the human body, which is mainly carried out by the liver. Additionally, it can refer to the period of drug withdrawal during which an organism returns to homeostasis after long-term use of an addictive substance. In medicine, detoxification can be achieved by decontamination of poison ingestion and the use of antidotes as well as techniques such as dialysis and (in a limited number of cases) chelation therapy.Many alternative medicine practitioners promote various types of detoxification such as detoxification diets. Scientists have described these as a "waste of time and money". Sense about Science, a UK-based charitable trust, determined that most such dietary "detox" claims lack any supporting evidence.The liver and kidney are naturally capable of detox, as are intracellular (specifically, inner membrane of mitochondria or in the endoplasmic reticulum of cells) proteins such as CYP enzymes. In cases of kidney failure, the action of the kidneys is mimicked by dialysis; kidney and liver transplants are also used for kidney and liver failure, respectively. Types: Alcohol detoxification Alcohol detoxification is a process by which a heavy drinker's system is brought back to normal after being habituated to having alcohol in the body continuously for an extended period of substance abuse. Serious alcohol addiction results in a downregulation of GABA neurotransmitter receptors. Precipitous withdrawal from long-term alcohol addiction without medical management can cause severe health problems and can be fatal. Alcohol detox is not a treatment for alcoholism. After detoxification, other treatments must be undergone to deal with the underlying addiction that caused alcohol use. Types: Drug detoxification Clinicians use drug detoxification to reduce or relieve withdrawal symptoms while helping an addicted person adjust to living without drug use; drug detoxification does not aim to treat addiction but rather represents an early step within long-term treatment. Detoxification may be achieved drug-free or may use medications as an aspect of treatment. Often drug detoxification and treatment will occur in a community program that lasts several months and takes place in a residential setting rather than in a medical center. Types: Drug detoxification varies depending on the location of treatment, but most detox centers provide treatment to avoid the symptoms of physical withdrawal from alcohol and from other drugs. Most also incorporate counseling and therapy during detox to help with the consequences of withdrawal. Types: Metabolic detoxification An animal's metabolism can produce harmful substances which it can then make less toxic through reduction, oxidation (collectively known as redox reactions), conjugation and excretion of molecules from cells or tissues. This is called xenobiotic metabolism. Enzymes that are important in detoxification metabolism include cytochrome P450 oxidases, UDP-glucuronosyltransferases, and glutathione S-transferases. These processes are particularly well-studied as part of drug metabolism, as they influence the pharmacokinetics of a drug in the body. Types: Alternative medicine Certain approaches in alternative medicine claim to remove "toxins" from the body through herbal, electrical or electromagnetic treatments. 
These toxins are undefined and have no scientific basis, making the validity of such techniques questionable. There is little evidence for toxic accumulation in these cases, as the liver and kidneys automatically detoxify and excrete many toxic materials including metabolic wastes. Under this theory, if toxins are too rapidly released without being safely eliminated (such as when metabolizing fat that stores toxins), they can damage the body and cause malaise. Therapies include contrast showers, detoxification foot pads, oil pulling, Gerson therapy, snake-stones, body cleansing, Scientology's Purification Rundown, water fasting, and metabolic therapy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scientific instrument** Scientific instrument: A scientific instrument is a device or tool used for scientific purposes, including the study of both natural phenomena and theoretical research. History: Historically, the definition of a scientific instrument has varied, based on usage, laws, and historical time period. Before the mid-nineteenth century such tools were referred to as "natural philosophical" or "philosophical" apparatus and instruments, and older tools from antiquity to the Middle Ages (such as the astrolabe and pendulum clock) defy a more modern definition of "a tool developed to investigate nature qualitatively or quantitatively." Scientific instruments were made by instrument makers living near a center of learning or research, such as a university or research laboratory. Instrument makers designed, constructed, and refined instruments for particular purposes, but if demand was sufficient, an instrument would go into production as a commercial product. In a description of the use of the eudiometer by Jan Ingenhousz to show photosynthesis, a biographer observed, "The history of the use and evolution of this instrument helps to show that science is not just a theoretical endeavor but equally an activity grounded on an instrumental basis, which is a cocktail of instruments and techniques wrapped in a social setting within a community of practitioners. The eudiometer has been shown to be one of the elements in this mix that kept a whole community of researchers together, even while they were at odds about the significance and the proper use of the thing." By World War II, the demand for improved analyses of wartime products such as medicines, fuels, and weaponized agents pushed instrumentation to new heights. Today, changes to instruments used in scientific endeavors — particularly analytical instruments — are occurring rapidly, with interconnections to computers and data management systems becoming increasingly necessary. Scope: Scientific instruments vary greatly in size, shape, purpose, complication and complexity. They include relatively simple laboratory equipment like scales, rulers, chronometers, thermometers, etc. Other simple tools developed in the late 20th century or early 21st century are the Foldscope (an optical microscope), the SCALE (KAS Periodic Table), the MasSpec Pen (a pen that detects cancer), the glucose meter, etc. However, some scientific instruments can be quite large in size and significant in complexity, like particle colliders or radio-telescope antennas. Conversely, microscale and nanoscale technologies are advancing to the point where instrument sizes are shifting towards the tiny, including nanoscale surgical instruments, biological nanobots, and bioelectronics. The digital era: Instruments are increasingly based upon integration with computers to improve and simplify control; enhance and extend instrumental functions, conditions, and parameter adjustments; and streamline data sampling, collection, resolution, analysis (both during and post-process), and storage and retrieval. Advanced instruments can be connected as a local area network (LAN) directly or via middleware and can be further integrated as part of an information management application such as a laboratory information management system (LIMS). 
Instrument connectivity can be furthered even more using internet of things (IoT) technologies, allowing for example laboratories separated by great distances to connect their instruments to a network that can be monitored from a workstation or mobile device elsewhere. List of scientific instrument designers:
Jones, William
Kipp, Petrus Jacobus
Le Bon, Gustave
Roelofs, Arjen
Schöner, Johannes
Von Reichenbach, Georg Friedrich
History of scientific instruments: Museums:
Collection of Historical Scientific Instruments (CHSI)
Boerhaave Museum
Chemical Heritage Foundation
Deutsches Museum
Royal Victoria Gallery for the Encouragement of Practical Science
Whipple Museum of the History of Science
Historiography:
Paul Bunge Prize
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shadow bands** Shadow bands: Shadow bands are thin, wavy lines of alternating light and dark that can be seen moving and undulating in parallel on plain-coloured surfaces immediately before and after a total solar eclipse. They are caused by Earth's atmospheric turbulence refracting the light of the solar crescent as it thins to a narrow slit, which increasingly collimates the light reaching Earth in the minute just before and after totality. The shadows' detailed structure is due to random patterns of fine air turbulence that refract the collimated sunlight arriving from the narrow eclipse crescent. Shadow bands: The bands' rapid sliding motion is due to shifting air currents combined with the angular motion of the sun projecting through higher altitudes. The degree of collimation in the light gradually increases as the crescent thins, until the solar disk is completely covered and the eclipse is total. Stars twinkle for the same reason. They are so far from Earth that they appear as point sources of light, easily disturbed by Earth's atmospheric turbulence, which acts like lenses and prisms diverting the light's path. Viewed toward the collimated light of a star, the shadow bands from atmospheric refraction pass over the eye. History: In the 9th century CE, shadow bands during a total solar eclipse were described for the first time – in the Völuspá, part of the Old Icelandic Poetic Edda. History: In 1820, Hermann Goldschmidt of Germany noted shadow bands visible just before and after totality at some eclipses. In 1842, George B. Airy, the English astronomer royal, saw his first total eclipse of the sun. He recalled shadow bands as one of the highlights: "As the totality approached, a strange fluctuation of light was seen upon the walls and the ground, so striking that in some places children ran after it and tried to catch it with their hands." In 1905, Catherine Octavia Stevens observed shadow bands at the start of the total eclipse of August 30 at Cas Català, Majorca. "As to the character of their appearance and mode of progression, it was observed that they swept along with a flight that was at once rapid and orderly, there was no confusion of the wavy lines with one another, but all bore along in one and the same direction in parallel formation, traversing the ground as water-wave reflections may be seen to do on the under surface of a boat, only that there seemed in the case of the shadow-bands to be a more distinct expression of a forward movement." Clouds prevented observations after totality. In 2008, British astrophysicist Stuart Eves speculated that shadow bands might be an effect of infrasound, which involves the shadow of the moon travelling at supersonic speed and inducing an atmospheric shock wave. However, astronomy professor Barrie Jones, an expert on shadow bands, stated, "The [accepted] theory works; there's no need to seek an alternative."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gibbs phenomenon** Gibbs phenomenon: In mathematics, the Gibbs phenomenon is the oscillatory behavior of the Fourier series of a piecewise continuously differentiable periodic function around a jump discontinuity. The $N$th partial Fourier series of the function (formed by summing the $N$ lowest constituent sinusoids of the Fourier series of the function) produces large peaks around the jump which overshoot and undershoot the function values. As more sinusoids are used, this approximation error approaches a limit of about 9% of the jump, though the infinite Fourier series sum does converge pointwise at every point of continuity. The Gibbs phenomenon was observed by experimental physicists and was believed to be due to imperfections in the measuring apparatus, but it is in fact a mathematical result. It is one cause of ringing artifacts in signal processing. Description: The Gibbs phenomenon is a behavior of the Fourier series of a function with a jump discontinuity and is described as follows: As more Fourier series constituents or components are taken, the Fourier series shows the first overshoot in the oscillatory behavior around the jump point approaching ~9% of the (full) jump, and this oscillation does not disappear but gets closer to the point, so that the integral of the oscillation approaches zero (i.e., zero energy in the oscillation). At the jump point, the Fourier series gives the average of the function's limits from either side of the point. Description: Square wave example: The three pictures on the right demonstrate the Gibbs phenomenon for a square wave (with peak-to-peak amplitude of $c$ from $-c/2$ to $c/2$ and periodicity $L$) whose $N$th partial Fourier series is $S_N f(x) = \frac{2c}{\pi} \sum_{n=1,3,5,\ldots}^{N} \frac{1}{n}\sin(n\omega x),$ where $\omega = 2\pi/L$. More precisely, this square wave is the function $f(x)$ which equals $\tfrac{c}{2}$ between $2n(L/2)$ and $(2n+1)(L/2)$ and $-\tfrac{c}{2}$ between $(2n+1)(L/2)$ and $(2n+2)(L/2)$ for every integer $n$; thus, this square wave has a jump discontinuity of peak-to-peak height $c$ at every integer multiple of $L/2$. As more sinusoidal terms are added (i.e., increasing $N$), the error of the partial Fourier series converges to a fixed height. But because the width of the error continues to narrow, the area of the error – and hence the energy of the error – converges to 0. The square wave analysis reveals that the error exceeds the height (from zero) $c/2$ of the square wave by $c\cdot(0.089489872236\ldots)$ (OEIS: A243268), or about 9% of the full jump $c$. More generally, at any discontinuity of a piecewise continuously differentiable function with a jump of $c$, the $N$th partial Fourier series of the function will (for a very large $N$ value) overshoot this jump by an error approaching $c\cdot(0.089489872236\ldots)$ at one end and undershoot it by the same amount at the other end; thus the "full jump" in the partial Fourier series will be about 18% larger than the full jump in the original function. At the discontinuity, the partial Fourier series will converge to the midpoint of the jump (regardless of the actual value of the original function at the discontinuity) as a consequence of Dirichlet's theorem. 
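A short numerical sketch (added here; the parameter choices c = 1 and L = 2π are assumptions for illustration, not from the article) makes the ~9% figure concrete by evaluating the partial sums above near the jump at x = 0:

```python
import numpy as np

# Square wave parameters: full jump c, period L (so omega = 2*pi/L).
c, L = 1.0, 2 * np.pi
omega = 2 * np.pi / L

def partial_sum(x, N):
    """N-th partial Fourier sum of the square wave (odd harmonics only)."""
    total = np.zeros_like(x)
    for n in range(1, N + 1, 2):  # n = 1, 3, 5, ..., up to N
        total += (2 * c / np.pi) * np.sin(n * omega * x) / n
    return total

# Sample just to the right of the jump at x = 0 and measure the peak.
x = np.linspace(1e-6, L / 4, 200001)
for N in (10, 100, 1000):
    overshoot = partial_sum(x, N).max() - c / 2
    print(f"N = {N:5d}: overshoot = {overshoot:.6f} (~{overshoot / c:.2%} of the jump)")
# The overshoot tends to c * 0.0894898..., the Wilbraham-Gibbs fraction.
```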
The quantity $\int_0^{\pi} \frac{\sin t}{t}\,dt = 1.851937051982\ldots = \frac{\pi}{2} + \pi\cdot(0.089489872236\ldots)$ (OEIS: A036792) is sometimes known as the Wilbraham–Gibbs constant. Description: History: The Gibbs phenomenon was first noticed and analyzed by Henry Wilbraham in an 1848 paper. The paper attracted little attention until 1914, when it was mentioned in Heinrich Burkhardt's review of mathematical analysis in Klein's encyclopedia. In 1898, Albert A. Michelson developed a device that could compute and re-synthesize the Fourier series. A widespread myth says that when the Fourier coefficients for a square wave were input to the machine, the graph would oscillate at the discontinuities, and that because it was a physical device subject to manufacturing flaws, Michelson was convinced that the overshoot was caused by errors in the machine. In fact the graphs produced by the machine were not good enough to exhibit the Gibbs phenomenon clearly, and Michelson may not have noticed it, as he made no mention of this effect in his paper (Michelson & Stratton 1898) about his machine or his later letters to Nature. Inspired by correspondence in Nature between Michelson and A. E. H. Love about the convergence of the Fourier series of the square wave function, J. Willard Gibbs published a note in 1898 pointing out the important distinction between the limit of the graphs of the partial sums of the Fourier series of a sawtooth wave and the graph of the limit of those partial sums. In his first letter Gibbs failed to notice the Gibbs phenomenon, and the limit that he described for the graphs of the partial sums was inaccurate. In 1899 he published a correction in which he described the overshoot at the point of discontinuity (Nature, April 27, 1899, p. 606). In 1906, Maxime Bôcher gave a detailed mathematical analysis of that overshoot, coining the term "Gibbs phenomenon" and bringing the term into widespread use. After the existence of Henry Wilbraham's paper became widely known, in 1925 Horatio Scott Carslaw remarked, "We may still call this property of Fourier's series (and certain other series) Gibbs's phenomenon; but we must no longer claim that the property was first discovered by Gibbs." Explanation: Informally, the Gibbs phenomenon reflects the difficulty inherent in approximating a discontinuous function by a finite series of continuous sinusoidal waves. It is important to put emphasis on the word finite, because even though every partial sum of the Fourier series overshoots around each discontinuity it is approximating, the limit of summing an infinite number of sinusoidal waves does not. The overshoot peaks move closer and closer to the discontinuity as more terms are summed, so pointwise convergence is possible. Description: There is no contradiction (between the overshoot error converging to a non-zero height even though the infinite sum has no overshoot), because the overshoot peaks move toward the discontinuity. The Gibbs phenomenon thus exhibits pointwise convergence, but not uniform convergence. For a piecewise continuously differentiable (class C1) function, the Fourier series converges to the function at every point except at jump discontinuities. At jump discontinuities, the infinite sum will converge to the jump discontinuity's midpoint (i.e. the average of the values of the function on either side of the jump), as a consequence of Dirichlet's theorem. The Gibbs phenomenon is closely related to the principle that the smoothness of a function controls the decay rate of its Fourier coefficients. 
Fourier coefficients of smoother functions decay more rapidly (resulting in faster convergence), whereas Fourier coefficients of discontinuous functions decay slowly (resulting in slower convergence). For example, the discontinuous square wave has Fourier coefficients $\left(\tfrac{1}{1}, 0, \tfrac{1}{3}, 0, \tfrac{1}{5}, 0, \tfrac{1}{7}, 0, \tfrac{1}{9}, 0, \ldots\right)$ that decay only at the rate of $1/n$, while the continuous triangle wave has Fourier coefficients $\left(\tfrac{1}{1^2}, 0, -\tfrac{1}{3^2}, 0, \tfrac{1}{5^2}, 0, -\tfrac{1}{7^2}, 0, \tfrac{1}{9^2}, 0, \ldots\right)$ that decay at the much faster rate of $1/n^2$. This only provides a partial explanation of the Gibbs phenomenon, since Fourier series with absolutely convergent Fourier coefficients would be uniformly convergent by the Weierstrass M-test and would thus be unable to exhibit the above oscillatory behavior. By the same token, it is impossible for a discontinuous function to have absolutely convergent Fourier coefficients, since the function would thus be the uniform limit of continuous functions and therefore be continuous, a contradiction. See Convergence of Fourier series § Absolute convergence. Description: Solutions: In practice, the difficulties associated with the Gibbs phenomenon can be ameliorated by using a smoother method of Fourier series summation, such as Fejér summation or Riesz summation, or by using sigma-approximation. Using a continuous wavelet transform, the wavelet Gibbs phenomenon never exceeds the Fourier Gibbs phenomenon. Also, using the discrete wavelet transform with Haar basis functions, the Gibbs phenomenon does not occur at all in the case of continuous data at jump discontinuities, and is minimal in the discrete case at large change points. In wavelet analysis, this is commonly referred to as the Longo phenomenon. In the polynomial interpolation setting, the Gibbs phenomenon can be mitigated using the S-Gibbs algorithm. Formal mathematical description of the Gibbs phenomenon: Let $f: \mathbb{R} \to \mathbb{R}$ be a piecewise continuously differentiable function which is periodic with some period $L > 0$. Suppose that at some point $x_0$, the left limit $f(x_0^-)$ and right limit $f(x_0^+)$ of the function $f$ differ by a non-zero jump of $c$. For each positive integer $N \ge 1$, let $S_N f(x)$ be the $N$th partial Fourier series ($S_N$ can be treated as a mathematical operator on functions), $S_N f(x) = \sum_{-N \le n \le N} \widehat{f}(n)\, e^{2\pi i n x / L} = \frac{a_0}{2} + \sum_{n=1}^{N} \left( a_n \cos\frac{2\pi n x}{L} + b_n \sin\frac{2\pi n x}{L} \right),$ where the Fourier coefficients $\widehat{f}(n), a_n, b_n$ for integers $n$ are given by the usual formulae. Then we have $\lim_{N\to\infty} S_N f\!\left(x_0 + \tfrac{L}{2N}\right) = f(x_0^+) + c\cdot(0.089489872236\ldots)$ and $\lim_{N\to\infty} S_N f\!\left(x_0 - \tfrac{L}{2N}\right) = f(x_0^-) - c\cdot(0.089489872236\ldots),$ but $\lim_{N\to\infty} S_N f(x_0) = \frac{f(x_0^-) + f(x_0^+)}{2}.$ More generally, if $x_N$ is any sequence of real numbers which converges to $x_0$ as $N \to \infty$, and if the jump $c$ is positive, then $\limsup_{N\to\infty} S_N f(x_N) \le f(x_0^+) + c\cdot(0.089489872236\ldots)$ and $\liminf_{N\to\infty} S_N f(x_N) \ge f(x_0^-) - c\cdot(0.089489872236\ldots).$ If instead the jump of $c$ is negative, one needs to interchange limit superior ($\limsup$) with limit inferior ($\liminf$), and also interchange the $\le$ and $\ge$ signs, in the above two inequalities. Formal mathematical description of the Gibbs phenomenon: Proof of the Gibbs phenomenon in a general case: Stated again, let $f: \mathbb{R} \to \mathbb{R}$ be a piecewise continuously differentiable function which is periodic with some period $L > 0$, and suppose this function has multiple jump discontinuity points denoted $x_i$, where $i = 0, 1, 2,$ and so on. 
At each discontinuity $x_i$, the amount of the vertical full jump is $c_i$. Then $f$ can be expressed as the sum of a continuous function $f_c$ and a multi-step function $f_s$ which is the sum of step functions, $f_s = f_{s_1} + f_{s_2} + f_{s_3} + \ldots,$ where each $f_{s_i}$ jumps by $c_i$ at $x_i$. The $N$th partial Fourier series $S_N f(x)$ of $f = f_c + f_s = f_c + \left(f_{s_1} + f_{s_2} + f_{s_3} + \ldots\right)$ will converge well at all points $x$ except points near the discontinuities $x_i$. Around each discontinuity point $x_i$, only $f_{s_i}$ will have the Gibbs phenomenon of its own (the maximum oscillatory convergence error of ~9% of the jump $c_i$, as shown in the square wave analysis), because the other functions are continuous ($f_c$) or flat zero ($f_{s_j}$ where $j \neq i$) around that point. This proves how the Gibbs phenomenon occurs at every discontinuity. Signal processing explanation: From a signal processing point of view, the Gibbs phenomenon is the step response of a low-pass filter, and the oscillations are called ringing or ringing artifacts. Truncating the Fourier transform of a signal on the real line, or the Fourier series of a periodic signal (equivalently, a signal on the circle), corresponds to filtering out the higher frequencies with an ideal (brick-wall) low-pass filter. This can be represented as convolution of the original signal with the impulse response of the filter (also known as the kernel), which is the sinc function. Thus, the Gibbs phenomenon can be seen as the result of convolving a Heaviside step function (if periodicity is not required) or a square wave (if periodic) with a sinc function: the oscillations in the sinc function cause the ripples in the output. Signal processing explanation: In the case of convolving with a Heaviside step function, the resulting function is exactly the integral of the sinc function, the sine integral; for a square wave the description is not as simply stated. For the step function, the magnitude of the undershoot is thus exactly the integral of the left tail until the first negative zero: for the normalized sinc of unit sampling period, this is $\int_{-\infty}^{-1} \frac{\sin(\pi x)}{\pi x}\,dx.$ Signal processing explanation: The overshoot is accordingly of the same magnitude: the integral of the right tail or (equivalently) the difference between the integral from negative infinity to the first positive zero minus 1 (the non-overshooting value). The overshoot and undershoot can be understood thus: kernels are generally normalized to have integral 1, so they result in a mapping of constant functions to constant functions – otherwise they have gain. The value of a convolution at a point is a linear combination of the input signal, with coefficients (weights) the values of the kernel. Signal processing explanation: If a kernel is non-negative, such as for a Gaussian kernel, then the value of the filtered signal will be a convex combination of the input values (the coefficients (the kernel) integrate to 1, and are non-negative), and will thus fall between the minimum and maximum of the input signal – it will not undershoot or overshoot. 
If, on the other hand, the kernel assumes negative values, such as the sinc function, then the value of the filtered signal will instead be an affine combination of the input values and may fall outside of the minimum and maximum of the input signal, resulting in undershoot and overshoot, as in the Gibbs phenomenon. Signal processing explanation: Taking a longer expansion – cutting at a higher frequency – corresponds in the frequency domain to widening the brick-wall, which in the time domain corresponds to narrowing the sinc function and increasing its height by the same factor, leaving the integrals between corresponding points unchanged. This is a general feature of the Fourier transform: widening in one domain corresponds to narrowing and increasing height in the other. This results in the oscillations in sinc being narrower and taller, and (in the filtered function after convolution) yields oscillations that are narrower (and thus with smaller area) but which do not have reduced magnitude: cutting off at any finite frequency results in a sinc function, however narrow, with the same tail integrals. This explains the persistence of the overshoot and undershoot. Signal processing explanation: Thus, the features of the Gibbs phenomenon are interpreted as follows: the undershoot is due to the impulse response having a negative tail integral, which is possible because the function takes negative values; the overshoot offsets this, by symmetry (the overall integral does not change under filtering); the persistence of the oscillations is because increasing the cutoff narrows the impulse response but does not reduce its integral – the oscillations thus move towards the discontinuity, but do not decrease in magnitude. Square wave analysis: We examine the $N$th partial Fourier series $S_N f(x)$ of a square wave $f(x)$ with periodicity $L$ and a discontinuity of a vertical "full" jump $c$ from $y = y_0$ at $x = x_0$. Because the case of odd $N$ is very similar, let us just deal with the case when $N$ is even: $S_N f(x) = y_0 + \frac{c}{2} + \frac{2c}{\pi} \sum_{k=1}^{N'} \frac{\sin\!\big((2k-1)\,\omega (x - x_0)\big)}{2k-1},$ with $\omega = \frac{2\pi}{L}$. (Here $N = 2N'$, where $N'$ is the number of non-zero sinusoidal Fourier series components, so some literatures use $N'$ instead of $N$.) Substituting $x = x_0$ (a point of discontinuity), we obtain $S_N f(x_0) = y_0 + \frac{c}{2},$ as claimed above. (The only term that survives is the average of the Fourier series.) Next, we find the first maximum of the oscillation around the discontinuity $x = x_0$ by checking the first and second derivatives of $S_N f(x)$. The first condition for the maximum is that the first derivative equals zero: $\frac{d}{dx} S_N f(x) = \frac{2c\,\omega}{\pi} \sum_{k=1}^{N'} \cos\!\big((2k-1)\,\omega (x - x_0)\big) = \frac{c\,\omega}{\pi} \cdot \frac{\sin\!\big(N\omega (x - x_0)\big)}{\sin\!\big(\omega (x - x_0)\big)} = 0,$ where the 2nd equality is from one of Lagrange's trigonometric identities. Solving this condition gives $x - x_0 = k\pi/(N\omega) = kL/(2N)$ for integers $k$, excluding multiples of $N$ to avoid the zero denominator, so $k = 1, 2, \ldots, N-1, N+1, \ldots$ and their negatives are allowed. 
Square wave analysis: The second derivative of $S_N f(x)$ at $x - x_0 = kL/(2N)$ is $S_N'' f(x) = \frac{c\,N\omega^2}{\pi} \cdot \frac{(-1)^k}{\sin(k\pi/N)},$ which is negative for odd $k$ (local maxima) and positive for even $k$ (local minima). Thus, the first maximum occurs at $x = x_0 + L/(2N)$ (i.e., $k = 1$), and $S_N f(x)$ at this $x$ value is $S_N f\!\left(x_0 + \frac{L}{2N}\right) = y_0 + \frac{c}{2} + \frac{2c}{\pi} \sum_{k=1}^{N'} \frac{\sin\!\big((2k-1)\pi/N\big)}{2k-1}.$ If we introduce the normalized sinc function $\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$ for $x \neq 0$, we can rewrite this as $S_N f\!\left(x_0 + \frac{L}{2N}\right) = y_0 + \frac{c}{2} + c\cdot\left[\frac{1}{N'} \sum_{k=1}^{N'} \operatorname{sinc}\!\left(\frac{2k-1}{2N'}\right)\right].$ For a sufficiently large $N$, the expression in the square brackets is a Riemann sum approximation to the integral $\int_0^1 \operatorname{sinc}(x)\,dx$ (more precisely, it is a midpoint rule approximation with spacing $2/N$). Since the sinc function is continuous, this approximation converges to the integral as $N \to \infty$. Thus, we have $\lim_{N\to\infty} S_N f\!\left(x_0 + \frac{L}{2N}\right) = y_0 + \frac{c}{2} + c \int_0^1 \operatorname{sinc}(x)\,dx = y_0 + c + c\cdot(0.089489872236\ldots),$ which was claimed in the previous section. A similar computation shows $\lim_{N\to\infty} S_N f\!\left(x_0 - \frac{L}{2N}\right) = y_0 - c\cdot(0.089489872236\ldots).$ Consequences: The Gibbs phenomenon is undesirable because it causes artifacts, namely clipping from the overshoot and undershoot, and ringing artifacts from the oscillations. In the case of low-pass filtering, these can be reduced or eliminated by using different low-pass filters. In MRI, the Gibbs phenomenon causes artifacts in the presence of adjacent regions of markedly differing signal intensity. This is most commonly encountered in spinal MRIs, where the Gibbs phenomenon may simulate the appearance of syringomyelia. Consequences: The Gibbs phenomenon manifests as a cross pattern artifact in the discrete Fourier transform of an image, where most images (e.g. micrographs or photographs) have a sharp discontinuity between boundaries at the top / bottom and left / right of an image. When periodic boundary conditions are imposed in the Fourier transform, this jump discontinuity is represented by a continuum of frequencies along the axes in reciprocal space (i.e. a cross pattern of intensity in the Fourier transform). Consequences: And although this article mainly focused on the difficulty with trying to construct discontinuities without artifacts in the time domain with only a partial Fourier series, it is also important to consider that because the inverse Fourier transform is extremely similar to the Fourier transform, there equivalently is difficulty with trying to construct discontinuities in the frequency domain using only a partial Fourier series. Thus for instance, because idealized brick-wall and rectangular filters have discontinuities in the frequency domain, their exact representation in the time domain necessarily requires an infinitely-long sinc filter impulse response, since a finite impulse response will result in Gibbs rippling in the frequency response near cut-off frequencies, though this rippling can be reduced by windowing finite impulse response filters (at the expense of wider transition bands).
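As a quick cross-check of the constant derived in the square wave analysis (a sketch added here, not part of the original article), SciPy's sine integral gives $\int_0^1 \operatorname{sinc}(x)\,dx = \mathrm{Si}(\pi)/\pi$ directly:

```python
import numpy as np
from scipy.special import sici  # sici(x) returns the pair (Si(x), Ci(x))

si_pi, _ = sici(np.pi)           # Si(pi) = integral of sin(t)/t from 0 to pi
integral_sinc = si_pi / np.pi    # = integral of sinc(x) = sin(pi*x)/(pi*x) from 0 to 1

print(f"Si(pi)             = {si_pi:.12f}")                 # ~ 1.851937051982
print(f"int_0^1 sinc dx    = {integral_sinc:.12f}")         # ~ 0.589489872236
print(f"overshoot fraction = {integral_sinc - 0.5:.12f}")   # ~ 0.089489872236
```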
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Toppo** Toppo: Toppo may refer to:
Toppo (food), chocolate and bread-based snack
Toppo (surname), surname
Mitsubishi Toppo, light recreational vehicle
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Canon BG-ED3** Canon BG-ED3: The Canon BG-ED3 is a battery grip manufactured by Canon for certain models of its EOS digital SLR camera range. It was originally designed for the Canon EOS D30. It can hold 2 BP-511 or BP-511A batteries, effectively doubling the battery life of these cameras. The BG-ED3 can also accept the DR-400 DC Coupler, which, when attached to a CA-PS400 or ACK-E2 AC adapter, draws power directly from an AC source. A BG-ED3 is not necessary to use a compatible EOS camera with the DR-400. This battery grip also has extra buttons for controlling the camera. It has a shutter release button on the corner, making it easier to shoot vertically framed shots, as the button will be under the right index finger of the photographer. There are other buttons, a switch, and a dial. A larger dial is used to turn the screw that secures the grip to the camera body. The camera body's battery cover can be removed without tools, since it is held in place with a spring-loaded pin that can be retracted by a fingernail. The BG-ED3 has space to store the camera body's detached battery cover next to the post that slides into the camera body's battery compartment. Canon BG-ED3: It is compatible with the Canon EOS 10D, D30 and D60 cameras.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hammett acidity function** Hammett acidity function: The Hammett acidity function (H0) is a measure of acidity that is used for very concentrated solutions of strong acids, including superacids. It was proposed by the physical organic chemist Louis Plack Hammett and is the best-known acidity function used to extend the measure of Brønsted–Lowry acidity beyond the dilute aqueous solutions for which the pH scale is useful. In highly concentrated solutions, simple approximations such as the Henderson–Hasselbalch equation are no longer valid due to the variations of the activity coefficients. The Hammett acidity function is used in fields such as physical organic chemistry for the study of acid-catalyzed reactions, because some of these reactions use acids in very high concentrations, or even neat (pure). Definition: The Hammett acidity function, H0, can replace the pH in concentrated solutions. It is defined using an equation analogous to the Henderson–Hasselbalch equation: $H_0 = \mathrm{p}K_{\mathrm{BH}^+} + \log\frac{[\mathrm{B}]}{[\mathrm{BH}^+]},$ where log(x) is the common logarithm of x, and pKBH+ is −log(K) for the dissociation of BH+, which is the conjugate acid of a very weak base B, with a very negative pKBH+. In this way, it is rather as if the pH scale has been extended to very negative values. Hammett originally used a series of anilines with electron-withdrawing groups for the bases. Hammett also pointed out the equivalent form $H_0 = -\log\left(a_{\mathrm{H}^+}\,\frac{\gamma_{\mathrm{B}}}{\gamma_{\mathrm{BH}^+}}\right),$ where a is the activity, and the γ are thermodynamic activity coefficients. In dilute aqueous solution (pH 0–14) the predominant acid species is H3O+ and the activity coefficients are close to unity, so H0 is approximately equal to the pH. However, beyond this pH range, the effective hydrogen-ion activity changes much more rapidly than the concentration. This is often due to changes in the nature of the acid species; for example in concentrated sulfuric acid, the predominant acid species ("H+") is not H3O+ but rather H3SO4+, which is a much stronger acid. The value H0 = −12 for pure sulfuric acid must not be interpreted as pH = −12 (which would imply an impossibly high H3O+ concentration of 10^12 mol/L in ideal solution). Instead it means that the acid species present (H3SO4+) has a protonating ability equivalent to H3O+ at a fictitious (ideal) concentration of 10^12 mol/L, as measured by its ability to protonate weak bases. Definition: Although the Hammett acidity function is the best-known acidity function, other acidity functions have been developed by authors such as Arnett, Cox, Katrizky, Yates, and Stevens. Typical values: On this scale, pure H2SO4 (18.4 M) has a H0 value of −12, and pyrosulfuric acid has H0 ~ −15. Take note that the Hammett acidity function clearly avoids water in its equation. It is a generalization of the pH scale—in a dilute aqueous solution (where B is H2O), pH is very nearly equal to H0. By using a solvent-independent quantitative measure of acidity, the implications of the leveling effect are eliminated, and it becomes possible to directly compare the acidities of different substances (e.g. using pKa, HF is weaker than HCl or H2SO4 in water but stronger than HCl in glacial acetic acid.) 
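To make the defining equation concrete, here is a small worked sketch (the indicator values are hypothetical, chosen only for illustration, not taken from the article):

```python
import math

def hammett_h0(pK_BHplus: float, ratio_B_over_BHplus: float) -> float:
    """Defining equation: H0 = pK_BH+ + log10([B] / [BH+])."""
    return pK_BHplus + math.log10(ratio_B_over_BHplus)

# Hypothetical indicator base with pK_BH+ = -9.3, whose measured
# concentration ratio [B]/[BH+] in the acid medium is 0.5
# (mostly protonated, i.e., a strongly acidic medium).
print(f"H0 = {hammett_h0(-9.3, 0.5):.2f}")  # -9.60
```

The more protonated the indicator (smaller [B]/[BH+]), the more negative H0 becomes, which is why a ladder of progressively weaker indicator bases lets the scale reach superacid media.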
H0 for some concentrated acids:
- Helonium: −63
- Fluoroantimonic acid (1990): −23 > H0 > −28
- Magic acid (1974): −23
- Carborane superacids: H0 < −18.0
- Fluorosulfuric acid (1944): −15.1
- Hydrogen fluoride: −15.1
- Trifluoromethanesulfonic acid (1940): −14.9
- Perchloric acid: −13
- Sulfurochloridic acid: −13.8; −12.78
- Sulfuric acid: −12.0

For mixtures (e.g., partly diluted acids in water), the acidity function depends on the composition of the mixture and has to be determined empirically. Graphs of H0 vs mole fraction can be found in the literature for many acids.
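Numerically, the definition above is just pKBH+ plus the common logarithm of a measured indicator ratio, so computing H0 is a one-line calculation once that ratio is known. A minimal sketch of the arithmetic (the pK value and ratio below are illustrative inputs, not measured data):

```python
import math

def hammett_h0(pk_bh_plus: float, ratio_b_to_bh: float) -> float:
    """H0 = pK(BH+) + log10([B]/[BH+]), per the definition above."""
    return pk_bh_plus + math.log10(ratio_b_to_bh)

# Hypothetical weak-base indicator with pK(BH+) = -9.3 (illustrative only):
# if spectroscopy gives [B]/[BH+] = 0.01 in the acid being probed, then
print(hammett_h0(-9.3, 0.01))  # -11.3: far more acidic than any dilute aqueous solution
```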
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cardiac diet** Cardiac diet: A cardiac diet also known as a heart healthy diet is a diet focus on reducing sodium, fat and cholesterol intake. The diet concentrates on reducing "foods containing saturated fats and trans fats" and substituting them with "mono and polyunsaturated fats". The diet advocates increasing intake of "complex carbohydrates, soluble fiber and omega 3 fatty acids" and is recommended for people with cardiovascular disease or people looking for a healthier diet.The diet limits the intake of meat, dairy products, egg products, certain desserts and caffeine. The cardiac diet emphasizes a fruit and vegetable based diet. Foods such as spinach, cauliflower, broccoli, tomatoes, bok choy, arugula, bell peppers, and carrots are recommended. Fiber is also recommended, foods such as oats, beans, ground flaxseed and berries are advised. A healthy cardiac diet "allows for an estimated 25–30% of total calories from fat" mostly from mono and polyunsaturated fats. Since 2006, the American Heart Association have been "substantially more stringent on saturated fat intake". Besides the diet recommended by the American Heart Association, a Mediterranean diet or ovo-lacto vegetarianism are also viable.Commercial cardiac diets are also available for pets such as cats and dogs with cardiovascular health issues.
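The 25–30% guideline translates directly into a daily fat budget once a calorie target is chosen. A quick sketch of the arithmetic, using the standard figure of 9 kcal per gram of fat and an assumed 2,000 kcal/day intake (both illustrative inputs, not part of the diet's definition):

```python
KCAL_PER_GRAM_FAT = 9  # standard energy density of dietary fat

def daily_fat_budget_grams(total_kcal: float, fat_fraction: float) -> float:
    """Grams of fat allowed when fat_fraction of total calories comes from fat."""
    return total_kcal * fat_fraction / KCAL_PER_GRAM_FAT

for fraction in (0.25, 0.30):
    print(f"{fraction:.0%}: {daily_fat_budget_grams(2000, fraction):.0f} g of fat per day")
# 25%: 56 g, 30%: 67 g
```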
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Soft core (synthesis)** Soft core (synthesis): A soft core (also called softcore) is a digital circuit that can be wholly implemented using logic synthesis. It can be implemented via different semiconductor devices containing programmable logic (e.g., ASIC, FPGA, CPLD), including both high-end and commodity variations. Many soft cores may be implemented in one FPGA. In such multi-core systems, rarely used resources can be shared between all the cores. Examples of soft core implementations are soft microprocessors, graphics chips like the AGA or the Open Graphics Project, hard disk controllers, etc.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Free Download Manager** Free Download Manager: Free Download Manager is a download manager for Windows, macOS, Linux and Android. Free Download Manager is proprietary software, but was free and open-source software between versions 2.5 and 3.9.7. Starting with version 3.0.852 (15 April 2010), the source code was made available in the project's Subversion repository instead of being included with the binary package. This continued until version 3.9.7. The source code for version 5.0 and newer is not available, and the GNU General Public License agreement has been removed from the app. Free Download Manager: The ability to download YouTube videos was included in the program's functionality until October 16, 2021, when one of the developers, Alex, indicated that Google had filed a complaint report requesting that the option be disabled. Attempts to download any videos from YouTube currently result in the message "Youtube downloads are not available" being shown in the download box. A resolution with Google's legal team has yet to be reached. Features: The GUI presents several tabs that organize types of downloads and allow access to different features in the program, along with a download information view that shows each download's progress bar, file preview, community opinions (if any are written for that download) and a log showing connection status. Features:
- Download acceleration
- Dropbox for file drag-and-drop
- HTTP and FTP download support
- Enhanced audio/video file support
- RTSP/MMS download support
- Batch downloading support for downloading a set of files
- Segmented file transfer: splits a large file into parts (specified in the settings of the software) and downloads them simultaneously (see the sketch at the end of this article)
- BitTorrent support (based on libtorrent), Magnet URI scheme support
- Flash video download from sites like Google Video (excluding Android)
- Resuming broken downloads, if permitted by the server
- Partial download of ZIP files, letting users download only the necessary part of a ZIP file
Features:
- Simultaneous downloading from several mirrors
- Bandwidth throttling via three fully customizable traffic modes: light, medium and heavy
- Import of URL lists from the clipboard
- Integration with the browser being used, to track URL or Copy functions if downloadable content is found
- Remote control via the Internet
- Smart file management and a powerful scheduler
- Portable mode: users can easily create a portable version and avoid the need to install and configure the program on each computer
- Active spyware and adware protection, using active communication among users and also antivirus software installed on the computer
Tabs:
- Downloads – the focal point of the program, which is simply a download manager. Users can also create groups with folders to which files with specific extensions will be downloaded.
- Flash video downloads – helps users download FLV video files from Google Video and many other sites.
- Torrents – allows downloading of torrent files.
- Scheduler – users can create and manage lists of tasks to be executed at a preset time. Tasks include launching external programs, starting and stopping downloads, and shutting down the computer in all possible ways.
- Site Explorer – an FTP client.
- Site Manager – allows users to tell FDM how to act with specific sites, such as websites that require authentication, or how many simultaneous download connections a website can accept from the user.
- HTML Spider – can download a website by following and downloading links recursively.
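Segmented file transfer, listed above, is easy to see in miniature: ask the server for the file size, request disjoint byte ranges in parallel, and reassemble them in order. The following is a minimal illustrative sketch (not FDM's actual code); it assumes a reasonably large file on a server that supports HTTP Range requests and reports a Content-Length:

```python
# Minimal sketch of segmented file transfer via HTTP Range requests.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch_segment(url: str, start: int, end: int) -> bytes:
    """Download bytes [start, end] of the file with an HTTP Range request."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def segmented_download(url: str, n_segments: int = 4) -> bytes:
    # Determine the total size with a HEAD request (assumes Content-Length is set).
    with urllib.request.urlopen(urllib.request.Request(url, method="HEAD")) as resp:
        size = int(resp.headers["Content-Length"])
    step = size // n_segments
    ranges = [(i * step, size - 1 if i == n_segments - 1 else (i + 1) * step - 1)
              for i in range(n_segments)]
    # Fetch all segments in parallel, then reassemble them in order.
    with ThreadPoolExecutor(max_workers=n_segments) as pool:
        parts = list(pool.map(lambda r: fetch_segment(url, *r), ranges))
    return b"".join(parts)
```

Resuming a broken download works the same way: a Range request starting at the byte offset already on disk retrieves only the missing tail.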
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Free radical damage to DNA** Free radical damage to DNA: Free radical damage to DNA can occur as a result of exposure to ionizing radiation or to radiomimetic compounds. Damage to DNA as a result of free radical attack is called indirect DNA damage because the radicals formed can diffuse throughout the body and affect other organs. Malignant melanoma can be caused by indirect DNA damage because it is found in parts of the body not exposed to sunlight. DNA is vulnerable to radical attack because of the very labile hydrogens that can be abstracted and the prevalence of double bonds in the DNA bases that free radicals can easily add to. Damage via radiation exposure: Radiolysis of intracellular water by ionizing radiation creates peroxides, which are relatively stable precursors to hydroxyl radicals. 60%–70% of cellular DNA damage is caused by hydroxyl radicals, yet hydroxyl radicals are so reactive that they can only diffuse one or two molecular diameters before reacting with cellular components. Thus, hydroxyl radicals must be formed immediately adjacent to nucleic acids in order to react. Radiolysis of water creates peroxides that can act as diffusible, latent forms of hydroxyl radicals. Some metal ions in the vicinity of DNA generate the hydroxyl radicals from peroxide. Damage via radiation exposure:
H2O + hν → H2O+ + e−
H2O + e− → H2O−
H2O+ → H+ + OH·
H2O− → OH− + H·
2 OH· → H2O2
Free radical damage to DNA is thought to cause mutations that may lead to some cancers. The Fenton reaction: The Fenton reaction results in the creation of hydroxyl radicals from hydrogen peroxide and an iron(II) catalyst. Iron(III) is regenerated via the Haber–Weiss reaction. Transition metals with a free coordination site are capable of reducing peroxides to hydroxyl radicals. Iron is believed to be the metal responsible for the creation of hydroxyl radicals because it exists at the highest concentration of any transition metal in most living organisms. The Fenton reaction is possible because transition metals can exist in more than one oxidation state and their valence electrons may be unpaired, allowing them to participate in one-electron redox reactions. The Fenton reaction:
Fe2+ + H2O2 → Fe3+ + OH· + OH−
The creation of hydroxyl radicals by iron(II) catalysis is important because iron(II) can be found coordinated with, and therefore in close proximity to, DNA. This reaction allows hydrogen peroxide created by radiolysis of water to diffuse to the nucleus and react with iron(II) to produce hydroxyl radicals, which in turn react with DNA. The location and binding of iron(II) to DNA may play an important role in determining the substrate and nature of the radical attack on the DNA. The Fenton reaction generates two types of oxidants, Type I and Type II. Type I oxidants are moderately sensitive to peroxides and ethanol. Type I and Type II oxidants preferentially cleave at specific sequences. Radical hydroxyl attack: Hydroxyl radicals can attack the deoxyribose DNA backbone and bases, potentially causing a plethora of lesions that can be cytotoxic or mutagenic. Cells have developed complex and efficient repair mechanisms to fix the lesions. In the case of free radical attack on DNA, base-excision repair is the repair mechanism used. Hydroxyl radical reactions with the deoxyribose sugar backbone are initiated by hydrogen abstraction from a deoxyribose carbon, and the predominant consequence is eventual strand breakage and base release.
The hydroxyl radical reacts with the various hydrogen atoms of the deoxyribose in the order 5′ H > 4′ H > 3′ H ≈ 2′ H ≈ 1′ H. This order of reactivity parallels the exposure to solvent of the deoxyribose hydrogens. Hydroxyl radicals react with DNA bases via addition to the electron-rich pi bonds. These pi bonds in the bases are located between C5-C6 of pyrimidines and N7-C8 in purines. Upon addition of the hydroxyl radical, many stable products can be formed. In general, radical hydroxyl attacks on base moieties do not cause altered sugars or strand breaks except when the modifications labilize the N-glycosyl bond, allowing the formation of baseless sites that are subject to beta-elimination. Abasic sites: Hydrogen abstraction from the 1′ deoxyribose carbon by the hydroxyl radical creates a 1′-deoxyribosyl radical. The radical can then react with molecular oxygen, creating a peroxyl radical, which can be reduced and dehydrated to yield a 2′-deoxyribonolactone and a free base. A deoxyribonolactone is mutagenic and resistant to repair enzymes. Thus, an abasic site is created. Radical damage through radiomimetic compounds: Radical damage to DNA can also occur through the interaction of DNA with certain natural products known as radiomimetic compounds, molecular compounds which affect DNA in similar ways to radiation exposure. Radiomimetic compounds induce double-strand breaks in DNA via highly specific, concerted free-radical attacks on the deoxyribose moieties in both strands of DNA. General mechanism: Many radiomimetic compounds are enediynes, which undergo the Bergman cyclization reaction to produce a 1,4-didehydrobenzene diradical. The 1,4-didehydrobenzene diradical is highly reactive, and will abstract hydrogens from any possible hydrogen donor. General mechanism: In the presence of DNA, the 1,4-didehydrobenzene diradical abstracts hydrogens from the deoxyribose sugar backbone, predominantly at the C-1′, C-4′ and C-5′ positions. Hydrogen abstraction causes radical formation at the reacted carbon. The carbon radical reacts with molecular oxygen, which leads to a strand break in the DNA through a variety of mechanisms. 1,4-Didehydrobenzene is able to position itself in such a way that it can abstract proximal hydrogens from both strands of DNA. This produces a double-strand break in the DNA, which can lead to cellular apoptosis if not repaired. General mechanism: Enediynes generally undergo the Bergman cyclization at temperatures exceeding 200 °C. However, incorporating the enediyne into a 10-membered cyclic hydrocarbon makes the reaction more thermodynamically favorable by releasing the ring strain of the reactants. This allows the Bergman cyclization to occur at 37 °C, the biological temperature of humans. Molecules which incorporate enediynes into these larger ring structures have been found to be extremely cytotoxic. Natural products: Enediynes are present in many complicated natural products. They were originally discovered in the early 1980s during a search for new anticancer products produced by microorganisms. Calicheamicin was one of the first such products identified and was originally found in a soil sample taken from Kerrville, Texas. These compounds are synthesized by bacteria as defense mechanisms due to their ability to cleave DNA through the formation of 1,4-didehydrobenzene from the enediyne component of the molecule. Natural products: Calicheamicin and other related compounds share several common characteristics.
The extended structures attached to the enediyne allow the compound to specifically bind DNA, in most cases to the minor groove of the double helix. Additionally, part of the molecule, known as the “trigger”, activates the enediyne “warhead” under specific physiological conditions, and 1,4-didehydrobenzene is generated. Natural products: Three classes of enediynes have since been identified: calicheamicin, dynemicin, and chromoprotein-based products. The calicheamicin types are defined by a methyl trisulfide group that is involved in triggering the molecule. Natural products: Calicheamicin and the closely related esperamicin have been used as anticancer drugs due to their high toxicity and specificity. Dynemicin and its relatives are characterized by the presence of an anthraquinone and enediyne core. The anthraquinone component allows for specific binding of DNA at the 3′ side of purine bases through intercalation, a site that is different from that of calicheamicin. Its ability to cleave DNA is greatly increased in the presence of NADPH and thiol compounds. This compound has also found prominence as an antitumor agent. Natural products: Chromoprotein enediynes are characterized by an unstable chromophore enediyne bound to an apoprotein. The chromophore is unreactive when bound to the apoprotein. Upon its release, it reacts to form 1,4-didehydrobenzene and subsequently cleaves DNA. Antitumor ability: Most enediynes, including the ones listed above, have been used as potent antitumor antibiotics due to their ability to efficiently cleave DNA. Calicheamicin and esperamicin are the two most commonly used types due to their high specificity when binding to DNA, which minimizes unfavorable side reactions. They have been shown to be especially useful for treating acute myeloid leukemia. Additionally, calicheamicin is able to cleave DNA at low concentrations, proving to be up to 1000 times more effective than adriamycin at combating certain types of tumors. In all cases, double-stranded DNA breaks are very difficult for cells to repair, making these compounds especially effective against tumor cells. Antitumor ability: The free radical mechanism to treat certain types of cancers extends beyond enediynes. Tirapazamine generates a free radical under anoxic conditions instead of relying on the trigger mechanism of an enediyne. The free radical then continues on to cleave DNA in a similar manner to 1,4-didehydrobenzene in order to treat cancerous cells. It is currently in Phase III trials. Evolution of meiosis: Meiosis is a central feature of sexual reproduction in eukaryotes. The need to repair oxidative DNA damage caused by free radicals has been hypothesized to be a major driving force in the evolution of meiosis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Modern Hebrew grammar** Modern Hebrew grammar: Modern Hebrew grammar is partly analytic, expressing such forms as dative, ablative, and accusative using prepositional particles rather than morphological cases. On the other hand, Modern Hebrew grammar is also fusional synthetic: inflection plays a role in the formation of verbs and nouns (using non-concatenative discontinuous morphemes realised by vowel transfixation) and the declension of prepositions (i.e. with pronominal suffixes). Representation of Hebrew examples: Examples of Hebrew here are represented using the International Phonetic Alphabet (IPA) as well as native script. Although most speakers collapse the phonemes /ħ, ʕ/ into /χ, ʔ/, the distinction is maintained by a limited number of speakers and will therefore be indicated here for maximum coverage. In the transcriptions, /r/ is used for the rhotic, which in Modern Hebrew phonology is more commonly a lax voiced uvular approximant [ʁ]. Representation of Hebrew examples: Hebrew is written from right to left. Syntax: Every Hebrew sentence must contain at least one subject, at least one predicate, usually but not always a verb, and possibly other arguments and complements. Syntax: Word order in Modern Hebrew is somewhat similar to that in English: as opposed to Biblical Hebrew, where the word order is Verb-Subject-Object, the usual word order in Modern Hebrew is Subject-Verb-Object. Thus, if there is no case marking, one can resort to the word order. Modern Hebrew is characterized by an asymmetry between definite objects and indefinite objects. There is an accusative marker, et, only before a definite object (mostly a definite noun or personal name). Et-ha is currently undergoing fusion and reduction to become ta. Consider ten li et ha-séfer "give:2ndPerson.Masculine.Singular.Imperative to-me ACCUSATIVE the-book" (i.e. "Give me the book"), where et, albeit syntactically a case-marker, is a preposition and ha is a definite article. This sentence is realised phonetically as ten li ta-séfer. Syntax: Sentences with finite verbs In sentences where the predicate is a verb, the word order is usually subject–verb–object (SVO), as in English. However, word order can change in the following instances: An object can typically be topicalized by moving it to the front of the sentence. When the object is a question word, this topicalization is almost mandatory. Example: ?לְמִי אָמַר‎ /leˈmi ʔaˈmar?/, literally "To-whom he-told?", means "Whom did he tell?" In other cases, this topicalization can be used for emphasis. Syntax: Hebrew is a partly pro-drop language. This means that subject pronouns are sometimes omitted when verb conjugations are able to reflect gender, number, and person; otherwise the subject pronouns should be mentioned. Specifically, subject pronouns are always used with verbs in the present tense, because present forms of verbs do not reflect person. Syntax: Indefinite subjects (like English's a boy, a book, and so on) are often postponed, giving the sentence some of the sense of "there exists [subject]" in addition to the verb's normal meaning. For example, פָּנָה אֵלַי אֵיזֶשֶׁהוּ אָדָם שֶׁבִּקֵּשׁ שֶׁאֶעֱזֹר לוֹ עִם מַשֶּׁהוּ‎ /paˈna ʔeˈlaj ˈʔezeʃehu ʔaˈdam, ʃe-biˈkeʃ ʃe-ʔe.ʕeˈzor lo ʕim ˈmaʃehu/, literally "Turned to-me some man that-asked that-[I]-will-help to-him with something", means "A man came to me wanting me to help him with something."
This serves a purpose somewhat analogous to English's narrative use of this with a semantically indefinite subject: "So, I'm at work, and this man comes up to me and asks me to help him." Indeed, outside of the present tense, mere existence is expressed using the verb to be with a postponed indefinite subject. Example: הָיְתָה סִבָּה שֶׁבִּקַּשְׁתִּי‎ /hajˈta siˈba ʃe-biˈkaʃti/, literally "Was reason that-[I]-asked", means "There was a reason I asked." Definite subjects can be postponed for a number of reasons. Syntax: In some cases, a postponed subject can be used to sound formal or archaic. This is because historically, Hebrew was typically verb–subject–object (VSO). The Bible and other religious texts are predominantly written in VSO word order. Sometimes, postponing a subject can give it emphasis. One response to הַתְחֵל‎ /hatˈħel!/ ("Start") might be הַתְחֵל אַתָּה‎ /hatˈħel aˈta!/ ("You start!"). Syntax: A subject might initially be omitted and then added later as an afterthought: נַעֲשֶׂה אֶת זֶה בְּיַחַד אַתָּה וַאֲנִי‎ /naʕaˈse ʔet ˈze beˈjaħad, aˈta vaʔaˈni/, literally "[We]'ll-do it together, you and-I", means "You and I will do it together" or "We'll do it together, you and I". Generally, Hebrew marks every noun in a sentence with some sort of preposition, with the exception of subjects and semantically indefinite direct objects. Unlike English, indirect objects require prepositions (Hebrew "הוּא נָתַן לִי אֶת הַכַּדּוּר‎" /hu naˈtan li ʔet ha-kaˈdur/ (literally "he gave to-me direct-object-marker the ball") in contrast to English "He gave me the ball") and semantically definite direct objects are introduced by the preposition את‎ /et/ (Hebrew "הוּא נָתַן לִי אֶת הַכַּדּוּר‎" /hu naˈtan li ʔet ha-kaˈdur/ (literally "he gave to-me direct-object-marker the ball") in contrast to English "He gave me the ball"). Syntax: Nominal sentences Hebrew also produces sentences where the predicate is not a finite verb. A sentence of this type is called משפט שמני‎ /miʃˈpat ʃemaˈni/, a nominal sentence. These sentences contain a subject, a non-verbal predicate, and an optional copula. Types of copulae include: The verb הָיָה‎ /haˈja/ (to be): While the verb to be does have present-tense forms, they are used only in exceptional circumstances. The following structures are used instead: While the past and future tenses follow the structure [sometimes-optional subject]-[form of to be]-[noun complement] (analogous to English, except that in English the subject is always mandatory), the present tense follows [optional subject]-[subject pronoun]-[noun complement]. אַבָּא שֶׁלִּי הָיָה שׁוֹטֵר בִּצְעִירוּתוֹ. /ˈʔaba ʃeˈli haˈja ʃoˈter bi-t͡sʕiruˈto/ (my father was a policeman when he was young.) הַבֵּן שֶׁלּוֹ הוּא אַבָּא שֶׁלָּהּ. /ha-ˈben ʃeˈlo hu ˈʔaba ʃeˈlah/ (literally "the-son of-his he the-father of-hers", his son is her father.) יוֹסִי יִהְיֶה כִימָאִי. /ˈjosi jihˈje χimaˈʔi/ (Yossi will be a chemist). While לֹא /lo/ ("not") precedes the copula in the past and future tenses, it follows the copula (a subject pronoun) in the present tense. Syntax: Where the past and future tenses are structured as [optional subject]-[form of to be]-[adjective complement] (analogous to English, except that in English the subject is mandatory), the present tense is simply [subject]-[adjective complement]. For example, הַדֶּלֶת סְגוּרָה /ha-ˈdelet sɡuˈra/, literally "the-door closed", means "the door is closed."
That said, additional subject pronouns are sometimes used, as with noun complements, especially with complicated subjects. Example: זֶה מוּזָר שֶׁהוּא אָמַר כָּךְ /ze muˈzar ʃe-hu ʔaˈmar kaχ/, literally "it strange that-he said thus", means "that he said that is strange," i.e. "it's strange that he said that." The verbs הָפַךְ /haˈfaχ/, נֶהֱפַךְ /neheˈfaχ/ and נִהְיָה /nihˈja/ (to become): When the sentence implies progression or change, the said verbs are used and are considered copulae between the nominal subject and the non-verbal predicate. For instance: הַכֶּלֶב נִהְיָה עַצְבָּנִי יוֹתֵר מֵרֶגַע לְרֶגַע /haˈkelev nihˈja ʕat͡sbaˈni joˈter me-ˈregaʕ le-ˈregaʕ/ (The dog became more angry with every passing moment) הֶחָבֵר שֶׁלִּי נֶהֱפַךְ לְמִפְלֶצֶת! /he-ħaˈver ʃeˈli neheˈfaχ le-mifˈlet͡set!/ (My friend has become a monster!) Possession / existence: יש/אין /jeʃ/ and /en/: Possession in Hebrew is constructed impersonally. There is no Hebrew translation of the English verb "to have," which in many Indo-European languages expresses possession as well as serving as a helping verb. The English sentence "I have a dog" is expressed in Hebrew as "יֵשׁ לִי כֶּלֶב" /jeʃ li ˈkelev/, literally meaning "there exists to me a dog." The word יֵשׁ /jeʃ/ expresses existence in the present tense, and is unique in the Hebrew language as a verb-like form with no inflected qualities at all. Dispossession in the present tense in Hebrew is expressed with the antithesis to יש, which is אֵין /en/ – "אֵין לִי כֶּלֶב" /en li ˈkelev/ means "I do not have a dog." Possession in the past and the future in Hebrew is also expressed impersonally, but uses conjugated forms of the Hebrew copula, לִהְיוֹת /lihˈjot/. For example, the same sentence "I do not have a dog" would in the past tense become "לֹא הָיָה לִי כֶּלֶב" /lo haˈja li ˈkelev/, literally meaning "there was not to me a dog." Sentence types Sentences are generally divided into three types: Simple sentence A simple sentence is a sentence that contains one subject, one verb, and optional objects. As the name implies, it is the simplest type of sentence. Syntax: Compound sentences Two or more sentences that do not share common parts and can be separated by a comma are together called מִשְפָּט מְחֻבָּר /miʃˈpat meħuˈbar/, a compound sentence. In many cases, the second sentence uses a pronoun that stands for the other's subject; they are generally interconnected. The two sentences are linked with a coordinating conjunction (מִלַּת חִבּוּר /miˈlat ħiˈbur/). The conjunction is a stand-alone word that serves as a connection between both parts of the sentence, belonging to neither part. Syntax: לֹא אָכַלְתִּי כָּל הַיּוֹם, וְלָכֵן בְּסוֹף הַיּוֹם הָיִיתִי מוּתָשׁ. /lo ʔaˈχalti kol ha-ˈjom, ve-laˈχen be-ˈsof ha-ˈjom haˈjiti muˈtaʃ/ (I had not eaten all day, and therefore at the end of the day I was exhausted.) Both parts of the sentence can be separated by a period and stand alone as grammatically correct sentences, which makes the sentence a compound sentence (and not a complex sentence): לֹא אָכַלְתִּי כָּל הַיּוֹם. בְּסוֹף הַיּוֹם הָיִיתִי מוּתָשׁ. /lo ʔaˈχalti kol ha-ˈjom. be-ˈsof ha-ˈjom haˈjiti muˈtaʃ./ (I had not eaten all day. By the end of the day I was exhausted.) Complex sentences Like English, Hebrew allows clauses, פְּסוּקִיּוֹת /psukiˈjot/ (sing. פְּסוּקִית /psuˈkit/), to serve as parts of a sentence. A sentence containing a subordinate clause is called משפט מרכב /miʃˈpat murˈkav/, or a complex sentence.
Subordinate clauses almost always begin with the subordinating conjunction -ש /ʃe-/ (usually that), which attaches as a prefix to the word that follows it. For example, in the sentence יוֹסִי אוֹמֵר שֶׁהוּא אוֹכֵל /ˈjosi ʔoˈmer ʃe-ˈhu ʔoˈχel/ (Yossi says that he is eating), the subordinate clause שֶׁהוּא אוֹכֵל /ʃe-ˈhu ʔoˈχel/ (that he is eating) serves as the direct object of the verb אוֹמֵר /ʔoˈmer/ (says). Unlike English, Hebrew does not have a large number of subordinating conjunctions; rather, subordinate clauses almost always act as nouns and can be introduced by prepositions in order to serve as adverbs. For example, the English As I said, there's nothing we can do in Hebrew is כְּפִי שֶׁאָמַרְתִּי, אֵין מָה לַעֲשׂוֹת /kfi ʃe-ʔaˈmarti, ʔen ma laʕaˈsot/ (literally As that-I-said, there-isn't what to-do). Syntax: That said, relative clauses, which act as adjectives, are also formed using -ש /ʃe-/. For example, English Yossi sees the man who is eating apples is in Hebrew יוֹסִי רוֹאֶה אֶת הָאִישׁ שֶׁאוֹכֵל תַּפּוּחִים /ˈjosi roˈʔe ʔet ha-ˈʔiʃ ʃe-ʔoˈχel tapuˈħim/ (literally Yossi sees [et] the-man that-eats apples). In this use ש /ʃe-/ sometimes acts as a relativizer rather than as a relative pronoun; that is, sometimes the pronoun remains behind in the clause: הִיא מַכִּירָה אֶת הָאִישׁ שֶׁדִּבַּרְתִּי עָלָיו /hi makiˈra ʔet ha-ˈʔiʃ ʃe-diˈbarti ʕaˈlav/, which translates to She knows the man I talked about, literally means She knows [et] the-man that-I-talked about him. This is because in Hebrew, a preposition (in this case על /ʕal/) cannot appear without its object, so the him יו- (/-av/) could not be dropped. However, some sentences, such as the above example, can be written both with relativizers and with relative pronouns. The sentence can also be rearranged into הִיא מַכִּירָה אֶת הָאִישׁ עָלָיו דִבַּרְתִּי /hi makiˈra ʔet ha-ˈʔiʃ ʕaˈlav diˈbarti/, literally She knows [et] the-man about him I-talked, and translates into the same meaning. In that example, the preposition and its object עָלָיו /ʕaˈlav/ also act as a relative pronoun, without use of -ש /ʃe-/. Syntax: Impersonal sentences A sentence may lack a determinate subject; it is then called מִשְפָּט סְתָמִי /miʃˈpat staˈmi/, an indefinite or impersonal sentence. These are used in order to put emphasis on the action, and not on the agent of the action. Usually the verb is of the 3rd person plural form. Syntax: עָשׂוּ שִׁפּוּץ בַּבִּנְיָן שֶׁלִּי /ʕaˈsu ʃipˈut͡s ba-binˈjan ʃeˈli/ (literally: they-made a renovation in-the building of-mine; my building was renovated) Collective sentences When a sentence contains multiple parts of the same grammatical function that relate to the same part of the sentence, they are called collective parts. They are usually separated with the conjunction וְ- /ve-/ (and), and if there are more than two, they are separated with commas while the last pair is joined with the conjunction, as in English. Collective parts can have any grammatical function in the sentence, for instance subject, predicate, direct object, or indirect object. When a collective part is preceded by a preposition, the preposition must be copied onto all parts of the collective. Verbs: Hebrew verbs (פועל /ˈpoʕal/) utilize nonconcatenative morphology extensively, meaning they have much more internal structure than verbs in most other languages.
Every Hebrew verb is formed by casting a three- or four-consonant root (שֹׁרֶשׁ /ˈʃoreʃ/) into one of seven derived stems called /binjaˈnim/ (בִּנְיָנִים, meaning buildings or constructions; the singular is בִּנְיָן /binˈjan/, written henceforth as binyan). Most roots can be cast into more than one binyan, meaning more than one verb can be formed from a typical root. When this is the case, the different verbs are usually related in meaning, typically differing in voice, valency, semantic intensity, aspect, or a combination of these features. The "concept" of the Hebrew verb's meaning is defined by the identity of the triliteral root. The "concept" of the Hebrew verb assumes verbal meaning by taking on vowel-structure as dictated by the binyan's rules. Verbs: Conjugation Each binyan has a certain pattern of conjugation and verbs in the same binyan are conjugated similarly. Conjugation patterns within a binyan alter somewhat depending on certain phonological qualities of the verb's root; the alterations (called גִּזְרָה [ɡizra], meaning "form") are defined by the presence of certain letters composing the root. For example, three-letter roots (triliterals) whose second letter is ו /vav/ or י /jud/ are so-called hollow or weak roots, losing their second letter in binyan הִפְעִיל /hifˈʕil/, in הֻפְעַל /hufˈʕal/, and in much of פָּעַל /paʕal/. The feature of being conjugated differently because the second root-letter is ו or י is an example of a gizra. These verbs are not strictly irregular verbs, because all Hebrew verbs that possess the same feature of the gizra are conjugated in accordance with the gizra's particular set of rules. Verbs: Every verb has a past tense, a present tense, and a future tense, with the present tense doubling as a present participle. Other forms also exist for certain verbs: verbs in five of the binyanim have an imperative mood and an infinitive, verbs in four of the binyanim have gerunds, and verbs in one of the binyanim have a past participle. Finally, a very small number of fixed expressions include verbs in the jussive mood, which is essentially an extension of the imperative into the third person. Except for the infinitive and gerund, these forms are conjugated to reflect the number (singular or plural), person (first, second, or third) and gender (masculine or feminine) of its subject, depending on the form. Modern Hebrew also has an analytic conditional~past-habitual mood expressed with the auxiliary haya. Verbs: In listings such as dictionaries, Hebrew verbs are sorted by their third-person masculine singular past tense form. This differs from English verbs, which are identified by their infinitives. (Nonetheless, the Hebrew term for infinitive is shem poʕal, which means verb name.) Further, each of the seven binyanim is identified by the third-person masculine singular past tense form of the root פ-ע-ל (P-ʕ-L, meaning doing, action, etc.) cast into that binyan: פָּעַל /ˈpaʕal/, נִפְעַל /nifˈʕal/, פִּעֵל /piˈʕel/, פֻּעַל /puˈʕal/, הִפְעִיל /hifˈʕil/, הֻפְעַל /hufˈʕal/, and הִתְפַּעֵל /hitpaˈʕel/. Verbs: Binyan פָּעַל /paʕal/ Binyan paʕal, also called binyan קַל or qal /qal/ (light), is the most common binyan. Paʕal verbs are in the active voice, and can be either transitive or intransitive. This means that they may or may not take direct objects. Paʕal verbs are never formed from four-letter roots. Binyan paʕal is the only binyan in which a given root can have both an active and a passive participle. 
For example, רָצוּי /raˈt͡suj/ (desirable) is the passive participle of רָצָה /raˈt͡sa/ (want). Binyan paʕal has the most diverse set of gzarot (pl. of gizra), and the small number of Hebrew verbs that are strictly irregular (about six to ten) are generally considered to be part of the paʕal binyan, as they have some conjugation features similar to paʕal. Binyan נִפְעַל /nifˈʕal/ Verbs in binyan nifʕal are always intransitive, but beyond that there is little restriction on their range of meanings. The nifʕal is the passive-voice counterpart of paʕal. In principle, any transitive paʕal verb can be rendered passive by taking its root and casting it into nifʕal. Nonetheless, this is not nifʕal's main use, as the passive voice is fairly rare in ordinary Modern Hebrew. Verbs: More commonly, it is paʕal's middle- or reflexive-voice counterpart. Ergative verbs in English often translate into Hebrew as a paʕal–nifʕal pair. For example, English he broke the plate corresponds to Hebrew הוּא שָׁבַר אֶת הַצַּלַּחַת /hu ʃaˈvar et ha-t͡saˈlaħat/, using paʕal; but English the plate broke corresponds to Hebrew הַצַּלַּחַת נִשְׁבְּרָה /ha-t͡saˈlaħat niʃbeˈra/, using nifʕal. The difference is that in the first case, there is an agent doing the breaking (active), while in the second case, the agent is ignored (although the object is acted upon; passive). (Nonetheless, as in English, it can still be made clear that there was an ultimate agent: הוּא הִפִּיל אֶת הַצַּלַּחַת וְהִיא נִשְׁבְּרָה /hu hiˈpil ʔet ha-t͡saˈlaħat ve-hi niʃbeˈra/, he dropped the plate and it broke, uses nifʕal.) Other examples of this kind include פָּתַח /paˈtaħ/ / נִפְתַּח /nifˈtaħ/ (to open, transitive/intransitive) and גָּמַר /ɡaˈmar/ / נִגְמַר /niɡˈmar/ (to end, transitive/intransitive). Verbs: Other relationships between a paʕal verb and its nifʕal counterpart can exist as well. One example is זָכַר /zaˈχar/ and נִזְכַּר /nizˈkar/: both mean to remember, but the latter implies that one had previously forgotten, rather like English to suddenly remember. Another is פָּגַשׁ /paˈɡaʃ/ and נִפְגַּשׁ /nifˈɡaʃ/: both mean to meet, but the latter implies an intentional meeting, while the former often means an accidental meeting. Verbs: Finally, sometimes a nifʕal verb has no paʕal counterpart, or at least is much more common than its paʕal counterpart; נִדְבַּק /nidˈbak/ (to stick, intransitive) is a fairly common verb, but דָּבַק /daˈvak/ (to cling) is all but non-existent by comparison. (Indeed, נִדְבַּק /nidˈbak/'s transitive counterpart is הִדְבִּיק /hidˈbik/, of binyan hifʕil; see below.) Like paʕal verbs, nifʕal verbs are never formed from four-letter roots. Verbs: Nifʕal verbs, unlike verbs in the other passive binyanim (puʕal and hufʕal, described below), do have gerunds, infinitives and imperatives. Binyan פִּעֵל /piˈʕel/ Binyan piʕel, like binyan paʕal, consists of transitive and intransitive verbs in the active voice, though there is perhaps a greater tendency for piʕel verbs to be transitive. Verbs: Most roots with a paʕal verb do not have a piʕel verb, and vice versa, but even so, there are many roots that do have both. Sometimes the piʕel verb is a more intense version of the paʕal verb; for example, קִפֵּץ /kiˈpet͡s/ (to spring) is a more intense version of קָפַץ /kaˈfat͡s/ (to jump), and שִׁבֵּר /ʃiˈber/ (to smash, to shatter, transitive) is a more intense version of שָׁבַר /ʃaˈvar/ (to break, transitive).
In other cases, a piʕel verb acts as a causative counterpart to the paʕal verb with the same root; for example, לִמֵּד /liˈmed/ (to teach) is essentially the causative of לָמַד /laˈmad/ (to learn). And in yet other cases, the nature of the relationship is less obvious; for example, סִפֵּר /siˈper/ means to tell / to narrate or to cut hair, while סָפַר /saˈfar/ means to count, and פִּתֵּחַ /piˈte.aħ/ means to develop (transitive verb), while פָּתַח /paˈtaħ/ means to open (transitive verb). Verbs: Binyan פֻּעַל /puˈʕal/ Binyan puʕal is the passive-voice counterpart of binyan piʕel. Unlike binyan nifʕal, it is used only for the passive voice. It is therefore not very commonly used in ordinary speech, except that the present participles of a number of puʕal verbs are used as ordinary adjectives: מְבֻלְבָּל /mevulˈbal/ means mixed-up (from בֻּלְבַּל /bulˈbal/, the passive of בִּלְבֵּל /bilˈbel/, to confuse), מְעֻנְיָן /meʕunˈjan/ means interested, מְפֻרְסָם /mefurˈsam/ means famous (from פֻּרְסַם /purˈsam/, the passive of פִּרְסֵם /pirˈsem/, to publicize), and so on. Indeed, the same is true of many piʕel verbs, including the piʕel counterparts of two of the above examples: מְבַלְבֵּל /mevalˈbel/, confusing, and מְעַנְיֵן /meʕanˈjen/, interesting. The difference is that piʕel verbs are also frequently used as verbs, whereas puʕal is much less common. Verbs: Puʕal verbs do not have gerunds, imperatives, or infinitives. Verbs: Binyan הִפְעִיל /hifˈʕil/ Binyan hifʕil is another active binyan. Hifʕil verbs are often causative counterparts of verbs in other binyanim; examples include הִכְתִּיב /hiχˈtiv/ (to dictate; the causative of כָּתַב /kaˈtav/, to write), הִדְלִיק /hidˈlik/ (to turn on (a light), transitive; the causative of נִדְלַק /nidˈlak/, (for a light) to turn on, intransitive), and הִרְשִׁים /hirˈʃim/ (to impress; the causative of התרשם /hitraˈʃem/, to be impressed). Nonetheless, not all are causatives of other verbs; for example, הִבְטִיחַ /hivˈtiaħ/ (to promise). Verbs: Binyan הֻפְעַל /hufˈʕal/ Binyan hufʕal is much like binyan puʕal, except that it corresponds to hifʕil instead of to piʕel. Like puʕal, it is not commonly used in ordinary speech, except in present participles that have become adjectives, such as מֻכָּר /muˈkar/ (familiar, from הֻכַּר /huˈkar/, the passive of הִכִּיר /hiˈkir/, to know (a person)) and מֻגְזָם /muɡˈzam/ (excessive, from /huɡˈzam/, the passive of הִגְזִים /hiɡˈzim/, to exaggerate). Like puʕal verbs, hufʕal verbs do not have gerunds, imperatives, or infinitives. Verbs: Binyan הִתְפַּעֵל /hitpaˈʕel/ Binyan hitpaʕel is rather like binyan nifʕal, in that all hitpaʕel verbs are intransitive, and most have a reflexive sense. Indeed, many hitpaʕel verbs are reflexive counterparts to other verbs with the same root; for example, הִתְרַחֵץ /hitraˈħet͡s/ (to wash oneself) is the reflexive of רָחַץ /raˈħat͡s/ (to wash, transitive), and הִתְגַּלֵּחַ /hitɡaˈleaħ/ (to shave oneself, i.e. to shave, intransitive) is the reflexive of גִּלֵּחַ /ɡiˈleaħ/ (to shave, transitive). Some hitpaʕel verbs are a combination of causative and reflexive; for example, הִסְתַּפֵּר /histaˈper/ (to get one's hair cut) is the causative reflexive of סִפֵּר /siˈper/ (to cut (hair)), and הִצְטַלֵּם /hit͡staˈlem/ (to get one's picture taken) is the causative reflexive of צִלֵּם /t͡siˈlem/ (to take a picture (of someone or something)). Verbs: Hitpaʕel verbs can also be reciprocal; for example, הִתְכַּתֵּב /hitkaˈtev/ (to write to each other, i.e.
to correspond) is the reciprocal of כָּתַב /kaˈtav/ (to write). Verbs: In all of the above uses, the hitpaʕel verb contrasts with a puʕal or hufʕal verb in two ways: firstly, the subject of the hitpaʕel verb is generally either performing the action, or at least complicit in it, whereas the subject of the puʕal or hufʕal verb is generally not; and secondly, puʕal and hufʕal verbs often convey a sense of completeness, which hitpaʕel verbs generally do not. So whereas the sentence אֲנִי מְצֻלָּם /aˈni met͡suˈlam/ (I am photographed, using puʕal) means something like there exists a photo of me, implying that the photo already exists, and not specifying whether the speaker caused the photo to be taken, the sentence אֲנִי מִצְטַלֵּם /aˈni mit͡staˈlem/ (I am photographed, using hitpaʕel) means something like I'm having my picture taken, implying that the picture does not exist yet, and that the speaker is causing the picture to be taken. Verbs: In other cases, hitpaʕel verbs are ordinary intransitive verbs; for example, התנהג /hitnaˈheɡ/ (to behave) is structurally the reciprocal of נהג /naˈhaɡ/ (to act), as in נְהַג בְּחָכְמָה /neˈhag be-ħoχˈma/ (act wisely). However, that sense is used sparingly, mainly in such sayings, and the more common meaning of nahag is to drive; for that meaning, הִתְנַהֵג /hitnaˈheɡ/ is not a reciprocal form but, in effect, a separate verb. For example: in talking about a car that drives itself, one would say מְכוֹנִית שֶׁנּוֹהֶגֶת אֶת עַצְמָהּ /meχoˈnit ʃe-noˈheɡet ʔet ʕat͡sˈmah/ (a car that drives itself, using nahag), not מְכוֹנִית שֶׁמִּתְנַהֶגֶת /meχoˈnit ʃe-mitnaˈheɡet/ (a car that behaves, using hitnaheg). Nouns: The Hebrew noun (שֵׁם עֶצֶם /ʃem ʕet͡sem/) is inflected for number and state, but not for case, and therefore Hebrew nominal structure is normally not considered to be strictly declensional. Nouns are generally related to verbs (by shared roots), but their formation is not as systematic, often due to loanwords from foreign languages. Hebrew nouns are also inflected for definiteness by application of the prefix הַ (ha) before the given noun. Semantically, the prefix "ha" corresponds roughly to the English word "the". Nouns: Gender: masculine and feminine Every noun in Hebrew has a gender, either masculine or feminine (or both); for example, סֵפֶר /ˈsefer/ (book) is masculine, דֶּלֶת /ˈdelet/ (door) is feminine, and סַכִּין /saˈkin/ (knife) is both. There is no strict system of formal gender, but there is a tendency for nouns ending in ת (/-t/) or ה (usually /-a/) to be feminine and for nouns ending in other letters to be masculine. There is a very strong tendency toward natural gender for nouns referring to people and some animals. Such nouns generally come in pairs, one masculine and one feminine; for example, אִישׁ /iʃ/ means man and אִשָּׁה /iˈʃa/ means woman. (When discussing mixed-sex groups, the plural of the masculine noun is used.) Number: singular, plural, and dual Hebrew nouns are inflected for grammatical number; as in English, count nouns have a singular form for referring to one object and a plural form for referring to more than one. Unlike in English, some count nouns also have separate dual forms, for referring to two objects; see below.
Nouns: Masculine nouns generally form their plural by adding the suffix ים /-im/: מַחְשֵׁב /maħˈʃev/ (computer) → מַחְשְׁבִים /maħʃeˈvim/ (computers). The addition of the extra syllable usually causes the vowel in the first syllable to shorten if it is kamatz: דָּבָר /daˈvar/ (thing) → דְּבָרִים /dvaˈrim/ (things). Many common two-syllable masculine nouns accented on the penultimate syllable (often called segolates, because many (but not all) of them have the vowel /seˈɡol/ (/-e-/) in the last syllable) undergo more drastic characteristic vowel changes in the plural: יֶלֶד /ˈjeled/ (boy) → יְלָדִים /jelaˈdim/ (boys, children) בֹּקֶר /ˈboker/ (morning) → בְּקָרִים /bkaˈrim/ (mornings) חֶדֶר /ˈħeder/ (room) → חֲדָרִים /ħadaˈrim/ (rooms). Feminine nouns ending in /-a/ or /-at/ generally drop this ending and add /-ot/, usually without any vowel changes: מִטָּה /miˈta/ (bed) → מִטּוֹת /miˈtot/ (beds) מִסְעָדָה /misʕaˈda/ (restaurant) → מִסְעָדוֹת /misʕaˈdot/ (restaurants) צַלַּחַת /t͡saˈlaħat/ (plate) → צַלָּחוֹת /t͡salaˈħot/ (plates). Nouns ending in /-e-et/ also replace this ending with /-ot/, with an /-e-/ in the preceding syllable usually changing to /-a-/: מַחְבֶּרֶת /maħˈberet/ (notebook) → מַחְבָּרוֹת /maħbaˈrot/ (notebooks). Nouns ending in /-ut/ and /-it/ replace these endings with /-ujot/ and /-ijot/, respectively: חֲנוּת /ħaˈnut/ (store) → חֲנוּיוֹת /ħanuˈjot/ (stores) אֶשְׁכּוֹלִית /eʃkoˈlit/ (grapefruit) → אֶשְׁכּוֹלִיּוֹת /eʃkoliˈjot/ (grapefruits). Plural exceptions A large number of masculine nouns take the usually feminine ending /-ot/ in the plural: מָקוֹם /maˈkom/ (place) → מְקוֹמוֹת /mekoˈmot/ (places) חַלּוֹן /ħaˈlon/ (window) → חַלּוֹנוֹת /ħaloˈnot/ (windows). A small number of feminine nouns take the usually masculine ending /-im/: מִלָּה /miˈla/ (word) → מִלִּים /miˈlim/ (words) שָׁנָה /ʃaˈna/ (year) → שָׁנִים /ʃaˈnim/ (years). Many plurals are completely irregular: עִיר /ʕir/ (city) → עָרִים /ʕaˈrim/ (cities) עִפָּרוֹן /ipaˈron/ (pencil) → עֶפְרוֹנוֹת /ʕefroˈnot/ (pencils) אִישׁ /iʃ/ (man; root ʔ-j-ʃ) → אֲנָשִׁים /ʔanaˈʃim/ (men, people; root ʔ-n-ʃ). Some forms, like אָחוֹת ← אֲחָיוֹת (sister) or חָמוֹת ← חֲמָיוֹת (mother-in-law), reflect the historical broken plurals of Proto-Semitic, which have been preserved in other Semitic languages (most notably Arabic). Nouns: Dual Hebrew also has a dual number, expressed in the ending /-ajim/, but even in ancient times its use was very restricted. In modern times, it is usually used in expressions of time and number, or for items that are inherently dual. These nouns have plurals as well, which are used for numbers higher than two. The dual is also used for some body parts, for instance: רֶגֶל /ˈreɡel/ (foot) → רַגְלַיִם /raɡˈlajim/ (feet) אֹזֶן /ˈʔozen/ (ear) → אָזְנַיִם /ʔozˈnajim/ (ears) עַיִן /ˈʕajin/ (eye) → עֵינַיִם /ʕe(j)ˈnajim/ (eyes) יָד /jad/ (hand) → יָדַיִם /jaˈdajim/ (hands). In this case, even if there are more than two, the dual is still used, for instance /le-ˈχelev jeʃ ˈʔarbaʕ raɡˈlajim/ ("a dog has four legs"). Nouns: The dual is also used for certain objects that are "semantically" dual. These words have no singular, for instance משקפים /miʃkaˈfajim/ (eyeglasses) and מספרים /mispaˈrajim/ (scissors). As in the English "two pairs of pants", the plural of these words uses the word זוּג /zuɡ/ (pair), e.g. /ʃne zuˈɡot mispaˈrajim/ ("two pairs-of scissors-DUAL").
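The regular suffix patterns just described are mechanical enough to express as a few string rules. Below is a toy sketch in romanized form, purely illustrative: it encodes only the regular endings above and deliberately ignores the vowel shortening, segolate changes, and the many irregular and dual forms that the article lists.

```python
# Toy sketch of the *regular* plural-suffix rules described above,
# applied to romanized nouns; irregular plurals are not handled.
def pluralize(noun: str, gender: str) -> str:
    if gender == "m":
        return noun + "im"         # maħʃev -> maħʃevim (computer/computers)
    # Feminine endings: -ut -> -ujot, -it -> -ijot, -a/-at -> -ot
    if noun.endswith("ut"):
        return noun[:-2] + "ujot"  # ħanut -> ħanujot (store/stores)
    if noun.endswith("it"):
        return noun[:-2] + "ijot"  # eʃkolit -> eʃkolijot (grapefruit/grapefruits)
    if noun.endswith("at"):
        return noun[:-2] + "ot"    # tsalaħat -> tsalaħot (plate/plates)
    if noun.endswith("a"):
        return noun[:-1] + "ot"    # mita -> mitot (bed/beds)
    return noun + "ot"

print(pluralize("maħʃev", "m"))  # maħʃevim
print(pluralize("mita", "f"))    # mitot
print(pluralize("ħanut", "f"))   # ħanujot
```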
Nouns: Similarly, the dual can be found in some place names, such as the city גִּבְעָתַיִם /givʕaˈtajim/ (Twin Peaks, referring to the two hills of the landscape on which the city is built) and the country מִצְרַיִם /mit͡sˈrajim/ (Egypt, related to the ancient conceptualization of Egypt as two realms: Upper Egypt and Lower Egypt). However, both the city name and country name are actually grammatically treated as feminine singular nouns, as the words עיר /ʕir/ for city and מדינה /mediˈna/ for country are both feminine. Nouns: Noun construct In Hebrew, as in English, a noun can modify another noun. This is achieved by placing the modifier immediately after what it modifies, in a construction called סְמִיכוּת /smiˈχut/ (adjacency). The noun being modified appears in its construct form, or status constructus. For most nouns, the construct form is derived fairly easily from the normal (indefinite) form:
- The singular of a masculine noun typically does not change form.
- The plural of a masculine noun typically replaces the suffix ים- /-im/ with the suffix י- /-e/.
- The singular of a feminine noun ending in ה- /-a/ typically replaces that ה with a ת /-at/.
- The plural of a feminine noun typically does not change form.
There are many words (usually ancient ones) that have changes in vocalization in the construct form. For example, the construct form of /ˈbajit/ (house, בַּיִת) is /bet/ (house-of, בֵּית). However, these two forms are written the same without niqqud. In addition, the definite article is never placed on the first noun (the one in the construct form). Nouns:
- בֵּית סֵפֶר /bet ˈsefer/ (literally, house-of book or bookhouse, i.e. school)
- בֵּית הַסֵּפֶר /bet ha-ˈsefer/ (literally, house-of the-book, i.e. the school)
- בָּתֵּי חוֹלִים /baˈte ħoˈlim/ (literally, houses-of sick-people, i.e. hospitals)
- עוּגַת הַשּׁוֹקוֹלָד /ʕuɡat ha-ʃokolad/ (the chocolate cake)
- דֹּאַר אֲוִיר /ˈdoʔar ʔaˈvir/ (air mail)
- כֶּלֶב רְחוֹב /ˈkelev reˈħov/ (street dog)
- בַּקְבּוּק הֶחָלָב /bakˈbuk he-ħaˈlav/ (the bottle of milk)
However, this rule is not always adhered to in informal or colloquial speech; one finds, for example, הָעוֹרֵךְ דִּין /ha-ʕoˈreχ din/ (literally the law organiser, i.e. lawyer). Nouns: Possession Possession is generally indicated using the preposition של /ʃel/, roughly meaning of or belonging to: הַסֵּפֶר שֶׁלִּי /ha-ˈsefer ʃeˈli/ (literally the-book of-me, i.e. my book) הַדִּירָה שֶׁלְּךָ /ha-diˈra ʃelˈχa/ (literally the-apartment of-you, i.e. your apartment, single masculine form) הַמִּשְׂחָק שֶׁל אֶנְדֶּר /ha-misˈħak ʃel ˈender/ (literally the-game of-Ender, i.e. Ender's Game). In literary style, nouns are inflected to show possession through noun declension; a personal suffix is added to the construct form of the noun (discussed above). So, סִפְרֵי /sifˈre/ (books of) can be inflected to form סְפָרַי /sfaˈraj/ (my books), סְפָרֶיךָ /sfaˈreχa/ (your books, singular masculine form), סְפָרֵינוּ /sfaˈrenu/ (our books), and so forth, while דִּירַת /diˈrat/ (apartment of) gives דִּירָתִי /diraˈti/ (my apartment), דִּירַתְךָ /diratˈχa/ (your apartment; singular masculine form), דִּירָתֵנוּ /diraˈtenu/ (our apartment), etc. Nouns: While the use of these forms is mostly restricted to formal and literary speech, they are in regular use in some colloquial phrases, such as ?מָה שְׁלוֹמְךָ /ma ʃlomˈχa?/ (literally "what peace-of-you?", i.e. "what is your peace?", i.e. "how are you?", singular masculine form) or לְדַעֲתִי /ledaʕaˈti/ (in my opinion/according to my knowledge).
Nouns: In addition, the inflected possessive is commonly used for terms of kinship; for instance, בְּנִי /bni/ (my son), בִּתָּם /biˈtam/ (their daughter), and אִשְׁתּוֹ /iʃˈto/ (his wife) are preferred to הַבֵּן שֶׁלִּי /ha-ˈben ʃeˈli/, הַבַּת שֶׁלָּהֶם /ha-ˈbat ʃelaˈhem/, and הָאִשָּׁה שֶׁלּוֹ /ha-ʔiˈʃa ʃeˈlo/. However, usage differs for different registers and sociolects: in general, the colloquial will use more analytic constructs in place of noun declensions. Nouns: Noun derivation In the same way that Hebrew verbs are conjugated by applying various prefixes, suffixes and internal vowel combinations, Hebrew nouns can be formed by applying various "meters" (Hebrew /miʃkaˈlim/) and suffixes to the same roots. Gerunds, as indicated above, are one example. Nouns: Many abstract nouns are derived from nouns using the suffix /-ut/: סֵפֶר /ˈsefer/ (book) → סִפְרוּת /sifˈrut/ (literature). There is also the הִתְקַטְּלוּת /hitkatˈlut/ meter, which likewise ends in /-ut/: הִתְיַעֵץ /hitjaˈʕet͡s/ (to consult) → הִתְיַעֲצוּת /hitjaʕaˈt͡sut/ (consultation) הִתְרַגֵּשׁ /hitraˈɡeʃ/ (to get excited) → הִתְרַגְּשׁוּת /hitraɡˈʃut/ (excitement). The קַטְלָן /katˈlan/ meter applied to a root, and the /-an/ suffix applied to a noun, indicate an agent or job: שֶׁקֶר /ˈʃeker/ (lie) (root: ש-ק-ר ʃ-q-r) → שַׁקְרָן /ʃakˈran/ (liar) פַּחַד /ˈpaħad/ (fear) (root: פ-ח-ד p-ħ-d) → פַּחְדָן /paħˈdan/ (coward) חָלָב /ħaˈlav/ (milk) → חַלְבָן /ħalˈvan/ (milkman) סֵדֶר /ˈseder/ (order) → סַדְרָן /sadˈran/ (usher). The suffix /-on/ usually denotes a diminutive: מִטְבָּח /mitˈbaħ/ (kitchen) → מִטְבָּחוֹן /mitbaˈħon/ (kitchenette) סֵפֶר /ˈsefer/ (book) → סִפְרוֹן /sifˈron/ (booklet) מַחְשֵׁב /maħˈʃev/ (computer) → מַחְשְׁבוֹן /maħʃeˈvon/ (calculator), though occasionally this same suffix can denote an augmentative: חֲנָיָה /ħanaˈja/ (parking space) → חַנְיוֹן /ħanˈjon/ (parking lot) קֶרַח /ˈkeraħ/ (ice) → קַרְחוֹן /karˈħon/ (glacier). Repeating the last two letters of a noun or adjective can also denote a diminutive: כֶּלֶב /ˈkelev/ (dog) → כְּלַבְלַב /klavˈlav/ (puppy) קָצָר /kaˈt͡sar/ (short) → קְצַרְצַר /kt͡sarˈt͡sar/ (very short). The קָטֶּלֶת /kaˈtelet/ meter is commonly used to name diseases: אָדֹם /ʔaˈdom/ (red) → אַדֶּמֶת /ʔaˈdemet/ (rubella) כֶּלֶב /ˈkelev/ (dog) → כַּלֶּבֶת /kaˈlevet/ (rabies) צָהֹב /t͡saˈhov/ (yellow) → צַהֶבֶת /t͡saˈhevet/ (jaundice, more colloquially hepatitis). However, it can have various different meanings as well: נְיָר /neˈjar/ (paper) → נַיֶּרֶת /naˈjeret/ (paperwork) כֶּסֶף /ˈkesef/ (money) → כַּסֶּפֶת /kaˈsefet/ (a safe). New nouns are also often formed by the combination of two existing stems: קוֹל /kol/ (sound) + נוֹעַ /ˈno.aʕ/ (motion) → קוֹלְנוֹע /kolˈno.aʕ/ (cinema) רֶמֶז /ˈremez/ (hint) + אוֹר /ʔor/ (light) → רַמְזוֹר /ramˈzor/ (traffic light) קְנִיָּה /kniˈja/ (purchase) + חַנְיוֹן /ħanˈjon/ (parking lot) → קַנְיוֹן /kanˈjon/ (shopping mall). רַמְזוֹר /ramˈzor/ follows the stricter conventions of more recent compounds, in which the א aleph (today usually silent, but historically very specifically a glottal stop) is dropped entirely from the spelling and pronunciation of the compound.
Nouns: Some nouns use a combination of methods of derivation: תּוֹעֶלֶת /toˈʕelet/ (benefit) → תוֹעַלְתָּנוּת /toʕaltaˈnut/ (utilitarianism) (suffix /-an/ followed by suffix /-ut/) קֹמֶץ /ˈkomet͡s/ (handful) → קַמְצָן /kamˈt͡san/ (miser, miserly) → קַמְצָנוּת /kamt͡saˈnut/ (miserliness) (suffix /-an/ followed by suffix /-ut/) Adjectives: In Hebrew, an adjective (שֵׁם תֹּאַר /ʃem ˈtoʔar/) agrees in gender, number, and definiteness with the noun it modifies. Attributive adjectives follow the nouns they modify. Adjectives: סֵפֶר קָטָן /ˈsefer kaˈtan/ (a small book) סְפָרִים קְטַנִּים /sfaˈrim ktaˈnim/ (small books) בֻּבָּה קְטַנָּה /buˈba ktaˈna/ (a small doll) בֻּבּוֹת קְטַנּוֹת /buˈbot ktaˈnot/ (small dolls) Adjectives ending in -i have slightly different forms: אִישׁ מְקוֹמִי /ʔiʃ mekoˈmi/ (a local man) אִשָּׁה מְקוֹמִית /ʔiˈʃa mekoˈmit/ (a local woman) אֲנָשִׁים מְקוֹמִיִּים /ʔanaˈʃim mekomiˈjim/ (local people) נָשִׁים מְקוֹמִיּוֹת /naˈʃim mekomiˈjot/ (local women) Masculine nouns that take the feminine plural ending /-ot/ still take masculine plural adjectives, e.g. מְקוֹמוֹת יָפִים /mekoˈmot jaˈfim/ (beautiful places). The reverse goes for feminine plural nouns ending in /-im/, e.g. מִלִּים אֲרֻכּוֹת /miˈlim ʔaruˈkot/ (long words). Adjectives: Many adjectives, like segolate nouns, change their vowel structure in the feminine and plural. Adjectives: Use of the definite article with adjectives In Hebrew, an attributive adjective takes the definite article if it modifies a definite noun (either a proper noun, or a definite common noun): הַמְּכוֹנִית הַחֲדָשָׁה הָאֲדֻמָּה הַמְּהִירָה /ha-mχoˈnit ha-ħadaˈʃa ha-ʔaduˈma ha-mhiˈra/ (The new, red, fast car, lit. The car the new the red the fast (f.sing.)) דָּוִד הַגָּדוֹל /daˈvid ha-ɡaˈdol/ (David the Great, lit. David the-great (m.sing.)) Adjectives derived from verbs Many adjectives in Hebrew are derived from the present tense of verbs. These adjectives are inflected the same way as the verbs they are derived from: סוֹעֵר /soˈʕer/ (stormy, paʕal) → סוֹעֶרֶת /soˈʕeret/, סוֹעֲרִים /soʕaˈrim/, סוֹעֲרוֹת /soʕaˈrot/ מְנֻתָּק /menuˈtak/ (alienated, puʕal) → מְנֻתֶּקֶת /menuˈteket/, מְנֻתָּקִים /menutaˈkim/, מְנֻתָּקוֹת /menutaˈkot/ מַרְשִׁים /marˈʃim/ (impressive, hifʕil) → מַרְשִׁימָה /marʃiˈma/, מַרְשִׁימִים /marʃiˈmim/, מַרְשִׁימוֹת /marʃiˈmot/ Adverbs: The Hebrew term for adverb is תֹּאַר הַפֹּעַל /ˈtoʔar ha-ˈpoʕal/. Hebrew forms adverbs in several different ways. Adverbs: Some adjectives have corresponding one-word adverbs.
In many cases, the adverb is simply the adjective's masculine singular form: חָזָק /ħaˈzak/ (strong or strongly) בָּרוּר /baˈrur/ (clear or clearly). In other cases, the adverb has a distinct form: מַהֵר /maˈher/ (quickly; from the adjective מָהִיר /maˈhir/, quick) לְאַט /leˈʔat/ (slowly; from the adjective אִטִּי /iˈti/, slow) הֵיטֵב /heˈtev/ (well; from the adjective טוֹב /tov/, good). In some cases, an adverb is derived from an adjective using its singular feminine form or (mostly in poetic or archaic usage) its plural feminine form: אוֹטוֹמָטִית /otoˈmatit/ (automatically) קַלּוֹת /kaˈlot/ (lightly). Most adjectives, however, do not have corresponding one-word adverbs; rather, they have corresponding adverb phrases, formed using one of the following approaches: using the prepositional prefix ב /be-/ (in) with the adjective's corresponding abstract noun: בִּזְהִירוּת /bi-zhiˈrut/ ("in carefulness": carefully) בַּעֲדִינוּת /ba-ʕadiˈnut/ ("in fineness": finely); using the same prefix, but with the noun אֹפֶן /ˈʔofen/ (means/fashion), and modifying the noun with the adjective's masculine singular form: בְּאֹפֶן אִטִּי /beˈʔofen ʔiˈti/ ("in slow fashion": slowly). Adverbs: similarly, but with the noun צוּרָה /t͡suˈra/ (form/shape), and using the adjective's feminine singular form: בְּצוּרָה אָפְיָנִית /be-t͡suˈra ʔofjaˈnit/ ("in characteristic form": characteristically). The use of one of these methods does not necessarily preclude the use of the others; for example, slowly may be either לְאַט /leˈʔat/ (a one-word adverb), בְּאִטִּיּוּת /be-ʔitiˈjut/ (literally "in slowness", a somewhat more elegant way of expressing the same thing) or בְּאֹפֶן אִטִּי /beˈʔofen ʔiˈti/ ("in slow fashion"), as mentioned above. Adverbs: Finally, as in English, there are various adverbs that do not have corresponding adjectives at all: לָכֵן /laˈχen/ (therefore) כָּכָה /ˈkaχa/ (thus). Prepositions: Like English, Hebrew is primarily a prepositional language, with a large number of prepositions. Several of Hebrew's most common prepositions are prefixes rather than separate words. For example, English in a room is Hebrew בְּחֶדֶר /be-ˈħeder/. These prefixes precede the definite prefix ה, which assimilates to them: the room is הַחֶדֶר /ha-ˈħeder/; in the room is בַּחֶדֶר /ba-ˈħeder/. Prepositions: Direct objects The preposition אֶת /ʔet/ plays an important role in Hebrew grammar. Its most common use is to introduce a direct object; for example, English I see the book is in Hebrew אֲנִי רוֹאֶה אֶת הַסֵּפֶר /ʔaˈni roˈʔe ʔet ha-ˈsefer/ (literally I see /ʔet/ the-book). However, אֶת /ʔet/ is used only with semantically definite direct objects, such as nouns with the, proper nouns, and personal pronouns; with semantically indefinite direct objects, it is simply omitted: אֲנִי רוֹאֶה סֵפֶר /ʔaˈni roˈʔe ˈsefer/ (I see a book) does not use את /ʔet/. This has no direct translation into English, and is best described as an object particle — that is, it denotes that the word it precedes is the direct object of the verb. Prepositions: This preposition has a number of special uses. For example, when the adjective צָרִיךְ /t͡saˈriχ/ (in need (of)) takes a definite noun complement, it uses the preposition אֶת /ʔet/: הָיִיתִי צָרִיךְ אֶת זֶה /haˈjiti t͡saˈriχ ʔet ze/ (literally I-was in-need-of /ʔet/ this, i.e. I needed this). Here as elsewhere, the אֶת /ʔet/ is dropped with an indefinite complement: הָיוּ צְרִיכִים יוֹתֵר /haˈju t͡sriˈχim joˈter/ (literally they-were in-need-of more, i.e. they needed more).
This is perhaps related to the verb-like fashion in which the adjective is used. Prepositions: In Biblical Hebrew, there is possibly another use of /ʔet/. Waltke and O'Connor (pp. 177–178) make the point: "...(1) ...sign of the accusative... (2) More recent grammarians regard it as a marker of emphasis used most often with definite nouns in the accusative role. The apparent occurrences with the nominative are most problematic... AM Wilson late in the nineteenth century concluded from his exhaustive study of all the occurrences of the debated particle that it had an intensive or reflexive force in some of its occurrences. Many grammarians have followed his lead. (reference lists studies of 1955, 1964, 1964, 1973, 1965, 1909, 1976.) On such a view, /ʔet/ is a weakened emphatic particle corresponding to the English pronoun 'self'... It resembles Greek 'autos' and Latin 'ipse' both sometimes used for emphasis, and like them it can be omitted from the text, without obscuring the grammar. This explanation of the particle's meaning harmonizes well with the facts that the particle is used in Mishnaic Hebrew as a demonstrative and is found almost exclusively with determinate nouns." Pronominal suffix There is a form called the verbal pronominal suffix, in which a direct object can be rendered as an additional suffix onto the verb. This form allows for a high degree of word economy, as the single fully conjugated verb expresses the verb, its voice, its subject, its object, and its tense. Prepositions: שְׁמַרְנוּהוּ /ʃmarˈnuhu/ (we protected him). In modern usage, the verbal pronominal suffixes are rarely used, in favor of expressing direct objects with the inflected form of the separate word אֶת /ʔet/. They are used more commonly in biblical and poetic Hebrew (for instance, in prayers). Indirect objects Indirect objects are objects requiring a preposition other than אֶת /ʔet/. The preposition used depends on the verb, and these can be very different from the one used in English. In the case of definite indirect objects, the preposition will replace את /ʔet/. שָׁכַחְתִּי מֵהַבְּחִירוֹת /ʃaˈχaħti me-ha-bħiˈrot/ (I forgot about the election). Hebrew grammar distinguishes between various kinds of indirect objects, according to what they specify. Thus, there is a division between objects for time תֵּאוּר זְמַן (/teˈʔur zman/), objects for place תֵּאוּר מָקוֹם (/teʔur maˈkom/), objects for reason תֵּאוּר סִבָּה (/teˈʔur siˈba/) and many others. In Hebrew, there are no distinct prepositional pronouns. If the object of a preposition is a pronoun, the preposition contracts with the object, yielding an inflected preposition. דִּבַּרְנוּ עִם דָּוִד /diˈbarnu ʕim ˈdavid/ (we spoke with David) דִּבַּרְנוּ אִתּוֹ /diˈbarnu iˈto/ (we spoke with him). (The preposition עִם /ʕim/ (with) in everyday speech is not inflected; rather, a different, more archaic preposition אֶת /ʔet/ with the same meaning, unrelated to the direct object marker, is used instead.) Inflected prepositions
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Light welterweight** Light welterweight: Light welterweight, also known as junior welterweight or super lightweight, is a weight class in combat sports. Boxing: Professional boxing In professional boxing, light welterweight is contested between the lightweight and welterweight divisions, in which boxers weigh above 61.2 kg or 135 pounds and up to 63.5 kg or 140 pounds. The first champion of this weight class was Pinky Mitchell in 1922, though he was only awarded his championship by a vote of the readers of the Boxing Blade magazine. Boxing: There was not widespread acceptance of this new weight division in its early years, and the New York State Athletic Commission withdrew recognition of it in 1930. The National Boxing Association continued to recognize it until its champion, Barney Ross, relinquished the title in 1935 to concentrate on regaining the welterweight championship. Boxing: A few commissions recognized bouts in the 1940s as being for the light welterweight title, but the modern beginnings of this championship date from 1959, when Carlos Ortiz won the vacant title with a victory over Kenny Lane. Both the World Boxing Association (WBA) and the World Boxing Council (WBC) recognized the same champions until 1967, when the WBC stripped Paul Fuji of the title and matched Pedro Adigue and Adolph Pruitt for their version of the championship. Adigue won a fifteen-round decision. The International Boxing Federation (IBF) recognized Aaron Pryor as its first champion in 1984. Hector Camacho became the first World Boxing Organization (WBO) champion with his victory against Ray Mancini in 1989. Boxing: Current world champions and current world rankings [ranking tables omitted: The Ring rankings as of June 20, 2023 (key: C denotes the current The Ring world champion); BoxRec rankings as of 1 July 2022]. Boxing: Amateur boxing In amateur boxing, light welterweight is a weight class for fighters weighing up to 64 kilograms. For the 1952 Summer Olympics, the division was created when the span from 54 to 67 kg was changed from three weight classes (featherweight, lightweight, and welterweight) to four. Perhaps the most famous amateur light welterweight champion is Sugar Ray Leonard, who went on to an impressive professional career. Kickboxing: In the International Kickboxing Federation (IKF), super lightweight is 132.1–137 lbs (60.04–62.27 kg), and light welterweight is 137.1–142 lbs (62.31–64.54 kg). Lethwei: The World Lethwei Championship recognizes the light welterweight division with an upper limit of 63.5 kg (140 lb). In the World Lethwei Championship, Antonio Faria is the light welterweight champion.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Case preservation** Case preservation: When a computer file system stores file names, the computer may keep or discard case information. When the case is stored, it is called case preservation. Case preservation: A system that is not case-preserving is necessarily case-insensitive, but it is possible and common for a system to be case-insensitive, yet case-preserving. This combination is often considered most natural for people to understand, because most people prefer using the correct capitalization but will still recognize others. For example, if someone refers to the "uNiTeD states oF AMERICA," it is understood to mean the United States of America, even though the capitalization is incorrect. Case preservation: macOS, current versions of the Microsoft Windows operating systems and all versions of Amiga OS are case-preserving and case-insensitive in most cases. Since they are case-insensitive, when requesting a file by name any capitalization can be used, in contrast to case-sensitive systems where only a single capitalization would work. But as they are case-preserving, when viewing a file's name it will be presented with the capitalization used when the file was created. On a non-case-preserving system, arbitrary capitalization would be displayed instead, such as all upper- or lower-case. Also, in case-insensitive but case-preserving file systems there cannot be a readme.txt and a Readme.txt in the same folder. Case preservation: Examples of systems with various combinations of case-sensitivity and case-preservation exist among file systems.
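The combination of case-insensitive lookup with case-preserving display can be made concrete with a short sketch. The following is a minimal Python illustration of the concept only, not how any particular file system implements it; the class and all names are hypothetical.

```python
# Minimal sketch of a case-insensitive, case-preserving name table.
# Illustrative only; real file systems implement this at a much lower level.
class CasePreservingTable:
    def __init__(self):
        self._entries = {}  # casefolded name -> (original name, contents)

    def put(self, name, contents):
        key = name.casefold()
        if key in self._entries:
            # The name already exists under some capitalization:
            # keep the original capitalization, update the contents.
            original, _ = self._entries[key]
            self._entries[key] = (original, contents)
        else:
            # First creation: preserve the capitalization used here.
            self._entries[key] = (name, contents)

    def get(self, name):
        # Lookup succeeds with any capitalization (case-insensitive).
        return self._entries[name.casefold()][1]

    def display_name(self, name):
        # The name is shown with the capitalization used at creation.
        return self._entries[name.casefold()][0]

table = CasePreservingTable()
table.put("Readme.txt", b"hello")
assert table.get("README.TXT") == b"hello"               # any capitalization works
assert table.display_name("readme.txt") == "Readme.txt"  # original case preserved
table.put("readme.txt", b"new")                          # no second file is created
assert table.display_name("README.TXT") == "Readme.txt"
```

Note how the sketch reproduces the behavior described above: lookups with any capitalization resolve to the same entry, the display name keeps the capitalization used at creation, and readme.txt and Readme.txt cannot coexist.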
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Study of animal locomotion** Study of animal locomotion: The study of animal locomotion is a branch of biology that investigates and quantifies how animals move. Kinematics: Kinematics is the study of how objects move, whether they are mechanical or living. In animal locomotion, kinematics is used to describe the motion of the body and limbs of an animal. The goal is ultimately to understand how the movement of individual limbs relates to the overall movement of an animal within its environment. The following highlights the key kinematic parameters used to quantify body and limb movement for different modes of animal locomotion. Quantifying locomotion: Walking Legged locomotion is a dominant form of terrestrial locomotion, the movement on land. The motion of limbs is quantified by intralimb and interlimb kinematic parameters. Intralimb kinematic parameters capture movement aspects of an individual limb, whereas interlimb kinematic parameters characterize the coordination across limbs. Interlimb kinematic parameters are also referred to as gait parameters. The following are key intralimb and interlimb kinematic parameters of walking: Characterizing swing and stance transitions The calculation of the above intra- and interlimb kinematics relies on the classification of when the legs of an animal touch and leave the ground. Stance onset is defined as when a leg first contacts the ground, whereas swing onset occurs at the time when the leg leaves the ground. Typically, the transition between swing and stance, and vice versa, of a leg is determined by first recording the leg's motion with high-speed videography (see the description of high-speed videography below for more details). From the video recordings of the leg, a marker on the leg (usually placed at the distal tip of the leg) is then tracked manually or in an automated fashion to obtain the position signal of the leg's movement. The position signal associated with each leg is then normalized to that associated with a marker on the body, transforming the leg position signal into body-centered coordinates. This normalization is necessary to isolate the movement of the leg relative to that of the body. For tracking the leg movement of unconstrained/untethered animals, it is important to perform a coordinate transform so that the movement of the animal is aligned along one axis (e.g. a common heading angle). This is also a necessary step because it isolates the oscillatory anterior-posterior movement of the leg along a single axis, rather than being obscured across multiple axes. Once the tracked and normalized leg position is obtained, one way to determine the onsets of stance and swing is to find the peaks and troughs of the leg position signal. The peaks of the leg position signal are the stance onsets, which are also the anterior extreme positions of the leg for each step. On the other hand, the troughs of the leg position signal are the swing onsets as well as the posterior extreme positions of the leg for each step. Therefore, the transitions between stance and swing, and vice versa, are determined by finding the peaks and the troughs of the normalized leg position signal. Alternatively, these transitions can be found by using leg velocity, the derivative of the leg position signal. When using this approach, a threshold is chosen to categorize leg movement into stance and swing given the instantaneous velocity of the leg.
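As a concrete illustration of the two approaches just described, here is a minimal Python sketch. It assumes a one-dimensional, body-centered, heading-aligned leg position signal sampled at a uniform rate; the function names, the synthetic signal, and the velocity threshold are illustrative assumptions, not values taken from any particular study.

```python
# Minimal sketch of the peak-based and velocity-based swing/stance
# classification approaches described above. Assumes `leg_pos` is a 1-D,
# body-centered, heading-aligned leg position signal (uniform sampling).
import numpy as np
from scipy.signal import find_peaks

def peak_based_classification(leg_pos, min_step_samples=10):
    # Peaks of the position signal = anterior extreme positions = stance onsets;
    # troughs (peaks of the inverted signal) = posterior extremes = swing onsets.
    stance_onsets, _ = find_peaks(leg_pos, distance=min_step_samples)
    swing_onsets, _ = find_peaks(-leg_pos, distance=min_step_samples)
    return stance_onsets, swing_onsets

def velocity_based_classification(leg_pos, fps, threshold):
    # A leg swinging forward moves quickly relative to the body: label samples
    # with velocity above `threshold` as swing, everything else as stance.
    velocity = np.gradient(leg_pos, 1.0 / fps)
    return np.where(velocity > threshold, "swing", "stance")

# Example on a synthetic stepping signal (one "step" per cycle).
t = np.linspace(0, 6 * np.pi, 600)
stance, swing = peak_based_classification(np.sin(t))
stride_durations = np.diff(stance)  # samples between successive stance onsets
labels = velocity_based_classification(np.sin(t), fps=100.0, threshold=0.5)
```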
This stance and swing classification approach is useful for instances when the interaction between the leg and substrate is unclear (i.e. it is difficult to tell when the leg truly contacts the substrate). Regardless of the approach, accurately classifying swing and stance is crucial for the above calculations of intra- and interlimb kinematic parameters. Quantifying locomotion: Intralimb kinematic parameters Anterior Extreme Position (AEP): the forwardmost position of the leg (i.e. usually the start of stance phase). Posterior Extreme Position (PEP): the rearmost position of the leg (i.e. usually the start of swing phase). Stride duration: elapsed time between two onsets of stance. Stride frequency: inverse of stride duration (i.e. number of strides per second). Stance duration: time elapsed between stance onset and swing onset. Swing duration: time elapsed between swing onset and the subsequent stance onset. Stride length: the straight line distance between stance onset and swing offset. Stride range of motion: the leg's integrated path between stance onset and swing offset. Quantifying locomotion: Joint angles: Walking can also be quantified through the analysis of joint angles. During legged locomotion, an animal flexes and extends its joints in an oscillatory manner, creating a joint angle pattern that repeats across steps. The following are some useful joint angle analyses for characterizing walking: Joint angle trace: a trace of the angles that a joint exhibits during walking. Quantifying locomotion: Joint angle distribution: the distribution of angles of a joint. Joint angle extremes: the maximum (extension) and minimum (flexion) angle of a joint during walking. Joint angle variability across steps: the variability between joint angle traces of several steps. Interlimb kinematic parameters Step length: the distance from the stance onset of a reference leg to its contralateral counterpart. Phase offsets: the lag of a leg relative to the stride period of a reference leg. Number of legs in stance: the number of legs in stance at a single point in time. Quantifying locomotion: Tripod coordination strength (TCS): specific to hexapod interlimb coordination, this parameter determines how much the interlimb coordination resembles the canonical tripod gait. TCS is calculated as the ratio of the total time the legs belonging to a tripod (i.e. left front, middle right, and hind left legs, or vice versa) are in swing together to the time elapsed between the first leg of the tripod that enters swing and the last leg of the same tripod that exits swing. Quantifying locomotion: Relationship between several joint angles: the relative angles of two joints, either from the same leg or between legs. For example, the angle of a human's left femur-tibia (knee) joint when the right femur-tibia joint is at its most flexed or extended angle. Quantifying locomotion: Measures of walking stability Static stability: minimum distance from the center of mass (COM) to any edge of the support polygon created by the legs in stance for each moment in time. A walking animal is statically stable if there are enough legs to form the support polygon (i.e. 3 or more) and the COM is within the support polygon. Moreover, static stability is at its maximum when the COM lies at the center of the support polygon. Steps to calculate static stability are as follows: Find which legs are in stance and the location of the center of mass. Note, if there are fewer than 3 legs in stance then the animal is not statically stable. Form the support polygon by creating edges between these legs in a clock-wise manner. Determine if the center of mass lies inside or outside of the support polygon. The ray casting algorithm is a common approach for determining whether a point is located within a polygon. If the center of mass is outside of the polygon then the animal is statically unstable. If the center of mass is inside the support polygon, calculate static stability by computing the minimum distance of the center of mass to any edge of the polygon.
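The static stability steps above can be made concrete with a short sketch. This is a minimal Python implementation under simplifying assumptions (2-D foot positions already ordered around the polygon); the ray casting test is the standard even-odd crossing algorithm mentioned above, and all names and coordinates are illustrative.

```python
# Minimal sketch of the static-stability computation described above:
# ray casting to test whether the center of mass (COM) lies inside the
# support polygon, then the minimum COM-to-edge distance.
import numpy as np

def point_in_polygon(p, poly):
    # Ray casting: cast a ray from p to the right and count edge crossings.
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        crosses = (y1 > p[1]) != (y2 > p[1])
        if crosses and p[0] < x1 + (p[1] - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def static_stability(com, stance_feet):
    # stance_feet: (x, y) positions of legs in stance, ordered around the polygon.
    if len(stance_feet) < 3 or not point_in_polygon(com, stance_feet):
        return None  # statically unstable
    # Minimum distance from the COM to any polygon edge (point-segment distance).
    dists = []
    n = len(stance_feet)
    for i in range(n):
        a = np.asarray(stance_feet[i], float)
        b = np.asarray(stance_feet[(i + 1) % n], float)
        t = np.clip(np.dot(com - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
        dists.append(np.linalg.norm(com - (a + t * (b - a))))
    return min(dists)

# A tripod of stance feet; a COM at the origin lies inside the triangle.
margin = static_stability(np.array([0.0, 0.0]), [(-1, -1), (1, -1), (0, 2)])
```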
Dynamic stability: dictates the degree to which deviations from periodic movement during walking will result in instability. Quantifying locomotion: Analyzing kinematics across steps Quantifying walking often involves assessing the kinematics of individual steps. The first task is to parse walking data into individual steps. Methods for parsing individual steps from walking data rely heavily on the data collection process. At a high level, walking data should be cyclical, with each cycle reflecting the movements of one step, and steps can therefore be parsed at the peaks of the signal. It is often useful to compare or pool step data. One difficulty in this pursuit is the variable length of steps both within and between legs. There are many ways to align steps; the following are a few useful methods. Quantifying locomotion: Stretch step: steps of variable durations may be stretched to the same duration. Quantifying locomotion: Step phase: the phase of each step can be computed, which quantifies how far through the step each data point is. This normalizes the data by step duration, allowing data from steps of variable lengths to be compared. The Hilbert transform may be used to calculate phase; however, a manual phase calculation may be better for aligning the peaks (swing and stance onsets). Quantifying locomotion: Speed-dependent kinematic changes Many animals alter walking kinematics as they modulate walking speed. An interlimb kinematic parameter that is commonly speed dependent is gait, the stepping pattern across legs. While some animals alternate between distinct gaits as a function of speed, others move along a continuum of gaits. Similarly, animals commonly modulate intralimb parameters across speed. For example, fruit flies decrease stance duration and increase step length as forward speed increases. Importantly, kinematics are not only modulated across forward velocity, but also rotational and sideslip velocities. In these cases, asymmetry in the modulation between left and right legs is common. Quantifying locomotion: Flight Aerial locomotion is a form of movement used by many organisms and is typically powered by at least one pair of wings. Some organisms, however, have other morphological features that allow them to glide. There are many different flight modes, such as takeoff, hovering, soaring, and landing. Quantifying wing movements during these flight modes will provide insight about the body and wing maneuvers that are required to execute these behaviors. Wing orientation is quantified throughout the flight cycle by three angles that are defined in a coordinate system relative to the base of the wing. The magnitudes of these three angles are often compared for upstrokes and downstrokes. In addition, kinematic parameters are used to characterize the flight cycle, which consists of an upstroke and a downstroke. Aerodynamics are often considered when quantifying aerial locomotion, as aerodynamic forces (e.g.
lift or drag) are able to influence flight performance. Key parameters from these three categories are defined as follows: Angles to quantify wing orientation Wing orientation is described in the coordinate system centered at the wing hinge. The x-y plane coincides with the stroke plane, the plane that is parallel to the plane containing both wing tips and is centered at the wing base. Assuming the wing can be modeled by the vector passing through the wing base and wing tip, the following angles describe the orientation of the wing: Stroke position: angle describing the anterior-to-posterior motion of the wings relative to the stroke plane. This angle is computed as the projection of the wing vector onto the stroke plane. Quantifying locomotion: Stroke deviation: angle describing the vertical amplitude of the wings relative to the stroke plane. This angle is defined as the angle between the wing vector and its projection onto the stroke plane. Angle of attack: angular orientation of the wings (i.e. tilt) relative to the stroke plane. This angle is computed as the angle between the wing cross section vector and the stroke plane.
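A minimal sketch of computing the first two of these angles from a base-to-tip wing vector, assuming the stroke plane is the x-y plane of a wing-hinge-centered coordinate system as defined above. The vector values are illustrative; the angle of attack is omitted because it additionally requires the wing's cross-section (chord) vector.

```python
# Minimal sketch: stroke position and stroke deviation from a wing vector.
# Coordinate system: wing hinge at the origin, x-y plane = stroke plane.
import numpy as np

def wing_angles(wing):
    proj = np.array([wing[0], wing[1], 0.0])  # projection onto the stroke plane
    # Stroke position: anterior-to-posterior angle of the projected wing vector.
    stroke_position = np.arctan2(wing[1], wing[0])
    # Stroke deviation: angle between the wing vector and its projection.
    stroke_deviation = np.arctan2(wing[2], np.linalg.norm(proj))
    return np.degrees(stroke_position), np.degrees(stroke_deviation)

# Illustrative wing vector slightly above the stroke plane.
position_deg, deviation_deg = wing_angles(np.array([0.8, 0.5, 0.2]))
```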
Kinematic parameters Upstroke amplitude: angular distance through which the wings travel during an upstroke. Downstroke amplitude: angular distance through which the wings travel during a downstroke. Stroke duration: time elapsed between the onsets of two consecutive upstrokes. Wingbeat frequency: inverse of stroke duration; the number of wingbeats per second. Flight distance per wingbeat: the distance covered during each wingbeat. Upstroke duration: time elapsed between the onset of an upstroke and the onset of a downstroke. Downstroke duration: time elapsed between the onset of a downstroke and the onset of an upstroke. Phase: if an organism has both front and hind wings, the lag of a wing pair relative to the other (reference) wing pair. Aerodynamic parameters Reynolds number: ratio of inertial forces to viscous forces. This metric helps describe how wing performance changes with body size. Swimming Aquatic locomotion is incredibly diverse, ranging from flipper- and fin-based movement to jet propulsion. Below are some common methods for characterizing swimming: Fin and flipper locomotion Body, tail, or fin angle: the curvature of the body or displacement of a fin or flipper. Tail or fin frequency: the frequency of a fin or tail completing one movement cycle. Quantifying locomotion: Jet propulsion Jet propulsion consists of two phases - a refill phase during which an animal fills a cavity with water, and a contraction phase during which it squeezes water out of the cavity to propel itself in the opposite direction. The size of the cavity can be measured in these two phases to compare the amount of water cycled through each propulsion. Methods of study: A variety of methods and equipment are used to study animal locomotion: Treadmills are used to allow animals to walk or run while remaining stationary or confined with respect to external observers. This technique facilitates filming or recordings of physiological information from the animal (e.g., during studies of energetics). Some treadmills consist of a linear belt (single or split belt) that constrains the animal to forward walking, while others allow 360 degrees of rotation. Unmotorized treadmills move in response to an animal's self-initiated locomotion, while motorized treadmills externally drive locomotion and are often used to measure the endurance capacity (stamina) of animals. Tethered locomotion Animals may be fixed in place, allowing them to move while remaining stationary relative to their environment. Tethered animals can be lowered onto a treadmill to study walking, suspended in air to study flight, or submersed in water to study swimming. Untethered locomotion Animals may move through an environment without being held in place, and their movement can be tracked for analysis of the behavior. Visual arenas Locomotion can be prolonged and sometimes controlled using a visual arena displaying a particular pattern of light. Many animals use visual cues from their surroundings to control their locomotion, and so presenting them with a pseudo optic flow or context-specific visual feature can prompt and prolong locomotion. Racetracks lined with photocells or filmed while animals run along them are used to measure acceleration and maximal sprint speed. High-speed videography for the study of the motion of an entire animal or parts of its body (i.e. kinematics) is typically accomplished by tracking anatomical locations on the animal and then recording video of its movement from multiple angles. Traditionally, anatomical locations have been tracked using visual markers that have been placed on the animal's body. However, it is becoming increasingly more common to use computer vision techniques to achieve markerless pose estimation. Methods of study: Marker-based pose estimation: Visual markers must be placed on an animal at the desired regions of interest. The location of each marker is determined for each video frame, and data from multiple views is integrated to give positions of each point through time. The visual markers can then be annotated in each frame manually. However, this is a time-consuming task, so computer vision techniques are often used to automate the detection of the markers. Methods of study: Markerless pose estimation: User-defined body parts must be manually annotated in a series of frames to use as training data. Deep learning and computer vision techniques are then employed to learn the location of the body parts in the training data. Next, the trained model is used to predict the location of the body parts in each frame on newly collected videos. The resulting time series data consists of the positions of the visible body parts at each frame in the video. Model parameters can be optimized to minimize tracking error and increase robustness. The kinematic data obtained from either of these methods can be used to determine fundamental motion attributes such as velocity, acceleration, joint angles, and the sequencing and timing of kinematic events. These fundamental attributes can be used to quantify various higher level attributes, such as the physical abilities of the animal (e.g., its maximum running speed, how steep a slope it can climb), gait, neural control of locomotion, and responses to environmental variation. These can aid in formulation of hypotheses about the animal or locomotion in general. Marker-based and markerless pose estimation approaches have advantages and disadvantages, so the method that is best suited for collecting kinematic data may be largely dependent on the animal of study. Marker-based tracking methods tend to be more portable than markerless methods, which require precise camera calibration.
Markerless approaches, however, overcome several weaknesses of marker-based tracking, since placing visual markers on the animal of study may be impractical, expensive, or time-consuming. There are many publicly accessible software packages that provide support for markerless pose estimation. Force plates are platforms, usually part of a trackway, that can be used to measure the magnitude and direction of forces of an animal's step. When used with kinematics and a sufficiently detailed model of anatomy, inverse dynamics solutions can determine the forces not just at the contact with the ground, but at each joint in the limb. Electromyography (EMG) is a method of detecting the electrical activity that occurs when muscles are activated, thus determining which muscles an animal uses for a given movement. This can be accomplished either by surface electrodes (usually in large animals) or implanted electrodes (often wires thinner than a human hair). Furthermore, the intensity of electrical activity can correlate to the level of muscle activity, with greater activity implying (though not definitively showing) greater force. Optogenetics is a method used to control the activity of targeted neurons that have been genetically modified to respond to light signals. Optogenetic activation and silencing of neurons can help determine which neurons are required to carry out certain locomotor behaviors, as well as the function of these neurons in the execution of the behavior. Sonomicrometry employs a pair of piezoelectric crystals implanted in a muscle or tendon to continuously measure the length of a muscle or tendon. This is useful because surface kinematics may be inaccurate due to skin movement. Similarly, if an elastic tendon is in series with the muscle, the muscle length may not be accurately reflected by the joint angle. Tendon force buckles measure the force produced by a single muscle by measuring the strain of a tendon. After the experiment, the tendon's elastic modulus is determined and used to compute the exact force produced by the muscle. However, this can only be used on muscles with long tendons. Particle image velocimetry is used in aquatic and aerial systems to measure the flow of fluid around and past a moving aquatic organism, allowing fluid dynamics calculations to determine pressure gradients, speeds, etc. Fluoroscopy allows real-time X-ray video, for precise kinematics of moving bones. Markers opaque to X-rays can allow simultaneous tracking of muscle length. Many of the above methods can be combined to enhance the study of locomotion. For example, studies frequently combine EMG and kinematics to determine motor pattern, the series of electrical and kinematic events that produce a given movement. Optogenetic perturbations are also frequently combined with kinematics to study how locomotor behaviors and tasks are affected by the activity of a certain group of neurons. Observations resulting from optogenetic experiments may provide insight into the neural circuitry that underlies different locomotor behaviors. It is also common for studies to collect high-speed videos of animals on a treadmill. Such a setup may allow for increased accuracy and robustness when determining an animal's poses across time.
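As a small illustration of turning tracked positions into the fundamental attributes mentioned above (joint angles, velocity, acceleration), here is a minimal Python sketch; the marker names, sampling rate, and toy trajectory are assumptions for demonstration only, not data from any real experiment.

```python
# Minimal sketch: joint angle and derivatives from tracked positions.
import numpy as np

def joint_angle(a, joint, b):
    # Angle at `joint` between segments joint->a and joint->b, in degrees.
    u = np.asarray(a, float) - np.asarray(joint, float)
    v = np.asarray(b, float) - np.asarray(joint, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# One frame of tracked 2-D positions (hypothetical marker names, arbitrary units).
hip, knee, ankle = (0.0, 1.0), (0.1, 0.5), (0.0, 0.0)
knee_angle = joint_angle(hip, knee, ankle)

# Velocity and acceleration from a 1-D position time series sampled at `fps`.
fps = 200.0                                # assumed camera frame rate
xs = np.cumsum(np.random.rand(100)) / fps  # toy forward trajectory
velocity = np.gradient(xs, 1.0 / fps)
acceleration = np.gradient(velocity, 1.0 / fps)
```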
Modeling animal locomotion: Models of animal locomotion are important for gaining new insights and predictions on how kinematics arise from the interactions of the nervous, skeletal, and/or muscular systems that would otherwise be difficult to glean from experiments. The following are types of animal locomotion models: Neuromechanical models Neuromechanics is a field that combines biomechanics and neuroscience to understand the complex interactions between the physical environment, nervous system, and the muscular and skeletal systems that consequently result in anticipated body movement. Therefore, neuromechanical models aim to simulate movement given the neural commands to specific muscles, and how those muscles are connected to the animal's skeleton. The key components of neuromechanical models are: A morphologically accurate 3D model of the animal's skeleton consisting of rigid bodies (i.e. bones) that are arranged in a naturalistic manner. In these models, the properties of each rigid body, like mass, length, and width, need to be prescribed. Additionally, the joints between rigid bodies need to be defined, both in terms of type (e.g. hinge and ball-in-socket) and degrees of freedom (i.e. how the rigid bodies move relative to one another). The final step is to assign a mesh object to each rigid body that determines the appearance (e.g. outer surface of a bone) and other contact properties of the rigid bodies. These skeletal models can be built using a variety of 3D modeling programs, such as Blender and OpenSim Creator. Modeling animal locomotion: After the skeletal model is built, the next step is to accurately define the attachment points of muscles to the rigid bodies. This assignment is crucial for the rigid bodies to be articulated in a naturalistic way. There are several types of muscle models that simulate the dynamics of muscle activation, contraction, and relaxation, which include Hill-type and Ekeberg-type muscle models. Modeling animal locomotion: Neural controllers that simulate motor neuron recruitment and activity by central commands are used to dictate the timing and strength of modeled muscle activation. There are many flavors of these controllers, such as coupled phase oscillator and neural network models. An environment that incorporates physics is essential in simulating realistic movement of neuromechanical models because they will abide by the laws of physics. Environments used for physics simulation include OpenSim, PyBullet, and MuJoCo.
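To make the neural-controller component concrete, below is a minimal sketch of a coupled-phase-oscillator controller of the kind mentioned above, in the spirit of Kuramoto-style central pattern generator models. Every parameter (oscillator count, frequencies, coupling weights, time step) is an illustrative assumption, not taken from any published neuromechanical model.

```python
# Minimal sketch of a coupled-phase-oscillator controller: one oscillator per
# leg, whose phase dictates the timing of that leg's modeled muscle activation.
import numpy as np

def step_oscillators(phases, freqs, coupling, dt=0.001):
    # dtheta_i/dt = 2*pi*f_i + sum_j K_ij * sin(theta_j - theta_i)
    diffs = phases[None, :] - phases[:, None]
    return phases + dt * (2 * np.pi * freqs + (coupling * np.sin(diffs)).sum(axis=1))

n_legs = 6                                       # hexapod-style example
phases = np.random.rand(n_legs) * 2 * np.pi      # one oscillator per leg
freqs = np.full(n_legs, 2.0)                     # assumed 2 Hz stepping rhythm
coupling = np.full((n_legs, n_legs), -0.5)       # negative coupling pushes phases apart
np.fill_diagonal(coupling, 0.0)

for _ in range(5000):                            # simulate 5 s of rhythm
    phases = step_oscillators(phases, freqs, coupling)
activation = 0.5 * (1 + np.sin(phases))          # drive signal for each leg's muscles
```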
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Windows Media Components for QuickTime** Windows Media Components for QuickTime: Windows Media Components for QuickTime, also known as Flip4Mac WMV Player by Telestream, Inc., was one of the few commercial products that allowed playback of Microsoft's proprietary audio and video codecs inside QuickTime for macOS. It allowed playback of: Windows Media Video 7, 8, 9, SD and HD; Windows Media Audio 7, 8, 9, Professional and Lossless. It also included a web browser plug-in to allow playback of embedded Windows Media files in web pages. With the components installed, any QuickTime-compatible application is able to directly play WMV content. This includes the official QuickTime Player by Apple as well as countless third party players. WMV Player also allows Windows media files to be associated with QuickTime Player. Windows Media Components for QuickTime: On January 12, 2006, Microsoft discontinued support for Windows Media Player for Mac OS X and began distributing a free version of WMV Player as Windows Media Components for QuickTime on their website. As of June 2015, there is no longer a free version of this application offered. Flip4Mac was retired as of July 1, 2019: "If you are a current user of Flip4Mac, or your Flip4Mac stopped functioning when you upgraded your operating system, we invite you to take a look at Switch." Timeline: July 8, 2006 – Flip4Mac did not officially run on Intel-based Macs. July 15, 2006 – version 2.1 of Flip4Mac now supported Windows Media Player 10 content, which was previously inaccessible to Macintosh users. This newer version also supports Intel-based Macs. July 27, 2006 – version 2.1 is a non-beta release of the Universal Binary format for Mac OS X. September 20, 2016 – Flip4Mac does not work in macOS Sierra (10.12). July 1, 2019 – Flip4Mac officially discontinued by Telestream (Telestream had stopped development as of 2016).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**N6-hydroxylysine O-acetyltransferase** N6-hydroxylysine O-acetyltransferase: In enzymology, an N6-hydroxylysine O-acetyltransferase (EC 2.3.1.102) is an enzyme that catalyzes the chemical reaction acetyl-CoA + N6-hydroxy-L-lysine ⇌ CoA + N6-acetyl-N6-hydroxy-L-lysine. Thus, the two substrates of this enzyme are acetyl-CoA and N6-hydroxy-L-lysine, whereas its two products are CoA and N6-acetyl-N6-hydroxy-L-lysine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:N6-hydroxy-L-lysine 6-acetyltransferase. Other names in common use include N6-hydroxylysine:acetyl CoA N6-transacetylase, N6-hydroxylysine acetylase, and acetyl-CoA:6-N-hydroxy-L-lysine 6-acetyltransferase. This enzyme participates in lysine degradation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polygalacturonase** Polygalacturonase: Endo-polygalacturonase (EC 3.2.1.15, pectin depolymerase, pectolase, pectin hydrolase, and poly-α-1,4-galacturonide glycanohydrolase; systematic name (1→4)-α-D-galacturonan glycanohydrolase (endo-cleaving)) is an enzyme that hydrolyzes the α-1,4 glycosidic bonds between galacturonic acid residues: (1,4-α-D-galacturonosyl)n+m + H2O = (1,4-α-D-galacturonosyl)n + (1,4-α-D-galacturonosyl)m. Polygalacturonan, whose major component is galacturonic acid, is a significant carbohydrate component of the pectin network that comprises plant cell walls. Therefore, the activity of endogenous plant PGs works to soften and sweeten fruit during the ripening process. Similarly, phytopathogens use PGs as a means to weaken the pectin network, so that digestive enzymes can be excreted into the plant host to acquire nutrients. Structure: This enzyme's multiple parallel β sheets form a helical shape that is called a β helix. This highly stable structure, thanks to numerous hydrogen bonds and disulfide bonds between strands, is a common characteristic of enzymes involved in the degradation of pectin. The interior of the β helix is hydrophobic. X-ray crystallography has been used to determine the three-dimensional structure of several PGs in different organisms. Fungal PGs from Colletotrichum lupini, Aspergillus aculeatus, and Aspergillus niger (PG1 and PG2) have been crystallized. The PGs from bacteria like Erwinia carotovora and Bacillus subtilis have also been crystallized. The active site of Fusarium moniliforme PG comprises six charged amino acid residues: H188, R267, and K269 are involved in substrate binding, D212 (a general acid) is responsible for proton donation to the glycosidic oxygen, and D213 and D191 activate H2O for a nucleophilic attack. Mechanism: Polygalacturonase is a pectinase, an enzyme that degrades pectin by hydrolyzing the O-glycosyl bonds in pectin's polygalacturonan network, resulting in α-1,4-polygalacturonic residues. The rate of hydrolysis is dependent on polysaccharide chain length. Low rates of hydrolysis are associated with very short chains (e.g. digalacturonic acid) and very long chains. Mechanism: Exo- vs Endo-polygalacturonases Exo- and endo-polygalacturonases utilize different hydrolytic modes of action. Endo-polygalacturonases hydrolyze in a random fashion along the polygalacturonan network. This method results in oligogalacturonides. Exo-polygalacturonases hydrolyze at the non-reducing end of the polymer, generating the monosaccharide galacturonic acid. Occasionally, organisms employ both methods. In addition to different modes of action, polygalacturonase polymorphism allows fungal polygalacturonases to degrade a wider range of plant tissues more effectively. PG variety in optimal pH, substrate specificity, and other factors is likely helpful for phytopathogenic organisms like fungi. Agricultural relevance: Due to the applicability of this enzyme's activity to agricultural productivity and commercial success, much of the research on PGs has revolved around the role of PGs in the fruit ripening process, pollen, and abscission. Agricultural relevance: Pectin is one of the three polysaccharides present in the plant cell wall, and it plays a role in maintaining the barrier between the inside and outside environment and gives strength to the plant cell walls.
Specifically, pectin in the middle lamella holds neighboring cells together. Fruit ripening The first GM food available in stores was a genetically modified tomato (also known as Flavr Savr) that had a longer shelf life and was ideal for shipping. Its delayed ripening was achieved by preventing polygalacturonase from destroying pectin, which makes tomatoes firm. An antisense PG gene was introduced, suppressing polygalacturonase production and thereby delaying the softening of the tomato. Although this method has been shown to reduce PG enzymatic activity by 70 to 90%, the PG antisense RNA did not hinder normal color development. Depolymerization of pectin is largely involved in the later stages of fruit ripening, especially as the fruit becomes overripe. While tomatoes are the prime example of high PG activity, this enzyme is also very active in avocado and peach ripening. PG enzymes in peach, two exo-PGs and one endo-PG, become active when the fruit is already soft. Fruits like persimmons may either lack PG enzymes or have PG levels so low that they have not yet been detected. In these cases, other enzymes may catalyze the ripening process. Agricultural relevance: Pollen Exo-PGs play a role in enabling pollen tube elongation, since pectin rearrangement is necessary for the growth of pollen tubes. This PG activity has been found in grasses like maize as well as in trees, particularly in the Eastern cottonwood. Exo-PGs involved in pollen tube growth need Ca2+ for maximal enzymatic activity and can be inhibited by high concentrations of NaCl, citrate, and EDTA. Abscission zones It is largely unclear whether PGs play a role in facilitating abscission in certain plants, and if they do, whether they are exo- or endo-acting. Conflicting research has been published on, for example, whether PG is involved in citrus fruit abscission. One particular issue has been the usage of assays that are not able to measure exo-PG activity. An additional complication is the difference in PG enzymatic activity between fruit and leaf cell-separation zones. In peach, PG activity was only detected in fruit abscission zones. Other Agricultural pests like Lygus hesperus damage cotton and other crops because they secrete PGs in their saliva that digest plant tissue. They employ both exo- and endo-PGs. Inhibition: Phytopathogenic fungi expose plant cell walls to cell wall degrading enzymes (CWDEs) like PGs. Inhibition: In response, most plants have natural inhibitor proteins that slow the hydrolytic activity of PG. These inhibitors also prompt long chain oligogalacturonide accumulation in order to encourage a defense mechanism against the attack. The polygalacturonase inhibitor proteins (PGIPs) are leucine-rich repeat proteins that have been reported to demonstrate both non-competitive and competitive inhibition of PGs. The active site of PG interacts with a pocket containing multiple polar amino acids in Phaseolus vulgaris PGIP2. The inhibitor prevents substrate binding by occupying the active site, resulting in competitive inhibition. The crystal structures for PGIP and PGIP2 have been determined for the bean P. vulgaris. The charged and polar residues that interact with the PG active site have been identified in P. vulgaris as D131, S133, T155, D157, T180, and D203. Using PGIP2 as a template, the theoretical structures of other PGIPs have been determined for some other common crops.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Encounter (psychology)** Encounter (psychology): The term "encounter", in the context of existential-humanism (like existential therapy), has the specific meaning of an authentic, congruent meeting between individuals. Examples: Some uses of the concept of encountering: Jacob L. Moreno, Invitations to an Encounter, 1914. Martin Buber frequently uses this term and associated ideas. Irvin Yalom in his book "Existential Psychotherapy". Carl Rogers, in encounter groups and person-centered psychotherapy. Jerzy Grotowski's notion of a "poor theatre" – "The core of the theatre is an encounter". R. D. Laing contrasts encounter with collusion in much of his work, especially Self and Others.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dynamic voltage restoration** Dynamic voltage restoration: Dynamic voltage restoration (DVR) is a method of overcoming voltage sags and swells that occur in electrical power distribution. These are a problem because spikes consume power and sags reduce the efficiency of some devices. DVR saves energy through voltage injections that can affect the phase and wave-shape of the power being supplied. Devices used for DVR include static var devices, which are series compensation devices that use voltage source converters (VSC). The first such system in North America was installed in 1996 - a 12.47 kV system located in Anderson, South Carolina. Operation: The basic principle of dynamic voltage restoration is to inject a voltage of the magnitude and frequency necessary to restore the load side voltage to the desired amplitude and waveform, even when the source voltage is unbalanced or distorted. Generally, devices for dynamic voltage restoration employ gate turn-off thyristor (GTO) solid-state power electronic switches in a pulse-width modulated (PWM) inverter structure. The DVR can generate or absorb independently controllable real and reactive power at the load side. In other words, the DVR is a solid-state DC to AC switching power converter that injects a set of three-phase AC output voltages in series and in synchronism with the distribution and transmission line voltages. Operation: The source of the injected voltage is the commutation process for reactive power demand and an energy source for the real power demand. The energy source may vary according to the design and manufacturer of the DVR, but DC capacitors and batteries drawn from the line through a rectifier are frequently used. The energy source is typically connected to the DVR through its DC input terminal. Operation: The amplitude and phase angle of the injected voltages are variable, thereby allowing control of the real and reactive power exchange between the dynamic voltage restorer and the distribution system. The reactive power exchange between the DVR and the distribution system is internally generated by the DVR without AC passive reactive components. Similar devices: DVRs use a technically similar approach to the low voltage ride-through (LVRT) capability systems used in wind turbine generators. The dynamic response characteristics, particularly for line supplied DVRs, are similar to those in LVRT-mitigated turbines. Conduction losses in both kinds of devices are often minimized by using integrated gate-commutated thyristor (IGCT) technology in the inverters. Applications: Practically, DVR systems can inject up to 50% of nominal voltage, but only for a short time (up to 0.1 seconds). However, most voltage sags are much less than 50 percent, so this is not typically an issue. DVRs can also mitigate the damaging effects of voltage swells, voltage unbalance and other waveform distortions. Drawbacks: DVRs may provide good solutions for end-users subject to unwanted power quality disturbances. However, they are generally not used in systems that are subject to prolonged reactive power deficiencies (resulting in low voltage conditions) and in systems that are vulnerable to voltage collapse. Because DVRs will maintain appropriate supply voltage, in such systems where incipient voltage conditions are present they actually make collapses more difficult to prevent and can even lead to cascading interruptions.
Drawbacks: Therefore, when applying DVRs, it is vital to consider the nature of the load whose voltage supply is being secured, as well as the transmission system, which must tolerate the change in the voltage-response of the load. It may be necessary to provide local fast reactive supply sources in order to protect the system, including the DVR, from voltage collapse and cascading interruptions. SSSC and DVR: The SSSC's counterpart is the dynamic voltage restorer (DVR). Although both are utilized for series voltage sag compensation, their operating principles differ. The static synchronous series compensator injects a balanced voltage in series with the transmission line. The DVR, on the other hand, compensates for unbalance in the supply voltage of the different phases. Also, DVRs are usually installed on a critical feeder, supplying active power from DC energy storage, while the required reactive power is generated internally without any DC storage.
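The series-injection principle described under Operation can be summarized numerically: the DVR injects the phasor difference between the desired (pre-sag) load voltage and the sagged supply voltage. The following Python sketch treats a single phase and uses illustrative values only, not data from any real installation.

```python
# Minimal single-phase sketch of DVR series injection: the injected phasor is
# the difference between the desired load voltage and the sagged supply voltage.
import cmath

V_nominal = 230.0                                   # desired load voltage (V rms)
v_supply = cmath.rect(0.7 * V_nominal,              # illustrative 30% sag...
                      cmath.pi / 12)                # ...with a 15-degree phase jump

v_desired = cmath.rect(V_nominal, 0.0)              # pre-sag reference phasor
v_inject = v_desired - v_supply                     # series voltage the DVR adds

magnitude = abs(v_inject)                           # size of the injected voltage
angle_deg = cmath.phase(v_inject) * 180 / cmath.pi  # its phase angle
assert abs(v_supply + v_inject - v_desired) < 1e-9  # load sees restored voltage
```

The sketch also illustrates why amplitude and phase of the injection must both be controllable, as noted above: a sag accompanied by a phase jump cannot be corrected by magnitude alone.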
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OpenKODE** OpenKODE: OpenKODE is a set of native APIs for handheld games and media applications providing a cross-platform abstraction layer for other media technologies such as OpenGL ES, OpenVG, OpenMAX AL and OpenSL ES. Besides being an umbrella specification for the other APIs, OpenKODE also contains an API of its own, OpenKODE Core. OpenKODE Core defines POSIX-like functions for access to operating system resources such as the file system. OpenKODE: OpenKODE is managed by the non-profit technology consortium Khronos Group.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Caving** Caving: Caving – also known as spelunking in the United States and Canada and potholing in the United Kingdom and Ireland – is the recreational pastime of exploring wild cave systems (as distinguished from show caves). In contrast, speleology is the scientific study of caves and the cave environment. The challenges involved in caving vary according to the cave being visited; in addition to the total absence of light beyond the entrance, negotiating pitches, squeezes, and water hazards can be difficult. Cave diving is a distinct, and more hazardous, sub-speciality undertaken by a small minority of technically proficient cavers. In an area of overlap between recreational pursuit and scientific study, the most devoted and serious-minded cavers become accomplished at the surveying and mapping of caves and the formal publication of their efforts. These are usually published freely and publicly, especially in the UK and other European countries, although in the US, these are generally private. Sometimes categorized as an "extreme sport", it is not commonly considered as such by longtime enthusiasts, who may dislike the term for its connotation of disregard for safety. Many caving skills overlap with those involved in canyoning and mine and urban exploration. Motivation: Caving is often undertaken for the enjoyment of the outdoor activity or for physical exercise, as well as original exploration, similar to mountaineering or diving. Physical or biological science is also an important goal for some cavers, while others are engaged in cave photography. Virgin cave systems comprise some of the last unexplored regions on Earth and much effort is put into trying to locate, enter and survey them. In well-explored regions (such as most developed nations), the most accessible caves have already been explored, and gaining access to new caves often requires cave digging or cave diving. Motivation: One old technique used by hill people in the United States to find caves worth exploring was to yell into a hole and listen for an echo. On finding a hole, the size of which did not matter, the would-be cave explorer would yell into the opening and listen for an echo. If there was none, the hole was just a hole. If there was an echo, the size of the cave could be determined by the length and strength of the echoes. This method is simple, cheap, and effective. The explorer could then enlarge the hole to make an entrance. Meriwether Lewis, of the Lewis and Clark Expedition, used the yelling technique to find caves in Kentucky when he was a boy. Since caves were dark, and flashlights had not been invented, Lewis, and other explorers, made torches out of knots of pine tree branches. Such torches burned a long time and cast a bright light. Caving, in certain areas, has also been utilized as a form of eco and adventure tourism, for example in New Zealand. Tour companies have established an industry leading and guiding tours into and through caves. Depending on the type of cave and the type of tour, the experience could be adventure-based or ecological-based. There are tours led through lava tubes by a guiding service (e.g. Lava River Cave, the oceanic islands of Tenerife, Iceland and Hawaii). Motivation: Caving has also been described as an "individualist's team sport" by some, as cavers can often make a trip without direct physical assistance from others but will generally go in a group for companionship or to provide emergency help if needed.
Some, however, consider the assistance cavers give each other to be a typical team sport activity. Etymology: The term potholing refers to the act of exploring potholes, a word originating in the north of England for predominantly vertical caves. Clay Perry, an American caver of the 1940s, wrote about a group of men and boys who explored and studied caves throughout New England. This group referred to themselves as spelunkers, a term derived from the Latin spēlunca ("cave, cavern, den"), itself from the Greek σπῆλυγξ spēlynks ("cave"). This is regarded as the first use of the word in the Americas. Throughout the 1950s, spelunking was the general term used for exploring caves in US English. It was used freely, without any positive or negative connotations, although only rarely outside the US. Etymology: In the 1960s, the terms spelunking and spelunker began to be considered déclassé among experienced enthusiasts. In 1985, Steve Knutson – editor of the National Speleological Society (NSS) publication American Caving Accidents – made the following distinction: …Note that (in this case) the term 'spelunker' denotes someone untrained and unknowledgeable in current exploration techniques, and 'caver' is for those who are. Etymology: This sentiment is exemplified by bumper stickers and T-shirts displayed by some cavers: "Cavers rescue spelunkers". Nevertheless, outside the caving community, "spelunking" and "spelunkers" predominately remain neutral terms referring to the practice and practitioners, without any respect to skill level. History: In the mid-nineteenth century, John Birkbeck explored potholes in England, notably Gaping Gill in 1842 and Alum Pot in 1847–8, returning there in the 1870s. In the mid-1880s, Herbert E. Balch began exploring Wookey Hole Caves and in the 1890s, Balch was introduced to the caves of the Mendip Hills. One of the oldest established caving clubs, Yorkshire Ramblers' Club, was founded in 1892. Caving as a specialized pursuit was pioneered by Édouard-Alfred Martel (1859–1938), who first achieved the descent and exploration of the Gouffre de Padirac, in France, as early as 1889 and the first complete descent of a 110-metre wet vertical shaft at Gaping Gill in 1895. He developed his own techniques based on ropes and metallic ladders. Martel visited Kentucky and notably Mammoth Cave National Park in October 1912. In the 1920s famous US caver Floyd Collins made important explorations in the area and in the 1930s, as caving became increasingly popular, small exploration teams both in the Alps and in the karstic high plateaus of southwest France (Causses and Pyrenees) transformed cave exploration into both a scientific and recreational activity. Robert de Joly, Guy de Lavaur and Norbert Casteret were prominent figures of that time, surveying mostly caves in Southwest France. During World War II, an alpine team composed of Pierre Chevalier, Fernand Petzl, Charles Petit-Didier and others explored the Dent de Crolles cave system near Grenoble, which became the deepest explored system in the world (-658 m) at that time. The lack of available equipment during the war forced Pierre Chevalier and the rest of the team to develop their own equipment, leading to technical innovation.
The scaling-pole (1940), nylon ropes (1942), use of explosives in caves (1947) and mechanical rope-ascenders (Henri Brenot's "monkeys", first used by Chevalier and Brenot in a cave in 1934) can be directly associated with the exploration of the Dent de Crolles cave system. In 1941, American cavers organized themselves into the National Speleological Society (NSS) to advance the exploration, conservation, study and understanding of caves in the United States. American caver Bill Cuddington, known as "Vertical Bill", further developed the single-rope technique (SRT) in the late 1950s. In 1958, two Swiss alpinists, Juesi and Marti, teamed together, creating the first rope ascender, known as the Jumar. In 1968 Bruno Dressler asked Fernand Petzl, who worked as a metals machinist, to build a rope-ascending tool, today known as the Petzl Croll, that he had developed by adapting the Jumar to vertical caving. Pursuing these developments, Petzl started a caving equipment manufacturing company, named Petzl, in the 1970s. The development of the rappel rack and the evolution of mechanical ascension systems extended the practice and safety of vertical exploration to a wider range of cavers. Practice and equipment: Hard hats are worn to protect the head from bumps and falling rocks. The caver's primary light source is usually mounted on the helmet in order to keep the hands free. Electric LED lights are most common. Many cavers carry two or more sources of light – one as primary and the others as backup in case the first fails. More often than not, a second light will be mounted to the helmet for quick transition if the primary fails. Carbide lamp systems are an older form of illumination, inspired by miners' equipment, and are still used by some cavers, particularly on remote expeditions where electric charging facilities are not available. The type of clothes worn underground varies according to the environment of the cave being explored, and the local culture. In cold caves, the caver may wear a warm base layer that retains its insulating properties when wet, such as a fleece ("furry") suit or polypropylene underwear, and an oversuit of hard-wearing (e.g., cordura) or waterproof (e.g., PVC) material. Lighter clothing may be worn in warm caves, particularly if the cave is dry, and in tropical caves thin polypropylene clothing is used, to provide some abrasion protection while remaining as cool as possible. Wetsuits may be worn if the cave is particularly wet or involves stream passages. On the feet boots are worn – hiking-style boots in drier caves, or rubber boots (such as wellies) often with neoprene socks ("wetsocks") in wetter caves. Knee-pads (and sometimes elbow-pads) are popular for protecting joints during crawls. Depending on the nature of the cave, gloves are sometimes worn to protect the hands against abrasion or cold. In pristine areas and for restoration, clean oversuits and powder-free, non-latex surgical gloves are used to protect the cave itself from contaminants. Practice and equipment: Ropes are used for descending or ascending pitches (single rope technique or SRT) or for protection. Knots commonly used in caving are the figure-of-eight- (or figure-of-nine-) loop, bowline, alpine butterfly, and Italian hitch. Ropes are usually rigged using bolts, slings, and carabiners. In some cases cavers may choose to bring and use a flexible metal ladder.
Practice and equipment: In addition to the equipment already described, cavers frequently carry packs containing first-aid kits, emergency equipment, and food. Containers for securely transporting urine are also commonly carried. On longer trips, containers for securely transporting feces out of the cave are carried. During very long trips, it may be necessary to camp in the cave – some cavers have stayed underground for many days, or in particularly extreme cases, for weeks at a time. This is particularly the case when exploring or mapping very extended cave systems, where it would be impractical to retrace the route back to the surface regularly. Such long trips necessitate the cavers carrying provisions, sleeping and cooking equipment. Safety: Caves can be dangerous places; hypothermia, falling, flooding, falling rocks and physical exhaustion are the main risks. Rescuing people from underground is difficult and time-consuming, and requires special skills, training, and equipment. Full-scale cave rescues often involve the efforts of dozens of rescue workers (often other long-time cavers who have participated in specialized courses, as normal rescue staff are not sufficiently experienced in cave environments), who may themselves be put in jeopardy in effecting the rescue. This said, caving is not necessarily a high-risk sport (especially if it does not involve difficult climbs or diving). As in all physical sports, knowing one's limitations is key. Safety: Caving in warmer climates carries the risk of contracting histoplasmosis, a fungal infection that is contracted from bird or bat droppings. It can cause pneumonia and can disseminate in the body to cause continued infections. In many parts of the world, leptospirosis ("a type of bacterial infection spread by animals" including rats) is a distinct threat due to the presence of rat urine in rainwater or precipitation that enters the cave's water system. Complications are uncommon, but can be serious. Safety: Safety risks while caving can be minimized by using a number of techniques: Checking that there is no danger of flooding during the expedition. Rainwater funneled underground can flood a cave very quickly, trapping people in cut-off passages and drowning them. In the UK, drowning accounts for almost half of all caving fatalities (see List of UK caving fatalities). Using teams of several cavers, preferably at least four. If an injury occurs, one caver stays with the injured person while the other two go out for help, providing assistance to each other on their way out. Notifying people outside the cave as to the intended return time. After an appropriate delay without a return, they will then organize a search party (usually made up of other cavers trained in cave rescues, as even professional emergency personnel are unlikely to have the skills to effect a rescue in difficult conditions). Use of helmet-mounted lights (hands-free) with extra batteries. American cavers recommend a minimum of three independent sources of light per person, but two lights is common practice among European cavers. Safety: Sturdy clothing and footwear, as well as a helmet, are necessary to reduce the impact of abrasions, falls, and falling objects. Synthetic fibers and woolens, which dry quickly, shed water, and are warm when wet, are vastly preferred to cotton materials, which retain water and increase the risk of hypothermia. It is also helpful to have several layers of clothing, which can be shed (and stored in the pack) or added as needed.
In watery cave passages, polypropylene thermal underwear or wetsuits may be required to avoid hypothermia. Safety: Cave passages look different from different directions. In long or complex caves, even experienced cavers can become lost. To reduce the risk of becoming lost, it is necessary to memorize the appearance of key navigational points in the cave as they are passed by the exploring party. Each member of a cave party shares responsibility for being able to remember the route out of the cave. In some caves it may be acceptable to mark a small number of key junctions with small stacks or "cairns" of rocks, or to leave a non-permanent mark such as high-visibility flagging tape tied to a projection. Safety: Vertical caving uses ladders or single rope technique (SRT) to avoid the need for climbing passages that are too difficult. SRT, however, is a complex skill that requires proper training before use underground, as well as well-maintained equipment. Some drops that are abseiled down may be as deep as several hundred meters (for example, Harwood Hole). Cave conservation: Many cave environments are very fragile. Many speleothems can be damaged by even the slightest touch, and some by impacts as slight as a breath. Research suggests that increased carbon dioxide levels can lead to "a higher equilibrium concentration of calcium within the drip waters feeding the speleothems, and hence causes dissolution of existing features." In 2008, researchers found evidence that respiration from cave visitors may generate elevated carbon dioxide concentrations in caves, leading to increased temperatures of up to 3 °C and dissolution of existing features. Pollution is also of concern. Since water that flows through a cave eventually comes out in streams and rivers, any pollution may ultimately end up in someone's drinking water, and can even seriously affect the surface environment. Even minor pollution, such as dropping organic material, can have a dramatic effect on the cave biota. Cave conservation: Cave-dwelling species are also very fragile; often a particular species found in a cave may live within that cave alone and be found nowhere else in the world, such as the Alabama cave shrimp. Cave-dwelling species are accustomed to a near-constant climate of temperature and humidity, and any disturbance can be disruptive to the species' life cycles. Though cave wildlife may not always be immediately visible, it is typically nonetheless present in most caves. Cave conservation: Bats are one such fragile species of cave-dwelling animal. Bats which hibernate are most vulnerable during the winter season, when no food supply exists on the surface to replenish the bat's store of energy should it be awakened from hibernation. Bats which migrate are most sensitive during the summer months, when they are raising their young. For these reasons, visiting caves inhabited by hibernating bats is discouraged during cold months, and visiting caves inhabited by migratory bats is discouraged during the warmer months, when they are most sensitive and vulnerable.
Due to an affliction affecting bats in the northeastern US known as white nose syndrome (WNS), the US Fish & Wildlife Service has called for a moratorium, effective March 26, 2009, on caving activity in states known to have hibernacula affected by WNS (MD, NY, VT, NH, MA, CT, NJ, PA, VA, and WV), as well as adjoining states. Some cave passages may be marked with flagging tape or other indicators to show biologically, aesthetically, or archaeologically sensitive areas. Marked paths may show ways around notably fragile areas such as a pristine floor of sand or silt which may be thousands of years old, dating from the last time water flowed through the cave. Such deposits may easily be spoiled forever by a single misplaced step. Active formations such as flowstone can be similarly marred with a muddy footprint or handprint, and ancient human artifacts, such as fiber products, may even crumble to dust under all but the most gentle touch. Cave conservation: In 1988, concerned that cave resources were becoming increasingly damaged through unregulated use, Congress enacted the Federal Cave Resources Protection Act, giving land management agencies in the United States expanded authority to manage cave conservation on public land. Caving organizations: Cavers in many countries have created organizations for the administration and oversight of caving activities within their nations. The oldest of these is the French Federation of Speleology (originally Société de spéléologie), founded by Édouard-Alfred Martel in 1895, which produced the first periodical journal in speleology, Spelunca. The first university-based speleological institute in the world was founded in 1920 in Cluj-Napoca, Romania, by Emil Racovita, a Romanian biologist, zoologist, speleologist and explorer of Antarctica. Caving organizations: The British Speleological Association was established in 1935, and the National Speleological Society in the US was founded in 1941 (originally formed as the Speleological Society of the District of Columbia on May 6, 1939). An international speleological congress was proposed at a meeting in Valence-sur-Rhone, France, in 1949 and first held in 1953 in Paris. The International Union of Speleology (UIS) was founded in 1965.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bottom breather** Bottom breather: A bottom breather is a front-engine automobile that takes in air from below the front fascia (nose) rather than through a conventional grille at the front of the vehicle. This styling can provide a more aerodynamic front end, or the appearance of better aerodynamics, or the look of a rear-engined sports car such as the Porsche 911, which also lacks a front grille. Unlike the 911, however, most of the vehicles that use this approach have their engines installed in the front, with a water-cooled radiator. Airflow from below the bumper is directed into the radiator to aid engine cooling, which makes the approach unusual: the traditional front grille evolved from the externally mounted radiators of the earliest automobiles. Some of the best-known bottom-breathing cars are the Citroën DS, Chevrolet Corvette C4, Studebaker Avanti, Honda Logo, Infiniti Q45, Mazda MX-5, Mazda MX-6, Volkswagen Passat, and the Volvo 480. Bottom breather: Some vehicles disguise the lower air intake with a non-functional grille above the bumper, as on the Dodge Avenger coupe and the Chrysler Sebring of the same generation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Noncommutative residue** Noncommutative residue: In mathematics, noncommutative residue, defined independently by M. Wodzicki (1984) and Guillemin (1985), is a certain trace on the algebra of pseudodifferential operators on a compact differentiable manifold that is expressed via a local density. In the case of the circle, the noncommutative residue had been studied earlier by M. Adler (1978) and Y. Manin (1978) in the context of one-dimensional integrable systems.
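For orientation, the residue admits a compact local formula; the following is a standard textbook formulation, supplied here as a sketch for context rather than quoted from the text above (normalization conventions vary by author). For a classical pseudodifferential operator A of order m on a compact n-dimensional manifold M, with symbol expansion a ∼ Σ_{j≥0} a_{m−j}(x, ξ), the noncommutative (Wodzicki) residue is

\[
\operatorname{res}(A) \;=\; \frac{1}{(2\pi)^n} \int_{S^*M} \operatorname{tr}\big(a_{-n}(x,\xi)\big)\, \sigma(\xi)\, \mathrm{d}x,
\]

where S^*M is the unit cosphere bundle, a_{-n} is the symbol term homogeneous of degree −n, and σ(ξ) is the induced volume form on the fibres. The trace property referred to above then reads res(AB) = res(BA), i.e. the residue vanishes on commutators of pseudodifferential operators.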
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HIST1H2AA** HIST1H2AA: Histone H2A type 1-A is a protein that in humans is encoded by the HIST1H2AA gene. Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. Nucleosomes consist of approximately 146 bp of DNA wrapped around a histone octamer composed of pairs of each of the four core histones (H2A, H2B, H3, and H4). The chromatin fiber is further compacted through the interaction of a linker histone, H1, with the DNA between the nucleosomes to form higher order chromatin structures. This gene is intronless and encodes a member of the histone H2A family. Transcripts from this gene contain a palindromic termination element.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Guidepost cells** Guidepost cells: Guidepost cells are cells which assist in the subcellular organization of both neural axon growth and migration. They act as intermediate targets for long and complex axonal growths by creating short and easy pathways, leading axon growth cones towards their target area. Identification: In 1976, guidepost cells were identified in both grasshopper embryos and Drosophila. Single guidepost cells, acting like "stepping-stones" for the extension of Ti1 pioneer growth cones to the CNS, were first discovered in the grasshopper limb bud. However, guidepost cells can also act as a group. There is a band of epithelial cells, called floor-plate cells, present in the neural tube of Drosophila, available for the binding of growing axons. These studies have defined guidepost cells as non-continuous landmarks located on the future paths of growing axons, providing high-affinity substrates to bind to for navigation. Guidepost cells are typically immature glial cells and neurons that have yet to grow an axon. They can be labeled either as short-range cells or as axon-dependent cells. To qualify as a guidepost cell, neurons hypothesized to be influenced by a candidate cell are examined during development. Neural axon growth and migration are first examined in the presence of the candidate cell. Then, the candidate cell is destroyed, and axon growth and migration are examined in its absence. If the neuronal axon extends along its path in the presence of the candidate cell and loses its path in the candidate's absence, the candidate qualifies as a guidepost cell. Ti1 pioneer neurons are a common example of neurons that require guidepost cells to reach their final destination. They have to come in contact with three guidepost neurons to reach the CNS: Fe1, Tr1, and Cx1. When Cx1 is destroyed, the Ti1 pioneer is unable to reach the CNS. Roles in formation: Lateral olfactory tract The lateral olfactory tract (LOT) is the first system where guidepost cells were proposed to play a role in axonal guidance. In this migrational pathway, olfactory neurons move from the nasal cavities to the mitral cells in the olfactory bulb. The mitral primary axons extend and form a bundle of axons, called the LOT, towards higher olfactory centers: the anterior olfactory nucleus, olfactory tubercle, piriform cortex, entorhinal cortex, and cortical nuclei of the amygdala. "Lot cells", the first neurons to appear in the telencephalon, are considered to be guideposts because they have cellular substrates to attract LOT axons. To test their role in guidance, scientists ablated lot cells with a toxin called 6-OHDA. As a result, LOT axons were stalled in the areas where lot cells were destroyed, which confirmed lot cells as guidepost cells. Roles in formation: Entorhinal projections Cajal-Retzius cells are the first cells to cover the cortical sheet and hippocampal primordium, and regulate cortical lamination via Reelin. In order to make connections with GABAergic neurons in different regions of the hippocampus (stratum oriens, stratum radiatum, and inner molecular layer), pioneer entorhinal neurons make synaptic contacts with Cajal-Retzius cells. To test their role in guidance, scientists (Del Rio and colleagues) ablated Cajal-Retzius cells with 6-OHDA. As a result, entorhinal axons did not grow in the hippocampus, confirming Cajal-Retzius cells as guidepost cells.
Roles in formation: Thalamocortical connections Perireticular cells (or internal capsule cells) are neuronal guidepost cells located along the path of the developing internal capsule. They provide a scaffold for corticothalamic and thalamocortical axons (TCAs) to send messages to the thalamus. Several transcription factors are associated with perireticular cells: Mash1, Lhx2, and Emx2. When guidepost cells carry knockout mutations of these factors, the guidance of TCAs is defective. Corridor cells are another set of guidepost cells present for TCA guidance. These GABAergic neurons migrate to form a "corridor" between the proliferative zones of the medial ganglionic eminence and the globus pallidus. Corridor cells permit TCA growth through MGE-derived regions. However, the Neuregulin 1 signaling pathway needs to be activated, with the expression of ErbB4 receptors on the surface of TCAs, for the connection between corridor cells and TCAs to occur. Roles in formation: Corpus callosum There are subpopulations of glial cells that provide guidance cues for axonal growth. The first set of cells, called the "midline glial zipper", regulates the midline fusion and guidance of pioneer axons to the septum towards the contralateral hemisphere. The "glial sling" is a second set, located at the corticoseptal boundary, which provides cellular substrates for callosal axon migration across the dorsal midline. The "glial wedge" is made up of radial fibers that secrete repellent cues to prevent axons from entering the septum, positioning them towards the corpus callosum. The last set of glial cells, located in the indusium griseum, controls the positioning of pioneer cingulate neurons in the corpus callosum region.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Executable space protection** Executable space protection: In computer security, executable-space protection marks memory regions as non-executable, such that an attempt to execute machine code in these regions will cause an exception. It makes use of hardware features such as the NX bit (no-execute bit), or in some cases software emulation of those features. However, technologies that emulate or supply an NX bit will usually impose a measurable overhead, while using a hardware-supplied NX bit imposes no measurable overhead. Executable space protection: The Burroughs 5000 offered hardware support for executable-space protection on its introduction in 1961; that capability remained in its successors until at least 2006. In its implementation of tagged architecture, each word of memory had an associated, hidden tag bit designating it code or data. Thus user programs cannot write or even read a program word, and data words cannot be executed. Executable space protection: If an operating system can mark some or all writable regions of memory as non-executable, it may be able to prevent the stack and heap memory areas from being executable. This helps to prevent certain buffer overflow exploits from succeeding, particularly those that inject and execute code, such as the Sasser and Blaster worms. These attacks rely on some part of memory, usually the stack, being both writable and executable; if it is not, the attack fails. OS implementations: Many operating systems implement or have an available executable space protection policy. Here is a list of such systems in alphabetical order, each with technologies ordered from newest to oldest. For some technologies, there is a summary which gives the major features each technology supports. The summary is structured as below. OS implementations: Hardware Supported Processors: (Comma-separated list of CPU architectures) Emulation: (No) or (Architecture Independent) or (Comma-separated list of CPU architectures) Other Supported: (None) or (Comma-separated list of CPU architectures) Standard Distribution: (No) or (Yes) or (Comma-separated list of distributions or versions which support the technology) Release Date: (Date of first release) A technology supplying Architecture Independent emulation will be functional on all processors that aren't hardware-supported. The "Other Supported" line is for processors which allow some grey-area method, where an explicit NX bit doesn't exist yet the hardware allows one to be emulated in some way. OS implementations: Android In Android 2.3 and later, architectures which support it have non-executable pages by default, including non-executable stack and heap. FreeBSD Initial support for the NX bit, on x86-64 and IA-32 processors that support it, first appeared in FreeBSD-CURRENT on June 8, 2004. It has been in FreeBSD releases since the 5.3 release. OS implementations: Linux The Linux kernel supports the NX bit on x86-64 and IA-32 processors that support it, such as modern 64-bit processors made by AMD, Intel, Transmeta and VIA. The support for this feature in the 64-bit mode on x86-64 CPUs was added in 2004 by Andi Kleen, and later the same year, Ingo Molnár added support for it in 32-bit mode on 64-bit CPUs.
These features have been part of the Linux kernel mainline since the release of kernel version 2.6.8 in August 2004. The availability of the NX bit on 32-bit x86 kernels, which may run on both 32-bit x86 CPUs and 64-bit IA-32-compatible CPUs, is significant because a 32-bit x86 kernel would not normally expect the NX bit that an AMD64 or IA-64 supplies; the NX enabler patch assures that these kernels will attempt to use the NX bit if present. OS implementations: Some desktop Linux distributions, such as Fedora, Ubuntu and openSUSE, do not enable the HIGHMEM64G option in their default kernels, which is required to gain access to the NX bit in 32-bit mode, because the PAE mode that is required to use the NX bit causes boot failures on pre-Pentium Pro (including Pentium MMX) and Celeron M and Pentium M processors without NX support. Other processors that do not support PAE are AMD K6 and earlier, Transmeta Crusoe, VIA C3 and earlier, and Geode GX and LX. VMware Workstation versions older than 4.0, Parallels Workstation versions older than 4.0, and Microsoft Virtual PC and Virtual Server do not support PAE on the guest. Fedora Core 6 and Ubuntu 9.10 and later provide a kernel-PAE package which supports PAE and NX. OS implementations: NX memory protection has always been available in Ubuntu for any systems that had the hardware to support it and ran the 64-bit kernel or the 32-bit server kernel. The 32-bit PAE desktop kernel (linux-image-generic-pae) in Ubuntu 9.10 and later also provides the PAE mode needed for hardware with the NX CPU feature. For systems that lack NX hardware, the 32-bit kernels now provide an approximation of the NX CPU feature via software emulation that can help block many exploits an attacker might run from stack or heap memory. OS implementations: Non-execute functionality has also been present for many releases on other non-x86 processors supporting this functionality. OS implementations: Exec Shield Red Hat kernel developer Ingo Molnár released a Linux kernel patch named Exec Shield to approximate and utilize NX functionality on 32-bit x86 CPUs. The Exec Shield patch was released to the Linux kernel mailing list on May 2, 2003, but was rejected for merging with the base kernel because it involved some intrusive changes to core code in order to handle the complex parts of the emulation. Exec Shield's legacy CPU support approximates NX emulation by tracking the upper code segment limit. This imposes only a few cycles of overhead during context switches, which is for all intents and purposes immeasurable. For legacy CPUs without an NX bit, Exec Shield fails to protect pages below the code segment limit; an mprotect() call to mark higher memory, such as the stack, executable will mark all memory below that limit executable as well. Thus, in these situations, Exec Shield's scheme fails. This is the cost of Exec Shield's low overhead. Exec Shield checks for two ELF header markings, which dictate whether the stack or heap needs to be executable. These are called PT_GNU_STACK and PT_GNU_HEAP, respectively. Exec Shield allows these controls to be set for both binary executables and for libraries; if an executable loads a library requiring a given restriction relaxed, the executable will inherit that marking and have that restriction relaxed.
OS implementations: Hardware Supported Processors: All that Linux supports NX on Emulation: NX approximation using the code segment limit on IA-32 (x86) and compatible Other Supported: None Standard Distribution: Fedora Core and Red Hat Enterprise Linux Release Date: May 2, 2003 PaX The PaX NX technology can emulate NX functionality, or use a hardware NX bit. PaX works on x86 CPUs that do not have the NX bit, such as 32-bit x86. The Linux kernel still does not ship with PaX (as of May 2007); the patch must be merged manually. OS implementations: PaX provides two methods of NX bit emulation, called SEGMEXEC and PAGEEXEC. The SEGMEXEC method imposes a measurable but low overhead, typically less than 1%, which is a constant scalar incurred due to the virtual memory mirroring used for the separation between execution and data accesses. SEGMEXEC also has the effect of halving the task's virtual address space, allowing the task to access less memory than it normally could. This is not a problem until the task requires access to more than half the normal address space, which is rare. SEGMEXEC does not cause programs to use more system memory (i.e., RAM); it only restricts how much they can access. On 32-bit CPUs, this becomes 1.5 GB rather than 3 GB. OS implementations: As a speedup, PaX supplies a method similar to Exec Shield's approximation within PAGEEXEC; however, when higher memory is marked executable, this method loses its protections. In these cases, PaX falls back to the older, variable-overhead method used by PAGEEXEC to protect pages below the CS limit, which may become quite a high-overhead operation in certain memory access patterns. When the PAGEEXEC method is used on a CPU supplying a hardware NX bit, the hardware NX bit is used, and no significant overhead is incurred. OS implementations: PaX supplies mprotect() restrictions to prevent programs from marking memory in ways that produce memory useful for a potential exploit. This policy causes certain applications to cease to function, but it can be disabled for affected programs. OS implementations: PaX allows individual control over the following functions of the technology for each binary executable: PAGEEXEC SEGMEXEC mprotect() restrictions Trampoline emulation Randomized executable base Randomized mmap() base. PaX ignores both PT_GNU_STACK and PT_GNU_HEAP. In the past, PaX had a configuration option to honor these settings, but that option has been removed for security reasons, as it was deemed not useful. The same results of PT_GNU_STACK can normally be attained by disabling mprotect() restrictions, as the program will normally mprotect() the stack on load. This may not always be true; for situations where this fails, simply disabling both PAGEEXEC and SEGMEXEC will effectively remove all executable space restrictions, giving the task the same protections on its executable space as a non-PaX system. OS implementations: Hardware Supported Processors: Alpha, AMD64, IA-64, MIPS (32 and 64 bit), PA-RISC, PowerPC, SPARC Emulation: IA-32 (x86) Other Supported: PowerPC (32 and 64 bit), SPARC (32 and 64 bit) Standard Distribution: Alpine Linux Release Date: October 1, 2000 macOS macOS for Intel supports the NX bit on all CPUs supported by Apple (from Mac OS X 10.4.4 – the first Intel release – onwards). Mac OS X 10.4 only supported NX stack protection. In Mac OS X 10.5, all 64-bit executables have NX stack and heap; W^X protection. This includes x86-64 (Core 2 or later) and 64-bit PowerPC on the G5 Macs.
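To make the page-permission mechanics that these systems enforce concrete, here is a minimal C sketch for a POSIX system such as Linux. It is an illustration written for this context rather than code from any of the projects above: it maps a writable, non-executable page, copies in a six-byte x86-64 stub, and only calls the stub after flipping the page to read+execute with mprotect(). On an NX-enforcing kernel, jumping to the page before the mprotect() call would raise SIGSEGV.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42; ret */
        unsigned char stub[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        size_t len = 4096;

        /* Map one page writable but not executable (W^X discipline). */
        void *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(page, stub, sizeof stub);

        /* Executing now would fault on NX-enforcing kernels; switch the
           page to read+execute (dropping write) before calling it. */
        if (mprotect(page, len, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect"); return 1;
        }

        int (*fn)(void) = (int (*)(void))page;
        printf("stub returned %d\n", fn()); /* prints 42 */

        munmap(page, len);
        return 0;
    }

Under a strict policy such as PaX's mprotect() restrictions, even this write-then-execute transition can be refused, which is exactly the hardening (and the application breakage) described above.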
OS implementations: NetBSD As of NetBSD 2.0 and later (December 9, 2004), architectures which support it have non-executable stack and heap. Architectures that have per-page granularity consist of: alpha, amd64, hppa, i386 (with PAE), powerpc (ibm4xx), sh5, sparc (sun4m, sun4d), sparc64. Architectures that can only support these with region granularity are: i386 (without PAE), other powerpc (such as macppc). Other architectures do not benefit from non-executable stack or heap; NetBSD does not by default use any software emulation to offer these features on those architectures. OpenBSD A technology in the OpenBSD operating system, known as W^X, marks writable pages by default as non-executable on processors that support it. On 32-bit x86 processors, the code segment is set to include only part of the address space, to provide some level of executable space protection. OpenBSD 3.3 shipped May 1, 2003, and was the first to include W^X. Hardware Supported Processors: Alpha, AMD64, HPPA, SPARC Emulation: IA-32 (x86) Other Supported: None Standard Distribution: Yes Release Date: May 1, 2003 Solaris Solaris has supported globally disabling stack execution on SPARC processors since Solaris 2.6 (1997); in Solaris 9 (2002), support for disabling stack execution on a per-executable basis was added. Windows Starting with Windows XP Service Pack 2 (2004) and Windows Server 2003 Service Pack 1 (2005), the NX features were implemented for the first time on the x86 architecture. Executable space protection on Windows is called "Data Execution Prevention" (DEP). Under Windows XP or Server 2003, NX protection was used by default only on critical Windows services. If the x86 processor supported this feature in hardware, then the NX features were turned on automatically in Windows XP/Server 2003 by default. If the feature was not supported by the x86 processor, then no protection was given. OS implementations: Early implementations of DEP provided no address space layout randomization (ASLR), which allowed potential return-to-libc attacks that could have been feasibly used to disable DEP during an attack. The PaX documentation elaborates on why ASLR is necessary; a proof-of-concept was produced detailing a method by which DEP could be circumvented in the absence of ASLR. It may be possible to develop a successful attack if the address of prepared data such as corrupted images or MP3s can be known by the attacker. OS implementations: Microsoft added ASLR functionality in Windows Vista and Windows Server 2008. On this platform, DEP is implemented through the automatic use of the PAE kernel in 32-bit Windows and the native support on 64-bit kernels. Windows Vista DEP works by marking certain parts of memory as being intended to hold only data, which the NX- or XD-enabled processor then understands as non-executable. In Windows, from Vista onward, whether DEP is enabled or disabled for a particular process can be viewed on the Processes/Details tab in the Windows Task Manager. OS implementations: Windows implements software DEP (without the use of the NX bit) through Microsoft's "Safe Structured Exception Handling" (SafeSEH). For properly compiled applications, SafeSEH checks that, when an exception is raised during program execution, the exception's handler is one defined by the application as it was originally compiled.
The effect of this protection is that an attacker is not able to install their own exception handler, stored in a data page, through unchecked program input. When NX is supported, it is enabled by default. Windows allows programs to control which pages disallow execution through its API as well as through the section headers in a PE file. In the API, runtime access to the NX bit is exposed through the Win32 API calls VirtualAlloc[Ex] and VirtualProtect[Ex]. Each page may be individually flagged as executable or non-executable. Despite the lack of previous x86 hardware support, both executable and non-executable page settings have been provided since the beginning. On pre-NX CPUs, the presence of the 'executable' attribute has no effect. It was documented as if it did function, and, as a result, most programmers used it properly. In the PE file format, each section can specify its executability. The execution flag has existed since the beginning of the format, and standard linkers have always used this flag correctly, even long before the NX bit. Because of this, Windows is able to enforce the NX bit on old programs. Assuming the programmer complied with "best practices", applications should work correctly now that NX is actually enforced. Only in a few cases have there been problems; Microsoft's own .NET Runtime had problems with the NX bit and was updated. OS implementations: Hardware Supported Processors: x86-64 (AMD64 and Intel 64), IA-64, Efficeon, Pentium M (later revisions), AMD Sempron (later revisions) Emulation: Yes Other Supported: None Standard Distribution: Post Windows XP Release Date: August 6, 2004 Xbox In Microsoft's Xbox, although the CPU does not have the NX bit, newer versions of the XDK set the code segment limit to the beginning of the kernel's .data section (no code should be after this point in normal circumstances). Starting with version 51xx, this change was also implemented in the kernel of new Xboxes. This broke the techniques that old exploits used to become terminate-and-stay-resident programs. However, new exploits were quickly released supporting this new kernel version because the fundamental vulnerability in the Xbox kernel was unaffected. Limitations: Where code is written and executed at runtime (a JIT compiler is a prominent example), the compiler can potentially be used to produce exploit code (e.g., using JIT spray) that has been flagged for execution and therefore would not be trapped. Return-oriented programming can allow an attacker to execute arbitrary code even when executable space protection is enforced.
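For comparison with the POSIX sketch earlier, the per-page control that Windows exposes through VirtualAlloc and VirtualProtect, described above, can be illustrated the same way. This is a minimal hypothetical program, not a Microsoft sample; the six stub bytes are the same mov eax, 42; ret used before.

    #include <stdio.h>
    #include <string.h>
    #include <windows.h>

    int main(void) {
        /* x86 / x86-64 machine code for: mov eax, 42; ret */
        unsigned char stub[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        SIZE_T len = 4096;

        /* Commit one page as writable but non-executable. */
        void *page = VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE,
                                  PAGE_READWRITE);
        if (page == NULL) { fprintf(stderr, "VirtualAlloc failed\n"); return 1; }

        memcpy(page, stub, sizeof stub);

        /* With DEP active, calling the page now would raise an access
           violation; re-protect it as executable (and read-only) first. */
        DWORD oldProtect;
        if (!VirtualProtect(page, len, PAGE_EXECUTE_READ, &oldProtect)) {
            fprintf(stderr, "VirtualProtect failed\n"); return 1;
        }

        int (*fn)(void) = (int (*)(void))page;
        printf("stub returned %d\n", fn()); /* prints 42 */

        VirtualFree(page, 0, MEM_RELEASE);
        return 0;
    }

This is also, in miniature, what a JIT compiler does, which is why JIT-generated pages are the soft spot noted under Limitations.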
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acetarsol** Acetarsol: Acetarsol (or acetarsone) is an anti-infective drug. It was first discovered in 1921 at the Pasteur Institute by Ernest Fourneau, and sold under the brand name Stovarsol. It has been given in the form of suppositories. Acetarsol can be used to make arsthinol. It was cancelled and withdrawn from the market on August 12, 1997. Medical uses: Acetarsol has been used for the treatment of diseases such as syphilis, amoebiasis, yaws, trypanosomiasis and malaria. Acetarsol was used for the treatment of Trichomonas vaginalis and Candida albicans. In the oral form, acetarsol can be used for the treatment of intestinal amoebiasis. As a suppository, acetarsol was researched for the treatment of proctitis. Mechanism of Action: Although the mechanism of action is not fully known, acetarsol may bind to sulfhydryl groups of proteins in the parasite, creating lethal As-S bonds that kill the parasite. Chemistry and pharmacokinetics: Chemically, acetarsol is N-acetyl-4-hydroxy-m-arsanilic acid, a pentavalent arsenical compound with antiprotozoal and anthelmintic properties. The arsenic found in acetarsol is excreted mainly in urine. The level of arsenic in urine after acetarsol administration reaches close to the toxic range. Some reports indicate that arsenic can be retained, which can be physiologically dangerous. Toxicity: Some reports indicate that acetarsol can produce effects in the eyes such as optic neuritis and optic atrophy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Octagonal antiprism** Octagonal antiprism: In geometry, the octagonal antiprism is the 6th in an infinite set of antiprisms formed by an even-numbered sequence of triangle sides closed by two polygon caps. Antiprisms are similar to prisms, except that the bases are twisted relative to each other and the side faces are triangles rather than quadrilaterals. For a regular 8-sided base, one usually considers the case where its copy is twisted by an angle of 180°/n. Extra regularity is obtained when the line connecting the base centers is perpendicular to the base planes, making it a right antiprism. As faces, it has the two n-gonal bases and, connecting those bases, 2n isosceles triangles. If the faces are all regular, it is a semiregular polyhedron.
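One way to make the twist explicit is through vertex coordinates. The following parametrization is a sketch supplied for illustration (the twist angle follows the 180°/n convention above): the 2n vertices of a uniform n-gonal antiprism with unit circumradius can be placed at

\[
\left( \cos\frac{k\pi}{n},\; \sin\frac{k\pi}{n},\; (-1)^k h \right), \qquad k = 0, 1, \ldots, 2n-1,
\]

so the even-k vertices form the bottom n-gon and the odd-k vertices form the top n-gon, rotated by 180°/n (22.5° in the octagonal case, n = 8). Requiring the side triangles to be equilateral fixes the half-height h via

\[
h^2 = \frac{\cos(\pi/n) - \cos(2\pi/n)}{2},
\]

which for n = 8 gives a total height 2h ≈ 0.658 times the circumradius.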
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chalcone 4'-O-glucosyltransferase** Chalcone 4'-O-glucosyltransferase: Chalcone 4'-O-glucosyltransferase (EC 2.4.1.286, 4'CGT) is an enzyme with systematic name UDP-alpha-D-glucose:2',4,4',6'-tetrahydroxychalcone 4'-O-beta-D-glucosyltransferase. This enzyme catalyses the following chemical reactions:
(1) UDP-alpha-D-glucose + 2',4,4',6'-tetrahydroxychalcone ⇌ UDP + 2',4,4',6'-tetrahydroxychalcone 4'-O-beta-D-glucoside
(2) UDP-alpha-D-glucose + 2',3,4,4',6'-pentahydroxychalcone ⇌ UDP + 2',3,4,4',6'-pentahydroxychalcone 4'-O-beta-D-glucoside
This enzyme is isolated from the plant Antirrhinum majus (snapdragon).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rake (poker)** Rake (poker): Rake is the scaled commission fee taken by a cardroom operating a poker game. It is generally 2.5% to 10% of the pot in each poker hand, up to a predetermined maximum amount. There are also other, non-percentage ways for a casino to take the rake. Poker is a player-versus-player game, and the house does not wager against its players (unlike blackjack or roulette), so this fee is the principal mechanism to generate revenue. Rake (poker): It is primarily levied by an establishment that supplies the necessary services for the game to take place. In online poker it covers the various costs of operation such as support, software, and personnel. In traditional brick-and-mortar casinos it is also used to cover the costs involved with providing a dealer (though in many places tips provide the bulk of a dealer's income) for the game, support staff (from servers to supervisors), use of gaming equipment, and the physical building in which the game takes place. The rake in live games is generally higher than for online poker. Rake (poker): Some cardrooms will not take a percentage rake in any community card poker game like Texas hold 'em when a hand does not have a flop. This is called "no flop, no drop". To win when playing in poker games where the house takes a cut, a player must not only beat opponents, but also overcome the financial drain of the rake. Mechanism: There are several ways for the rake to be taken. Most rake is a fixed percentage of the pot, taken on a sliding scale, with a capped maximum amount that can be removed from the pot regardless of pot size. Less frequently, the rake is a fixed amount no matter what the size of the pot. Mechanism: Pot rake A percentage rake is taken directly from the pot. In a live casino, the dealer manually removes chips from the pot while the hand is being played and sets them aside to be dropped into a secure box after completion of the hand. When playing online, the rake is taken automatically by the game software. Some software shows the rake amount next to a graphical representation of the dealer and takes it incrementally between the rounds of betting, whereas other software programs wait until the entire hand is over and then take it from the pot total before giving the rest to the winner of the hand. This is the prevalent method of collecting rake in online poker. Mechanism: Dead drop The fee is placed on the dealer button each hand by the player in that position and taken in by the dealer before any cards are dealt. Time collection Time collection (also "timed rake" or "table charge") is a set fee collected (typically) every half-hour during the game. This form of rake is collected in one of two ways: Player time: A set amount is collected from each player. Time pot: A set amount is collected from the first pot over a certain amount. Time rakes are generally reserved for higher-limit games ($10–$20 and above). Fixed fees The fee per hand is a fixed rate and does not vary based on the size of the pot. Mechanism: Tournament fees The above examples are used in ring games, also known as cash games. The rake for participation in poker tournaments is collected as an entrance fee. This may be displayed by showing the tournament buy-in as $100+$20, with the $20 being the house fee or "vig". Other times the buy-in is shown as $100 together with the percentage taken for expenses. Mechanism: Subscription fees Some online cardrooms charge a monthly subscription fee and then do not rake individual pots or tournaments.
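As a worked illustration of the capped percentage pot rake described above (the 5% rate and $4.00 cap are hypothetical figures chosen for the example, not values from the text):

    #include <stdio.h>

    /* Capped percentage pot rake: a fixed fraction of the pot,
       but never more than the house maximum. */
    double pot_rake(double pot, double rate, double cap) {
        double raked = pot * rate;
        return raked < cap ? raked : cap;
    }

    int main(void) {
        /* Hypothetical house rules: 5% rake, $4.00 cap. */
        printf("$40 pot  -> $%.2f rake\n", pot_rake(40.0, 0.05, 4.0));  /* $2.00 */
        printf("$200 pot -> $%.2f rake\n", pot_rake(200.0, 0.05, 4.0)); /* $4.00, cap reached */
        return 0;
    }

The cap is why rake matters proportionally more in small-pot games: below the cap the house takes a constant fraction, while above it the effective percentage falls as pots grow.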
Mechanism: Rake free Some online poker websites have done away with the rake altogether. These "rake free" poker rooms generate revenue by increasing traffic to the company's other profitable businesses (such as a casino or sportsbook) or by charging monthly membership or deposit fees. Some sites are only completely rake-free for frequent players, while offering reduced rake instead for other customers. Due to the high fixed costs of operating a poker room, such as marketing, few online poker rooms have been successful in offering rake-free games, often going bankrupt or sustaining themselves by exploiting loopholes in offshore jurisdictions to refuse to honor players' cash withdrawals. However, some financially sound poker rooms have on occasion offered rake-free games to entice new sign-ups or to encourage players to try out new game formats. Rakeback: Rakeback is a player rewards method that began in 2004, whereby some online poker sites or their affiliate partners return part of the rake or tournament entries a player pays as an incentive for them to continue playing on that site. Rakeback in cash games can be calculated using two different methods: dealt and contributed. The dealt method awards the same amount of rakeback to each player dealt into a hand, and the contributed method rewards players based on their actual contribution to the pot. In poker tournaments, rakeback is deducted from the cardroom's entry fee. Rakeback is similar to comps in "brick and mortar" casinos. Rakeback: As online poker becomes more mainstream, online poker professionals have begun using rakeback as a means of increasing profits or cutting their losses. Depending upon the stakes the player is playing, how many tables they are playing at once, and the number of hours played daily, online poker pros can earn thousands of dollars in rakeback every month. This gave rise to so-called rakeback pros, players employing a low-intensity, marginally losing strategy at many tables simultaneously while offsetting their losses through rakeback. Rakeback: Not every online poker room offers rakeback. Some allow affiliates to offer rakeback as a direct percentage of rake and tournament entries paid back to the players. Other card rooms, such as PokerStars, PartyPoker, Ongame Network and the iPoker Network, forbid affiliates from giving rakeback. Instead they offer in-house loyalty programs that give cash and other rewards to players based upon how much they play. At such networks, rakeback deals are sometimes cut between an affiliate and a player without the poker operator's knowledge. Such deals, if discovered, tend to result in the expulsion of either offending party, and, sometimes, in penalties for the poker operator, if they are part of a bigger poker network. Rakeback: In brick-and-mortar rooms, the floorperson may offer a rake reduction or rake-free play to players willing to start a table shorthanded. Legality: In most legal jurisdictions, taking a rake from a poker table is explicitly illegal if the party taking the rake does not have the proper gaming licences and/or permits. The laws of many jurisdictions do not prohibit the playing of poker for money at a private dwelling, provided that no one takes a rake.
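The dealt and contributed methods described under "Rakeback" above differ only in how a hand's rake is attributed to each player before the rakeback percentage is applied. A small sketch with hypothetical numbers (a 30% rakeback rate and a $3 rake on a $100 pot):

    #include <stdio.h>

    /* Dealt method: the rake is attributed equally to everyone dealt in. */
    double dealt_share(double rake, int players_dealt_in) {
        return rake / players_dealt_in;
    }

    /* Contributed method: the rake is attributed in proportion to what
       each player actually put into the pot. */
    double contributed_share(double rake, double contribution, double pot) {
        return rake * (contribution / pot);
    }

    int main(void) {
        double rake = 3.0, pot = 100.0, rb = 0.30; /* hypothetical 30% rakeback */

        /* A player dealt in at a 6-handed table who contributed $50 of the pot: */
        printf("dealt:       $%.2f\n", rb * dealt_share(rake, 6));               /* $0.15 */
        printf("contributed: $%.2f\n", rb * contributed_share(rake, 50.0, pot)); /* $0.45 */
        return 0;
    }

Under the dealt method, tight players who fold early still accrue rakeback; under the contributed method, rakeback tracks how much money a player actually puts into pots.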
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Two-streams hypothesis** Two-streams hypothesis: The two-streams hypothesis is a model of the neural processing of vision as well as hearing. The hypothesis, given its initial characterisation in a paper by David Milner and Melvyn A. Goodale in 1992, argues that humans possess two distinct visual systems. Recently there seems to be evidence of two distinct auditory systems as well. As visual information exits the occipital lobe, and as sound leaves the phonological network, it follows two main pathways, or "streams". The ventral stream (also known as the "what pathway") leads to the temporal lobe, which is involved with object and visual identification and recognition. The dorsal stream (or "where pathway") leads to the parietal lobe, which is involved with processing the object's spatial location relative to the viewer and with speech repetition. History: Several researchers had proposed similar ideas previously. The authors themselves credit the inspiration of work on blindsight by Weiskrantz, and previous neuroscientific vision research. Schneider first proposed the existence of two visual systems for localisation and identification in 1969. Ingle described two independent visual systems in frogs in 1973. Ettlinger reviewed the existing neuropsychological evidence of a distinction in 1990. Moreover, Trevarthen had offered an account of two separate mechanisms of vision in monkeys back in 1968. In 1982, Ungerleider and Mishkin distinguished the dorsal and ventral streams, as processing spatial and visual features respectively, from their lesion studies of monkeys – proposing the original "where" vs "what" distinction. Though this framework was superseded by that of Milner & Goodale, it remains influential. One major source of information that has informed the model has been experimental work exploring the extant abilities of the visual agnosic patient D.F. The first and most influential report came from Goodale and colleagues in 1991, and work was still being published on her two decades later. This has been the focus of some criticism of the model due to the perceived over-reliance on findings from a single case. Two visual systems: Goodale and Milner amassed an array of anatomical, neuropsychological, electrophysiological, and behavioural evidence for their model. According to their data, the ventral 'perceptual' stream computes a detailed map of the world from visual input, which can then be used for cognitive operations, and the dorsal 'action' stream transforms incoming visual information to the requisite egocentric (head-centered) coordinate system for skilled motor planning. Two visual systems: The model also posits that visual perception encodes spatial properties of objects, such as size and location, relative to other objects in the visual field; in other words, it utilizes relative metrics and scene-based frames of reference. Visual action planning and coordination, on the other hand, uses absolute metrics determined via egocentric frames of reference, computing the actual properties of objects relative to the observer. Thus, grasping movements directed towards objects embedded in size-contrast-ambiguous scenes have been shown to escape the effects of these illusions, as different frames of reference and metrics are involved in the perception of the illusion versus the execution of the grasping act. Norman proposed a similar dual-process model of vision, and described eight main differences between the two systems, consistent with other two-system models.
Two visual systems: Dorsal stream The dorsal stream is proposed to be involved in the guidance of actions and in recognizing where objects are in space. The dorsal stream projects from the primary visual cortex to the posterior parietal cortex. It was initially termed the "where" pathway, since it was thought that the dorsal stream processes information regarding the spatial properties of an object. However, later research conducted on a famous neuropsychological patient, Patient D.F., revealed that the dorsal stream is responsible for processing the visual information needed to construct the representations of objects one wishes to manipulate. Those findings led the nickname of the dorsal stream to be updated to the "how" pathway. The dorsal stream is interconnected with the parallel ventral stream (the "what" stream), which runs downward from V1 into the temporal lobe. Two visual systems: General features The dorsal stream is involved in spatial awareness and guidance of actions (e.g., reaching). In this it has two distinct functional characteristics: it contains a detailed map of the visual field, and it is good at detecting and analyzing movements. The dorsal stream commences with purely visual functions in the occipital lobe before gradually transferring to spatial awareness at its termination in the parietal lobe. Two visual systems: The posterior parietal cortex is essential for "the perception and interpretation of spatial relationships, accurate body image, and the learning of tasks involving coordination of the body in space". It contains individually functioning lobules. The lateral intraparietal sulcus (LIP) contains neurons that produce enhanced activation when attention is moved onto a stimulus or the animal saccades towards a visual stimulus, and the ventral intraparietal sulcus (VIP) is where visual and somatosensory information are integrated. Two visual systems: Effects of damage or lesions Damage to the posterior parietal cortex causes a number of spatial disorders, including: Simultanagnosia: where the patient can only describe single objects, without the ability to perceive them as components of a set of details or objects in a context (as in a scenario, e.g. the forest for the trees). Optic ataxia: where the patient cannot use visuospatial information to guide arm movements. Two visual systems: Hemispatial neglect: where the patient is unaware of the contralesional half of space (that is, they are unaware of things in their left field of view and focus only on objects in the right field of view; or appear unaware of things in one field of view when they perceive them in the other). For example, a person with this disorder may draw a clock, and then label all twelve of the numbers on one side of the face and consider the drawing complete. Two visual systems: Akinetopsia: inability to perceive motion. Apraxia: inability to produce discretionary or volitional movement in the absence of muscular disorders. Ventral stream The ventral stream is associated with object recognition and form representation. Also described as the "what" stream, it has strong connections to the medial temporal lobe (which is associated with long-term memories), the limbic system (which controls emotions), and the dorsal stream (which deals with object locations and motion). Two visual systems: The ventral stream gets its main input from the parvocellular (as opposed to magnocellular) layer of the lateral geniculate nucleus of the thalamus.
These neurons project to V1 sublayers 4Cβ, 4A, 3B and 2/3a successively. From there, the ventral pathway goes through V2 and V4 to areas of the inferior temporal lobe: PIT (posterior inferotemporal), CIT (central inferotemporal), and AIT (anterior inferotemporal). Each visual area contains a full representation of visual space. That is, it contains neurons whose receptive fields together represent the entire visual field. Visual information enters the ventral stream through the primary visual cortex and travels through the rest of the areas in sequence. Two visual systems: Moving along the stream from V1 to AIT, receptive fields increase their size, latency, and the complexity of their tuning. For example, recent studies have shown that the V4 area is responsible for color perception in humans, and the V8 (VO1) area is responsible for shape perception, while the VO2 area, which is located between these regions and the parahippocampal cortex, integrates information about the color and shape of stimuli into a holistic image. All the areas in the ventral stream are influenced by extraretinal factors in addition to the nature of the stimulus in their receptive field. These factors include attention, working memory, and stimulus salience. Thus the ventral stream does not merely provide a description of the elements in the visual world; it also plays a crucial role in judging the significance of these elements. Two visual systems: Damage to the ventral stream can cause inability to recognize faces or interpret facial expression. Two auditory systems: Ventral stream Along with the visual ventral pathway being important for visual processing, there is also a ventral auditory pathway, emerging from the primary auditory cortex. In this pathway, phonemes are processed posteriorly to syllables and environmental sounds. The information then joins the visual ventral stream at the middle temporal gyrus and temporal pole. Here the auditory objects are converted into audio-visual concepts. Two auditory systems: Dorsal stream The function of the auditory dorsal pathway is to map auditory sensory representations onto articulatory motor representations. Hickok & Poeppel claim that the auditory dorsal pathway is necessary because, "learning to speak is essentially a motor learning task. The primary input to this is sensory, speech in particular. So, there must be a neural mechanism that both codes and maintains instances of speech sounds, and can use these sensory traces to guide the tuning of speech gestures so that the sounds are accurately reproduced." In contrast to the ventral stream's auditory processing, information enters from the primary auditory cortex into the posterior superior temporal gyrus and posterior superior temporal sulcus. From there the information moves to the beginning of the dorsal pathway, which is located at the boundary of the temporal and parietal lobes near the Sylvian fissure. The first step of the dorsal pathway begins in the sensorimotor interface, located in the left Sylvian parietal temporal (Spt) area (within the Sylvian fissure at the parietal-temporal boundary). The Spt is important for perceiving and reproducing sounds. This is evident from its role in acquiring new vocabulary, from the disruption of speech production by lesions and by altered auditory feedback, from the articulatory decline seen in late-onset deafness, and from the non-phonological residue of Wernicke's aphasia (deficient self-monitoring). It is also important for the basic neuronal mechanisms underlying phonological short-term memory.
Without the Spt, language acquisition is impaired. The information then moves on to the articulatory network, which is divided into two separate parts. Articulatory network 1, which processes motor syllable programs, is located in the left posterior inferior frontal gyrus, Brodmann's area 44 (pIFG-BA44). Articulatory network 2 is for motor phoneme programs and is located in the left M1-vBA6. Conduction aphasia affects a subject's ability to reproduce speech (typically by repetition), though it has no influence on the subject's ability to comprehend spoken language. This shows that conduction aphasia must reflect not an impairment of the auditory ventral pathway but instead one of the auditory dorsal pathway. Buchsbaum et al. found that conduction aphasia can be the result of damage, particularly lesions, to the Spt (Sylvian parietal temporal). This is shown by the Spt's involvement in acquiring new vocabulary, for while experiments have shown that most conduction aphasics can repeat high-frequency, simple words, their ability to repeat low-frequency, complex words is impaired. The Spt is responsible for connecting the motor and auditory systems by making auditory code accessible to the motor cortex. It appears that the motor cortex recreates high-frequency, simple words (like "cup") in order to access them more quickly and efficiently, while low-frequency, complex words (like "Sylvian parietal temporal") require more active, online regulation by the Spt. This explains why conduction aphasics have particular difficulty with low-frequency words, which require a more hands-on process for speech production. "Functionally, conduction aphasia has been characterized as a deficit in the ability to encode phonological information for production," namely because of a disruption in the motor-auditory interface. Conduction aphasia has been more specifically related to damage of the arcuate fasciculus, which is vital for both speech and language comprehension, as the arcuate fasciculus makes up the connection between Broca's and Wernicke's areas.
The dissociation between visual agnosia and optic ataxia has been challenged by several researchers as not being as strong as originally portrayed; Hesse and colleagues demonstrated dorsal stream impairments in patient DF, and Himmelbach and colleagues reassessed DF's abilities and applied more rigorous statistical analysis, demonstrating that the dissociation wasn't as strong as first thought. A 2009 review of the accumulated evidence for the model concluded that, whilst the spirit of the model has been vindicated, the independence of the two streams has been overemphasised. Criticisms: Goodale & Milner themselves have proposed the analogy of tele-assistance, one of the most efficient schemes devised for the remote control of robots working in hostile environments. In this account, the dorsal stream is viewed as a semi-autonomous function that operates under guidance of executive functions which themselves are informed by ventral stream processing. Criticisms: Thus the emerging perspective within neuropsychology and neurophysiology is that, whilst a two-systems framework was a necessary advance to stimulate study of the highly complex and differentiated functions of the two neural pathways, the reality is more likely to involve considerable interaction between vision-for-action and vision-for-perception. Robert McIntosh and Thomas Schenk summarize this position as follows: We should view the model not as a formal hypothesis, but as a set of heuristics to guide experiment and theory. The differing informational requirements of visual recognition and action guidance still offer a compelling explanation for the broad relative specializations of dorsal and ventral streams. However, to progress the field, we may need to abandon the idea that these streams work largely independently of one another, and to address the dynamic details of how the many visual brain areas arrange themselves from task to task into novel functional networks.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trofosfamide** Trofosfamide: Trofosfamide (INN) is a nitrogen mustard alkylating agent. It is sometimes abbreviated "TRO". It has been used in trials to study its effects on ependymoma, medulloblastoma, soft tissue sarcoma, supratentorial PNET, and recurrent brain tumors.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Datanomic** Datanomic: Datanomic was a software engineering company based in Cambridge, England. Datanomic: Founded in 2001, Datanomic was a UK-based software company developing data quality solutions. In 2006, Datanomic acquired Tranato and integrated Tranato's semantic profiling and parsing capabilities with Datanomic's data auditing and cleansing to produce a new data quality application. Launched in July 2007, dn:Director provided an end-to-end data quality tool kit encompassing data profiling, auditing, cleansing and matching through a single graphical user interface, all written in Java. Datanomic: Although dn:Director had capabilities to handle data quality issues in all kinds of data, Datanomic targeted its dn:Director application at business users with customer data quality challenges. It adopted a strategy of building "applications", consisting of pre-configured rules and reference data, on top of the data quality platform to address specific business issues. The most successful of these was its Watchlist Screening application (in support of compliance with anti-money laundering and anti-terrorism know-your-customer regulation), which was adopted by clients including Barclays Bank, Bank of America, MetLife and Vodafone. Datanomic: Datanomic was funded by private investors and the venture capital companies 3i and DN Capital. In July 2011, Datanomic was acquired by Oracle Corporation, which announced that it would combine Datanomic's technology with the Oracle product data quality capabilities it secured when it acquired Silver Creek Systems in 2010; the new combined suite is known as Oracle Enterprise Data Quality. Datanomic's compliance screening application has also been retained as Oracle Watchlist Screening.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Medium-density fibreboard** Medium-density fibreboard: Medium-density fibreboard (MDF) is an engineered wood product made by breaking down hardwood or softwood residuals into wood fibres, often in a defibrator, combining them with wax and a resin binder, and forming the mixture into panels by applying high temperature and pressure. MDF is generally denser than plywood. It is made up of separated fibres but can be used as a building material similar in application to plywood. It is stronger and denser than particle board. The name derives from the distinction in densities of fibreboard. Large-scale production of MDF began in the 1980s, in both North America and Europe. Physical properties: Over time, the term "MDF" has become a generic name for any dry-process fibre board. MDF is typically made up of 82% wood fibre, 9% urea-formaldehyde resin glue, 8% water, and 1% paraffin wax. The density is typically between 500 and 1,000 kg/m3 (31 and 62 lb/cu ft). Classifying boards as light-, standard-, or high-density on the basis of a density range alone is a misnomer and confusing. What matters is the density of the board evaluated in relation to the density of the fibre that goes into making the panel. A thick MDF panel at a density of 700–720 kg/m3 (44–45 lb/cu ft) may be considered high density in the case of softwood fibre panels, whereas a panel of the same density made of hardwood fibres is not regarded as such. The evolution of the various types of MDF has been driven by the differing needs of specific applications. Types: The different kinds of MDF (sometimes labeled by colour) are:
Ultralight MDF plate (ULDF)
Moisture-resistant board, typically green
Fire-retardant MDF, typically red or blue
Although similar manufacturing processes are used in making all types of fibreboard, MDF has a typical density of 600–800 kg/m3 or 0.022–0.029 lb/in3, in contrast to particle board (500–800 kg/m3) and to high-density fibreboard (600–1,450 kg/m3). Manufacturing: In Australia and New Zealand, the main species of tree used for MDF is plantation-grown radiata pine, but a variety of other products have also been used, including other woods, waste paper, and fibres. Where moisture resistance is desired, a proportion of eucalypt species may be used, making use of the endemic oil content of such trees. Manufacturing: Chip production: The trees are debarked after being cut. The bark can be sold for use in landscaping or used as biomass fuel in on-site furnaces. The debarked logs are sent to the MDF plant, where they go through the chipping process. A typical disk chipper contains four to 16 blades. Any resulting chips that are too large may be rechipped; undersized chips may be used as fuel. The chips are then washed and checked for defects. Chips may be stored in bulk, as a reserve for manufacturing. Manufacturing: Fibre production: Compared to other fibre boards, such as Masonite, MDF is characterised by the next part of the process: the fibres are processed as individual, but intact, fibres and vessels, manufactured through a dry process. The chips are compacted into small plugs using a screw feeder, heated for 30–120 seconds to soften the lignin in the wood, then fed into a defibrator. A typical defibrator consists of two counter-rotating discs with grooves in their faces. Chips are fed into the centre and move outwards between the discs under centrifugal force.
The decreasing size of the grooves gradually separates the fibres, aided by the softened lignin between them. From the defibrator, the pulp enters a blowline, a distinctive part of the MDF process. This is an expanding circular pipeline, initially 40 mm in diameter, increasing to 1500 mm. Wax is injected in the first stage, coating the fibres and being distributed evenly by their turbulent movement. A urea-formaldehyde resin is then injected as the main bonding agent. The wax improves moisture resistance and the resin initially helps reduce clumping. The material dries quickly in the final heated expansion chamber of the blowline and expands into a fine, fluffy and lightweight fibre. The glue and the other components (hardener, dye, urea, and so on) can be injected into the blowline even at high pressure (100 bar, 10 MPa, 1,500 psi), and the drying process continues inside a long pipe to the exit cyclones, which are connected to the heating chamber. This fibre may be used immediately, or stored. Manufacturing: Sheet forming: Dry fibre is drawn into the top of a "pendistor", which evenly distributes the fibre into a uniform mat below it, usually of 230–610 mm thickness. The mat is precompressed and either sent straight to a continuous hot press or cut into large sheets for a multiple-opening hot press. The hot press activates the bonding resin and sets the strength and density profile. The pressing cycle operates in stages, with the mat thickness being first compressed to around 1.5 times the finished board thickness, then compressed further in stages and held for a short period. This gives a board profile with zones of increased density, and thus increased mechanical strength, near the two faces of the board and a less dense core. After pressing, MDF is cooled in a star dryer or cooling carousel, trimmed, and sanded. In certain applications, boards are also laminated for extra strength. Manufacturing: The environmental impact of MDF has greatly improved over the years. Today, many MDF boards are made from a variety of materials. These include other woods, scrap, recycled paper, bamboo, carbon fibres and polymers, forest thinnings, and sawmill off-cuts. As manufacturers are being pressured to come up with greener products, they have started testing and using nontoxic binders. New raw materials are being introduced. Straw and bamboo are becoming popular fibres because they are fast-growing, renewable resources. Comparison with natural woods: MDF does not contain knots or rings, making it more uniform than natural woods during cutting and in service. However, MDF is not entirely isotropic, since the fibres are pressed tightly together through the sheet. Typical MDF has a hard, flat, smooth surface that makes it ideal for veneering, as no underlying grain is available to telegraph through the thin veneer as with plywood. A so-called "premium" MDF is available that features more uniform density throughout the thickness of the panel. Comparison with natural woods: MDF may be glued, doweled, or laminated. Typical fasteners are T-nuts and pan-head machine screws. Smooth-shank nails do not hold well, and neither do fine-pitch screws, especially in the edge. Special screws are available with a coarse thread pitch, but sheet-metal screws also work well. MDF is not susceptible to splitting when screws are installed in the face of the material, but, due to the alignment of the wood fibres, may split when screws are installed in the edge of the board without pilot holes.
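One practical consequence of MDF's relatively high density is sheet weight. The following is a minimal sketch, not a manufacturer's calculation: it simply multiplies density by volume, using a typical density from the range quoted under Physical properties and an illustrative 2440 mm × 1220 mm sheet size.

```python
def sheet_mass_kg(density_kg_m3: float, thickness_mm: float,
                  length_mm: float = 2440.0, width_mm: float = 1220.0) -> float:
    """Mass of a rectangular panel: density times volume."""
    volume_m3 = (thickness_mm / 1000) * (length_mm / 1000) * (width_mm / 1000)
    return density_kg_m3 * volume_m3

# An 18 mm sheet at a typical 700 kg/m^3 comes to roughly 37.5 kg,
# consistent with MDF being generally denser (and heavier) than plywood.
print(f"{sheet_mass_kg(700, 18):.1f} kg")
```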
Comparison with natural woods: Advantages:
Consistent in strength and size
Shapes well
Stable dimensions (less expansion and contraction than natural wood)
Takes paint well
Takes wood glue well
High screw pull-out strength in the face grain of the material
Flexible
Drawbacks:
Denser than plywood or chipboard
Low-grade MDF may swell and break when saturated with water
May warp or expand in humid environments if not sealed
May release formaldehyde, which is a known human carcinogen and may cause allergy, eye and lung irritation when cutting and sanding
Dulls blades more quickly than many woods: use of tungsten carbide-edged cutting tools is almost mandatory, as high-speed steel dulls too quickly
Comparison with natural woods: Though it does not have a grain in the plane of the board, it does have one into the board. Screwing into the edge of a board will generally cause it to split in a fashion similar to delaminating. Applications: MDF is often used in school projects because of its flexibility. Slatwall panels made from MDF are used in the shop fitting industry. MDF is primarily used for indoor applications due to its poor moisture resistance. It is available in raw form, or with a finely sanded surface, or with a decorative overlay. MDF is also usable for furniture such as cabinets, because of its strong surface. MDF's density makes it a useful material for the walls of pipe-organ chambers, allowing sound, particularly bass, to be reflected out of the chamber into the hall. Safety concerns: When MDF is cut, a large quantity of dust particulate is released into the air. Safety concerns: Formaldehyde resins are commonly used to bind together the fibres in MDF, and testing has consistently revealed that MDF products emit free formaldehyde and other volatile organic compounds at concentrations considered unsafe, for at least several months after manufacture. Urea-formaldehyde is continually released in small amounts from the edges and surface of MDF. When painting, coating all sides of the finished piece is a good practice to seal in the free formaldehyde. Wax and oil finishes may be used, but they are less effective at sealing in the free formaldehyde. Whether these constant emissions of formaldehyde reach harmful levels in real-world environments is not fully determined. The primary concern is for the industries using formaldehyde. As far back as 1987, the United States Environmental Protection Agency classified it as a "probable human carcinogen", and after more studies, the World Health Organization's International Agency for Research on Cancer (IARC), in 1995, also classified it as a "probable human carcinogen". Further information and evaluation of all known data led the IARC to reclassify formaldehyde as a "known human carcinogen" associated with nasal sinus cancer and nasopharyngeal cancer, and possibly with leukaemia, in June 2004. According to International Composite Board Emission Standards, three European formaldehyde classes are used, E0, E1, and E2, based on the measurement of formaldehyde emission levels. For instance, E0 is classified as having less than 3 mg of formaldehyde out of every 100 g of the glue used in particleboard and plywood fabrication. E1 and E2 are classified as having 9 and 30 mg of formaldehyde per 100 g of glue, respectively. All around the world, various certification and labeling schemes exist for such products, which can be specific to formaldehyde release, such as that of the California Air Resources Board.
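Given the class figures just quoted, assigning glue to an emission class amounts to a simple threshold lookup. The sketch below is purely illustrative, treating the quoted figures (3, 9, and 30 mg per 100 g of glue) as upper bounds for E0, E1, and E2; it is not an implementation of any official test standard.

```python
def emission_class(mg_per_100g_glue: float) -> str:
    """Classify formaldehyde emissions using the thresholds quoted above (assumed as upper bounds)."""
    if mg_per_100g_glue < 3:
        return "E0"
    if mg_per_100g_glue <= 9:
        return "E1"
    if mg_per_100g_glue <= 30:
        return "E2"
    return "unclassified"

print(emission_class(2.5))   # E0
print(emission_class(12.0))  # E2
```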
Veneered MDF: Veneered MDF provides many of the advantages of MDF with a decorative wood veneer surface layer. In modern construction, spurred by the high costs of hardwoods, manufacturers have been adopting this approach to achieve a high-quality finishing wrap over a standard MDF board. One common type uses oak veneer. Making veneered MDF is a complex procedure, which involves taking an extremely thin slice of hardwood (about 1–2 mm thick) and then wrapping it around the profiled MDF board using high-pressure and stretching methods. This is only possible with very simple profiles; otherwise, when the thin wood layer dries, it breaks at bends and angles.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aperture (antenna)** Aperture (antenna): In electromagnetics and antenna theory, the aperture of an antenna is defined as "A surface, near or on an antenna, on which it is convenient to make assumptions regarding the field values for the purpose of computing fields at external points. The aperture is often taken as that portion of a plane surface near the antenna, perpendicular to the direction of maximum radiation, through which the major part of the radiation passes." Effective area: The effective area of an antenna is defined as "In a given direction, the ratio of the available power at the terminals of a receiving antenna to the power flux density of a plane wave incident on the antenna from that direction, the wave being polarization matched to the antenna." Of particular note in this definition is that both effective area and power flux density are functions of the incident angle of a plane wave. Assume a plane wave from a particular direction $(\theta, \phi)$, the azimuth and elevation angles relative to the array normal, has a power flux density $\|\vec{S}\|$; this is the amount of power passing through a unit area of one square metre normal to the direction of the plane wave. Effective area: By definition, if an antenna delivers $P_O$ watts to the transmission line connected to its output terminals when irradiated by a uniform field of power density $\|\vec{S}(\theta,\phi)\|$ watts per square metre, the antenna's effective area $A_e$ for the direction of that plane wave is given by $A_e(\theta,\phi) = \frac{P_O}{\|\vec{S}(\theta,\phi)\|}$. Effective area: The power $P_O$ accepted by the antenna (the power at the antenna terminals) is less than the power $P_R$ received by the antenna by the radiation efficiency $\eta$ of the antenna. $P_R$ is equal to the power density of the electromagnetic energy $\|\vec{S}(\theta,\phi)\| = |\vec{S} \cdot \hat{a}|$, where $\hat{a}$ is the unit vector normal to the array aperture, multiplied by the physical aperture area $A$. The incoming radiation is assumed to have the same polarization as the antenna. Therefore, $P_R = \|\vec{S}\|\,A\cos\theta\cos\phi$, and $A_e(\theta,\phi) = \eta\,A\cos\theta\cos\phi$. Effective area: The effective area of an antenna or aperture is based upon a receiving antenna. However, due to reciprocity, an antenna's directivity in receiving and transmitting are identical, so the power transmitted by an antenna in different directions (the radiation pattern) is also proportional to the effective area $A_e$. When no direction is specified, $A_e$ is understood to refer to its maximal value. Effective length: Most antenna designs are not defined by a physical area but consist of wires or thin rods; then the effective aperture bears no clear relation to the size or area of the antenna. An alternate measure of antenna response that has a greater relationship to the physical length of such antennas is the effective length $\ell_\text{eff}$, measured in metres, which is defined for a receiving antenna as $\ell_\text{eff} = V_0 / E_s$, where $V_0$ is the open-circuit voltage appearing across the antenna's terminals and $E_s$ is the electric field strength of the radio signal, in volts per metre, at the antenna. The longer the effective length, the greater the voltage appearing at its terminals. However, the actual power implied by that voltage depends on the antenna's feedpoint impedance, so this cannot be directly related to antenna gain, which is a measure of received power (but does not directly specify voltage or current). For instance, a half-wave dipole has a much longer effective length than a short dipole.
However, the effective area of the short dipole is almost as great as it is for the half-wave antenna, since (ideally), given an ideal impedance-matching network, it can receive almost as much power from that wave. Note that for a given antenna feedpoint impedance, an antenna's gain or effective area $A_e$ increases according to the square of $\ell_\text{eff}$, so that the effective length for an antenna relative to different wave directions follows the square root of the gain in those directions. But since changing the physical size of an antenna inevitably changes the impedance (often by a great factor), the effective length is not by itself a useful figure of merit for describing an antenna's peak directivity and is more of theoretical importance. Aperture efficiency: In general, the aperture of an antenna cannot be directly inferred from its physical size. However, so-called aperture antennas, such as parabolic dishes and horn antennas, have a large (relative to the wavelength) physical area $A_\text{phys}$ which is opaque to such radiation, essentially casting a shadow from a plane wave and thus removing an amount of power $A_\text{phys}\,S$ from the original beam. That power removed from the plane wave can be actually received by the antenna (converted into electrical power), reflected or otherwise scattered, or absorbed (converted to heat). In this case the effective aperture $A_e$ is always less than (or equal to) the area of the antenna's physical aperture $A_\text{phys}$, as it accounts only for the portion of that wave actually received as electrical power. An aperture antenna's aperture efficiency $e_a$ is defined as the ratio of these two areas: $e_a = \frac{A_e}{A_\text{phys}}$. Aperture efficiency: The aperture efficiency is a dimensionless parameter between 0 and 1 that measures how close the antenna comes to using all the radio wave power intersecting its physical aperture. If the aperture efficiency were 100%, then all the wave's power falling on its physical aperture would be converted to electrical power delivered to the load attached to its output terminals, so these two areas would be equal: $A_e = A_\text{phys}$. But due to nonuniform illumination by a parabolic dish's feed, as well as other scattering or loss mechanisms, this is not achieved in practice. Since a parabolic antenna's cost and wind load increase with the physical aperture size, there may be a strong motivation to reduce these (while achieving a specified antenna gain) by maximizing the aperture efficiency. Aperture efficiencies of typical aperture antennas vary from 0.35 to well over 0.70. Aperture efficiency: Note that when one simply speaks of an antenna's "efficiency", what is most often meant is the radiation efficiency, a measure which applies to all antennas (not just aperture antennas) and accounts only for the gain reduction due to losses. Outside of aperture antennas, most antennas consist of thin wires or rods with a small physical cross-sectional area (generally much smaller than $A_e$) for which "aperture efficiency" is not even defined. Aperture and gain: The directivity of an antenna, its ability to direct radio waves preferentially in one direction or receive preferentially from a given direction, is expressed by a parameter $G$ called antenna gain. This is most commonly defined as the ratio of the power $P_o$ received by that antenna from waves in a given direction to the power $P_\text{iso}$ that would be received by an ideal isotropic antenna, that is, a hypothetical antenna that receives power equally well from all directions.
It can be seen that (for antennas at a given frequency) gain is also equal to the ratio of the apertures of these antennas: $G = \frac{P_o}{P_\text{iso}} = \frac{A_e}{A_\text{iso}}$. Aperture and gain: As shown below, the aperture of a lossless isotropic antenna, which by this definition has unity gain, is $A_\text{iso} = \frac{\lambda^2}{4\pi}$, where $\lambda$ is the wavelength of the radio waves. Thus $G = \frac{4\pi A_e}{\lambda^2}$. Aperture and gain: So antennas with large effective apertures are considered high-gain antennas (or beam antennas), which have relatively small angular beam widths. As receiving antennas, they are much more sensitive to radio waves coming from a preferred direction compared to waves coming from other directions (which would be considered interference). As transmitting antennas, most of their power is radiated in a particular direction at the expense of other directions. Although antenna gain and effective aperture are functions of direction, when no direction is specified, these are understood to refer to their maximal values, that is, in the direction(s) of the antenna's intended use (also referred to as the antenna's main lobe or boresight). Friis transmission formula: The fraction of the power delivered to a transmitting antenna that is received by a receiving antenna is proportional to the product of the apertures of both the antennas and inversely proportional to the squared values of the distance between the antennas and the wavelength. This is given by a form of the Friis transmission formula $\frac{P_r}{P_t} = \frac{A_r A_t}{d^2 \lambda^2}$, where $P_t$ is the power fed into the transmitting antenna input terminals, $P_r$ is the power available at the receiving antenna output terminals, $A_r$ is the effective area of the receiving antenna, $A_t$ is the effective area of the transmitting antenna, $d$ is the distance between antennas (the formula is only valid for $d$ large enough to ensure a plane wave front at the receive antenna, sufficiently approximated by $d \gtrsim 2a^2/\lambda$, where $a$ is the largest linear dimension of either of the antennas), and $\lambda$ is the wavelength of the radio frequency. These gain and link relationships are illustrated numerically in the sketch at the end of this article. Derivation of antenna aperture from thermodynamic considerations: The aperture of an isotropic antenna, the basis of the definition of gain above, can be derived on the basis of consistency with thermodynamics. Suppose that an ideal isotropic antenna A with a driving-point impedance of R sits within a closed system CA in thermodynamic equilibrium at temperature T. We connect the antenna terminals to a resistor also of resistance R inside a second closed system CR, also at temperature T. In between may be inserted an arbitrary lossless electronic filter $F_\nu$ passing only some frequency components. Derivation of antenna aperture from thermodynamic considerations: Each cavity is in thermal equilibrium and thus filled with black-body radiation due to temperature T. The resistor, due to that temperature, will generate Johnson–Nyquist noise with an open-circuit voltage whose mean-squared spectral density is given by $\overline{v_n^2} = 4 k_B T R\,\eta(f)$, where $\eta(f)$ is a quantum-mechanical factor applying to frequency $f$; at normal temperatures and electronic frequencies $\eta(f) = 1$, but in general it is given by $\eta(f) = \frac{hf/k_B T}{e^{hf/k_B T} - 1}$. Derivation of antenna aperture from thermodynamic considerations: The amount of power supplied by an electrical source of impedance R into a matched load (that is, something with an impedance of R, such as the antenna in CA) whose rms open-circuit voltage is $v_\text{rms}$ is given by $P = \frac{v_\text{rms}^2}{4R}$.
The mean-squared voltage $\overline{v_\text{rms}^2}$ can be found by integrating the above equation for the spectral density of mean-squared noise voltage over the frequencies passed by the filter $F_\nu$. For simplicity, let us just consider $F_\nu$ as a narrowband filter of bandwidth $B_1$ around central frequency $f_1$, in which case that integral simplifies as follows: $P_R = \frac{\int_0^\infty 4 k_B T R\,\eta(f)\,F_\nu(f)\,df}{4R} = \frac{4 k_B T R\,\eta(f_1)\,B_1}{4R} = k_B T\,\eta(f_1)\,B_1$. This power due to Johnson noise from the resistor is received by the antenna, which radiates it into the closed system CA. The same antenna, being bathed in black-body radiation of temperature T, receives a spectral radiance (power per unit area per unit frequency per unit solid angle) given by Planck's law: $P_{f,A,\Omega}(f) = \frac{2hf^3}{c^2}\,\frac{1}{e^{hf/k_B T} - 1} = \frac{2f^2}{c^2}\,k_B T\,\eta(f)$, using the notation $\eta(f)$ defined above. However, that radiation is unpolarized, whereas the antenna is only sensitive to one polarization, reducing it by a factor of 2. To find the total power from black-body radiation accepted by the antenna, we must integrate that quantity times the assumed cross-sectional area $A_\text{eff}$ of the antenna over all solid angles $\Omega$ and over all frequencies $f$: $P_A = \frac{1}{2}\int_0^\infty \int_{4\pi} \frac{2f^2}{c^2}\,k_B T\,\eta(f)\,A_\text{eff}(\Omega, f)\,F_\nu(f)\,d\Omega\,df$. Derivation of antenna aperture from thermodynamic considerations: Since we have assumed an isotropic radiator, $A_\text{eff}$ is independent of angle, so the integration over solid angles is trivial, introducing a factor of $4\pi$. And again we can take the simple case of a narrowband electronic filter function $F_\nu$ which only passes power of bandwidth $B_1$ around frequency $f_1$. The double integral then simplifies to $P_A = \frac{4\pi}{\lambda_1^2}\,k_B T\,\eta(f_1)\,A_\text{eff}\,B_1$, where $\lambda_1 = c/f_1$ is the free-space wavelength corresponding to the frequency $f_1$. Derivation of antenna aperture from thermodynamic considerations: Since each system is in thermodynamic equilibrium at the same temperature, we expect no net transfer of power between the cavities. Otherwise one cavity would heat up and the other would cool down in violation of the second law of thermodynamics. Therefore, the power flows in both directions must be equal: $P_A = P_R$. We can then solve for $A_\text{eff}$, the cross-sectional area intercepted by the isotropic antenna: $\frac{4\pi}{\lambda_1^2}\,k_B T\,\eta(f_1)\,A_\text{eff}\,B_1 = k_B T\,\eta(f_1)\,B_1$, giving $A_\text{eff} = \frac{\lambda_1^2}{4\pi}$. Derivation of antenna aperture from thermodynamic considerations: We thus find that for a hypothetical isotropic antenna, thermodynamics demands that the effective cross-section of the receiving antenna have an area of $\lambda^2/4\pi$. This result could be further generalized if we allow the integral over frequency to be more general. Then we find that $A_\text{eff}$ for the same antenna must vary with frequency according to that same formula, using $\lambda = c/f$. Moreover, the integral over solid angle can be generalized for an antenna that is not isotropic (that is, any real antenna). Since the angle of arriving electromagnetic radiation only enters into $A_\text{eff}$ in the above integral, we arrive at the simple but powerful result that the average of the effective cross-section $A_\text{eff}$ over all angles at wavelength $\lambda$ must also be given by $\overline{A_\text{eff}} = \frac{\lambda^2}{4\pi}$. Although the above is sufficient proof, we can note that the condition of the antenna's impedance being R, the same as the resistor, can also be relaxed. In principle, any antenna impedance (that isn't totally reactive) can be impedance-matched to the resistor R by inserting a suitable (lossless) matching network. Since that network is lossless, the powers $P_A$ and $P_R$ will still flow in opposite directions, even though the voltages and currents seen at the antenna's and resistor's terminals will differ.
The spectral density of the power flow in either direction will still be given by $k_B T\,\eta(f)$, and in fact this is the very thermal-noise power spectral density associated with one electromagnetic mode, be it in free space or transmitted electrically. Since there is only a single connection to the resistor, the resistor itself represents a single mode. And an antenna, also having a single electrical connection, couples to one mode of the electromagnetic field according to its average effective cross-section of $\lambda^2/(4\pi)$.
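To make the key relations concrete, here is a minimal numeric sketch in Python. It only exercises the formulas derived above ($A_\text{iso} = \lambda^2/4\pi$, $G = 4\pi A_e/\lambda^2$, and the Friis ratio $P_r/P_t = A_r A_t/(d^2\lambda^2)$); the 10 GHz frequency, one-metre dish, 0.6 aperture efficiency, and 1 km link distance are illustrative assumptions, not values from the text.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def isotropic_aperture(freq_hz: float) -> float:
    """Effective aperture of a lossless isotropic antenna: lambda^2 / (4*pi)."""
    lam = C / freq_hz
    return lam**2 / (4 * math.pi)

def gain_from_aperture(a_eff: float, freq_hz: float) -> float:
    """Linear gain G = 4*pi*A_e / lambda^2."""
    lam = C / freq_hz
    return 4 * math.pi * a_eff / lam**2

def friis_ratio(a_tx: float, a_rx: float, d_m: float, freq_hz: float) -> float:
    """Pr/Pt = (A_r * A_t) / (d^2 * lambda^2); valid only for d >~ 2a^2/lambda."""
    lam = C / freq_hz
    return (a_rx * a_tx) / (d_m**2 * lam**2)

if __name__ == "__main__":
    f = 10e9  # 10 GHz, so lambda = 3 cm (illustrative)
    # Hypothetical 1 m dish with aperture efficiency e_a = 0.6: A_e = e_a * A_phys.
    a_phys = math.pi * 0.5**2
    a_e = 0.6 * a_phys
    g = gain_from_aperture(a_e, f)
    print(f"A_iso = {isotropic_aperture(f) * 1e4:.3f} cm^2")          # ~0.716 cm^2
    print(f"Gain  = {g:.0f}x ({10 * math.log10(g):.1f} dBi)")         # ~38 dBi
    # Link budget between two such dishes 1 km apart (far field: 2a^2/lambda ~ 67 m):
    print(f"Pr/Pt = {friis_ratio(a_e, a_e, 1_000.0, f):.3e}")
```

The sketch also makes the reciprocity point tangible: the same effective area appears whether the dish is used on the transmitting or the receiving side of the Friis ratio.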
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PSE Mining and Oil Index** PSE Mining and Oil Index: The PSE Mining and Oil Index is the sub-index of the Philippine Stock Exchange for mining and oil companies. It is one of the six sub-indices of the PSE that provide a useful measurement of sectoral performance. The index is probably one of the few indices of the PSE that do not have companies currently listed in the PSE Composite Index. Lepanto Consolidated Mining Company and Philex Mining Corporation used to be listed in the PSEi until their removal in the 2010s. However, this index has been one of the best-performing indices on the PSE in recent years, owing to the revival of the Philippine mining industry. PSE Mining and Oil Index: The index is composed of the former PSE Mining Index and the PSE Oil Index. Both indices were merged in a reclassification of the PSE's indices on January 1, 2006. Companies: The following companies are listed on the PSE Mining and Oil Index:
Abra Mining and Industrial Corporation (ticker symbol: AR)
Atlas Consolidated Mining and Development Corporation (ticker symbol: AT)
Lepanto Consolidated Mining Company (ticker symbols: LC and LCB)
Manila Mining Corporation (ticker symbols: MA and MAB)
Philex Mining Corporation (ticker symbols: PX and PXB)
United Paragon Mining Corporation (ticker symbol: UPM)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital integration** Digital integration: Digital integration is the idea that data or information on any given electronic device can be read or manipulated by another device using a standard format. From the digital culture perspective, on the other hand, it is defined as an organizational drive to leverage the broad capabilities and vast efficiencies of digital technology and media in order to provide consumers relevance and value. It is also employed in digital governance and could refer to the inter-agency cooperation and intergovernmental collaboration across units at multiple levels of government. The phenomenon is considered a basic megatrend in the so-called knowledge civilization. Applications: Cell phone calendar to public digital calendar (online calendar): In this example, a user has a cell phone with a calendar, as well as a calendar on the Internet. Digital integration would allow the user to synchronize the two, and the following features could result: The user could plan events and have other users notified. If the public digital calendar is integrated with a blog, then the user could write about the event in it. Applications: Product development: Digital integration is now considered a part of product development. For instance, modeling systems aim for the digital integration of the product development chain. It is also entailed in the digital automation of product design and is credited with a 30 to 45 percent increase in productivity as part of the range of digital tools employed to augment project performance. Applications: Building services integration for energy management and building control: A home owner or commercial building manager could utilize digital integration products to connect intelligent services within a built environment. Applications: An intruder detection or access control system could be used in conjunction with light level sensors to turn lights on and off. Thus, when someone walks into a dark room, the lights turn on (if that person is allowed to be there), and when they leave, the lights turn off behind them, saving energy by preventing lights from being left on (a minimal control-logic sketch follows at the end of this article). Applications: The same techniques could be used to control HVAC (Heating, Ventilation and Air Conditioning) systems. Applications: Home owners and commercial building managers can use Web-based digital integration to control and manage services within their buildings via a web browser interface. The intelligent controllers in air conditioning units, for example, may be "Web enabled" using digital integration solutions and products. There is a growing market for these products. Many of the control systems used for security, lighting, HVAC and fire detection do not conform to any communications protocol standard, so often interface software is used to convert the different languages into a common standard for the building or wide area network. Applications: Some control systems are now being supplied with communications ports that conform to recognized standards such as BACnet, LonTalk or Ethernet; many more provide interfaces to and from their own specialised control networks. Projects/organizations working toward digital integration: World Wide Web Consortium (W3C) BACnet.org
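As a concrete illustration of the occupancy-and-daylight logic described above, here is a minimal sketch. It is purely hypothetical: the sensor fields, the 300 lux threshold, and the function names are invented for illustration and do not correspond to any specific BACnet or LonTalk product API.

```python
from dataclasses import dataclass

@dataclass
class RoomState:
    """Hypothetical sensor readings aggregated by an integration layer."""
    occupied: bool        # from the intruder-detection / access-control system
    access_granted: bool  # the occupant is allowed to be in the room
    ambient_lux: float    # from a light-level sensor

LUX_THRESHOLD = 300.0  # illustrative daylight cutoff

def lights_should_be_on(state: RoomState) -> bool:
    """Turn lights on only for an authorised occupant in a dark room.

    Mirrors the behaviour described above: lights follow the occupant
    and switch off when the room empties, saving energy.
    """
    return state.occupied and state.access_granted and state.ambient_lux < LUX_THRESHOLD

# An authorised person enters a dark room -> lights on.
print(lights_should_be_on(RoomState(True, True, 120.0)))   # True
# The room empties -> lights off.
print(lights_should_be_on(RoomState(False, True, 120.0)))  # False
```

The same pattern, with temperature in place of light level, would sketch the HVAC case mentioned above.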
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**C't** C't: c't – Magazin für Computertechnik (Magazine for Computer Technology) is a German computer magazine, published by the Heinz Heise publishing house. History and profile: The first issue of the magazine was the November/December 1983 edition. Originally a special section of the electronics magazine elrad, the magazine has been published monthly since December 1983 and biweekly since October 1997. A Dutch edition also exists, which is published monthly. In addition, since 2008 a Russian licensed-title version named c't – Журнал о компьютерной технике (Magazine for Computer Technology) has been published in Moscow. The magazine is the second most popular German-language computer magazine, with a sold circulation of about 315,000 (as of March 2011; printed circulation: 419,000). With 241,000 subscriptions, it is the computer magazine with the most subscribers in Europe. History and profile: c't covers both hardware and software; it focuses on software for the Microsoft Windows platform, but Linux and Apple Computer are also regularly featured. The magazine has a reputation of being very thorough, although critics claim that the magazine has been "dumbed down" in recent years to accommodate the mass market. History and profile: One of the numerous projects c't initiated is the WSUS Offline Update, a set of scripts to download Microsoft updates, combine them with an install script, and create a CD image. With Offline Update burned to a CD or DVD, a technician can update Windows 2000/XP/Vista and Microsoft Office 2003/2007 without an Internet connection. This is especially useful for people with no or slow Internet connections, or for avoiding exposing a vulnerable system to the Internet. History and profile: A sister magazine, iX, focuses on topics for IT professionals. Popularity: c't became widely known in 1995 when it rated the program SoftRAM "Placebo-Software" in a short test. When the German distributor of the program took legal action to forbid publishing this rating, c't followed up with an exhaustive test showing that the program had virtually no effect other than giving false information about system statistics. The subsequent media coverage forced SoftRAM out of not only the German market but the US market too.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Open-source-software movement** Open-source-software movement: The open-source-software movement is a movement that supports the use of open-source licenses for some or all software, as part of the broader notion of open collaboration. The open-source movement was started to spread the concept of open-source software. Open-source-software movement: Programmers who support the open-source-movement philosophy contribute to the open-source community by voluntarily writing and exchanging programming code for software development. The term "open source" requires that no one discriminate against any group by withholding modified code or hinder others from editing their already-edited work. This approach to software development allows anyone to obtain and modify open-source code. These modifications are distributed back to the developers within the open-source community of people who are working with the software. In this way, the identities of all individuals participating in code modification are disclosed and the transformation of the code is documented over time. This method makes it difficult to establish ownership of a particular bit of code but is in keeping with the open-source-movement philosophy. These goals promote the production of high-quality programs as well as working cooperatively with other similarly-minded people to improve open-source technology. Brief history: The label "open source" was created and adopted by a group of people in the free software movement at a strategy session held at Palo Alto, California, in reaction to Netscape's January 1998 announcement of a source-code release for Navigator. One of the reasons behind using the term was that "the advantage of using the term open source is that the business world usually tries to keep free technologies from being installed." Those people who adopted the term used the opportunity before the release of Navigator's source code to free themselves of the ideological and confrontational connotations of the term "free software". Later in February 1998, Bruce Perens and Eric S. Raymond founded an organization called Open Source Initiative (OSI) "as an educational, advocacy, and stewardship organization at a cusp moment in the history of that culture." Evolution: In the beginning, a difference between hardware and software did not exist. The user and programmer of a computer were one and the same. When the first commercial electronic computer was introduced by IBM in 1952, the machine was hard to maintain and expensive. Putting the price of the machine aside, it was the software that caused the problem when owning one of these computers. Then, in 1952, the owners of these computers collaborated to create a set of tools, forming a group called PACT (The Project for the Advancement of Coding Techniques). After passing this hurdle, in 1956, the Eisenhower administration decided to put restrictions on the types of sales AT&T could make. This did not stop the inventors from developing new ideas of how to bring the computer to the mass population. The next step was making the computer more affordable, which slowly developed through different companies. Then they had to develop software that would host multiple users. MIT's Computation Center developed one of the first such systems, CTSS (Compatible Time-Sharing System).
This laid the foundation for many more systems, and for what we now call the open-source software movement. The open-source movement branched from the free software movement, which began in the 1980s with the launching of the GNU project by Richard Stallman. Stallman is regarded within the open-source community as having played a key role in the conceptualization of freely-shared source code for software development. The term "free software" in the free software movement is meant to imply freedom of software exchange and modification. The term does not refer to any monetary freedom. Both the free-software movement and the open-source movement share this view of free exchange of programming code, and this is often why both of the movements are sometimes referenced in literature as part of the FOSS ("Free and Open-Source Software") or FLOSS ("Free/Libre and Open-Source Software") communities. Brief history: These movements differ fundamentally in their views on open software. The main, factionalizing difference between the groups is the relationship between open-source and proprietary software. Often, makers of proprietary software, such as Microsoft, may make efforts to support open-source software to remain competitive. Members of the open-source community are willing to coexist with the makers of proprietary software and feel that the issue of whether software is open source is a matter of practicality. In contrast, members of the free-software community maintain the vision that all software is a part of freedom of speech and that proprietary software is unethical and unjust. The free-software movement openly champions this belief through talks that denounce proprietary software. As a whole, the community refuses to support proprietary software. Further, there are external motivations for these developers. One motivation is that, when a programmer fixes a bug or makes a program, it benefits others in an open-source environment. Another motivation is that a programmer can work on multiple projects that they find interesting and enjoyable. Programming in the open-source world can also lead to commercial job offers or entrance into the venture capital community. These are just a few reasons why open-source programmers continue to create and advance software. While cognizant of the fact that both the free-software movement and the open-source movement share similarities in practical recommendations regarding open source, the free-software movement fervently continues to distinguish itself from the open-source movement entirely. The free-software movement maintains that it has fundamentally different attitudes towards the relationship between open-source and proprietary software. The free-software community does not view the open-source community as their target grievance, however. Their target grievance is proprietary software itself. Legal issues: The open-source movement has faced a number of legal challenges. Companies that manage open-source products have some difficulty securing their trademarks. For example, the scope of "implied license" conjecture remains unclear and can compromise an enterprise's ability to patent productions made with open-source software. Another example is the case of companies offering add-ons for purchase; licensees who make additions to the open-source code that are similar to those for purchase may have immunity from patent suits. Legal issues: In the court case "Jacobsen v.
Katzer", the plaintiff sued the defendant for failing to put the required attribution notices in his modified version of the software, thereby violating the license. The defendant argued that the Artistic License did not condition use of the software on compliance with the attribution requirements, but the wording of the license established that this was not the case. "Jacobsen v Katzer" established open-source software's equality to proprietary software in the eyes of the law. Legal issues: In a court case accusing Microsoft of being a monopoly, Linux and open-source software were introduced in court to prove that Microsoft had valid competitors, and Linux was grouped in with Apple. There are resources available for those involved in open-source projects in need of legal advice. The Software Freedom Law Center features a primer on open-source legal issues. International Free and Open Source Software Law Review offers peer-reviewed information for lawyers on free-software issues. Formalization: The Open Source Initiative (OSI) was instrumental in the formalization of the open-source movement. The OSI was founded by Eric Raymond and Bruce Perens in February 1998 with the purpose of providing general education and advocacy of the open-source label through the creation of the Open Source Definition, which was based on the Debian Free Software Guidelines. The OSI has become one of the main supporters and advocates of the open-source movement. In February 1998, the open-source movement was adopted, formalized, and spearheaded by the Open Source Initiative (OSI), an organization formed to market software "as something more amenable to commercial business use". The OSI applied to register "Open Source" with the US Patent and Trademark Office, but was denied due to the term being generic and/or descriptive. Consequently, the OSI does not own the trademark "Open Source" in a national or international sense, although it does assert common-law trademark rights in the term. Formalization: The main tool they adopted for this was The Open Source Definition. The open-source label was conceived at a strategy session that was held on February 3, 1998 in Palo Alto, California, and on April 8 of the same year, the attendees of Tim O'Reilly's Free Software Summit voted to promote the use of the term "open source". Overall, the software developments that have come out of the open-source movement have not been unique to the computer-science field, but they have been successful in developing alternatives to proprietary software. Members of the open-source community improve upon code and write programs that can rival much of the proprietary software that is already available. The rhetorical discourse used in open-source movements is now being broadened to include a larger group of non-expert users as well as advocacy organizations. Several organized groups such as the Creative Commons and global development agencies have also adopted the open-source concepts according to their own aims and for their own purposes. The factors affecting the open-source movement's legal formalization are primarily based on recent political discussion over copyright, appropriation, and intellectual property. Social structure of open source contribution teams: Historically, researchers have characterized open source contributors as a centralized, onion-shaped group. The center of the onion consists of the core contributors who drive the project forward through large amounts of code and software design choices.
The second layer consists of contributors who respond to pull requests and bug reports. The third layer consists of contributors who mainly submit bug reports. The outermost layer consists of those who watch the repository and of users of the generated software. This model has been used in research to understand the lifecycle of open source software, to understand contributors to open source software projects, to examine how tools can help contributors at the various levels of involvement in the project, and to further understand how the distributed nature of open source software may affect the productivity of developers. Some researchers have disagreed with this model. Crowston et al.'s work has found that some teams are much less centralized and follow a more distributed workflow pattern. The authors report that there is a weak correlation between project size and centralization, with smaller projects being more centralized and larger projects showing less centralization. However, the authors only looked at bug reporting and fixing, so it remains unclear whether this pattern is only associated with bug finding and fixing or whether centralization becomes more distributed with size across every aspect of the open source paradigm. Social structure of open source contribution teams: An understanding of whether a team is centralized or distributed is important, as it may inform tool design and aid new developers in understanding a team's dynamic. One concern with open source development is the high turnover rate of developers, even among core contributors (those at the center of the "onion"). In order to continue an open source project, new developers must continually join but must also have the necessary skill-set to contribute quality code to the project. Through a study of GitHub contributions on open source projects, Middleton et al. found that the largest predictor of contributors becoming full-fledged members of an open source team (moving to the "core" of the "onion") was whether they submitted and commented on pull requests. The authors then suggest that GitHub, as a tool, can aid in this process by supporting "checkbox" features on a team's open source project that urge contributors to take part in these activities.
A programmer can easily benefit from open-source software because, by making it public, other testers and programmers can remove bugs, tailor code to other purposes, and find problems. This kind of peer-editing feature of open-source software promotes better programs and a higher standard of code. Motivations of programmers: Recognition: Though a project may not be associated with a specific individual, the contributors are often recognized and marked on a project's server or awarded social reputation. This allows programmers to receive public recognition for their skills, promoting career opportunities and exposure. In fact, the founders of Sun Microsystems and Netscape began as open-source programmers. Motivations of programmers: Ego: "If they are somehow assigned to a trivial problem and that is their only possible task, they may spend six months coming up with a bewildering architecture...merely to show their friends and colleagues what a tough nut they are trying to crack." Ego-gratification has been cited as a relevant motivation of programmers because of their competitive community. An OSS (open-source software) community has no clear distinction between developers and users, because all users are potential developers. There is a large community of programmers trying to essentially outshine or impress their colleagues. They enjoy having other programmers admire their works and accomplishments, contributing to why OSS projects have a recruiting advantage over closed-source companies in attracting unknown talent. Motivations of programmers: Creative expression: Personal satisfaction also comes from the act of writing software as a form of creative self-expression, almost equivalent to creating a work of art. The rediscovery of creativity, which has been lost through the mass production of commercial software products, can be a relevant motivation. Gender diversity of programmers: The vast majority of programmers in open-source communities are male. In a study for the European Union on free and open-source software communities, researchers found that only 1.5% of all contributors are female. Although women are generally underrepresented in computing, the percentage of women in tech professions is actually much higher, close to 25%. This discrepancy suggests that female programmers are overall less likely than male programmers to participate in open-source projects. Gender diversity of programmers: Some research and interviews with members of open-source projects have described a male-dominated culture within open-source communities that can be unwelcoming or hostile towards females. There are initiatives such as Outreachy that aim to support participation by more women and people of other underrepresented gender identities in open-source software. However, within the discussion forums of open-source projects, the topic of gender diversity can be highly controversial and even inflammatory. A central vision in open-source software is that, because the software is built and maintained on the merit of individual code contributions, open-source communities should act as a meritocracy. In a meritocracy, the importance of an individual in the community depends on the quality of their individual contributions and not demographic factors such as age, race, religion, or gender.
Thus, proposing changes to the community based on gender, for example to make the community more inviting towards females, goes against the ideal of a meritocracy by targeting certain programmers by gender and not based on their skill alone. There is evidence that gender does impact a programmer's perceived merit in the community. A 2016 study identified the gender of over one million programmers on GitHub by linking the programmers' GitHub accounts to their other social media accounts. Between male and female programmers, the researchers found that female programmers were actually more likely to have their pull requests accepted into the project than male programmers, though only when the contributor had a gender-neutral profile. When females had profiles with a name or image that identified them as female, they were less likely than male programmers to have their pull requests accepted. Another study in 2015 found that, among open-source projects on GitHub, gender diversity was a significant positive predictor of a team's productivity, meaning that open-source teams with a more even mix of different genders tended to be more productive. Many projects have adopted the Contributor Covenant code of conduct in an attempt to address concerns of harassment of minority developers. Anyone found breaking the code of conduct can be disciplined and ultimately removed from the project. Gender diversity of programmers: In order to avoid offense to minorities, many software projects have started to mandate the use of inclusive language and terminology. Evidence of open-source adoption: Libraries are using open-source software to develop information as well as library services. The purpose of open source is to provide software that is cheaper, more reliable, and of better quality. The one feature that makes this software so sought after is that it is free. Libraries in particular benefit from this movement because of the resources it provides. They also promote the same ideas of learning and understanding new information through the resources of other people. Open source allows a sense of community. It is an invitation for anyone to provide information about various topics. The open-source tools even allow libraries to create web-based catalogs. According to the IT source, there are various library programs that benefit from this. Government agencies and infrastructure software — Government agencies are utilizing open-source infrastructure software, like the Linux operating system and the Apache web server, to manage information. In 2005, a new government lobby was launched under the name National Center for Open Source Policy and Research (NCOSPR), "a non-profit organization promoting the use of open source software solutions within government IT enterprises." Open-source movement in the military — The open-source movement has the potential to help the military. Open-source software allows anyone to make changes that will improve it. This is a form of invitation for people to put their minds together to develop software in a cost-efficient manner. The military is interested because this software can potentially increase speed and flexibility. Although there are security setbacks to this idea, due to the fact that anyone has access to change the software, the advantages can outweigh the disadvantages. The fact that open-source programs can be modified quickly is crucial. Evidence of open-source adoption: A support group was formed to test these theories.
The Military Open Source Software Working Group was organized in 2009 and included over 120 military members. Its purpose was to bring together software developers and contractors from the military to discover new ideas for reuse and collaboration. Overall, open-source software in the military is an intriguing idea that has potential drawbacks, but they are not enough to offset the advantages. Open source in education — Colleges and organizations use predominantly online software to educate their students. Open-source technology is being adopted by many institutions because it can save these institutions from paying companies to provide them with administrative software systems. One of the first major colleges to adopt an open-source system was Colorado State University in 2009, with many others following. Colorado State University's system was produced by the Kuali Foundation, which has become a major player in open-source administrative systems. The Kuali Foundation defines itself as a group of organizations that aims to "build and sustain open-source software for higher education, by higher education." There are many other examples of open-source instruments being used in education other than the Kuali Foundation as well. "For educators, The Open Source Movement allowed access to software that could be used in teaching students how to apply the theories they were learning". With open networks and software, teachers are able to share lessons, lectures, and other course materials within a community. OpenTechComm is a program that is dedicated to "open access, open use, and open edits: a textbook or pedagogical resource that teachers of technical and professional communication courses at every level can rely on to craft free offerings to their students." As stated earlier, access to programs like this would be much more cost-efficient for educational departments. Evidence of open-source adoption: Open source in healthcare — Created in June 2009 by the nonprofit eHealthNigeria, the open-source software OpenMRS is used to document health care in Nigeria. The use of this software began in Kaduna, Nigeria, to serve public health purposes. OpenMRS provides features such as alerting health care workers when patients show warning signs for conditions, and it records births and deaths daily, among other features. The success of this software is attributed to its ease of use for those first being introduced to the technology, compared to more complex proprietary healthcare software available in first-world countries. This software is community-developed and can be used freely by anyone, characteristic of open-source applications. So far, OpenMRS is being used in Rwanda, Mozambique, Haiti, India, China, and the Philippines. The impact of open source in healthcare is also observed by Apelon Inc., the "leading provider of terminology and data interoperability solutions". Recently, its Distributed Terminology System (Open DTS) began supporting the open-source MySQL database system. This essentially allows open-source software to be used in healthcare, lessening the dependence on expensive proprietary healthcare software. Due to open-source software, the healthcare industry has a free open-source solution available for implementing healthcare standards. Not only does open source benefit healthcare economically, but the lesser dependence on proprietary software allows for easier integration of various systems, regardless of the developer.
Evidence of open-source adoption: Companies: IBM: IBM has been a leading proponent of the Open Source Initiative and began supporting Linux in 1998. Evidence of open-source adoption: Microsoft: Before the summer of 2008, Microsoft had generally been known as an enemy of the open-source community. The company's anti-open-source sentiment was reinforced by former CEO Steve Ballmer, who referred to Linux, a widely used open-source operating system, as a "cancer that attaches itself ... to everything it touches." Microsoft also threatened to charge royalties over Linux's alleged violation of 235 of its patents. Evidence of open-source adoption: In 2004, Microsoft lost a European Union court case, lost the appeal in 2007, and lost a further appeal in 2012, having been convicted of abusing its dominant position. Specifically, it had withheld interoperability information from the open-source Samba (software) project, which can be run on many platforms and aims at "removing barriers to interoperability". In 2008, however, Sam Ramji, the then head of open-source-software strategy at Microsoft, began working closely with Bill Gates to develop a pro-open-source attitude within the software industry as well as within Microsoft itself. Ramji, before leaving the company in 2009, built Microsoft's familiarity and involvement with open source, which is evident in Microsoft's contributions of open-source code to Microsoft Azure, among other projects. These contributions would have been previously unimaginable by Microsoft. Microsoft's change in attitude about open source and its efforts to build a stronger open-source community are evidence of the growing adoption and adaptation of open source.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nelson Cowan** Nelson Cowan: Nelson Cowan is the Curators' Distinguished Professor of Psychological Sciences at the University of Missouri. He specializes in working memory, the small amount of information held in mind and used for language processing and various kinds of problem solving. To overcome conceptual difficulties that arise for models of information processing in which different functions occur in separate boxes, Cowan proposed a more organically organized "embedded processes" model. Within it, representations held in working memory comprise an activated subset of the representations held in long-term memory, with a smaller subset held in a more integrated form in the current focus of attention. Other work has been on the developmental growth of working memory capacity and the scientific method. His work, funded by the National Institutes of Health since 1984 (primarily NICHD), has been cited over 41,000 times according to Google Scholar. The work has resulted in over 250 peer-reviewed articles, over 60 book chapters, 2 sole-authored books, and 4 edited volumes. Nelson Cowan: In addition to basic scientific work, Cowan's collaborative research related to working memory has led to clarification of the role of memory in language disorders, dyslexia, autism, schizophrenia, Parkinson's disease, amnesia, and alcoholic intoxication, as explained further on his web site and CV. For example, the work on amnesia indicates that individuals who usually cannot form new memories because of stroke or brain damage often demonstrate considerable ability to do so when the information to be memorized is surrounded by several minutes with minimal visual or acoustic interference. Main scientific contributions: Working memory capacity limits. Cowan's theoretical model addresses key puzzles in information processing using a new approach in which there are two aspects of working memory: the activated portion of long-term memory, which includes rapidly learned information limited only by decay and interference among similar features, and, within this activated portion, a focus of attention limited to about 3–4 separate items or chunks in typical adults. Cowan contends that previous models did not sufficiently distinguish between these temporary-storage mechanisms. Why, in this theory, is there interference between words and visual objects like colors when both are held in mind? Because the focus of attention is involved in maintaining information of all types and, when the procedure discourages mnemonic strategies like grouping and rehearsal, the focus of attention is limited to just a few separate units of information – as argued in a review cited over 6,900 times according to Google Scholar. In the brain, an area of the intraparietal sulcus plays a large role in the focus of attention, perhaps serving as an index connected to posterior areas representing the content of active memories. Main scientific contributions: Attention filtering by habituation of orienting. In another part of Cowan's theory, conceptual difficulties with the idea of an attention filter were addressed. If unattended information is filtered out, how can it come to attract attention? In the theory, the attention filter is replaced by the well-known mechanism of orienting of attention.
Stimuli with changed physical features attract attention, whereas stimulus features or patterns that are repeated or continuous become part of the neural model of the environment; there is habituation of the orienting response, and such stimuli stop attracting attention. For example, Emily Elliott and Cowan showed that pre-exposure to sounds to be used as distractors reduced their capability to distract. In another kind of research on attention, Noelle Wood and Cowan replicated an often-discussed but until then poorly understood phenomenon termed the cocktail party phenomenon. Using methodology improved from the 1950s, they found that people take a long time to notice subtle acoustic changes in an ignored channel of speech while repeating different speech presented in the other ear, a selective listening task. They also used the improved methodology to replicate the early, poorly studied finding that about a third of participants notice their names unexpectedly presented in a channel to be ignored. That finding, however, was ambiguous. It could be that high-working-memory-span individuals are better able to monitor the channel to be ignored, or it could be that low-span individuals cannot fix their attention on the assigned task, so that it wanders over intermittently to sample the channel to be ignored. The results have come out strongly in the latter direction, with many more low-span individuals noticing their names in the ignored channel. Demonstrating that the results did not have to turn out that way, they were different for healthy older adults: their spans are like those of relatively low-span younger adults, yet the older adults rarely noticed their names in the channel to be ignored, suggesting that their focus of attention is strategically intact but possibly with a smaller capacity than that of young adults. Main scientific contributions: Development of working memory. In Cowan's work on the childhood development of working memory, a major task has been to deconfound development, given that many processes develop together and need to be disentangled. Could it be that working memory capacity increases with age only because of some other factor? Cowan has examined this question repeatedly in different ways and has found that a number of factors are not sufficient. The factors that could not completely account for working memory capacity growth include the allocation of attention to relevant items, encoding speed and rehearsal, and knowledge. In memory for simple spoken sentences, for example, more mature participants remembered more units, not larger ones. Recent evidence suggests that older children become better able to notice patterns in the stimuli that allow them to memorize information quickly and thereby ease the load on the focus of attention. Consequently, older participants can remember tones, or words and colors, at the same time better than younger children can, with less interference between the two modalities. Similar findings have been obtained in the area of adult aging, with a U-shaped development across the life span in the number of items that can be held in working memory without mnemonic strategies. Simple working memory tasks account for aptitudes better in children too young to apply mnemonic strategies, and Cowan has made considerable use of a simple task that maximizes the correlation with aptitudes by making the endpoint of a list unpredictable, known as running memory span.
Minimizing mnemonic strategies may mean that more attention is needed for recall, which may also be needed in typical tasks of intellectual aptitude. Early life: Cowan provides many biographical details on his web site. He was born in 1951 in Washington, D.C., the first child (a son) of Jewish parents: Arthur Cowan of Boston, an optometrist, and Shirly B. Cowan (née Frankle) of Baltimore. He grew up in Wheaton, Maryland and attended Wheaton High School. From oldest to youngest, his siblings are a brother, Mitchell, with high-functioning autism diagnosed only at about age 50, who has long been a valuable employee of the Veterans Administration; another brother, Elliott, who is an attorney; and a sister, Barbara, who is a social worker. Cowan's early interest in science included making a telescope out of trial lenses in his father's office and tinkering with electricity and electronics at home; he also took note when Francis Crick, after winning the Nobel Prize, indicated in the Washington Post that he next wanted to study "how the brain works." Cowan's first experimental project, in a high school research class, involved suspended animation of rotifers by supercooling, with guidance from his instructor and Commander Perry at the Bethesda Naval Hospital. Also in high school, reading a description of research studies on sleep and dreams inspired his interest in a career of research on the brain and mind centered on understanding consciousness, which he hoped would also be of clinical, educational, or practical value. Cowan's home was within biking distance, along Rock Creek Park, of the National Institutes of Health in Bethesda, MD, and in the summers when home from college he volunteered there one year (with Monte Buchsbaum), learning computer programming and studying hemispheric laterality, and had a paid assistantship the next summer (with David Jacobowitz). The latter led to his first publication, on a study that he himself suggested to the scientists, examining the synergistic and antagonistic actions of two neurotransmitter systems in rats. Academic history: Education and positions. Cowan received a B.S. from the University of Michigan with an independent major in neuroscience in 1973 and an M.S. and Ph.D. in psychology from the University of Wisconsin in 1977 and 1980, respectively, after which he completed a postdoctoral fellowship at New York University. He was subsequently hired as a professor at the University of Massachusetts Amherst in 1982, and in 1985 he joined the faculty of the University of Missouri, where he has remained since. Additionally, Cowan has served as a Distinguished Visiting Professor at the University of Helsinki, the University of Leipzig, the University of Western Australia, the University of Bristol, and the University of Edinburgh, where he also served as a professorial fellow. Academic history: Professional activities and honors. Since 2017, Cowan has been the editor-in-chief of the Journal of Experimental Psychology: General and previously was associate editor of the Journal of Experimental Psychology: Learning, Memory, and Cognition, the Quarterly Journal of Experimental Psychology, and the European Journal of Cognitive Psychology. He was awarded honorary doctorates by the University of Helsinki, Finland (2003) and the University of Liège, Belgium (2015). He is a fellow of the Society of Experimental Psychologists and the American Association for the Advancement of Science.
Elected posts include member of the Governing Board of the Psychonomic Society (2006-2011) and President of the Experimental Psychology Division (3) of the American Psychological Association (2008-2009). He won the Lifetime Achievement Award from the Society for Experimental Psychology and Cognitive Science (2020).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GAUSS (software)** GAUSS (software): GAUSS is a matrix programming language for mathematics and statistics, developed and marketed by Aptech Systems. Its primary purpose is the solution of numerical problems in statistics, econometrics, time series, optimization and 2D- and 3D-visualization. It was first published in 1984 for MS-DOS and is available for Linux, macOS and Windows. Examples: GAUSS has several Application Modules as well as functions in its Run-Time Library (i.e., functions that come with GAUSS without extra cost): Qprog – quadratic programming; SqpSolvemt – sequential quadratic programming; QNewton – quasi-Newton unconstrained optimization; EQsolve – nonlinear equations solver. GAUSS Applications: A range of toolboxes are available for GAUSS at additional cost.
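Since GAUSS itself is proprietary, a free-standing analogue may help readers place routines like QNewton. The Python sketch below uses SciPy's BFGS routine to perform the same kind of quasi-Newton unconstrained minimization that QNewton provides; the Rosenbrock objective and starting point are arbitrary choices for illustration, not taken from GAUSS documentation.

```python
# Rough analogue of the problem class GAUSS's QNewton addresses:
# unconstrained minimization via a quasi-Newton (BFGS) method.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # Classic test function with a global minimum at (1, 1).
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

x0 = np.array([-1.2, 1.0])                        # starting guess
result = minimize(rosenbrock, x0, method="BFGS")  # quasi-Newton: updates an approximate Hessian
print(result.x)                                   # approximately [1., 1.]
```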
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**History of scientific method** History of scientific method: The history of scientific method considers changes in the methodology of scientific inquiry, as distinct from the history of science itself. The development of rules for scientific reasoning has not been straightforward; scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of one or another approach to establishing scientific knowledge. History of scientific method: Rationalist explanations of nature, including atomism, appeared both in ancient Greece in the thought of Leucippus and Democritus, and in ancient India, in the Nyaya, Vaisesika and Buddhist schools, while Charvaka materialism rejected inference as a source of knowledge in favour of an empiricism that was always subject to doubt. Aristotle pioneered scientific method in ancient Greece alongside his empirical biology and his work on logic, rejecting a purely deductive framework in favour of generalisations made from observations of nature. History of scientific method: Some of the most important debates in the history of scientific method center on: rationalism, especially as advocated by René Descartes; inductivism, which rose to particular prominence with Isaac Newton and his followers; and hypothetico-deductivism, which came to the fore in the early 19th century. In the late 19th and early 20th centuries, a debate over realism vs. antirealism was central to discussions of scientific method as powerful scientific theories extended beyond the realm of the observable, while in the mid-20th century some prominent philosophers argued against any universal rules of science at all. Early methodology: Ancient Egypt and Babylonia. There are few explicit discussions of scientific methodologies in surviving records from early cultures. The most that can be inferred about the approaches to undertaking science in this period stems from descriptions of early investigations into nature in the surviving records. An Egyptian medical textbook, the Edwin Smith papyrus (c. 1600 BCE), applies the components of examination, diagnosis, treatment and prognosis to the treatment of disease, which displays strong parallels to the basic empirical method of science and, according to G. E. R. Lloyd, played a significant role in the development of this methodology. The Ebers papyrus (c. 1550 BCE) also contains evidence of traditional empiricism. Early methodology: By the middle of the 1st millennium BCE in Mesopotamia, Babylonian astronomy had evolved into the earliest example of a scientific astronomy, as it was "the first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According to the historian Asger Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India, in the Islamic world, and in the West – if not indeed all subsequent endeavour in the exact sciences – depend upon Babylonian astronomy in decisive and fundamental ways." The early Babylonians and Egyptians developed much technical knowledge, crafts, and mathematics used in practical tasks of divination, as well as a knowledge of medicine, and made lists of various kinds. While the Babylonians in particular had engaged in the earliest forms of an empirical mathematical science, with their early attempts at mathematically describing natural phenomena, they generally lacked underlying rational theories of nature.
Early methodology: Classical antiquity. Greek-speaking ancient philosophers engaged in the earliest known forms of what is today recognized as a rational theoretical science, with the move towards a more rational understanding of nature beginning at least since the Archaic Period (650–480 BCE) with the Presocratic school. Thales was the first known philosopher to use natural explanations, proclaiming that every event had a natural cause, even though he is known for saying "all things are full of gods" and sacrificed an ox when he discovered his theorem. Leucippus went on to develop the theory of atomism – the idea that everything is composed entirely of various imperishable, indivisible elements called atoms. This was elaborated in great detail by Democritus. Early methodology: Similar atomist ideas emerged independently among ancient Indian philosophers of the Nyaya, Vaisesika and Buddhist schools. In particular, like the Nyaya, Vaisesika, and Buddhist schools, the Cārvāka epistemology was materialist, and skeptical enough to admit perception as the basis for unconditionally true knowledge, while cautioning that if one could only infer a truth, then one must also harbor a doubt about that truth; an inferred truth could not be unconditional. Towards the middle of the 5th century BCE, some of the components of a scientific tradition were already heavily established, even before Plato, who was an important contributor to this emerging tradition thanks to the development of deductive reasoning, as propounded by his student Aristotle. In Protagoras (318d–f), Plato mentioned the teaching of arithmetic, astronomy and geometry in schools. The philosophical ideas of this time were mostly freed from the constraints of everyday phenomena and common sense. This denial of reality as we experience it reached an extreme in Parmenides, who argued that the world is one and that change and subdivision do not exist. Early methodology: As early as the 4th century BCE, armillary spheres had been invented in China, and in the 3rd century BCE in Greece, for use in astronomy; their use was promulgated thereafter, for example by Ibn al-Haytham and by Tycho Brahe. In the 4th and 3rd centuries BCE, the Greek physicians Herophilos (335–280 BCE) and Erasistratus of Chios employed experiments to further their medical research; Erasistratus at one time repeatedly weighed a caged bird, and noted its weight loss between feeding times. Early methodology: Aristotle. Aristotle's inductive-deductive method used inductions from observations to infer general principles, deductions from those principles to check against further observations, and more cycles of induction and deduction to continue the advance of knowledge. The Organon (Greek: Ὄργανον, meaning "instrument, tool, organ") is the standard collection of Aristotle's six works on logic. The name Organon was given by Aristotle's followers, the Peripatetics. Early methodology: The order of the works is not chronological (the chronology is now difficult to determine) but was deliberately chosen by Theophrastus to constitute a well-structured system; indeed, parts of them seem to be a scheme of a lecture on logic. The arrangement of the works was made by Andronicus of Rhodes around 40 BCE. The Organon comprises the following six works: The Categories (Greek: Κατηγορίαι, Latin: Categoriae) introduces Aristotle's 10-fold classification of that which exists: substance, quantity, quality, relation, place, time, situation, condition, action, and passion.
Early methodology: On Interpretation (Greek: Περὶ Ἑρμηνείας, Latin: De Interpretatione) introduces Aristotle's conception of proposition and judgment, and the various relations between affirmative, negative, universal, and particular propositions. Aristotle discusses the square of opposition or square of Apuleius in Chapter 7 and its appendix Chapter 8. Chapter 9 deals with the problem of future contingents. The Prior Analytics (Greek: Ἀναλυτικὰ Πρότερα, Latin: Analytica Priora) introduces Aristotle's syllogistic method (see term logic), argues for its correctness, and discusses inductive inference. The Posterior Analytics (Greek: Ἀναλυτικὰ Ὕστερα, Latin: Analytica Posteriora) deals with demonstration, definition, and scientific knowledge. The Topics (Greek: Τοπικά, Latin: Topica) treats of issues in constructing valid arguments, and of inference that is probable, rather than certain. It is in this treatise that Aristotle mentions the predicables, later discussed by Porphyry and by the scholastic logicians. Early methodology: The Sophistical Refutations (Greek: Περὶ Σοφιστικῶν Ἐλέγχων, Latin: De Sophisticis Elenchis) gives a treatment of logical fallacies, and provides a key link to Aristotle's work on rhetoric. Aristotle's Metaphysics has some points of overlap with the works making up the Organon but is not traditionally considered part of it; additionally there are works on logic attributed, with varying degrees of plausibility, to Aristotle that were not known to the Peripatetics. Early methodology: Aristotle has been called the founder of modern science by De Lacy O'Leary. His demonstration method is found in the Posterior Analytics. He provided another of the ingredients of scientific tradition: empiricism. For Aristotle, universal truths can be known from particular things via induction. To some extent, then, Aristotle reconciles abstract thought with observation, although it would be a mistake to imply that Aristotelian science is empirical in form. Indeed, Aristotle did not accept that knowledge acquired by induction could rightly be counted as scientific knowledge. Nevertheless, induction was for him a necessary preliminary to the main business of scientific enquiry, providing the primary premises required for scientific demonstrations. Early methodology: Aristotle largely ignored inductive reasoning in his treatment of scientific enquiry. To make it clear why this is so, consider this statement in the Posterior Analytics: We suppose ourselves to possess unqualified scientific knowledge of a thing, as opposed to knowing it in the accidental way in which the sophist knows, when we think that we know the cause on which the fact depends, as the cause of that fact and of no other, and, further, that the fact could not be other than it is. Early methodology: It was therefore the work of the philosopher to demonstrate universal truths and to discover their causes. While induction was sufficient for discovering universals by generalization, it did not succeed in identifying causes. For this task Aristotle used the tool of deductive reasoning in the form of syllogisms. Using the syllogism, scientists could infer new universal truths from those already established (in the classic example, from "all men are mortal" and "Socrates is a man" one infers "Socrates is mortal"). Early methodology: Aristotle developed a complete normative approach to scientific inquiry involving the syllogism, which he discusses at length in his Posterior Analytics. A difficulty with this scheme lay in showing that derived truths have solid primary premises.
Aristotle would not allow that demonstrations could be circular (supporting the conclusion by the premises, and the premises by the conclusion). Nor would he allow an infinite number of middle terms between the primary premises and the conclusion. This leads to the question of how the primary premises are found or developed, and, as mentioned above, Aristotle allowed that induction would be required for this task. Early methodology: Towards the end of the Posterior Analytics, Aristotle discusses knowledge imparted by induction. Early methodology: Thus it is clear that we must get to know the primary premises by induction; for the method by which even sense-perception implants the universal is inductive. [...] it follows that there will be no scientific knowledge of the primary premises, and since except intuition nothing can be truer than scientific knowledge, it will be intuition that apprehends the primary premises. [...] If, therefore, it is the only other kind of true thinking except scientific knowing, intuition will be the originative source of scientific knowledge. Early methodology: The account leaves room for doubt regarding the nature and extent of Aristotle's empiricism. In particular, it seems that Aristotle considers sense-perception only as a vehicle for knowledge through intuition. He restricted his investigations in natural history to their natural settings, such as at the Pyrrha lagoon, now called Kalloni, at Lesbos. Aristotle and Theophrastus together formulated the new science of biology, inductively, case by case, for two years before Aristotle was called to tutor Alexander. Aristotle performed no modern-style experiments in the form in which they appear in today's physics and chemistry laboratories. Early methodology: Induction is not afforded the status of scientific reasoning, and so it is left to intuition to provide a solid foundation for Aristotle's science. With that said, Aristotle brings us somewhat closer to an empirical science than his predecessors did. Early methodology: Epicurus. In his work Κανών ('canon', a straight edge or ruler, thus any type of measure or standard, referred to as 'canonic'), Epicurus laid out his first rule for inquiry in physics: that the first concepts be seen (p. 20) and that they not require demonstration (pp. 35–47). His second rule for inquiry was that prior to an investigation, we are to have self-evident concepts (pp. 61–80), so that we might infer [ἔχωμεν οἷς σημειωσόμεθα] both what is expected [τò προσμένον] and also what is non-apparent [τò ἄδηλον] (pp. 83–103). Epicurus applies his method of inference – the use of observations as signs; in Asmis' summary (p. 333), the method of using the phenomena as signs (σημεῖα) of what is unobserved (pp. 175–196) – immediately to the atomic theory of Democritus. In the Prior Analytics, Aristotle himself employs the use of signs (pp. 212–224). But Epicurus presented his 'canonic' as a rival to Aristotle's logic (pp. 19–34). See Lucretius (c. 99 BCE – c. 55 BCE), De rerum natura (On the Nature of Things), a didactic poem explaining Epicurus' philosophy and physics. Emergence of inductive experimental method: During the Middle Ages, issues of what is now termed science began to be addressed. There was greater emphasis on combining theory with practice in the Islamic world than there had been in Classical times, and it was common for those studying the sciences to be artisans as well, something that had been "considered an aberration in the ancient world."
Islamic experts in the sciences were often expert instrument makers who enhanced their powers of observation and calculation with those instruments. Starting in the early ninth century, early Muslim scientists such as al-Kindi (801–873) and the authors writing under the name of Jābir ibn Hayyān (writings dated to c. 850–950) began to put a greater emphasis on the use of experiment as a source of knowledge. Several scientific methods thus emerged from the medieval Muslim world by the early 11th century, all of which emphasized experimentation as well as quantification to varying degrees. Emergence of inductive experimental method: Ibn al-Haytham. The Arab physicist Ibn al-Haytham (Alhazen) used experimentation to obtain the results in his Book of Optics (1021). He combined observations, experiments and rational arguments to support his intromission theory of vision, in which rays of light are emitted from objects rather than from the eyes. He used similar arguments to show that the ancient emission theory of vision supported by Ptolemy and Euclid (in which the eyes emit the rays of light used for seeing), and the ancient intromission theory supported by Aristotle (where objects emit physical particles to the eyes), were both wrong. Experimental evidence supported most of the propositions in his Book of Optics and grounded his theories of vision, light and colour, as well as his research in catoptrics and dioptrics. His legacy was elaborated through the 'reforming' of his Optics by Kamal al-Din al-Farisi (d. c. 1320) in the latter's Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics). Alhazen viewed his scientific studies as a search for truth: "Truth is sought for its own sake. And those who are engaged upon the quest for anything for its own sake are not interested in other things. Finding the truth is difficult, and the road to it is rough. ..." Alhazen's work included the conjecture that "Light travels through transparent bodies in straight lines only", which he was able to corroborate only after years of effort. He stated, "[This] is clearly observed in the lights which enter into dark rooms through holes. ... the entering light will be clearly observable in the dust which fills the air." He also demonstrated the conjecture by placing a straight stick or a taut thread next to the light beam. Ibn al-Haytham also employed scientific skepticism and emphasized the role of empiricism. He also explained the role of induction in syllogism, and criticized Aristotle for his lack of contribution to the method of induction, which Ibn al-Haytham regarded as superior to syllogism; he considered induction to be the basic requirement for true scientific research. Something like Occam's razor is also present in the Book of Optics. For example, after demonstrating that light is generated by luminous objects and emitted or reflected into the eyes, he states that therefore "the extramission of [visual] rays is superfluous and useless." He may also have been the first scientist to adopt a form of positivism in his approach. He wrote that "we do not go beyond experience, and we cannot be content to use pure concepts in investigating natural phenomena", and that the understanding of these cannot be acquired without mathematics. After assuming that light is a material substance, he does not further discuss its nature but confines his investigations to the diffusion and propagation of light. The only properties of light he takes into account are those treatable by geometry and verifiable by experiment.
Emergence of inductive experimental method: Al-Biruni. The Persian scientist Abū Rayhān al-Bīrūnī introduced early scientific methods for several different fields of inquiry during the 1020s and 1030s. For example, in his treatise on mineralogy, Kitab al-Jawahir (Book of Precious Stones), al-Biruni is "the most exact of experimental scientists", while in the introduction to his study of India he declares that "to execute our project, it has not been possible to follow the geometric method", and he thus became one of the pioneers of comparative sociology in insisting on field experience and information. He also developed an early experimental method for mechanics. Al-Biruni's methods resembled the modern scientific method, particularly in his emphasis on repeated experimentation. Biruni was concerned with how to conceptualize and prevent both systematic errors and observational biases, such as "errors caused by the use of small instruments and errors made by human observers." He argued that if instruments produce errors because of their imperfections or idiosyncratic qualities, then multiple observations must be taken, analyzed qualitatively, and on this basis one should arrive at a "common-sense single value for the constant sought", whether an arithmetic mean or a "reliable estimate." In his scientific method, "universals came out of practical, experimental work" and "theories are formulated after discoveries", as with inductivism. Emergence of inductive experimental method: Ibn Sina (Avicenna). In the On Demonstration section of The Book of Healing (1027), the Persian philosopher and scientist Avicenna (Ibn Sina) discussed philosophy of science and described an early scientific method of inquiry. He discussed Aristotle's Posterior Analytics and significantly diverged from it on several points. Avicenna discussed the issue of a proper procedure for scientific inquiry and the question of "How does one acquire the first principles of a science?" He asked how a scientist might find "the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He explained that the ideal situation is when one grasps that a "relation holds between the terms, which would allow for absolute, universal certainty." Avicenna added two further methods for finding a first principle: the ancient Aristotelian method of induction (istiqra), and the more recent method of examination and experimentation (tajriba). Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain premises that it purports to provide." In its place, he advocated "a method of experimentation as a means for scientific inquiry." Earlier, in The Canon of Medicine (1025), Avicenna had also been the first to describe what are essentially the methods of agreement, difference and concomitant variation, which are critical to inductive logic and the scientific method. However, unlike his contemporary al-Biruni's scientific method, in which "universals came out of practical, experimental work" and "theories are formulated after discoveries", Avicenna developed a scientific procedure in which "general and universal questions came first and led to experimental work." Because of the differences between their methods, al-Biruni referred to himself as a mathematical scientist and to Avicenna as a philosopher, during a debate between the two scholars.
Emergence of inductive experimental method: Robert Grosseteste. During the European Renaissance of the 12th century, ideas on scientific methodology, including Aristotle's empiricism and the experimental approaches of Alhazen and Avicenna, were introduced to medieval Europe via Latin translations of Arabic and Greek texts and commentaries. Robert Grosseteste's commentary on the Posterior Analytics places Grosseteste among the first scholastic thinkers in Europe to understand Aristotle's vision of the dual nature of scientific reasoning: concluding from particular observations to a universal law, and then back again, from universal laws to the prediction of particulars. Grosseteste called this "resolution and composition". Further, Grosseteste said that both paths should be tested through experimentation in order to verify the principles. Emergence of inductive experimental method: Roger Bacon. Roger Bacon was inspired by the writings of Grosseteste. In his account of a method, Bacon described a repeating cycle of observation, hypothesis, experimentation, and the need for independent verification. He recorded the way he had conducted his experiments in precise detail, perhaps with the idea that others could reproduce and independently test his results. Emergence of inductive experimental method: About 1256 he joined the Franciscan Order and became subject to the Franciscan statute forbidding friars from publishing books or pamphlets without specific approval. After the accession of Pope Clement IV in 1265, the Pope granted Bacon a special commission to write to him on scientific matters. In eighteen months he completed three large treatises, the Opus Majus, Opus Minus, and Opus Tertium, which he sent to the Pope. William Whewell called the Opus Majus at once the Encyclopaedia and the Organon of the 13th century. Emergence of inductive experimental method: Part I (pp. 1–22) treats of the four causes of error: authority, custom, the opinion of the unskilled many, and the concealment of real ignorance by a pretense of knowledge. Part VI (pp. 445–477) treats of experimental science, domina omnium scientiarum ("the mistress of all the sciences"). There are two methods of knowledge: the one by argument, the other by experience. Mere argument is never sufficient; it may decide a question, but gives no satisfaction or certainty to the mind, which can only be convinced by immediate inspection or intuition, which is what experience gives. Emergence of inductive experimental method: Experimental science, which in the Opus Tertium (p. 46) is distinguished from the speculative sciences and the operative arts, is said to have three great prerogatives over all sciences: it verifies their conclusions by direct experiment; it discovers truths which they could never reach; it investigates the secrets of nature, and opens to us a knowledge of past and future. Emergence of inductive experimental method: Roger Bacon illustrated his method by an investigation into the nature and cause of the rainbow, as a specimen of inductive research. Emergence of inductive experimental method: Renaissance humanism and medicine. Aristotle's ideas became a framework for critical debate beginning with the absorption of the Aristotelian texts into the university curriculum in the first half of the 13th century. Contributing to this was the success of medieval theologians in reconciling Aristotelian philosophy with Christian theology.
Within the sciences, medieval philosophers were not afraid of disagreeing with Aristotle on many specific issues, although their disagreements were stated within the language of Aristotelian philosophy. All medieval natural philosophers were Aristotelians, but "Aristotelianism" had become a somewhat broad and flexible concept. With the end of the Middle Ages, the Renaissance rejection of medieval traditions, coupled with an extreme reverence for classical sources, led to a recovery of other ancient philosophical traditions, especially the teachings of Plato. By the 17th century, those who clung dogmatically to Aristotle's teachings were faced with several competing approaches to nature. Emergence of inductive experimental method: The discovery of the Americas at the close of the 15th century showed the scholars of Europe that new discoveries could be found outside of the authoritative works of Aristotle, Pliny, Galen, and other ancient writers. Emergence of inductive experimental method: Galen of Pergamon (129 – c. 200 AD) had studied with four schools in antiquity — Platonists, Aristotelians, Stoics, and Epicureans — and at Alexandria, the center of medicine at the time. In his Methodus Medendi, Galen had synthesized the empirical and dogmatic schools of medicine into his own method, which was preserved by Arab scholars. After the translations from Arabic were critically scrutinized, a backlash occurred and demand arose in Europe for translations of Galen's medical text from the original Greek. Galen's method became very popular in Europe. Thomas Linacre, the teacher of Erasmus, thereupon translated the Methodus Medendi from Greek into Latin for a larger audience in 1519. Limbrick (1988) notes that 630 editions, translations, and commentaries on Galen were produced in Europe in the 16th century, eventually eclipsing Arabic medicine there, and peaking in 1560, at the time of the scientific revolution. By the late 15th century, the physician-scholar Niccolò Leoniceno was finding errors in Pliny's Natural History. As a physician, Leoniceno was concerned about these botanical errors propagating to the materia medica on which medicines were based. To counter this, a botanical garden was established at the Orto botanico di Padova, University of Padua (in use for teaching by 1546), so that medical students might have empirical access to the plants of a pharmacopoeia. Other Renaissance teaching gardens were established, notably by the physician Leonhart Fuchs, one of the founders of botany. Emergence of inductive experimental method: The first printed work devoted to the concept of method is Jodocus Willichius, De methodo omnium artium et disciplinarum informanda opusculum (1550; An Informative Essay on the Method of All Arts and Disciplines). Skepticism as a basis for understanding: In 1562, Outlines of Pyrrhonism by the ancient Pyrrhonist philosopher Sextus Empiricus (c. 160–210 AD) was published in a Latin translation (from Greek), quickly placing the arguments of classical skepticism in the European mainstream. These arguments establish seemingly insurmountable challenges for the possibility of certain knowledge. Descartes' famous "Cogito" argument is an attempt to overcome skepticism and reestablish a foundation for certainty, but other thinkers responded by revising what the search for knowledge, particularly physical knowledge, might be.
Emergence of inductive experimental method: The first of these, the philosopher and physician Francisco Sanches, was led by his medical training at Rome, 1571–73, to search for a true method of knowing (modus sciendi), as nothing clear can be known by the methods of Aristotle and his followers — for example: 1) syllogism fails upon circular reasoning; 2) Aristotle's modal logic was not stated clearly enough for use in medieval times, and remains a research problem to this day. Following the physician Galen's method of medicine, Sanches lists the methods of judgement and experience, which are faulty in the wrong hands, and we are left with the bleak statement That Nothing is Known (1581; in Latin, Quod Nihil Scitur). This challenge was taken up by René Descartes in the next generation (1637), but at the least Sanches warns us that we ought to refrain from the methods, summaries, and commentaries on Aristotle if we seek scientific knowledge. In this he is echoed by Francis Bacon, who was influenced by another prominent exponent of skepticism, Montaigne; Sanches cites the humanist Juan Luis Vives, who sought a better educational system, as well as a statement of human rights as a pathway for improvement of the lot of the poor. Emergence of inductive experimental method: "Sanches develops his scepticism by means of an intellectual critique of Aristotelianism, rather than by an appeal to the history of human stupidity and the variety and contrariety of previous theories." —Popkin 1979, p. 37, as cited in Sanches, Limbrick & Thomson 1988, pp. 24–5. "To work, then; and if you know something, then teach me; I shall be extremely grateful to you. In the meantime, as I prepare to examine Things, I shall raise the question whether anything is known, and if so, how, in the introductory passages of another book, a book in which I will expound, as far as human frailty allows, the method of knowing. Farewell. Emergence of inductive experimental method: WHAT IS TAUGHT HAS NO MORE STRENGTH THAN IT DERIVES FROM HIM WHO IS TAUGHT. Emergence of inductive experimental method: WHAT?" —Francisco Sanches (1581), Quod Nihil Scitur, p. 100. Tycho Brahe: see History of astronomy (Renaissance and Early Modern Europe), Kepler's laws of planetary motion, and History of optics (Renaissance and Early Modern). The first modern science, in which practitioners were prepared to revise or reject long-held beliefs in the light of new evidence, was astronomy, and Tycho Brahe was the first modern astronomer. His sextant exemplified the explicit reduction of geometrical diagrams to practice (real objects with actual lengths and angles). Emergence of inductive experimental method: In 1572, Tycho noticed a completely new star that was brighter than any star or planet. Astonished by the existence of a star that ought not to have been there, and gaining the patronage of King Frederick II of Denmark, Tycho built the Uraniborg observatory at enormous cost. Over a period of fifteen years (1576–91), Tycho and upwards of thirty assistants charted the positions of stars, planets, and other celestial bodies at Uraniborg with unprecedented accuracy. In 1600, Tycho hired Johannes Kepler to assist him in analyzing and publishing his observations.
Kepler later used Tycho's observations of the motion of Mars to deduce the laws of planetary motion, which were later explained in terms of Newton's law of universal gravitation. Besides Tycho's specific role in advancing astronomical knowledge, his single-minded pursuit of ever-more-accurate measurement was enormously influential in creating a modern scientific culture in which theory and evidence were understood to be inseparably linked. Emergence of inductive experimental method: By 1723, standard units of measure had spread to terrestrial mass and length. Emergence of inductive experimental method: Francis Bacon's eliminative induction. "If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties." —Francis Bacon (1605), The Advancement of Learning, Book 1, v, 8. Francis Bacon (1561–1626) entered Trinity College, Cambridge in April 1573, where he applied himself diligently to the several sciences as then taught, and came to the conclusion that the methods employed and the results attained were alike erroneous; he learned to despise the current Aristotelian philosophy. He believed philosophy must be taught its true purpose, and for this purpose a new method must be devised. With this conception in his mind, Bacon left the university. Bacon attempted to describe a rational procedure for establishing causation between phenomena based on induction. Bacon's induction was, however, radically different from that employed by the Aristotelians. As Bacon put it, "[A]nother form of induction must be devised than has hitherto been employed, and it must be used for proving and discovering not first principles (as they are called) only, but also the lesser axioms, and the middle, and indeed all. For the induction which proceeds by simple enumeration is childish." —Novum Organum, section CV. Bacon's method relied on experimental histories to eliminate alternative theories. Bacon explains how his method is applied in his Novum Organum (published 1620). In an example he gives on the examination of the nature of heat, Bacon creates two tables, the first of which he names the "Table of Essence and Presence", enumerating the many various circumstances under which we find heat. In the other table, labelled "Table of Deviation, or of Absence in Proximity", he lists circumstances which bear resemblance to those of the first table except for the absence of heat. From an analysis of what he calls the natures (light-emitting, heavy, colored, etc.) of the items in these lists, we are brought to conclusions about the form nature, or cause, of heat. Those natures which are always present in the first table, but never in the second, are deemed to be the cause of heat (a schematic sketch of this elimination procedure is given below). Emergence of inductive experimental method: The role experimentation played in this process was twofold. The most laborious job of the scientist would be to gather the facts, or 'histories', required to create the tables of presence and absence. Such histories would document a mixture of common knowledge and experimental results. Secondly, experiments of light, or, as we might say, crucial experiments would be needed to resolve any remaining ambiguities over causes. Emergence of inductive experimental method: Bacon showed an uncompromising commitment to experimentation. Despite this, he did not make any great scientific discoveries during his lifetime. This may be because he was not the most able experimenter.
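As flagged above, Bacon's two tables amount, in effect, to an elimination algorithm, and the procedure can be made concrete in a few lines. The sketch below is a loose modern rendering, not Bacon's own notation: the instances and their "natures" are hypothetical boolean features chosen for illustration.

```python
# Minimal sketch of Bacon's eliminative induction, assuming each observed
# instance is recorded as a set of "natures" (features). The instances and
# natures below are hypothetical illustrations, not Bacon's own data.

presence = [            # Table of Essence and Presence: heat is observed
    {"sunlight", "motion", "light-emitting"},
    {"flame", "motion", "light-emitting"},
    {"friction", "motion"},
]
absence = [             # Table of Deviation: similar circumstances, but no heat
    {"moonlight", "light-emitting"},
    {"still-air"},
]

# Candidate causes: natures present in every positive instance...
candidates = set.intersection(*presence)
# ...and never present in any negative instance.
for case in absence:
    candidates -= case

print(candidates)       # {'motion'} — Bacon's own conclusion: heat is a kind of motion
```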
It may also be because hypothesising plays only a small role in Bacon's method compared to modern science. Hypotheses, in Bacon's method, are supposed to emerge during the process of investigation, with the help of mathematics and logic. Bacon gave a substantial but secondary role to mathematics, "which ought only to give definiteness to natural philosophy, not to generate or give it birth" (Novum Organum XCVI). An over-emphasis on axiomatic reasoning had rendered previous non-empirical philosophy impotent, in Bacon's view, as he expressed in his Novum Organum: "XIX. There are and can be only two ways of searching into and discovering truth. The one flies from the senses and particulars to the most general axioms, and from these principles, the truth of which it takes for settled and immoveable, proceeds to judgment and to the discovery of middle axioms. And this way is now in fashion. The other derives axioms from the senses and particulars, rising by a gradual and unbroken ascent, so that it arrives at the most general axioms last of all. This is the true way, but as yet untried." Emergence of inductive experimental method: In Bacon's utopian novel, The New Atlantis, the ultimate role is given to inductive reasoning: "Lastly, we have three that raise the former discoveries by experiments into greater observations, axioms, and aphorisms. These we call interpreters of nature." Descartes. In 1619, René Descartes began writing his first major treatise on proper scientific and philosophical thinking, the unfinished Rules for the Direction of the Mind. His aim was to create a complete science that he hoped would overthrow the Aristotelian system and establish himself as the sole architect of a new system of guiding principles for scientific research. This work was continued and clarified in his 1637 treatise, Discourse on Method, and in his 1641 Meditations. Descartes describes the intriguing and disciplined thought experiments he used to arrive at the idea we instantly associate with him: I think, therefore I am. Emergence of inductive experimental method: From this foundational thought, Descartes finds proof of the existence of a God who, possessing all possible perfections, will not deceive him provided he resolves "[...] never to accept anything for true which I did not clearly know to be such; that is to say, carefully to avoid precipitancy and prejudice, and to comprise nothing more in my judgment than what was presented to my mind so clearly and distinctly as to exclude all ground of methodic doubt." This rule allowed Descartes to progress beyond his own thoughts and judge that there exist extended bodies outside of his own thoughts. Descartes published seven sets of objections to the Meditations from various sources, along with his replies to them. Despite his apparent departure from the Aristotelian system, a number of his critics felt that Descartes had done little more than replace the primary premises of Aristotle with those of his own. Descartes says as much himself in a letter written in 1647 to the translator of the Principles of Philosophy: "a perfect knowledge [...] must necessarily be deduced from first causes [...] we must try to deduce from these principles knowledge of the things which depend on them, that there be nothing in the whole chain of deductions deriving from them that is not perfectly manifest."
Emergence of inductive experimental method: And again, some years earlier, speaking of Galileo's physics in a letter to his friend and critic Mersenne from 1638: "without having considered the first causes of nature, [Galileo] has merely looked for the explanations of a few particular effects, and he has thereby built without foundations." Emergence of inductive experimental method: Whereas Aristotle purported to arrive at his first principles by induction, Descartes believed he could obtain them using reason only. In this sense he was a Platonist, as he believed in innate ideas, as opposed to Aristotle's blank slate (tabula rasa), and stated that the seeds of science are inside us. Unlike Bacon, Descartes successfully applied his own ideas in practice. He made significant contributions to science, in particular in aberration-corrected optics. His work in analytic geometry was a necessary precedent to differential calculus and instrumental in bringing mathematical analysis to bear on scientific matters. Emergence of inductive experimental method: Galileo Galilei. During the period of religious conservatism brought about by the Reformation and Counter-Reformation, Galileo Galilei unveiled his new science of motion. Neither the contents of Galileo's science nor the methods of study he selected were in keeping with Aristotelian teachings. Whereas Aristotle thought that a science should be demonstrated from first principles, Galileo had used experiments as a research tool. Galileo nevertheless presented his treatise in the form of mathematical demonstrations without reference to experimental results. It is important to understand that this in itself was a bold and innovative step in terms of scientific method. The usefulness of mathematics in obtaining scientific results was far from obvious, because mathematics did not lend itself to the primary pursuit of Aristotelian science: the discovery of causes. Emergence of inductive experimental method: Whether it is because Galileo was realistic about the acceptability of presenting experimental results as evidence, or because he himself had doubts about the epistemological status of experimental findings, is not known. Nevertheless, it is not in his Latin treatise on motion that we find reference to experiments, but in his supplementary dialogues written in the Italian vernacular. In these dialogues experimental results are given, although Galileo may have found them inadequate for persuading his audience. Thought experiments showing logical contradictions in Aristotelian thinking, presented in the skilled rhetoric of Galileo's dialogue, were further enticements for the reader. Emergence of inductive experimental method: As an example, in the dramatic dialogue titled Third Day from his Two New Sciences, Galileo has the characters of the dialogue discuss an experiment involving two free-falling objects of differing weight. An outline of the Aristotelian view is offered by the character Simplicio. For this experiment he expects that "a body which is ten times as heavy as another will move ten times as rapidly as the other". The character Salviati, representing Galileo's persona in the dialogue, replies by voicing his doubt that Aristotle ever attempted the experiment. Salviati then asks the two other characters of the dialogue to consider a thought experiment whereby two stones of differing weights are tied together before being released.
Following Aristotle, Salviati reasons that "the more rapid one will be partly retarded by the slower, and the slower will be somewhat hastened by the swifter". But this leads to a contradiction: since the two stones together make a heavier object than either stone apart, the heavier object should in fact fall with a speed greater than that of either stone. From this contradiction, Salviati concludes that Aristotle must in fact be wrong and the objects will fall at the same speed regardless of their weight, a conclusion that is borne out by experiment. Emergence of inductive experimental method: In his 1991 survey of developments in the modern accumulation of knowledge such as this, Charles Van Doren considers that the Copernican Revolution really is the Galilean-Cartesian (René Descartes), or simply the Galilean, revolution, on account of the courage and depth of change brought about by the work of Galileo. Emergence of inductive experimental method: Isaac Newton. Both Bacon and Descartes wanted to provide a firm foundation for scientific thought that avoided the deceptions of the mind and senses. Bacon envisaged that foundation as essentially empirical, whereas Descartes provided a metaphysical foundation for knowledge. If there were any doubts about the direction in which scientific method would develop, they were set to rest by the success of Isaac Newton. Implicitly rejecting Descartes' emphasis on rationalism in favor of Bacon's empirical approach, he outlines his four "rules of reasoning" in the Principia: We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Emergence of inductive experimental method: Therefore to the same natural effects we must, as far as possible, assign the same causes. The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever. In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, until such time as other phænomena occur, by which they may either be made more accurate, or liable to exceptions. But Newton also left an admonition about a theory of everything: "To explain all nature is too difficult a task for any one man or even for any one age. 'Tis much better to do a little with certainty, and leave the rest for others that come after you, than to explain all things." Emergence of inductive experimental method: Newton's work became a model that other sciences sought to emulate, and his inductive approach formed the basis for much of natural philosophy through the 18th and early 19th centuries. Some methods of reasoning were later systematized by Mill's Methods (or Mill's canon), five explicit statements of what can be discarded and what can be kept while building a hypothesis. George Boole and William Stanley Jevons also wrote on the principles of reasoning. Integrating deductive and inductive method: Attempts to systematize a scientific method were confronted in the mid-18th century by the problem of induction, a positivist logic formulation which, in short, asserts that nothing can be known with certainty except what is actually observed.
David Hume took empiricism to the skeptical extreme; among his positions was that there is no logical necessity that the future should resemble the past, and thus we are unable to justify inductive reasoning itself by appealing to its past success. Hume's arguments came on the heels of many centuries of excessive speculation not grounded in empirical observation and testing. Many of Hume's radically skeptical arguments were argued against, but not resolutely refuted, by Immanuel Kant's Critique of Pure Reason in the late 18th century. Hume's arguments continued to exert a strong influence on the consciousness of the educated classes for the better part of the 19th century, when debate came to focus on whether or not the inductive method was valid. Integrating deductive and inductive method: Hans Christian Ørsted (1777–1851; Ørsted is the Danish spelling, Oersted in other languages) was heavily influenced by Kant, in particular Kant's Metaphysische Anfangsgründe der Naturwissenschaft (Metaphysical Foundations of Natural Science). The following passages on Ørsted encapsulate our current, common view of scientific method. His work appeared in Danish, most accessibly in public lectures, which he translated into German, French, English, and occasionally Latin. But some of his views go beyond Kant: "In order to achieve completeness in our knowledge of nature, we must start from two extremes, from experience and from the intellect itself. ... The former method must conclude with natural laws, which it has abstracted from experience, while the latter must begin with principles, and gradually, as it develops more and more, it becomes ever more detailed. Of course, I speak here about the method as manifested in the process of the human intellect itself, not as found in textbooks, where the laws of nature which have been abstracted from the consequent experiences are placed first because they are required to explain the experiences. When the empiricist in his regression towards general laws of nature meets the metaphysician in his progression, science will reach its perfection." Ørsted's "First Introduction to General Physics" (1811) exemplified the steps of observation, hypothesis, deduction and experiment. In 1805, based on his researches on electromagnetism, Ørsted came to believe that electricity is propagated by undulatory action (i.e., fluctuation). By 1820, he felt confident enough in his beliefs that he resolved to demonstrate them in a public lecture, and in fact observed a small magnetic effect from a galvanic circuit (i.e., voltaic circuit), without rehearsal. In 1831, John Herschel (1792–1871) published A Preliminary Discourse on the Study of Natural Philosophy, setting out the principles of science. Measuring and comparing observations was to be used to find generalisations in "empirical laws", which described regularities in phenomena; natural philosophers were then to work towards the higher aim of finding a universal "law of nature" which explained the causes and effects producing such regularities. An explanatory hypothesis was to be found by evaluating true causes (Newton's "verae causae") derived from experience; for example, evidence of past climate change could be due to changes in the shape of continents, or to changes in Earth's orbit. Possible causes could be inferred by analogy to known causes of similar phenomena.
It was essential to evaluate the importance of a hypothesis; "our next step in the verification of an induction must, therefore, consist in extending its application to cases not originally contemplated; in studiously varying the circumstances under which our causes act, with a view to ascertain whether their effect is general; and in pushing the application of our laws to extreme cases." William Whewell (1794–1866) regarded his History of the Inductive Sciences, from the Earliest to the Present Time (1837) as an introduction to the Philosophy of the Inductive Sciences (1840), which analyzes the method exemplified in the formation of ideas. Whewell attempts to follow Bacon's plan for discovery of an effectual art of discovery. He named the hypothetico-deductive method (which Encyclopædia Britannica credits to Newton); Whewell also coined the term scientist. Whewell examines ideas and attempts to construct science by uniting ideas to facts. He analyses induction into three steps: (1) the selection of the fundamental idea, such as space, number, cause, or likeness; (2) a more special modification of those ideas, such as a circle, a uniform force, etc.; (3) the determination of magnitudes. Upon these follow special techniques applicable to quantity, such as the method of least squares, curves, and means, and special methods depending on resemblance, such as pattern matching, the method of gradation, and the method of natural classification (such as cladistics). Integrating deductive and inductive method: But no art of discovery, such as Bacon anticipated, follows, for "invention, sagacity, genius" are needed at every step. Whewell's sophisticated concept of science had similarities to that shown by Herschel, and he considered that a good hypothesis should connect fields that had previously been thought unrelated, a process he called consilience. However, where Herschel held that the origin of new biological species would be found in a natural rather than a miraculous process, Whewell opposed this and considered that no natural cause had been shown for adaptation, so an unknown divine cause was appropriate. John Stuart Mill (1806–1873) was stimulated to publish A System of Logic (1843) upon reading Whewell's History of the Inductive Sciences. Mill may be regarded as the final exponent of the empirical school of philosophy begun by John Locke, whose fundamental characteristic is the duty incumbent upon all thinkers to investigate for themselves rather than to accept the authority of others. Knowledge must be based on experience. In the mid-19th century Claude Bernard was also influential, especially in bringing the scientific method to medicine. In his discourse on scientific method, An Introduction to the Study of Experimental Medicine (1865), he described what makes a scientific theory good and what makes a scientist a true discoverer. Unlike many scientific writers of his time, Bernard wrote about his own experiments and thoughts, and used the first person. William Stanley Jevons' The Principles of Science: a treatise on logic and scientific method (1873, 1877), Chapter XII "The Inductive or Inverse Method", Summary of the Theory of Inductive Inference, states: "Thus there are but three steps in the process of induction: (1) Framing some hypothesis as to the character of the general law. (2) Deducing some consequences of that law.
(3) Observing whether the consequences agree with the particular facts under consideration." Jevons then frames those steps in terms of probability, which he then applied to economic laws. Ernest Nagel notes that Jevons and Whewell were not the first writers to argue for the centrality of the hypothetico-deductive method in the logic of science. Integrating deductive and inductive method: Charles Sanders Peirce In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the further development of scientific method generally. Peirce's work quickly accelerated progress on several fronts. Firstly, speaking in broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both Deduction and Induction. He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume a century before). Secondly, and of more direct importance to scientific method, Peirce put forth the basic schema for hypothesis-testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that play a role in scientific inquiry today, the processes that are currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself – indeed this was his primary specialty. Integrating deductive and inductive method: Charles S. Peirce was also a pioneer in statistics. Peirce held that science achieves statistical probabilities, not certainties, and that chance, a veering from law, is very real. He assigned probability to an argument's conclusion rather than to a proposition, event, etc., as such. Most of his statistical writings promote the frequency interpretation of probability (objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability when such models are not based on objective randomization. Though Peirce was largely a frequentist, his possible world semantics introduced the "propensity" theory of probability. Peirce (sometimes with Jastrow) investigated the probability judgments of experimental subjects, pioneering decision analysis. Integrating deductive and inductive method: Peirce was one of the founders of statistics. He formulated modern statistics in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). With a repeated measures design, he introduced blinded, controlled randomized experiments (before Fisher). He invented an optimal design for experiments on gravity, in which he "corrected the means". He used logistic regression, correlation, and smoothing, and improved the treatment of outliers. He introduced the terms "confidence" and "likelihood" (before Neyman and Fisher). (See the historical books of Stephen Stigler.) Many of Peirce's ideas were later popularized and developed by Ronald A. Fisher, Jerzy Neyman, Frank P. Ramsey, Bruno de Finetti, and Karl Popper.
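Jevons's "inverse method" of weighing hypotheses by the probability they confer on what is observed, and Peirce's insistence that science yields probabilities rather than certainties, can both be illustrated with a toy Bayesian update. A minimal sketch with invented numbers (not from the text):

```python
# Step 1: frame hypotheses, each assigning a probability to an outcome.
hypotheses = {"fair coin": 0.5, "biased coin": 0.8}   # P(heads) under each
posterior = {h: 0.5 for h in hypotheses}              # no initial preference

# Step 3: the record of experiment (step 2, deduction, is the likelihood below).
observations = ["H", "H", "T", "H", "H", "H"]

for outcome in observations:
    for h, p_heads in hypotheses.items():
        likelihood = p_heads if outcome == "H" else 1 - p_heads
        posterior[h] *= likelihood
    total = sum(posterior.values())
    posterior = {h: w / total for h, w in posterior.items()}  # renormalise

print(posterior)  # beliefs revised by evidence -- never raised to certainty
```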
Integrating deductive and inductive method: Modern perspectives Karl Popper (1902–1994) is generally credited with providing major improvements in the understanding of the scientific method in the mid-to-late 20th century. In 1934 Popper published The Logic of Scientific Discovery, which repudiated the by then traditional observationalist-inductivist account of the scientific method. He advocated empirical falsifiability as the criterion for distinguishing scientific work from non-science. According to Popper, scientific theory should make predictions (preferably predictions not made by a competing theory) which can be tested, and the theory should be rejected if these predictions are shown not to be correct. Following Peirce and others, he argued that science would best progress using deductive reasoning as its primary emphasis, a position known as critical rationalism. His astute formulations of logical procedure helped to rein in excessive inductive speculation and helped to strengthen the conceptual foundations for today's peer review procedures. Ludwik Fleck, a Polish epidemiologist contemporary with Popper, influenced Kuhn and others with his Genesis and Development of a Scientific Fact (in German 1935, English 1979). Before Fleck, a scientific fact was thought to spring fully formed (in the view of Max Jammer, for example), whereas a gestation period is now recognized as essential before a phenomenon is accepted as fact. Critics of Popper, chiefly Thomas Kuhn, Paul Feyerabend and Imre Lakatos, rejected the idea that there exists a single method that applies to all science and could account for its progress. In 1962 Kuhn published the influential book The Structure of Scientific Revolutions, which suggested that scientists worked within a series of paradigms, and argued there was little evidence of scientists actually following a falsificationist methodology. Kuhn quoted Max Planck, who had said in his autobiography, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." A well-quoted source on the subject of the scientific method and statistical models, George E. P. Box (1919–2013), wrote: "Since all models are wrong the scientist cannot obtain a correct one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist, so over-elaboration and over-parameterization is often the mark of mediocrity" and "Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad." These debates clearly show that there is no universal agreement as to what constitutes the "scientific method". There remain, nonetheless, certain core principles that are the foundation of scientific inquiry today. Mention of the topic: In Quod Nihil Scitur (1581), Francisco Sanches refers to another book title, De modo sciendi (on the method of knowing). This work appeared in Spanish as Método universal de las ciencias. In 1833 Robert and William Chambers published their 'Chambers's information for the people'.
Under the rubric 'Logic' we find a description of investigation that is familiar as scientific method: Investigation, or the art of inquiring into the nature of causes and their operation, is a leading characteristic of reason [...] Investigation implies three things – Observation, Hypothesis, and Experiment [...] The first step in the process, it will be perceived, is to observe... Mention of the topic: In 1885, the words "scientific method" appeared together with a description of the method in Francis Ellingwood Abbot's 'Scientific Theism': Now all the established truths which are formulated in the multifarious propositions of science have been won by the use of Scientific Method. This method consists essentially in three distinct steps: (1) observation and experiment, (2) hypothesis, (3) verification by fresh observation and experiment. Mention of the topic: The Eleventh Edition of Encyclopædia Britannica did not include an article on scientific method; the Thirteenth Edition listed scientific management, but not method. By the Fifteenth Edition, a 1-inch article in the Micropædia of Britannica was part of the 1975 printing, while a fuller treatment (extending across multiple articles, and accessible mostly via the index volumes of Britannica) was available in later printings. Current issues: In the past few centuries, some statistical methods have been developed for reasoning in the face of uncertainty, as an outgrowth of methods for eliminating error. This was an echo of the program of Francis Bacon's Novum Organum of 1620. Bayesian inference acknowledges one's ability to alter one's beliefs in the face of evidence. This has been called belief revision, or defeasible reasoning: the models in play during the phases of scientific method can be reviewed, revisited and revised, in the light of further evidence. This arose from the work of Frank P. Ramsey (1903–1930), of John Maynard Keynes (1883–1946), and earlier, of William Stanley Jevons (1835–1882) in economics. Science and pseudoscience: The question of how science operates, and therefore how to distinguish genuine science from pseudoscience, has importance well beyond scientific circles or the academic community. In the judicial system and in public policy controversies, for example, a study's deviation from accepted scientific practice is grounds for rejecting it as junk science or pseudoscience. However, the high public standing of science means that pseudoscience is widespread. An advertisement in which an actor wears a white coat and product ingredients are given Greek- or Latin-sounding names is intended to give the impression of scientific endorsement. Richard Feynman has likened pseudoscience to cargo cults, in which many of the external forms are followed but the underlying basis is missing: that is, fringe or alternative theories often present themselves with a pseudoscientific appearance to gain acceptance. Sources: Asmis, Elizabeth (January 1984), Epicurus' Scientific Method, vol. 42, Cornell University Press, p. 386, ISBN 978-0-8014-6682-3, JSTOR 10.7591/j.cttq45z9 Debus, Allen G. (1978), Man and Nature in the Renaissance, Cambridge: Cambridge University Press, ISBN 0-521-29328-6 Morelon, Régis; Rashed, Roshdi, eds. (1996), Encyclopedia of the History of Arabic Science, vol. 3, Routledge, ISBN 978-0415124102 Popkin, Richard H. (1979), The History of Scepticism from Erasmus to Spinoza, University of California Press, ISBN 0-520-03876-2 Popkin, Richard H.
(2003), The History of Scepticism from Savonarola to Bayle, Oxford University Press, ISBN 0-19-510768-3. Third enlarged edition. Sources: Sanches, Francisco (1636), Opera medica. His iuncti sunt tractatus quidam philosophici non insubtiles, Toulosae tectosagum, as cited by Sanches, Limbrick & Thomson 1988 Sanches, Francisco (1649), Tractatus philosophici. Quod Nihil Scitur. De divinatione per somnum, ad Aristotelem. In lib. Aristotelis Physionomicon commentarius. De longitudine et brevitate vitae., Roterodami: ex officina Arnoldi Leers, as cited by Sanches, Limbrick & Thomson 1988 Sanches, Francisco; Limbrick, Elaine (introduction, notes, and bibliography); Thomson, Douglas F.S. (Latin text established, annotated, and translated) (1988), That Nothing is Known, Cambridge: Cambridge University Press, ISBN 0-521-35077-8. Critical edition of Sanches' Quod Nihil Scitur. Latin: (1581, 1618, 1649, 1665); Portuguese: (1948, 1955, 1957); Spanish: (1944, 1972); French: (1976, 1984); German: (2007) Vives, Ioannes Lodovicus (1531), De Disciplinis libri XX, Antwerpiae: excudebat M. Hillenius. English translation: On Discipline. Sources: Part 1: De causis corruptarum artium; Part 2: De tradendis disciplinis; Part 3: De artibus
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chemokine receptor** Chemokine receptor: Chemokine receptors are cytokine receptors found on the surface of certain cells that interact with a type of cytokine called a chemokine. There have been 20 distinct chemokine receptors discovered in humans. Each has a rhodopsin-like 7-transmembrane (7TM) structure and couples to G-protein for signal transduction within a cell, making them members of a large protein family of G protein-coupled receptors. Following interaction with their specific chemokine ligands, chemokine receptors trigger a flux in intracellular calcium (Ca2+) ions (calcium signaling). This causes cell responses, including the onset of a process known as chemotaxis that traffics the cell to a desired location within the organism. Chemokine receptors are divided into different families (CXC chemokine receptors, CC chemokine receptors, CX3C chemokine receptors and XC chemokine receptors) that correspond to the 4 distinct subfamilies of chemokines they bind. The four families of chemokine receptors differ in the spacing of cysteine residues near the N-terminus of the receptor. Structural characteristics: Chemokine receptors are G protein-coupled receptors containing 7 transmembrane domains that are found predominantly on the surface of leukocytes, making them members of the rhodopsin-like receptor family. Approximately 19 different chemokine receptors have been characterized to date, which share many common structural features; they are composed of about 350 amino acids that are divided into a short and acidic N-terminal end, seven helical transmembrane domains with three intracellular and three extracellular hydrophilic loops, and an intracellular C-terminus containing serine and threonine residues that act as phosphorylation sites during receptor regulation. The first two extracellular loops of chemokine receptors are linked together by disulfide bonding between two conserved cysteine residues. The N-terminal end of a chemokine receptor binds to chemokines and is important for ligand specificity. G-proteins couple to the C-terminal end, which is important for receptor signaling following ligand binding. Although chemokine receptors share high amino acid identity in their primary sequences, they typically bind a limited number of ligands. Chemokine receptors are redundant in their function, as more than one chemokine is able to bind to a single receptor. Signal transduction: Intracellular signaling by chemokine receptors is dependent on neighbouring G-proteins. G-proteins exist as heterotrimers composed of three distinct subunits. When the molecule GDP is bound to the G-protein subunit, the G-protein is in an inactive state. Following binding of the chemokine ligand, chemokine receptors associate with G-proteins, allowing the exchange of GDP for another molecule called GTP and the dissociation of the different G-protein subunits. The subunit called Gα activates an enzyme known as phospholipase C (PLC) that is associated with the cell membrane. PLC cleaves phosphatidylinositol (4,5)-bisphosphate (PIP2) to form two second messenger molecules called inositol trisphosphate (IP3) and diacylglycerol (DAG); DAG activates another enzyme called protein kinase C (PKC), and IP3 triggers the release of calcium from intracellular stores.
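The sequence just described can be summarized as a simple ordered data structure (a sketch only: real signalling is concurrent and quantitative, not a linear script):

```python
# Each step of the GPCR cascade described above, as (cause, effect) pairs.
CASCADE = [
    ("chemokine binds receptor",  "receptor associates with G-protein"),
    ("GDP exchanged for GTP",     "G-protein subunits dissociate"),
    ("G-alpha activates PLC",     "PLC cleaves PIP2 into IP3 and DAG"),
    ("IP3 releases stored Ca2+",  "calcium signaling begins"),
    ("DAG activates PKC",         "downstream kinases fire"),
]

for i, (cause, effect) in enumerate(CASCADE, start=1):
    print(f"step {i}: {cause} -> {effect}")
```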
These events promote many signaling cascades, effecting a cellular response. For example, when CXCL8 (IL-8) binds to its specific receptors, CXCR1 or CXCR2, a rise in intracellular calcium activates the enzyme phospholipase D (PLD), which goes on to initiate an intracellular signaling cascade called the MAP kinase pathway. At the same time, the G-protein subunit Gα directly activates an enzyme called protein tyrosine kinase (PTK), which phosphorylates serine and threonine residues in the tail of the chemokine receptor, causing its desensitisation or inactivation. The initiated MAP kinase pathway activates specific cellular mechanisms involved in chemotaxis, degranulation, release of superoxide anions, and changes in the avidity of cell adhesion molecules called integrins. Chemokines and their receptors play a crucial role in cancer metastasis, as they are involved in extravasation, migration, micrometastasis, and angiogenesis. This role of chemokines is strikingly similar to their normal function of localizing leukocytes to an inflammatory site. Selective pressures on chemokine receptor 5 (CCR5): The human immunodeficiency virus (HIV) uses the CCR5 receptor to target and infect host T-cells in humans. It weakens the immune system by destroying the CD4+ T-helper cells, making the body more susceptible to other infections. CCR5-Δ32 is an allelic variant of the CCR5 gene with a 32 base pair deletion that results in a truncated receptor. People homozygous for this allele are resistant to HIV infection, as HIV cannot bind to the non-functional CCR5 receptor. An unusually high frequency of this allele is found in European Caucasian populations, with an observed cline towards the north. Most researchers have attributed the current frequency of this allele to two major epidemics of human history: plague and smallpox. Although this allele originated much earlier, its frequency rose dramatically about 700 years ago. This led scientists to believe that bubonic plague acted as a selective pressure that drove CCR5-Δ32 to high frequency. It was speculated that the allele may have provided protection against Yersinia pestis, the causative agent of plague. Many in vivo mouse studies have refuted this claim by showing no protective effect of the CCR5-Δ32 allele in mice infected with Y. pestis. Another theory that has gained more scientific support links the current frequency of the allele to the smallpox epidemics. Although plague killed a greater number of people in a given time period, smallpox has collectively taken more lives. Since smallpox dates back some 2,000 years, it would have had a longer period over which to exert selective pressure on an allele of earlier origin. Population genetic models that analyzed the geographic and temporal distribution of both plague and smallpox provide much stronger evidence for smallpox as the driving factor of CCR5-Δ32. Smallpox has a higher mortality rate than plague, and it mostly affects children under the age of ten. From an evolutionary viewpoint, this results in a greater loss of reproductive potential from a population, which may explain the increased selective pressure by smallpox. Smallpox was more prevalent in regions where higher CCR5-Δ32 frequencies are seen. Myxoma and variola major belong to the same family of viruses, and myxoma has been shown to use the CCR5 receptor to enter its host. Moreover, Yersinia is a bacterium that is biologically distinct from viruses and is unlikely to have a similar mechanism of transmission.
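The selection argument can be made quantitative with a standard one-locus model. The sketch below uses assumed, illustrative parameters (the text gives none); note that with a modest fitness cost the allele rises only slowly, which is why sustained or unusually strong epidemic selection has to be invoked to explain the observed European frequencies:

```python
# Deterministic selection on one locus: p is the CCR5-Δ32 allele frequency.
# Assumed fitnesses: Δ32/Δ32 = 1, Δ32/+ = 1 - s/2 (additive), +/+ = 1 - s.

def next_generation(p: float, s: float) -> float:
    q = 1.0 - p
    w_dd, w_dq, w_qq = 1.0, 1.0 - s / 2, 1.0 - s
    mean_w = p * p * w_dd + 2 * p * q * w_dq + q * q * w_qq
    return (p * p * w_dd + p * q * w_dq) / mean_w

p, s = 0.005, 0.10   # assumed starting frequency and per-generation cost
for _ in range(28):  # roughly 700 years at ~25 years per generation
    p = next_generation(p, s)
print(f"frequency after 28 generations: {p:.3f}")  # ~0.02: still well below ~0.10
```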
Recent evidence provides strong support for smallpox as the selective agent for CCR5-Δ32. Families:
CXC chemokine receptors (six members)
CC chemokine receptors (ten/eleven members)
C chemokine receptors (one member, XCR1)
CX3C chemokine receptors (one member, CX3CR1)
Fifty chemokines have been discovered so far, and most bind to receptors of the CXC and CC families. Two types of chemokines that bind to these receptors are inflammatory chemokines and homeostatic chemokines. Inflammatory chemokines are expressed upon leukocyte activation, whereas homeostatic chemokines show continual expression.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chileatole** Chileatole: Chileatole is a Mexican cuisine dish. It is a type of thick soup made of corn masa or corn kernels, which is cooked with corn chunks, epazote, salt, and a sauce made of chili peppers and pumpkin leaves. It is served hot.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DnaQ** DnaQ: dnaQ is the gene encoding the ε subunit of DNA polymerase III in Escherichia coli. The ε subunit is one of three core proteins in the DNA polymerase complex. It functions as a 3’→5’ DNA-directed proofreading exonuclease that removes incorrectly incorporated bases during replication. dnaQ may also be referred to as mutD. Biological role: Missense mutations in the dnaQ gene lead to the induction of the SOS DNA repair mechanism. Mutating the essential amino acid in the catalytic center of the ε subunit leads to complete loss of function. Overexpression of the ε subunit decreases the incidence of mutations upon exposure to UV, demonstrating that the ε subunit has an essential function in DNA editing and in preventing the initiation of SOS DNA repair. The ε subunit has also been shown to affect the growth rate of E. coli: silencing of the dnaQ gene is correlated with significantly reduced growth. Interactions: The ε subunit is stabilized by the θ subunit within the complete polymerase complex. The gene encodes two functional domains: the N-terminus of the gene product binds the θ subunit and carries out the exonuclease function, and the C-terminus binds the α subunit responsible for polymerase activity. A Q-linker peptide of 22 residues has been identified that links the α subunit to the ε subunit, conferring flexibility that sets the α:ε complex apart from other, more restricted multi-domain proofreading polymerases. The missense suppressor glycine tRNA encoded by the mutA gene is correlated with a significantly increased mutation rate in cells that express the gene. The uncharged MutA tRNA possesses complementarity to a region in the 5' end of the dnaQ mRNA. This allows it to act as an antisense mRNA that directs the degradation of the dnaQ transcript and thus a lower abundance of the subunit and an increased frequency of mutation. More recently, it was suggested that the tRNA directs replacement of essential glutamate residues with glycine, leading to aberrant ε subunits and resulting in an increase in mutations. Studies with T4 bacteriophage and E. coli with defective dnaQ genes give evidence that the mutA tRNA may not have any effect on the transcription of the dnaQ gene but may affect the translation of the gene product. Related sequences: Sequences have been found in other organisms that encode gene products with a similar function to dnaQ: In Mycobacterium tuberculosis, the gene dnaE1 encodes a polymerase and histidinol-phosphatase (PHP) domain that carries out the 3’→5’ exonuclease and proofreading function. TREX1, the major 3'→5' exonuclease in humans, was initially called DNase III because it showed sequence homology with dnaQ in E. coli and with eukaryotic DNA polymerase epsilon, and possessed biochemical characteristics associated with DNA proofreading capability. It is responsible for metabolizing both single-stranded DNA (ssDNA) and double-stranded DNA (dsDNA) with mismatched 3' ends, including single-stranded DNA derived from endogenous retroelements.
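As a toy illustration of what a 3’→5’ proofreading exonuclease contributes (a sketch; the rates below are invented, not measured values for Pol III):

```python
import random

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
ERROR_RATE = 0.01            # invented chance of inserting a wrong base
PROOFREAD_EFFICIENCY = 0.99  # invented chance the ε-like activity excises it

def replicate(template: str, proofread: bool) -> str:
    strand = []
    for base in template:
        correct = COMPLEMENT[base]
        # polymerase step: usually correct, occasionally a mismatch
        new = correct if random.random() > ERROR_RATE else random.choice("ACGT".replace(correct, ""))
        # proofreading step: excise the mismatched 3' base and re-synthesize
        if proofread and new != correct and random.random() < PROOFREAD_EFFICIENCY:
            new = correct
        strand.append(new)
    return "".join(strand)

def errors(template: str, strand: str) -> int:
    return sum(s != COMPLEMENT[t] for t, s in zip(template, strand))

template = "".join(random.choice("ACGT") for _ in range(100_000))
print("errors without proofreading:", errors(template, replicate(template, False)))
print("errors with proofreading:   ", errors(template, replicate(template, True)))
```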
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fire** Fire: Fire is the rapid oxidation of a material (the fuel) in the exothermic chemical process of combustion, releasing heat, light, and various reaction products. Fire: At a certain point in the combustion reaction, called the ignition point, flames are produced. The flame is the visible portion of the fire. Flames consist primarily of carbon dioxide, water vapor, oxygen and nitrogen. If hot enough, the gases may become ionized to produce plasma. Depending on the substances alight and any impurities present, the color of the flame and the fire's intensity will differ. Fire, in its most common form, has the potential to result in conflagration, which can lead to physical damage through burning. Fire is a significant process that influences ecological systems worldwide. The positive effects of fire include stimulating growth and maintaining various ecological systems. Fire: Its negative effects include hazard to life and property, atmospheric pollution, and water contamination. When fire removes protective vegetation, heavy rainfall can contribute to increased soil erosion by water. Additionally, the burning of vegetation releases nitrogen into the atmosphere, unlike elements such as potassium and phosphorus which remain in the ash and are quickly recycled into the soil. This loss of nitrogen caused by a fire produces a long-term reduction in the fertility of the soil, which can be recovered as atmospheric nitrogen is fixed and converted to ammonia by natural phenomena such as lightning or by leguminous plants such as clover, peas, and green beans. Fire: Fire is one of the four classical elements and has been used by humans in rituals, in agriculture for clearing land, for cooking, generating heat and light, for signaling, propulsion purposes, smelting, forging, incineration of waste, cremation, and as a weapon or mode of destruction. Etymology: The word "fire" originated from Old English fyr 'fire, a fire', which can be traced back to the Germanic root *fūr-, which itself comes from the Proto-Indo-European *perjos, from the root *paewr- 'fire'. The current spelling of "fire" has been in use since as early as 1200, but it was not until around 1600 that it completely replaced the Middle English term "fier" (which is still preserved in the word "fiery"). Prevention and protection systems: Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns. Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions. Fire fighting services are provided in most developed areas to extinguish or contain uncontrolled fires. Trained firefighters use fire apparatus and water supply resources such as water mains and fire hydrants, or they might use class A and class B foam, depending on what is feeding the fire. Prevention and protection systems: Fire prevention is intended to reduce sources of ignition. Fire prevention also includes education to teach people how to avoid causing fires. Buildings, especially schools and tall buildings, often conduct fire drills to inform and prepare citizens on how to react to a building fire. Purposely starting destructive fires constitutes arson and is a crime in most jurisdictions. Model building codes require passive fire protection and active fire protection systems to minimize damage resulting from a fire. The most common form of active fire protection is fire sprinklers.
To maximize passive fire protection of buildings, building materials and furnishings in most developed countries are tested for fire-resistance, combustibility and flammability. Upholstery, carpeting and plastics used in vehicles and vessels are also tested. Prevention and protection systems: Where fire prevention and fire protection have failed to prevent damage, fire insurance can mitigate the financial impact. Physical properties: Chemistry Fire is a chemical process in which a fuel and an oxidizing agent react, yielding carbon dioxide and water. This process, known as a combustion reaction, does not proceed directly and involves intermediates. Although the oxidizing agent is typically oxygen, other compounds are able to fulfill the role. For instance, chlorine trifluoride is able to ignite sand. Fires start when a flammable or a combustible material, in combination with a sufficient quantity of an oxidizer such as oxygen gas or another oxygen-rich compound (though non-oxygen oxidizers exist), is exposed to a source of heat or an ambient temperature above the flash point for the fuel/oxidizer mix, and is able to sustain a rate of rapid oxidation that produces a chain reaction. This is commonly called the fire tetrahedron. Fire cannot exist without all of these elements in place and in the right proportions. For example, a flammable liquid will start burning only if the fuel and oxygen are in the right proportions. Some fuel-oxygen mixes may require a catalyst, a substance that is not itself consumed in any chemical reaction during combustion, but which enables the reactants to combust more readily. Physical properties: Once ignited, a chain reaction must take place whereby fires can sustain their own heat by the further release of heat energy in the process of combustion and may propagate, provided there is a continuous supply of an oxidizer and fuel. Physical properties: If the oxidizer is oxygen from the surrounding air, the presence of a force of gravity, or of some similar force caused by acceleration, is necessary to produce convection, which removes combustion products and brings a supply of oxygen to the fire. Without gravity, a fire rapidly surrounds itself with its own combustion products and non-oxidizing gases from the air, which exclude oxygen and extinguish the fire. Because of this, the risk of fire in a spacecraft is small when it is coasting in inertial flight. This does not apply if oxygen is supplied to the fire by some process other than thermal convection. Physical properties: Fire can be extinguished by removing any one of the elements of the fire tetrahedron. Consider a natural gas flame, such as from a stove-top burner.
The fire can be extinguished by any of the following: turning off the gas supply, which removes the fuel source; covering the flame completely, which smothers the flame as the combustion both uses the available oxidizer (the oxygen in the air) and displaces it from the area around the flame with CO2; application of an inert gas such as carbon dioxide, smothering the flame by displacing the available oxidizer; application of water, which removes heat from the fire faster than the fire can produce it (similarly, blowing hard on a flame will displace the heat of the currently burning gas from its fuel source, to the same end); or application of a retardant chemical such as Halon (largely banned in some countries as of 2023) to the flame, which retards the chemical reaction itself until the rate of combustion is too slow to maintain the chain reaction. In contrast, fire is intensified by increasing the overall rate of combustion. Methods to do this include balancing the input of fuel and oxidizer to stoichiometric proportions, increasing fuel and oxidizer input in this balanced mix, increasing the ambient temperature so the fire's own heat is better able to sustain combustion, or providing a catalyst, a non-reactant medium in which the fuel and oxidizer can more readily react. Physical properties: Flame A flame is a mixture of reacting gases and solids emitting visible, infrared, and sometimes ultraviolet light, the frequency spectrum of which depends on the chemical composition of the burning material and intermediate reaction products. In many cases, such as the burning of organic matter, for example wood, or the incomplete combustion of gas, incandescent solid particles called soot produce the familiar red-orange glow of "fire". This light has a continuous spectrum. Complete combustion of gas has a dim blue color due to the emission of single-wavelength radiation from various electron transitions in the excited molecules formed in the flame. Usually oxygen is involved, but hydrogen burning in chlorine also produces a flame, producing hydrogen chloride (HCl). Other possible combinations producing flames, amongst many, are fluorine and hydrogen, and hydrazine and nitrogen tetroxide. Hydrogen and hydrazine/UDMH flames are similarly pale blue, while burning boron and its compounds, evaluated in the mid-20th century as a high-energy fuel for jet and rocket engines, emits an intense green flame, leading to its informal nickname of "Green Dragon". Physical properties: The glow of a flame is complex. Black-body radiation is emitted from soot, gas, and fuel particles, though the soot particles are too small to behave like perfect blackbodies. There is also photon emission by de-excited atoms and molecules in the gases. Much of the radiation is emitted in the visible and infrared bands. The color depends on temperature for the black-body radiation, and on chemical makeup for the emission spectra. Physical properties: The common distribution of a flame under normal gravity conditions depends on convection, as soot tends to rise to the top of the flame, as in a candle, making it yellow. In microgravity or zero gravity, such as an environment in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become more blue and more efficient (although it may go out if not moved steadily, as the CO2 from combustion does not disperse as readily in microgravity, and tends to smother the flame).
There are several possible explanations for this difference, of which the most likely is that the temperature is sufficiently evenly distributed that soot is not formed and complete combustion occurs. Experiments by NASA reveal that diffusion flames in microgravity allow more soot to be completely oxidized after it is produced than diffusion flames on Earth, because of a series of mechanisms that behave differently in microgravity when compared to normal gravity conditions. These discoveries have potential applications in applied science and industry, especially concerning fuel efficiency. Physical properties: Typical adiabatic temperatures The adiabatic flame temperature of a given fuel and oxidizer pair is that at which the gases achieve stable combustion:
Oxy–dicyanoacetylene: 4,990 °C (9,000 °F)
Oxy–acetylene: 3,480 °C (6,300 °F)
Oxyhydrogen: 2,800 °C (5,100 °F)
Air–acetylene: 2,534 °C (4,600 °F)
Blowtorch (air–MAPP gas): 2,200 °C (4,000 °F)
Bunsen burner (air–natural gas): 1,300 to 1,600 °C (2,400 to 2,900 °F)
Candle (air–paraffin): 1,000 °C (1,800 °F)
Fire science: Fire science is a branch of physical science which includes fire behavior, dynamics, and combustion. Applications of fire science include fire protection, fire investigation, and wildfire management. Fire ecology: Every natural ecosystem on land has its own fire regime, and the organisms in those ecosystems are adapted to or dependent upon that fire regime. Fire creates a mosaic of different habitat patches, each at a different stage of succession. Different species of plants, animals, and microbes specialize in exploiting a particular stage, and by creating these different types of patches, fire allows a greater number of species to exist within a landscape. Fossil record: The fossil record of fire first appears with the establishment of a land-based flora in the Middle Ordovician period, 470 million years ago, permitting the accumulation of oxygen in the atmosphere as never before, as the new hordes of land plants pumped it out as a waste product. When this concentration rose above 13%, it permitted the possibility of wildfire. Wildfire is first recorded in the Late Silurian fossil record, 420 million years ago, by fossils of charcoalified plants. Apart from a controversial gap in the Late Devonian, charcoal has been present ever since. The level of atmospheric oxygen is closely related to the prevalence of charcoal: clearly oxygen is the key factor in the abundance of wildfire. Fire also became more abundant when grasses radiated and became the dominant component of many ecosystems, around 6 to 7 million years ago; this kindling provided tinder which allowed for the more rapid spread of fire. These widespread fires may have initiated a positive feedback process, whereby they produced a warmer, drier climate more conducive to fire. History of human control of fire: Early human control The ability to control fire was a dramatic change in the habits of early humans. Making fire to generate heat and light made it possible for people to cook food, simultaneously increasing the variety and availability of nutrients and reducing disease by killing pathogenic microorganisms in the food. The heat produced would also help people stay warm in cold weather, enabling them to live in cooler climates. Fire also kept nocturnal predators at bay. Evidence of occasional cooked food is found from 1 million years ago.
Although this evidence shows that fire may have been used in a controlled fashion about 1 million years ago, other sources put the date of regular use at 400,000 years ago. Evidence becomes widespread around 50 to 100 thousand years ago, suggesting regular use from this time; interestingly, resistance to air pollution started to evolve in human populations at a similar point in time. The use of fire became progressively more sophisticated, as it was used to create charcoal and to control wildlife from tens of thousands of years ago. Fire has also been used for centuries as a method of torture and execution, as evidenced by death by burning as well as torture devices such as the iron boot, which could be filled with water, oil, or even lead and then heated over an open fire to the agony of the wearer. History of human control of fire: By the Neolithic Revolution, during the introduction of grain-based agriculture, people all over the world used fire as a tool in landscape management. These fires were typically controlled burns or "cool fires", as opposed to uncontrolled "hot fires", which damage the soil. Hot fires destroy plants and animals, and endanger communities. This is especially a problem in the forests of today where traditional burning is prevented in order to encourage the growth of timber crops. Cool fires are generally conducted in the spring and autumn. They clear undergrowth, burning up biomass that could trigger a hot fire should it get too dense. They provide a greater variety of environments, which encourages game and plant diversity. For humans, they make dense, impassable forests traversable. Another human use for fire with regard to landscape management is its use to clear land for agriculture. Slash-and-burn agriculture is still common across much of tropical Africa, Asia and South America. "For small farmers, it is a convenient way to clear overgrown areas and release nutrients from standing vegetation back into the soil", said Miguel Pinedo-Vasquez, an ecologist at the Earth Institute’s Center for Environmental Research and Conservation. However, this useful strategy is also problematic. Growing population, fragmentation of forests and a warming climate are making the earth's surface more prone to ever-larger escaped fires. These harm ecosystems and human infrastructure, cause health problems, and send up spirals of carbon and soot that may encourage even more warming of the atmosphere – and thus feed back into more fires. Globally today, as much as 5 million square kilometres – an area more than half the size of the United States – burns in a given year.
The invention of gunpowder in China led to the fire lance, a flame-thrower weapon dating to around 1000 CE, which was a precursor to projectile weapons driven by burning gunpowder. History of human control of fire: In the First World War, the earliest modern flamethrowers were used by infantry, and were successfully mounted on armoured vehicles in the Second World War. Hand-thrown incendiary bombs improvised from glass bottles, later known as Molotov cocktails, were deployed during the Spanish Civil War in the 1930s, which also saw the deployment of incendiary bombs against Guernica by Fascist Italian and Nazi German air forces that had been created specifically to support Franco's Nationalists. History of human control of fire: Incendiary bombs were dropped by Axis and Allies during the Second World War, notably on Coventry, Tokyo, Rotterdam, London, Hamburg and Dresden; in the latter two cases firestorms were deliberately caused in which a ring of fire surrounding each city was drawn inward by an updraft caused by a central cluster of fires. The United States Army Air Force also extensively used incendiaries against Japanese targets in the latter months of the war, devastating entire cities constructed primarily of wood and paper houses. The incendiary fluid napalm was used in July 1944, towards the end of the Second World War, although its use did not gain public attention until the Vietnam War. History of human control of fire: Productive use for energy Burning fuel converts chemical energy into heat energy; wood has been used as fuel since prehistory. The International Energy Agency states that nearly 80% of the world's power has consistently come from fossil fuels such as petroleum, natural gas, and coal in the past decades. The fire in a power station is used to heat water, creating steam that drives turbines. The turbines then spin an electric generator to produce electricity. Fire is also used to provide mechanical work directly by thermal expansion, in both external and internal combustion engines. History of human control of fire: The unburnable solid remains of a combustible material left after a fire are called clinker if their melting point is below the flame temperature, so that they fuse and then solidify as they cool, and ash if their melting point is above the flame temperature. History of human control of fire: Fire management Controlling a fire to optimize its size, shape, and intensity is generally called fire management, and the more advanced forms of it, as traditionally (and sometimes still) practiced by skilled cooks, blacksmiths, ironmasters, and others, are highly skilled activities. They include knowledge of which fuel to burn; how to arrange the fuel; how to stoke the fire both in early phases and in maintenance phases; how to modulate the heat, flame, and smoke as suited to the desired application; how best to bank a fire to be revived later; how to choose, design, or modify stoves, fireplaces, bakery ovens, and industrial furnaces; and so on. Detailed expositions of fire management are available in various books about blacksmithing, about skilled camping or military scouting, and about domestic arts.
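The combustion chemistry sketched under Physical properties can be made concrete with the stove-top natural gas flame used as the running example there. This is a standard textbook calculation, not taken from the article itself:

```latex
% Complete combustion of methane, the main component of natural gas:
\mathrm{CH_4} + 2\,\mathrm{O_2} \;\rightarrow\; \mathrm{CO_2} + 2\,\mathrm{H_2O},
\qquad \Delta H^\circ_c \approx -890\ \mathrm{kJ\,mol^{-1}}.
```

Since air is only about 21% oxygen by volume, the stoichiometric proportions mentioned in the text correspond to roughly 9.5 volumes of air per volume of methane; mixes that are much leaner or richer burn cooler, which is why balancing fuel and oxidizer input intensifies a fire.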
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Muxtape** Muxtape: Muxtape was a website that allowed bands to promote their music and users to discover artists. Muxtape allowed bands to upload music they owned for free streaming to fans, on the band's profile and as an embeddable player, as well as configure profiles with images, videos, and a show calendar. First Iteration: Muxtape was created by Justin Ouellette in March 2008. Initial funding for the site was reported to be $95,000 and was provided by Jakob Lodwick, Justin's ex-boss from Vimeo; the funding was agreed to via a contract written on a napkin. However, Justin Ouellette has since stated that the $95,000 amount concerned funding for more than just Muxtape alone. The original version of Muxtape allowed music fans themselves to upload playlists of MP3s, based on the idea of a mixtape. This version was launched on March 25, 2008. Ouellette came up with the idea after being a disc jockey at his university's radio station. The site became unexpectedly popular immediately after launching, with 8,685 users registered in its first day and 97,748 in its first month. The site was supported by affiliate links to Amazon.com. A hallmark of the site was its very simple design. In-site searching of streamable tracks was a feature purposely absent (the presence of such a feature likely contributed to the litigation brought against the Muxtape-inspired site Favtape). Ouellette explained that the important part of a mixtape, which he tried to preserve on his site, is discovering new music rather than finding music one is already familiar with. Muxtape was one in a long line of websites which permitted users to create online playlists, either from user-uploaded tracks or selected from a library curated by the site. Past legal issues involving music sharing sites and programs led many observers to predict that Muxtape would eventually run into legal troubles with the music industry. However, Ouellette stated that Muxtape was different from the likes of Napster: "Its intended purpose is to introduce you to new music that you would then hopefully go and buy." He reported that he had spoken with many record labels who were excited about Muxtape's ability to bring new music to consumers. Some individuals predicted that if Muxtape were to gain significant momentum it would be unlikely to be shut down by legal pressure; it was more likely to have changes forced upon it as it became a legal licensor of the music on the site. On August 18, 2008, Muxtape services ceased to be available, and the main page displayed the following message: "Muxtape will be unavailable for a brief period while we sort out a problem with the RIAA." Second iteration: On September 25, 2008, the Muxtape homepage began displaying a long message from the creator stating that the site's format was changing to a platform for independent artists to distribute their music. The site reappeared in preview mode on January 27, 2009; a blurb on the front page stated that "We’ve invited 12 of our favorite artists to help test, and in the coming weeks we'll begin allowing bands to sign up themselves for free." On April 21, additional bands were added and allowed to invite other bands to sign up for the service. The website continued to grow through trusted band-to-band invites. Ouellette originally created the site using PHP, but after being hired as chief technical officer, Luke Crawford rewrote the site in Ruby on Rails. The site used Amazon AWS for storage and SoundManager 2 to play audio files.
Second iteration: As of May 2010, the Muxtape site is inactive, evidently while its creator re-envisions the site using HTML5 technologies.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IlvH RNA motif** IlvH RNA motif: The ilvH RNA motif is a conserved RNA structure that was discovered by bioinformatics. ilvH motifs are found in Betaproteobacteria. ilvH motif RNAs likely function as cis-regulatory elements, in view of their positions upstream of protein-coding genes. Specifically, the RNAs are upstream of genes that encode a predicted acetolactate synthase, which is involved in the synthesis of branched-chain amino acids.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mailuoning** Mailuoning: Mailuoning is a herbal compound widely used in Traditional Chinese medicine in an attempt to treat people who have had a stroke. Efficacy: There is no good evidence that Mailuoning is of any benefit in treating people who have had a stroke. Pharmacology: Mailuoning is a herbal compound made from extracts of Dendrobium, Scrophulariae Radix, Flos Lonicerae, and Radix Achyranthis Bidentatae. The principal bioactive substances are scoparone and ayapin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Premature thelarche** Premature thelarche: Premature thelarche (PT) is a medical condition characterised by isolated breast development in female infants. It occurs in females younger than 8 years, with the highest occurrence before the age of 2. PT is rare, occurring in 2.2-4.7% of females aged 0 to 2 years old. The exact cause of the condition is still unknown, but it has been linked to a variety of genetic, dietary and physiological factors. PT is a form of incomplete precocious puberty (IPP). IPP is the presence of a secondary sex characteristic in an infant without a change in their sex hormone levels. Central precocious puberty (CPP) is a more severe condition than IPP. CPP is the presentation of secondary sex characteristics with a change in sex hormones due to alteration of the hypothalamic-pituitary-gonadal (HPG) axis. Premature thelarche: CPP is an aggressive endocrine disorder with harmful developmental consequences for the patient. At the presentation of PT, diagnostics are used to ensure it is not early-stage CPP. CPP can be differentiated from PT through biochemical testing, ultrasounds and ongoing observation. There is no treatment for PT, but regular observation is important to ensure it does not progress to CPP. CPP diagnosis is important as treatment is necessary. Symptoms and signs: Premature thelarche is breast hypertrophy before puberty. This form of hypertrophy is an increase in breast tissue. PT occurs in pre-pubescent females under the age of 8, with a peak occurrence in the first two years of life. The breast development is usually bilateral: both breasts show development. In some cases development may be unilateral: one breast develops. Symptoms and signs: Patterns of PT There are four patterns of PT development. Most patients have hypertrophy followed by complete loss of the excess breast tissue (51% of cases) or loss of most excess tissue, with some remaining until puberty (36% of cases). Less commonly patients have ongoing patterns of thelarche: 9.7% have a cyclic pattern where the size of the breast tissue varies over time, and 3.2% experience a continual increase in tissue size. Symptoms and signs: Associated symptoms The main symptom of PT is enlarged breast tissue in infants. Estrogen's role in PT also leads to increased bone age and growth in some cases. In PT these secondary symptoms are minimal: bone age only varies from actual age by a few months and growth velocity only slightly varies from the norm. Diagnostic tests will distinguish these PT secondary symptoms from the more severe bone aging and growth occurring in early CPP. Pathophysiology: The direct pathophysiology behind PT is still unknown, but there are many postulated causes. Estrogen: PT is linked to increased sensitivity of the breast tissue to estradiol, an estrogen derivative, in certain prepubertal individuals. Sporadic estrogen or estradiol production in the adrenal glands, follicles or ovarian cysts is also linked to the condition. Pathophysiology: Follicle stimulating hormone: Follicle stimulating hormone (FSH) is secreted from the anterior pituitary. FSH plays a key role in development, growth and puberty, and thus is suspected to play a role in PT. Gonadotropin-releasing hormone (GnRH) stimulation testing in some patients with PT has shown a dominant response from FSH. This response is linked to activating mutations in the FSH receptor and Gs-α subunit in PT. Genetic investigation indicated these mutations only account for a few cases of PT.
PT may also be caused by transient partial activation of the HPG axis. Partial activation would release a surplus of FSH from the anterior pituitary without further disruption of the HPG axis. Pathophysiology: Other causes Consumption of or exposure to certain endocrine disrupters has also been linked to PT. CPP and PT: PT is the benign growth of breasts in infants, while CPP is a condition that involves the frequent activation of the HPG axis in patients. PT does not require treatment, as the condition is limited to enlarged breast tissue that usually subsides with time. CPP is associated with a wider range of symptoms including thelarche, pubic hair growth, accelerated bone aging, increased growth velocity and early epiphyseal growth. If an individual is affected with CPP they will need to begin treatment immediately. CPP is treated with luteinizing hormone-releasing hormone agonists. PT can impact growth velocity and bone age slightly, but CPP affects these characteristics to the detriment of adult stature. Patients with suspected PT must undergo diagnostic testing to ensure it is not CPP or exaggerated thelarche, the intermediate stage before CPP. Notable hormone differences occur between CPP and PT patients, so studying these hormone levels is the main biochemical diagnostic used in CPP. Individuals with CPP usually have higher basal LH levels and LH:FSH ratios. Only 9 to 14% of PT patients are predicted to develop CPP. Observation allows clinicians to identify the presentation of CPP-indicative symptoms in PT patients. No diagnostic tests can indicate if a PT patient is at risk of developing CPP. Diagnosis: Premature thelarche does not require treatment. In PT, breast hypertrophy will usually stop completely and patients will experience regression of the breast tissue over 3 to 60 months. Less commonly, patients may remain with residual breast tissue or continue through cycles of breast hypertrophy and regression until puberty. Diagnostics are utilised in individuals with PT, especially at the presentation of other secondary sex characteristics. Diagnostics aim to ensure PT patients are not suffering from CPP. Diagnosis: Pelvic ultrasounds Pelvic ultrasounds are important in diagnosing CPP. Patients with CPP have an increased ovary and uterus size. The ovary and uterus volume of CPP patients is similar to that of females undergoing puberty. The pelvic ultrasound is problematic as a diagnostic, as there is not a specific cut-off for the uterine and ovarian volumes that indicates the patient has CPP. Patients with PT should have a uterine and ovarian volume within the normal range for their age. Pelvic ultrasounds are a desirable diagnostic as they are non-invasive and easy to continually review. The pelvic ultrasound should be paired with biochemical tests to determine the presence of CPP. Diagnosis: Biochemical tests Biochemical tests study the hormone levels in patients. CPP patients have elevated LH levels and peak LH:FSH ratios when compared to PT patients. It is hard to use LH as a diagnostic for CPP, as the LH assay has varying sensitivity and specificity. The GnRH stimulation test is the main biochemical test used to distinguish PT from CPP. The GnRH test demonstrates the pituitary responsiveness to GnRH. GnRH stimulates the release of LH and FSH from the anterior pituitary. The peak LH:FSH ratio in CPP patients is similar to the ratio of pubertal females. Females with PT demonstrate an LH:FSH ratio lower than that of pubertal females.
The disadvantage of the GnRH stimulation test is that it takes a long time to perform and requires multiple collections from the patient, making the process time-consuming and inconvenient. The test is highly specific but has low sensitivity, as the LH hormone response is usually observed in later stages of CPP. There are also overlaps in the expected values in the GnRH test results of individuals with CPP and PT. Diagnosis: Combined diagnostic approach The diagnostic inconsistency in CPP means that a combination of pelvic ultrasounds and biochemical tests should be paired with observation, to ensure PT doesn’t progress to CPP. Research: Exposure to environmental agents Natural commodities like fennel, lavender and tea tree oils have been linked to PT. Lavender and tea tree oil have weak estrogenic activities. These estrogenic properties may cause an imbalance in endocrine signalling pathways, leading to PT in regular users of these products. Fennel tea has been studied as an endocrine disrupter linked to PT. Fennel seed oil contains anethole, a compound with estrogenic effects. The tea contains fennel seed oil, and regular use results in increased estradiol levels in the infant. Infants with fennel tea-related PT were given the tea as a homeopathic remedy for restlessness. The tea was consumed for at least four months before the presentation of PT symptoms. PT resulting from fennel tea subsides approximately six months after stopping the use of the tea. Research: Leptin Leptin is an adipocyte hormone that has important implications for puberty and sex hormone secretion. Increased leptin has been linked to estrogen and estradiol secretion. Leptin has key roles in maintaining age-appropriate body composition and desired weight. Leptin receptors are also found in mammary epithelial cells, and leptin has been observed as a growth factor in breast tissue. Increased leptin levels have been observed in some cases of PT. The increase in leptin levels causes increased estradiol levels and development of breast tissue. Research: GNAS1 gene mutation The form of PT with fluctuating hypertrophy in patients has been linked to activating mutations in the GNAS1 gene. This mutation accounts for a small number of cases of PT.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Typhoon Ester** Typhoon Ester: The name Ester has been used for five tropical cyclones in the Philippines by PAGASA in the Western Pacific. Typhoon Ewiniar (2006) (T0603, 04W, Ester) Severe Tropical Storm Dianmu (2010) (T1004, 05W, Ester) Tropical Storm Mitag (2014) (T1407, Ester) – was only recognized by PAGASA and JMA as a tropical storm, and by JTWC as a subtropical storm. Tropical Storm Gaemi (2018) (T1806, 08W, Ester) Tropical Storm Trases (2022) (T2206, 07W, Ester)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Telenoid R1** Telenoid R1: The Telenoid R1 is a remote-controlled telepresence android created by Japanese roboticist Hiroshi Ishiguro. The R1 model, released in August 2010, is approximately 80 cm tall, weighs 5 kg and is made out of silicone rubber. The primary use of the Telenoid R1 is as an audio and movement transmitter through which people can relay messages over long distances. The purpose is for the user to feel as though they are communicating with a far-away acquaintance. Cameras and microphones capture the voice and movements of an operator, which are projected through the Telenoid R1 to the user. Features: The Telenoid R1, unlike its counterparts the Geminoid HI-1 and Geminoid F, is designed to be an ambiguous figure, able to be recognized as any gender and any age. The Telenoid R1 has a minimalistic design; it is roughly the size of an infant with a bald head, a doll-like face, and automated stubs instead of arms. It contains 9 actuators, which allows the R1 to have 9 degrees of freedom. Each eye can move horizontally independently of the other, but their vertical movement is synced. The mouth is able to open and close to emulate talking. The 3 actuators in the neck provide yaw, pitch, and roll rotations for the neck. The final two actuators are used for motion in the arms. A webcam or other video capturing device can record a person's movements and voice and send them to the R1 using a Wi-Fi connection. Some movements and expressions are pre-programmed into the Telenoid R1. Some of these controllable behaviors are saying bye, being happy, and motioning for a hug. Other actions, such as breathing and blinking, are random, which gives the R1 a sense of being "alive." Uses: The R1's main use is as an advanced video conferencing tool. The Telenoid is able to interact with the owner as if it were the person sending the message. Researchers shared that they hope the Telenoid R1 will be used mainly as a communication device that can be applied to work, education, and elderly care. Work communication Employees who are unable to go into work can use the Telenoid R1 to give their input into a conversation or meeting. Furthermore, entire meetings can be held through Telenoids so that workers never even have to leave their homes. Education One education application for the robot is teaching a language. Audio lessons can be programmed into the Telenoid R1 and used to teach people who would find it easier to learn from a human-like being rather than an audio tape. Uses: Elderly care Elderly citizens in care homes are able to use the Telenoid R1 to communicate with family who are not able to visit them personally. Research has shown that elderly people have reacted positively to interactions with the Telenoid R1. In experiments with the R1, the elderly have given feedback such as "very cute, like my grandchild" and "very soft and nice to touch." Cost: The Telenoid R1 uses DC motors as actuators and, because of its smaller body, only uses 9. This helped to reduce the development and production costs for the automaton. A version of the robot used for research costs about $35,000 while a commercial version costs about $8,000. Previous models: Geminoid HI-1 Hiroshi Ishiguro created an automaton with looks that reflect his own, the Geminoid HI-1. The materials for this robot include Ishiguro's own hair along with silicone rubber, pneumatic actuators, and various electronic parts. The purpose for the HI-1 is to mimic the actions of a human. 
It cannot move by itself, but is instead remotely operated by Ishiguro. Ishiguro's voice is captured by a microphone while his facial movements are recorded by a camera. The HI-1 is able to imitate the actions made by the operator. When asked the purpose of a human-like robot, Ishiguro replied with "my research question is to know what is human." Ishiguro hopes to use the HI-1 as a way to decipher the feeling of being in the presence of a human being. Previous models: Geminoid F Another of Hiroshi Ishiguro's creations is the Geminoid F, a female android modeled after a woman in her twenties. The Geminoid F can show facial expressions, such as smiling or frowning, in a more natural-looking way than Ishiguro's previous androids. This Geminoid is also controlled remotely by cameras with face-tracking software. The goal in making the Geminoid F was to create a robot that can display a wide range of facial expressions using fewer actuators than earlier models. While the Geminoid HI-1 has 50 actuators, the Geminoid F only has 12. Instead of filling a large external box with compressors and valves, as seen in the HI-1, the researchers implemented these parts into the body of the Geminoid F, so it requires only a small external compressor. Researchers hope that the Geminoid F has a friendlier face that people are more eager to interact with. Geminoid F co-starred in the 2015 Japanese film Sayonara, promoted as "the first movie to feature an android performing opposite a human actor".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bryant's traction** Bryant's traction: Bryant's traction is a form of orthopedic traction. It is mainly used in young children who have fractures of the femur or congenital abnormalities of the hip. Both of the patient's limbs are suspended in the air vertically, at a ninety-degree angle from the hips, with the knees slightly flexed. Over a period of days, the hips are gradually moved outward from the body using a pulley system. The patient's body provides the counter-traction.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nominal identity** Nominal identity: Nominal identity is identity in name only, as opposed to the individual experience of that identity. The concept is often used in sociology, psychology and linguistics. Social sciences: Nominal identity is the name with which one identifies, or calls oneself (e.g. "African American," "Irish," "Straight," "Gay," "Female," "Male"), whereas virtual identity is the experience of that identity: "The latter is, in a sense, what the name means; this is primarily a matter of its consequences for those who bear it, and can change while the nominal identity remains the same (and vice versa)." Among those who self-identify as "gay," the term may not confer the same experience for two people or even between various geographical or cultural regions. Similarly, while one may talk about a "chair," "chair" itself can entail many forms, from arm chair to ladder back to even tree stump, if the experience of "chair" is something upon which a person sits. Social sciences: Pierre Bourdieu uses the term nominal identity in Distinction: A Social Critique of the Judgment of Taste to mean both that by which the identity of a subject is named and also a case where identity is an insignificant measurement or representation of the "perceived reality" of a subject or phenomenon. To further clarify, for Bourdieu nominal identity can often mean "face value" or "appearance." He often mentions the term nominal identity in order to illustrate the idea of a more complex reality, beyond the name, within the studied subject. Social sciences: Nominal identity in ethnicity Ethnic identity is a "social identity arising through group formation, individual identification with a group, and interaction between different ethnic groups." Henry E. Brady and Cynthia S. Kaplan compiled a study called "Categorically Wrong? Nominal versus Graded Measures of Ethnic Identity" that takes a look at ethnicity as a nominal identity. Their study asked whether "the attitudes of members of the group with the more salient identity can be completely explained by its nominal identity while the attitudes of the members of the group with less salient identity require a graded measure of ethnicity." Brady and Kaplan focused on the country of Estonia, where they posited two groups: "Estonians," and one that they call the "Slavs," a collective group of Russians, Ukrainians, or Belarusians. They chose this geographical area in particular because of the "centrality of ethnicity in the politics of transition in the USSR." Their graded measures of ethnicity included media usage (such as television, radio, or newspaper, whether Estonian-language Republic television or Russian-language media), identification with another nationality, and the language used at home. Social sciences: Brady and Kaplan concluded that "ethnicity is not always a nominal characteristic" for these two groups in Estonia. It is only nominal when most salient. "Ethnic identity ... causes individuals within a group to form their attitudes based upon their nominal identity". Individuals may generalize themselves in a certain category such as their nationality, but when it comes down to variables of different degrees in formulating their ethnicity, it is no longer nominal. It is their way of dividing themselves from a generalized name. Linguistics: Nominal identity in linguistics pertains to the identity of a word or group of words functioning as a noun or an adjective within a sentence's structure. 
Specifically, it relates to how one can look at a sentence and propose a different understanding of that sentence through the analysis of its identity as defined in its lexical construct, such as in the example used by Chris Barker when discussing Manfred Krifka's study "Four thousand ships passed through the lock: Object-induced measure functions on events": "(1) Four thousand ships passed through the lock last year." On the surface, the proposition suggests that 4,000 distinct ships passed through the lock last year. However, as Krifka points out in his study, one could say that there were fewer than 4,000 distinct ships and that some of those ships passed through the lock more than once. Individuals reading this sentence could argue about its interpretation. Looking at the sentence more closely and determining what variables were taken into consideration when making the statement can yield many interpretations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Convolutional code** Convolutional code: In telecommunication, a convolutional code is a type of error-correcting code that generates parity symbols via the sliding application of a boolean polynomial function to a data stream. The sliding application represents the 'convolution' of the encoder over the data, which gives rise to the term 'convolutional coding'. The sliding nature of the convolutional codes facilitates trellis decoding using a time-invariant trellis. Time-invariant trellis decoding allows convolutional codes to be maximum-likelihood soft-decision decoded with reasonable complexity. Convolutional code: The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes. This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded. Convolutional codes are often characterized by the base code rate and the depth (or memory) of the encoder [n,k,K]. The base code rate is typically given as n/k, where n is the raw input data rate and k is the data rate of the output channel-encoded stream. n is less than k because channel coding inserts redundancy in the input bits. The memory is often called the "constraint length" K, where the output is a function of the current input as well as the previous K−1 inputs. The depth may also be given as the number of memory elements v in the polynomial or the maximum possible number of states of the encoder (typically $2^v$). Convolutional code: Convolutional codes are often described as continuous. However, it may also be said that convolutional codes have arbitrary block length, rather than being continuous, since most real-world convolutional encoding is performed on blocks of data. Convolutionally encoded block codes typically employ termination. The arbitrary block length of convolutional codes can also be contrasted to classic block codes, which generally have fixed block lengths that are determined by algebraic properties. Convolutional code: The code rate of a convolutional code is commonly modified via symbol puncturing. For example, a convolutional code with a 'mother' code rate n/k = 1/2 may be punctured to a higher rate of, for example, 7/8 simply by not transmitting a portion of code symbols. The performance of a punctured convolutional code generally scales well with the amount of parity transmitted. The ability to perform economical soft decision decoding on convolutional codes, as well as the block length and code rate flexibility of convolutional codes, makes them very popular for digital communications. History: Convolutional codes were introduced in 1955 by Peter Elias. It was thought that convolutional codes could be decoded with arbitrary quality at the expense of computation and delay. In 1967, Andrew Viterbi determined that convolutional codes could be maximum-likelihood decoded with reasonable complexity using time invariant trellis based decoders — the Viterbi algorithm. Other trellis-based decoder algorithms were later developed, including the BCJR decoding algorithm. History: Recursive systematic convolutional codes were invented by Claude Berrou around 1991. 
These codes proved especially useful for iterative processing, including the processing of concatenated codes such as turbo codes. Using the "convolutional" terminology, a classic convolutional code might be considered a Finite impulse response (FIR) filter, while a recursive convolutional code might be considered an Infinite impulse response (IIR) filter. Where convolutional codes are used: Convolutional codes are used extensively to achieve reliable data transfer in numerous applications, such as digital video, radio, mobile communications (e.g., in GSM, GPRS, EDGE and 3G networks (until 3GPP Release 7)) and satellite communications. These codes are often implemented in concatenation with a hard-decision code, particularly Reed–Solomon. Prior to turbo codes such constructions were the most efficient, coming closest to the Shannon limit. Convolutional encoding: To convolutionally encode data, start with k memory registers, each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has n modulo-2 adders (a modulo-2 adder can be implemented with a single Boolean XOR gate, where the logic is: 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0), and n generator polynomials — one for each adder (see figure below). An input bit $m_1$ is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n symbols. These symbols may be transmitted or punctured depending on the desired code rate. Now bit shift all register values to the right ($m_1$ moves to $m_0$, $m_0$ moves to $m_{-1}$) and wait for the next input bit. If there are no remaining input bits, the encoder continues shifting until all registers have returned to the zero state (flush bit termination). Convolutional encoding: The figure below is a rate 1/3 (m/n) encoder with constraint length (k) of 3. Generator polynomials are $G_1 = (1,1,1)$, $G_2 = (0,1,1)$, and $G_3 = (1,0,1)$. Therefore, output bits are calculated (modulo 2) as follows: $n_1 = m_1 + m_0 + m_{-1}$, $n_2 = m_0 + m_{-1}$, $n_3 = m_1 + m_{-1}$. Convolutional codes can be systematic and non-systematic: a systematic code repeats the structure of the message before encoding, while a non-systematic code changes the initial structure. Non-systematic convolutional codes are more popular due to better noise immunity, which relates to the free distance of the convolutional code. 
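To make the encoding procedure above concrete, here is a minimal Python sketch of the rate-1/3 encoder just described, using the same generator polynomials and flush-bit termination. It is an illustrative implementation, not taken from the article or any standard library; the function name and the list-based register model are assumptions.

```python
# Minimal sketch of the rate-1/3 convolutional encoder described above,
# with generators G1 = (1,1,1), G2 = (0,1,1), G3 = (1,0,1).
GENERATORS = [(1, 1, 1), (0, 1, 1), (1, 0, 1)]

def conv_encode(bits, generators=GENERATORS):
    """Encode a bit sequence, producing one output bit per generator per input bit."""
    k = len(generators[0])           # constraint length (here 3)
    regs = [0] * k                   # regs = [m1, m0, m-1], initially all zero
    out = []
    # Feed the data bits, then k-1 zero "flush" bits so the registers
    # return to the all-zero state (flush bit termination).
    for bit in list(bits) + [0] * (k - 1):
        regs = [bit] + regs[:-1]     # shift the new bit into the leftmost register
        for g in generators:         # one modulo-2 adder per generator polynomial
            out.append(sum(r * t for r, t in zip(regs, g)) % 2)
    return out

# Example: 4 input bits produce 3 output bits each, plus 2 flushed triples.
print(conv_encode([1, 0, 1, 1]))
```

The first output triple for input bit 1 is (1, 0, 1), matching $n_1 = 1$, $n_2 = 0$, $n_3 = 1$ from the equations above.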
Recursive and non-recursive codes: The encoder in the picture above is a non-recursive encoder. Here is an example of a recursive one, which accordingly has a feedback structure: The example encoder is systematic because the input data is also used in the output symbols (Output 2). Codes with output symbols that do not include the input data are called non-systematic. Recursive codes are typically systematic and, conversely, non-recursive codes are typically non-systematic. This isn't a strict requirement, but it is common practice. The example encoder in Img. 2. is an 8-state encoder because the 3 registers will create 8 possible encoder states ($2^3$). A corresponding decoder trellis will typically use 8 states as well. Recursive systematic convolutional (RSC) codes have become more popular due to their use in Turbo Codes. Recursive systematic codes are also referred to as pseudo-systematic codes. Other RSC codes and example applications include: Useful for LDPC code implementation and as inner constituent code for serial concatenated convolutional codes (SCCC's). Useful for SCCC's and multidimensional turbo codes. Useful as constituent code in low error rate turbo codes for applications such as satellite links. Also suitable as SCCC outer code. Impulse response, transfer function, and constraint length: A convolutional encoder is so called because it performs a convolution of the input stream with the encoder's impulse responses: $y_i^j = \sum_{k=0}^{\infty} h_k^j\, x_{i-k} = (x * h^j)[i]$, where $x$ is an input sequence, $y^j$ is a sequence from output $j$, $h^j$ is an impulse response for output $j$, and $*$ denotes convolution. A convolutional encoder is a discrete linear time-invariant system. Every output of an encoder can be described by its own transfer function, which is closely related to the generator polynomial. An impulse response is connected with a transfer function through the Z-transform. Transfer functions for the first (non-recursive) encoder are: $H_1(z) = 1 + z^{-1} + z^{-2}$, $H_2(z) = z^{-1} + z^{-2}$, $H_3(z) = 1 + z^{-2}$. Transfer functions for the second (recursive) encoder are: $H_1(z) = \frac{1 + z^{-1} + z^{-3}}{1 - z^{-2} - z^{-3}}$, $H_2(z) = 1$. Define $m = \max_i \operatorname{polydeg}(H_i(1/z))$, where, for any rational function $f(z) = P(z)/Q(z)$, $\operatorname{polydeg}(f) = \max(\deg P, \deg Q)$. Then $m$ is the maximum of the polynomial degrees of the $H_i(1/z)$, and the constraint length is defined as $K = m + 1$. For instance, in the first example the constraint length is 3, and in the second the constraint length is 4. Trellis diagram: A convolutional encoder is a finite state machine. An encoder with $n$ binary cells will have $2^n$ states. Trellis diagram: Imagine that the encoder (shown on Img.1, above) has '1' in the left memory cell ($m_0$), and '0' in the right one ($m_{-1}$). ($m_1$ is not really a memory cell because it represents a current value.) We will designate such a state as "10". Depending on the input bit, at the next turn the encoder can convert either to the "01" state or the "11" state. One can see that not all transitions are possible (e.g., the encoder can't convert from the "10" state to "00", or even stay in the "10" state). Trellis diagram: All possible transitions can be shown as below: An actual encoded sequence can be represented as a path on this graph. One valid path is shown in red as an example. This diagram gives us an idea about decoding: if a received sequence doesn't fit this graph, then it was received with errors, and we must choose the nearest correct (fitting the graph) sequence. The real decoding algorithms exploit this idea. Free distance and error distribution: The free distance (d) is the minimal Hamming distance between different encoded sequences. The correcting capability (t) of a convolutional code is the number of errors that can be corrected by the code. It can be calculated as $t = \left\lfloor \frac{d - 1}{2} \right\rfloor$. Since a convolutional code doesn't use blocks, processing instead a continuous bitstream, the value of t applies to a quantity of errors located relatively near to each other. That is, multiple groups of t errors can usually be fixed when they are relatively far apart. Free distance and error distribution: Free distance can be interpreted as the minimal length of an erroneous "burst" at the output of a convolutional decoder. The fact that errors appear as "bursts" should be accounted for when designing a concatenated code with an inner convolutional code. The popular solution for this problem is to interleave data before convolutional encoding, so that the outer block (usually Reed–Solomon) code can correct most of the errors. Decoding convolutional codes: Several algorithms exist for decoding convolutional codes. 
For relatively small values of k, the Viterbi algorithm is universally used as it provides maximum likelihood performance and is highly parallelizable. Viterbi decoders are thus easy to implement in VLSI hardware and in software on CPUs with SIMD instruction sets. Decoding convolutional codes: Longer constraint length codes are more practically decoded with any of several sequential decoding algorithms, of which the Fano algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood but its complexity increases only slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in the Pioneer program of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, usually concatenated with large Reed–Solomon error correction codes that steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates. Decoding convolutional codes: Both Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by use of the Soft output Viterbi algorithm. Maximum a posteriori (MAP) soft decisions for each bit can be obtained by use of the BCJR algorithm. Popular convolutional codes: In practice, predefined convolutional code structures obtained through scientific research are used in industry. This relates to the possibility of selecting catastrophic convolutional codes (which cause a larger number of errors). Popular convolutional codes: An especially popular Viterbi-decoded convolutional code, used at least since the Voyager program, has a constraint length K of 7 and a rate r of 1/2. Mars Pathfinder, Mars Exploration Rover and the Cassini probe to Saturn use a K of 15 and a rate of 1/6; this code performs about 2 dB better than the simpler K=7 code at a cost of 256× in decoding complexity (compared to Voyager mission codes). Popular convolutional codes: The convolutional code with a constraint length of 2 and a rate of 1/2 is used in GSM as an error correction technique. Punctured convolutional codes: A convolutional code with any code rate can be designed based on polynomial selection; however, in practice, a puncturing procedure is often used to achieve the required code rate. Puncturing is a technique used to make an m/n rate code from a "basic" low-rate (e.g., 1/n) code. It is achieved by deleting some bits from the encoder output. Bits are deleted according to a puncturing matrix. The following puncturing matrices are the most frequently used: For example, if we want to make a code with rate 2/3 using the appropriate matrix from the above table, we should take a basic encoder output and transmit every first bit from the first branch and every bit from the second one. The specific order of transmission is defined by the respective communication standard. Punctured convolutional codes: Punctured convolutional codes are widely used in satellite communications, for example, in INTELSAT systems and Digital Video Broadcasting. Punctured convolutional codes are also called "perforated". 
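As an illustration of the puncturing procedure just described, the following Python sketch deletes bits from a rate-1/2 mother code's output according to a puncturing matrix; the matrix [[1, 0], [1, 1]] encodes the rule above (transmit every first bit from the first branch and every bit from the second one), yielding rate 2/3. The function and its interleaved-input convention are illustrative assumptions, not from the article.

```python
# Minimal sketch of puncturing: delete mother-code output bits according
# to a puncturing matrix. matrix[branch][column] is 1 to transmit the bit
# and 0 to delete it; columns cycle with the puncturing period.
def puncture(coded_bits, matrix):
    """coded_bits: interleaved outputs of an n-branch mother encoder."""
    n = len(matrix)                  # number of encoder output branches
    period = len(matrix[0])          # puncturing period, in input bits
    out = []
    for i, bit in enumerate(coded_bits):
        branch = i % n               # which encoder branch produced this bit
        column = (i // n) % period   # position within the puncturing period
        if matrix[branch][column]:
            out.append(bit)
    return out

# Rate 2/3 from a rate-1/2 mother code: 8 coded bits (4 input bits) are
# punctured down to 6 transmitted bits, so the rate becomes 4/6 = 2/3.
print(puncture([0, 1, 1, 1, 0, 0, 1, 0], [[1, 0], [1, 1]]))
```

At the receiver, the deleted positions are typically re-inserted as erasures (neutral soft values) before ordinary Viterbi decoding of the mother code.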
Turbo codes: replacing convolutional codes: Simple Viterbi-decoded convolutional codes are now giving way to turbo codes, a new class of iterated short convolutional codes that closely approach the theoretical limits imposed by Shannon's theorem with much less decoding complexity than the Viterbi algorithm on the long convolutional codes that would be required for the same performance. Concatenation with an outer algebraic code (e.g., Reed–Solomon) addresses the issue of error floors inherent to turbo code designs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Warning label** Warning label: A warning label is a label attached to a product, or contained in a product's instruction manual, warning the user about risks associated with its use, and may include restrictions by the manufacturer or seller on certain uses. Most of them are placed to limit civil liability in lawsuits against the item's manufacturer or seller (see product liability). This sometimes results in labels which for some people seem to state the obvious. Government regulation: In the United States warning labels were instituted under the Federal Food, Drug, and Cosmetic Act of 1938. Cigarettes were not required to have warning labels in the United States until Congress passed the Federal Cigarette Labeling and Advertising Act (FCLAA) in 1965. In the EEA, a product containing hazardous mixtures must have a Unique formula identifier (UFI) code. This is not a warning label per se, but a code that helps poison control centres identify the exact formula of the hazardous product. Abnormal warning labels: Warning labels have been produced for many different items. In some cases, rumors have developed of labels warning against some very strange occurrences, such as the legendary microwave warning that allegedly states 'do not dry pets in microwave'. Some companies hold 'strange warning label competitions', such as the former M-law wacky warning labels competition. While many safe products intended for human consumption may require warning labels due to the health risks associated with using them, only tobacco products carry strongly worded warnings on their health risks.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Life zones of the Mediterranean region** Life zones of the Mediterranean region: The climate and ecology of land immediately surrounding the Mediterranean Sea is influenced by several factors. Overall, the land has a Mediterranean climate, with mild, rainy winters and hot, dry summers. The climate induces characteristic Mediterranean forests, woodlands, and scrub vegetation. Plant life immediately near the Mediterranean is in the Mediterranean Floristic region, while mountainous areas further from the sea support the Sub-Mediterranean Floristic province. Life zones of the Mediterranean region: An important factor in the local climate and ecology of the lands in the Mediterranean basin is the elevation: an increase in elevation of 1,000 metres (3,300 ft) causes the average air temperature to drop by 5 °C (9 °F) and decreases the amount of water that can be held by the atmosphere by 30%. This decrease in temperature and increase in rainfall result in altitudinal zonation, where the land can be divided into life zones of similar climate and ecology, depending on elevation. Life zones of the Mediterranean region: Mediterranean vegetation shows a variety of ecological adaptations to hot and dry summer conditions. As Mediterranean vegetation differs both in species and composition from temperate vegetation, ecologists use special terminology for the Mediterranean altitudinal zonation: Eu-mediterranean belt: 20–16 °C (average annual temperature); Sub-mediterranean belt: 15–12 °C; Hilly region: 11–8 °C; Mountainous belt: 7–4 °C; Alpine belt: 3–0 °C; Subnival belt: 0 to −4 °C. Even within the Mediterranean Basin, differences in aridity alter the life zones as a function of elevation. For example, the wetter Maritime and Dinaric Alps have a North-Mediterranean zonation pattern, while the southern Apennine Mountains and the Spanish Sierra Nevada have a moderate Eu-Mediterranean zonation pattern. Finally, the drier Atlas Mountains of Africa have a Xero-Mediterranean pattern.
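Since the belts above are defined by average annual temperature, and the passage gives a lapse rate of roughly 5 °C per 1,000 m, the zonation can be illustrated with a small sketch. This is purely a restatement of the table under those assumptions; the function name, the smoothed belt thresholds, and the example site are illustrative, not from the article.

```python
# Illustrative sketch: estimate the Mediterranean altitudinal belt at a
# given elevation from a site's sea-level mean annual temperature, using
# the ~5 degC per 1,000 m lapse rate quoted above.
BELTS = [          # (lower bound of mean annual temperature in degC, belt)
    (16, "Eu-mediterranean"),
    (12, "Sub-mediterranean"),
    (8,  "Hilly region"),
    (4,  "Mountainous"),
    (0,  "Alpine"),
    (-4, "Subnival"),
]

def belt_at(sea_level_temp_c, elevation_m):
    temp = sea_level_temp_c - 5.0 * elevation_m / 1000.0  # lapse-rate estimate
    for lower_bound, name in BELTS:
        if temp >= lower_bound:
            return name
    return "Subnival"  # colder than -4 degC; treated as subnival here

# A coastal site averaging 18 degC: sea level is Eu-mediterranean, while
# peaks at 3,200 m (18 - 16 = 2 degC) fall in the Alpine belt.
print(belt_at(18, 0), belt_at(18, 3200))
```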
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Horseback riding simulators** Horseback riding simulators: Horseback riding simulators are intended to allow people to gain the benefits of therapeutic horseback riding, or to gain skill and conditioning for equestrian activity, while diminishing the issues surrounding cost, availability, and individual comfort level around horses. Horseback therapy has been used by many types of therapists (e.g. physical, occupational, and speech therapists) to advance patients' physical, mental, emotional, and social skills. Horseback riding simulators: Simulators used for therapeutic purposes can be used anywhere (e.g. a clinic or a patient's home), do not take up much space, and can be programmed to achieve the type of therapy desired. Additionally, the difficulty level can be set by the therapist and increased gradually in subsequent sessions to reflect the patient's progress and abilities. Some people use these simulators as personal exercise machines to tone core muscles in an easy and low-impact manner. Commercial products: Products that attempt to accurately imitate the movement of a real horse, and that are sometimes used for therapeutic purposes as well as for developing equestrian skills or conditioning, include the Equicizer, an American-developed mechanical product that resembles the body of a horse, imitates the movement of a horse, and can be used at slower speeds for therapeutic and rehabilitation purposes. Another product that resembles and moves like a real horse is the line of Racewood Equestrian Simulators, with 13 models to imitate the actual movement of horses in various disciplines, including a simple walk and trot model. Simulators that do not resemble horses but imitate certain aspects of equine motion are popular in some Asian countries such as Japan and South Korea, in part because land for keeping actual horses is quite limited. One such commercial product is the Joba, created in Japan by rehabilitation doctor Testuhiko Kimura and the Matsushita Electric Industrial Company. The Joba does not resemble a horse, but rather just looks like a saddle, with plastic handle and stirrups, attached to a base that allows it to pitch and roll, exercising core muscles. A similar product manufactured in the US is a stool-like device called the iGallop, which was commercially available in the mid-2000s and moves in a side-to-side and circular motion with various speed settings. However, it was criticized for not delivering the results claimed. Research: Cerebral Palsy There has been increased research comparing the use of horseback riding simulators to conventional therapy methods. One 2011 study by Borges et al. compared children with cerebral palsy and postural issues who received conventional therapy to similar children who received therapy involving a riding simulator. The results from this study showed that children who received riding simulator therapy exhibited a statistically significant improvement in postural control in the sitting position, specifically regarding the maximal displacement in the mediolateral and anteroposterior directions. Parents of these children noted that their children executed activities of daily living that demanded greater mobility and postural control better than before. In a 2014 study by Lee et al., 26 children with cerebral palsy were divided into two groups: a hippotherapy group and a horseback riding simulator group. The children in each group underwent the same kind of therapy for the same amount of time using either a real horse or the simulator. 
Conventional physical therapy sessions were attended before each hippotherapy or horseback riding simulator session. It was found that both static and dynamic balance improved for the children in both groups following their 12-week-long programs, and there was not a statistically significant difference between the results from the two groups. This indicates that using a horseback riding simulator can be as effective as hippotherapy for improving balance in children with cerebral palsy. Research: Stroke Another area of research involves horseback riding simulation with stroke patients. Trunk balance and gait were assessed before and after the stroke patients were treated using a horseback riding simulator. Because stroke patients are not able to keep both feet on the floor with weight distributed equally between them, it is very easy for them to lose trunk muscle strength and control of the trunk on one or both sides. In a 2014 study, 20 non-traumatic, unilateral stroke patients underwent therapy using a horseback riding simulator. Their therapy included six 30-minute sessions a week for five weeks. The Trunk Impairment Scale (TIS), used to assess the patients before and after their therapy, showed that they had better trunk control in a seated position following their sessions. Upon gait analysis, improvements in velocity, cadence, and stride length of the affected and non-affected sides were all observed. Additionally, the percentage of time spent in the double support phase decreased. Further studies, testing more subjects over longer periods, are currently being conducted.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Peutz–Jeghers syndrome** Peutz–Jeghers syndrome: Peutz–Jeghers syndrome (often abbreviated PJS) is an autosomal dominant genetic disorder characterized by the development of benign hamartomatous polyps in the gastrointestinal tract and hyperpigmented macules on the lips and oral mucosa (melanosis). This syndrome can be classed as one of various hereditary intestinal polyposis syndromes and one of various hamartomatous polyposis syndromes. It has an incidence of approximately 1 in 25,000 to 300,000 births. Signs and symptoms: The risks associated with this syndrome include a substantial risk of cancer, especially of the breast and gastrointestinal tract. Colorectal cancer is the most common malignancy, with a lifetime risk of 39 percent, followed by breast cancer in females with a lifetime risk of 32 to 54 percent. Patients with the syndrome also have an increased risk of developing carcinomas of the liver, lungs, breast, ovaries, uterus, testes, and other organs. Specifically, it is associated with an increased risk of sex-cord stromal tumor with annular tubules in the ovaries. Due to the increased risk of malignancies, direct surveillance is recommended. Signs and symptoms: The average age of first diagnosis is 23. The first presentation is often bowel obstruction or intussusception from the hamartomatous gastrointestinal polyps. Dark blue, brown, and black pigmented mucocutaneous macules are present in over 95 percent of individuals with Peutz-Jeghers syndrome. Pigmented lesions are rarely present at birth, but often appear before 5 years of age. The macules may fade during puberty. The melanocytic macules are not associated with malignant transformation. Complications associated with Peutz-Jeghers syndrome include obstruction and intussusception, which occur in up to 69 percent of patients, typically first between the ages of 6 and 18, though surveillance for them is controversial. Anemia is also common due to gastrointestinal bleeding from the polyps. Genetics: In 1998, a gene associated with the condition was found. On chromosome 19, the gene known as STK11 (LKB1) is a possible tumor suppressor gene. It is inherited in an autosomal dominant pattern, which means that anyone who has PJS has a 50% chance of passing the disease on to their offspring. Peutz–Jeghers syndrome is rare and studies typically include only a small number of patients. Even in those few studies that do contain a large number of patients, the quality of the evidence is limited due to pooling patients from many centers, selection bias (only patients with health problems coming for treatment are included), and historical bias (the patients reported are from a time before advances in the diagnosis and treatment of Peutz–Jeghers syndrome were made). Probably due to this limited evidence base, cancer risk estimates for Peutz–Jeghers syndrome vary from study to study. There is an estimated 18–21% risk of ovarian cancer, 9% risk of endometrial cancer, and 10% risk of cervical cancer, specifically adenoma malignum. Diagnosis: The main criteria for clinical diagnosis are: Family history Mucocutaneous lesions causing patches of hyperpigmentation in the mouth and on the hands and feet. The oral pigmentations are the first on the body to appear, and thus play an important part in early diagnosis. Intraorally, they are most frequently seen on the gingiva, hard palate and inside of the cheek. The mucosa of the lower lip is almost invariably involved as well. Diagnosis: Hamartomatous polyps in the gastrointestinal tract. 
These are benign polyps with an extraordinarily low potential for malignancy. Having 2 of the 3 listed clinical criteria indicates a positive diagnosis. The oral findings are consistent with other conditions, such as Addison's disease and McCune–Albright syndrome, and these should be included in the differential diagnosis. 90–100% of patients with a clinical diagnosis of PJS have a mutation in the STK11/LKB1 gene. Molecular genetic testing for this mutation is available clinically. Management: Resection of the polyps is required only if serious bleeding or intussusception occurs. Enterotomy is performed for removing large, single nodules. Short lengths of heavily involved intestinal segments can be resected. Colonoscopy can be used to snare the polyps if they are within reach. Prognosis: Most patients will develop flat, brownish spots (melanotic macules) on the skin, especially on the lips and oral mucosa, during the first year of life, and a patient's first bowel obstruction due to intussusception usually occurs between the ages of six and 18 years. The cumulative lifetime cancer risk begins to rise in middle age. Cumulative risks by age 70 for all cancers, gastrointestinal (GI) cancers, and pancreatic cancer are 85%, 57%, and 11%, respectively. A 2011 Dutch study followed 133 patients for 14 years. The cumulative risk for cancer was 40% and 76% at ages 40 and 70, respectively. Forty-two (32%) of the patients died during the study; 28 (67%) of these deaths were cancer-related. They died at a median age of 45. Mortality was increased compared with the general population. A family with sinonasal polyposis was followed up for 28 years. Two cases of sinonasal-type adenocarcinoma developed. This is a rare cancer. This report suggested that follow-up of sinus polyps in this syndrome may be indicated. Prognosis: Monitoring Some suggestions for cancer surveillance include the following: small intestine with small bowel radiography every 2 years, esophagogastroduodenoscopy and colonoscopy every 2 years, CT scan or MRI of the pancreas yearly, ultrasound of the pelvis and testes yearly, mammography annually from age 25, and Papanicolaou smear (Pap smear) annually beginning at age 18-20. Follow-up care should be supervised by a physician familiar with Peutz–Jeghers syndrome. Genetic consultation and counseling as well as urological and gynecological consultations are often needed. Eponym: First described in a published case report in 1921 by Jan Peutz (1886–1957), a Dutch internist, it was later formalized into the syndrome by American physicians at Boston City Hospital, Harold Joseph Jeghers (1904–1990) and Kermit Harry Katz (1914–2003), and Victor Almon McKusick (1921–2008), in 1949, and published in the New England Journal of Medicine.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dynamic combinatorial chemistry** Dynamic combinatorial chemistry: Dynamic combinatorial chemistry (DCC), also known as constitutional dynamic chemistry (CDC), is a method for the generation of new molecules formed by reversible reaction of simple building blocks under thermodynamic control. The library of these reversibly interconverting building blocks is called a dynamic combinatorial library (DCL). All constituents in a DCL are in equilibrium, and their distribution is determined by their thermodynamic stability within the DCL. The interconversion of these building blocks may involve covalent or non-covalent interactions. When a DCL is exposed to an external influence (such as proteins or nucleic acids), the equilibrium shifts and those components that interact with the external influence are stabilised and amplified, allowing more of the active compound to be formed. History: By modern definition, dynamic combinatorial chemistry is generally considered to be a method of facilitating the generation of new chemical species by the reversible linkage of simple building blocks, under thermodynamic control. This principle is known to select the most thermodynamically stable product from an equilibrating mixture of a number of components, a concept commonly utilised in synthetic chemistry to direct the control of reaction selectivity. Although this approach was arguably utilised in the work of Fischer and Werner as early as the 19th century, their respective studies of carbohydrate and coordination chemistry were restricted to rudimentary speculation, lacking the rationale of modern thermodynamics. It was not until supramolecular chemistry revealed early concepts of molecular recognition, complementarity and self-organisation that chemists could begin to employ strategies for the rational design and synthesis of macromolecular targets. The concept of template synthesis was further developed and rationalised through the pioneering work of Busch in the 1960s, which clearly defined the role of a metal ion template in stabilising the desired ‘thermodynamic’ product, allowing for its isolation from the complex equilibrating mixture. Although the work of Busch helped to establish the template method as a powerful synthetic route to stable macrocyclic structures, this approach remained exclusively within the domain of inorganic chemistry until the early 1990s, when Sanders et al. first proposed the concept of dynamic combinatorial chemistry. Their work combined thermodynamic templation in tandem with combinatorial chemistry, to generate an ensemble of complex porphyrin and imine macrocycles using a modest selection of simple building blocks. History: Sanders then developed this early manifestation of dynamic combinatorial chemistry as a strategy for organic synthesis; the first example being the thermodynamically-controlled macrolactonisation of oligocholates to assemble cyclic steroid-derived macrocycles capable of interconversion via component exchange. Early work by Sanders et al. employed transesterification to generate dynamic combinatorial libraries. In retrospect, it was unfortunate that esters were selected for mediating component exchange, as transesterification processes are inherently slow and require rigorously anhydrous conditions. 
However, their subsequent investigations identified that both the disulfide and hydrazone covalent bonds exhibit effective component exchange processes and so present a reliable means of generating dynamic combinatorial libraries capable of thermodynamic templation. This chemistry now forms the basis of much research in the developing field of dynamic covalent chemistry, and has in recent years emerged as a powerful tool for the discovery of molecular receptors. Protein-directed: One of the key developments within the field of DCC is the use of proteins (or other biological macromolecules, such as nucleic acids) to influence the evolution and generation of components within a DCL. Protein-directed DCC provides a way to generate, identify and rank novel protein ligands, and therefore has huge potential in the areas of enzyme inhibition and drug discovery. Protein-directed: Reversible covalent reactions The development of protein-directed DCC has not been straightforward, because the reversible reactions employed must occur in aqueous solution at biological pH and temperature, and the components of the DCL must be compatible with proteins. Several reversible reactions have been proposed and/or applied in protein-directed DCC. These include boronate ester formation, diselenide-disulfide exchange, disulfide formation, hemithioacetal formation, hydrazone formation, imine formation and thiol-enone exchange. Protein-directed: Pre-equilibrated DCL For reversible reactions that do not occur in aqueous buffers, the pre-equilibrated DCC approach can be used. The DCL is initially generated (or pre-equilibrated) in organic solvent, and then diluted into aqueous buffer containing the protein target for selection. Organic-based reversible reactions, including Diels-Alder and alkene cross metathesis reactions, have been proposed or applied in protein-directed DCC using this method. Protein-directed: Reversible non-covalent reactions Reversible non-covalent reactions, such as metal-ligand coordination, have also been applied in protein-directed DCC. This strategy is useful for investigating the optimal ligand stereochemistry for the binding site of the target protein. Enzyme-catalysed reversible reactions Enzyme-catalysed reversible reactions, such as protease-catalysed amide bond formation/hydrolysis reactions and aldolase-catalysed aldol reactions, have also been applied in protein-directed DCC. Analytical methods A protein-directed DCC system must be amenable to efficient screening. Several analytical techniques have been applied to the analysis of protein-directed DCLs. These include HPLC, mass spectrometry, NMR spectroscopy, and X-ray crystallography. Protein-directed: Multi-protein approach Although most applications of protein-directed DCC to date have involved the use of a single protein in the DCL, it is possible to identify protein ligands by using multiple proteins simultaneously, as long as a suitable analytical technique is available to detect the protein species that interact with the DCL components. This approach may be used to identify specific inhibitors or broad-spectrum enzyme inhibitors. Other applications: DCC is useful in identifying molecules with unusual binding properties, and provides synthetic routes to complex molecules that aren't easily accessible by other means. These include smart materials, foldamers, self-assembling molecules with interlocking architectures and new soft materials. The application of DCC to detect volatile bioactive compounds, i.e. 
the amplification and sensing of scent, was proposed in a concept paper. Recently, DCC was also used to study the abiotic origins of life.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rope (unit)** Rope (unit): A rope may refer to any of several units of measurement initially determined or formed by ropes or knotted cords. Length: The Greco-Roman schoenus, supposedly based on an Egyptian unit derived from a wound reed measuring rope, may also be given in translation as a "rope". According to Strabo, it varied in length between 30 and 120 stadia (roughly 5 to 20 km) depending on local custom. The Byzantine equivalent, the schoinion or "little rope", varied between 60 and 72 Greek feet depending upon the location. The Thai sen of 20 Thai fathoms or 40 m also means and is translated "rope". The Somerset rope was a former English unit used in drainage and hedging. It was 20 feet (now precisely 6.096 m). Area: The Romans used the schoenus as an alternative name for the half-jugerum formed by a square with sides of 120 Roman feet. In Somerset, the rope could also double as a measure of area equivalent to 20 feet by 1 foot. Walls in Somerset were formerly sold "per rope" of 20 sq ft. Garlic: In medieval English units, the rope of garlic was a set unit of 15 heads of garlic. 15 such ropes made up the "hundred" of garlic.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mathematics (UIL)** Mathematics (UIL): Mathematics (sometimes referred to as General Math, to distinguish it from other mathematics-related events) is one of several academic events sanctioned by the University Interscholastic League. It is also a competition held by the Texas Math and Science Coaches Association, using the same rules as the UIL. Mathematics is designed to test students' understanding of advanced mathematics. The UIL contest began in 1943, and is among the oldest of all UIL academic contests. Eligibility: Students in Grade 6 through Grade 12 are eligible to enter this event. For competition purposes, separate divisions are held for Grades 6-8 and Grades 9-12, with separate subjects covered on each test as follows: The test for Grades 6-8 covers numeration systems, arithmetic operations involving whole numbers, integers, fractions, decimals, exponents, order of operations, probability, statistics, number theory, simple interest, measurements and conversions, plus possibly geometry and algebra problems (as appropriate for the grade level). Eligibility: The test for Grades 9-12 covers algebra I and II, geometry, trigonometry, math analysis, analytic geometry, pre-calculus, and elementary calculus. For Grades 6-8 each school may send up to three students per division. In order for a school to participate in team competition in a division, the school must send three students in that division. For Grades 9-12 each school may send up to four students; however, in districts with more than eight schools the district executive committee can limit participation to three students per school. In order for a school to participate in team competition, the school must send at least three students. Rules and Scoring: At the junior high level, the test consists of 50 questions and is limited to 30 minutes. At the high school level, the test consists of 60 questions and is limited to 40 minutes. Both tests are multiple choice. Rules and Scoring: There is no intermediate time signal given; at the end of the allotted time the students must immediately stop working. If contestants are in the process of writing down an answer when the stop signal is given, they may finish writing it, but they may not do additional work on a test question. The questions can be answered in any order; a skipped question is not scored. Rules and Scoring: Calculators are permitted provided they are (or were) commercially available models, run quietly, and do not require auxiliary power. One calculator plus one spare is permitted. Five points are awarded for each correct answer at the junior high level, while six points are awarded at the high school level. Two points are deducted for each wrong answer. Skipped or unanswered questions are not scored. Determining the Winner: Elementary and Junior High Scoring is posted for only the top six individual places and the top three teams. There are no tiebreakers for either individual or team competition. Determining the Winner: High School Level The top three individuals and the top team (determined based on the scores of the top three individuals) advance to the next round. In addition, within each region, the highest-scoring second-place team from all district competitions advances as the "wild card" to regional competition (provided the team has four members), and within the state, the highest-scoring second-place team from all regional competitions advances as the wild card to the state competition. 
Members of advancing teams who did not place individually remain eligible to compete for individual awards at higher levels. Determining the Winner: For individual competition, the tiebreaker is percent accuracy (the number of questions answered correctly divided by the number of questions attempted; see the sketch below). If a tie still exists, all tied individuals will advance. For team competition, the score of the fourth-place individual is used as the tiebreaker. If a team has only three members, it is not eligible to participate in the tiebreaker. If the fourth-place score still results in a tie, all remaining tied teams will advance. At the state level, ties for first place are not broken. For district meet academic championship and district meet sweepstakes awards, points are awarded to the school as follows: Individual places: 1st—15, 2nd—12, 3rd—10, 4th—8, 5th—6, and 6th—4. Team places: 1st—10 and 2nd—5. The maximum number of points a school can earn in Mathematics is 47 (15, 12, and 10 points for individuals and 10 points for a top team ranking), though a school obtaining this number of points is extremely rare. List of prior winners: Individual NOTE: For privacy reasons, only the winning school is shown. Team NOTE: The team competition did not start until the 1992-93 scholastic year.
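The high school scoring rule and the individual tiebreaker described above reduce to a couple of one-line formulas; the Python sketch below restates them for concreteness. The function names and the example score values are illustrative, not UIL's.

```python
# High school scoring: 6 points per correct answer, minus 2 per wrong
# answer; skipped or unanswered questions are not scored.
def hs_score(correct, wrong):
    return 6 * correct - 2 * wrong

# Individual tiebreaker: percent accuracy = correct / attempted.
def percent_accuracy(correct, wrong):
    attempted = correct + wrong
    return correct / attempted if attempted else 0.0

# Two contestants can tie on points yet differ on the tiebreaker:
# 42 right / 10 wrong and 40 right / 4 wrong both score 232, but the
# second contestant wins on accuracy (about 90.9% versus 80.8%).
print(hs_score(42, 10), percent_accuracy(42, 10))
print(hs_score(40, 4), percent_accuracy(40, 4))
```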
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TBRG1** TBRG1: Transforming growth factor beta regulator 1 is a protein that in humans is encoded by the TBRG1 gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Marker (linguistics)** Marker (linguistics): In linguistics, a marker is a free or bound morpheme that indicates the grammatical function of the marked word, phrase, or sentence. Most characteristically, markers occur as clitics or inflectional affixes. In analytic languages and agglutinative languages, markers are generally easily distinguished. In fusional languages and polysynthetic languages, this is often not the case. For example, in Latin, a highly fusional language, the word amō ("I love") is marked by suffix -ō for indicative mood, active voice, first person, singular, present tense. Analytic languages tend to have a relatively limited number of markers. Marker (linguistics): Markers should be distinguished from the linguistic concept of markedness. An unmarked form is the basic "neutral" form of a word, typically used as its dictionary lemma, such as—in English—for nouns the singular (e.g. cat versus cats), and for verbs the infinitive (e.g. to eat versus eats, ate and eaten). Unmarked forms (e.g. the nominative case in many languages) tend to be less likely to have markers, but this is not true for all languages (compare Latin). Conversely, a marked form may happen to have a zero affix, like the genitive plural of some nouns in Russian (e.g. сапо́г). In some languages, the same forms of a marker have multiple functions, such as when used in different cases or declensions (for example -īs in Latin).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jonah complex** Jonah complex: The Jonah complex is the fear of success or the fear of being one's best. This fear prevents self-actualization, or the realization of one's own potential. It is the fear of one's own greatness, the evasion of one's destiny, or the avoidance of exercising one's talents. Just as the fear of achieving a personal worst may serve to motivate personal growth, the fear of achieving a personal best may hinder achievement. The Jonah complex is evident in neurotic people. Etymology: Although Abraham Maslow is credited with the term, the name "Jonah complex" was originally suggested by Maslow's friend, Professor Frank E. Manuel. The name comes from the story of the Biblical prophet Jonah's evasion of the destiny to prophesy the destruction of Nineveh. Maslow states, "So often we run away from the responsibilities dictated (or rather suggested) by nature, by fate, even sometimes by accident, just as Jonah tried—in vain—to run away from his fate". Causes: Any dilemma, paradox or challenge faced by an individual may trigger reactions related to the Jonah complex. These challenges may vary in degree and intensity. Such challenges may include career changes, beginning new stages in life, moving to new locations, interviews or auditions, and undertaking new interpersonal commitments such as marriage. At its crux, the Jonah complex reflects the subject's inability to differentiate humility from self-helplessness. Other causes include: fear of the sense of responsibility and work that often attends recognizing one's own greatness, talents, or potential; fear that an extraordinary life would be too far out of the ordinary, and hence not acceptable to others, inciting xenophobic rejection; fear that, by association, a honed and elevated ability becomes tied to an unrelated traumatic event, complex, or memory; and fear of seeming arrogant or self-centered. Causes: Difficulty envisioning oneself as a prominent or authoritative figure
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**40S ribosomal protein S28** 40S ribosomal protein S28: 40S ribosomal protein S28 is a protein that in humans is encoded by the RPS28 gene. Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 40S subunit. The protein belongs to the S28E family of ribosomal proteins. It is located in the cytoplasm. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Documentary channel** Documentary channel: A documentary channel is a specialty channel which focuses on broadcasting documentaries. Some documentary channels further specialize by dedicating their television programming to specific types of documentaries or documentaries in a specific area of knowledge; the Documentary Channel and The History Channel are examples of this. There is some overlap between news channels and documentary channels, but while a documentary channel may also broadcast programs about current affairs, it will, as a rule, air longer, more in-depth segments and not present up-to-the-minute news coverage. Also, many other TV channels regularly air documentaries, but unless a channel is significantly dedicated to documentary-type programming, it probably will not be considered a documentary channel. As of 2006, some of the most famous documentary channels are the Discovery Channel and the National Geographic Channel.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RCW 36** RCW 36: RCW 36 (also designated Gum 20) is an emission nebula containing an open cluster in the constellation Vela. This H II region is part of a larger-scale star-forming complex known as the Vela Molecular Ridge (VMR), a collection of molecular clouds in the Milky Way that contain multiple sites of ongoing star-formation activity. The VMR is made up of several distinct clouds, and RCW 36 is embedded in the VMR Cloud C. RCW 36: RCW 36 is one of the sites of massive-star formation closest to the Solar System, lying at a distance of approximately 700 parsecs (2,300 light-years). The most massive stars in the star cluster are two stars with late-O or early-B spectral types, but the cluster also contains hundreds of lower-mass stars. This region is also home to objects with Herbig–Haro jets, HH 1042 and HH 1043. Star formation in RCW 36: Like most star-forming regions, the interstellar medium around RCW 36 contains both the gas from which stars form and some newly formed young stars. Here, young stellar clusters form in giant molecular clouds. Molecular clouds are the coldest, densest form of interstellar gas and are composed mostly of molecular hydrogen (H2), but also include more complex molecules, cosmic dust, and atomic helium. Stars form when the mass of gas in part of a cloud becomes too great, causing it to collapse due to the Jeans instability. Most stars do not form alone, but in groups containing hundreds or thousands of other stars. RCW 36 is an example of this type of "clustered" star formation. Molecular cloud and H II region: The Vela Molecular Ridge can be subdivided into several smaller clouds, each of which in turn can be subdivided into cloud "clumps". The molecular cloud clump from which the RCW 36 stars are forming is Clump 6 in the VMR C cloud. Early maps of the region were produced by radio telescopes that traced emission from several types of molecules found in the clouds, including CO, OH, and H2CO. More detailed CO maps were produced in the 1990s by a team of Japanese astronomers using the NANTEN millimeter-wavelength telescope. Using emission from C18O, they estimated the total mass of Cloud C to be 44,000 M☉. The cloud maps suggest that Cloud C is the youngest component of the VMR because of an ultra-compact H II region associated with RCW 36 and several sites of embedded protostars, while H II regions in other VMR clouds are more evolved. Observations from the Herschel Space Observatory show that the material within the cloud is organized into filaments, and RCW 36 sits near the south end of a 10-parsec-long filament. Star formation in RCW 36 is currently ongoing. In the dense gas at the western edge of RCW 36, where the far-infrared emission is greatest, are found protostellar cores, the Herbig–Haro objects, and an ultra-compact H II region. However, more deeply embedded star formation is obscured by dust, so radiation can only escape from the cloud surface and not from the embedded objects themselves. The H II region is an area around the cluster in which hydrogen atoms in the interstellar medium have been ionized by ultraviolet light from O- and B-type stars. The H II region in RCW 36 has an hourglass morphology, similar to the shape of H II regions around other young stellar clusters like W40 or Sh2-106. In addition, an ultra-compact H II region surrounds IRAS source 08576−4333.
Star cluster: Due to the youth of RCW 36, most of the stars in the cluster are at an early stage of stellar evolution where they are known as young stellar objects or pre-main-sequence stars. These stars are still in the process of contraction before they reach the main sequence, and they may still have gas accreting onto them from either a circumstellar disk or envelope. Star cluster: Cluster members in RCW 36 have been identified through both infrared and X-ray observations. Bright infrared sources, attributed to massive stars, were first identified by the TIFR 100-cm balloon-borne telescope from the National Balloon Facility in Hyderabad, India. In the early 2000s, infrared images in the J, H, and Ks bands suggested at least 350 cluster members. Observations by NASA's Spitzer Space Telescope and Chandra X-ray Observatory were used to identify cluster members as part of the MYStIX survey of nearby star-forming regions. In the MYStIX catalog of 384 probable young stellar members of RCW 36, more than 300 of the stars are detected as X-ray sources. Modeling of the brightnesses of stars at various infrared wavelengths has shown 132 young stellar objects to have infrared excess consistent with circumstellar disks or envelopes. The cluster has been noted by Baba et al. for having a high density of stars, with star counts (the number of stars within an angular area of the sky) exceeding 3,000 stars per square parsec at the center of the cluster. A measurement of central area density using the MYStIX catalog suggested approximately 10,000 stars per square parsec at the cluster center, but this study also suggested that such densities are not unusual for massive star-forming regions. The spatial distribution of stars has been described as a King profile or alternatively as a "core-halo" structure. Stellar density near the center of RCW 36 has been estimated to be approximately 300,000 stars per cubic parsec (or 10,000 stars per cubic light year). In contrast, the density of stars in the Solar neighborhood is only 0.14 stars per cubic parsec, so the density of stars at the center of RCW 36 is about 2 million times greater. It has been calculated that for young stellar clusters with more than 10⁴ stars pc⁻³, close encounters between stars can lead to interactions between protoplanetary disks that affect developing planetary systems. Young stellar objects: Several special types of young stellar object have been identified in RCW 36, and are described in more detail below. The properties of these stars are related to their extreme youth. Young stellar objects: Two stars in RCW 36 have Herbig–Haro jets (HH 1042 and HH 1043). Jets of gas flowing out from young stars can be produced by accretion onto a star. In RCW 36 these jets were seen in a number of spectral lines, including lines from hydrogen, helium, oxygen, nitrogen, sulfur, nickel, calcium, and iron. Mass-loss rates from the jets have been estimated to be on the order of 10⁻⁷ M☉ per year. Inhomogeneities in the jets have been attributed to a variable accretion rate on timescales of approximately 100 years. The young star 2MASS J08592851-4346029 has been classified as a Herbig Ae star. Stars in this class are pre-main-sequence, intermediate-mass stars (spectral type A) with emission lines in their spectra from hydrogen. Observations indicate that 2MASS J08592851-4346029 has a bloated radius, as would be expected for a young star that is still contracting.
Some of the lines in its spectrum have a P Cygni profile, indicating the presence of a stellar wind. The young star CXOANC J085932.2−434602 was observed by the Chandra X-ray Observatory to have produced a large flare with a peak temperature greater than 100 million kelvins. Such "super-hot" flares from young stars have been seen in other star-forming regions like the Orion Nebula.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Uroguanylin** Uroguanylin: Uroguanylin is a 16-amino-acid peptide that is secreted by enterochromaffin cells in the duodenum and proximal small intestine. Uroguanylin acts as an agonist of the guanylyl cyclase receptor guanylate cyclase 2C (GC-C), and regulates electrolyte and water transport in intestinal and renal epithelia. By agonizing this guanylyl cyclase receptor, uroguanylin and guanylin cause intestinal secretion of chloride and bicarbonate to increase dramatically; this process is mediated by the second messenger cGMP. Its sequence is H-Asn-Asp-Asp-Cys(1)-Glu-Leu-Cys(2)-Val-Asn-Val-Ala-Cys(1)-Thr-Gly-Cys(2)-Leu-OH. Uroguanylin: In humans, the uroguanylin peptide is encoded by the GUCA2B gene. Uroguanylin may be involved in appetite and perceptions of 'fullness' after eating meals, as suggested by a study in mice.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paratrooper (ride)** Paratrooper (ride): The Paratrooper, also known as the "Parachute Ride" or "Umbrella Ride", is a type of fairground ride in which seats suspended below a wheel rotate at an angle. The seats are free to rock sideways and swing out under centrifugal force as the wheel rotates. Invariably, the seats on the Paratrooper ride have a round umbrella or other shaped canopy above them. In contrast to modern thrill rides, the Paratrooper is a ride suitable for almost all ages. Most Paratrooper rides require a rider to be at least 36 inches (91.44 cm) tall to ride accompanied by an adult, and over 48 inches (121.92 cm) to ride alone. Paratrooper (ride): Older Paratrooper rides have a rotating wheel which is permanently raised, which has the disadvantage that riders can only load two at a time as each seat is brought to hang vertically at the lowest point of the wheel. Some models have a lower platform, slightly raised on the ends, that permits the loading of up to three seats at a time. Most of these rides were made by the manufacturing companies Bennett, Watkins or Hrubetz. The German manufacturer Heintz-Fahtze also made larger models of the Paratrooper under the name of the Twister. Paratrooper (ride): Modern Paratrooper rides use a hydraulic lifting piston to raise the wheel to its riding angle while spinning the seats. In its lowered position, all the seats hang vertically near the ground and can be loaded simultaneously. The above manufacturers also made these types, and the height requirements to ride them remain the same. Variations: The Force 10 is a ride made by Tivoli Enterprises that features some of the same motion as the Paratrooper. The Star Trooper is a variant created by Dartron Industries that features seats facing both ways. The Star Trooper's initial design eventually evolved into the Cliffhanger, also made by Dartron Industries. The same seats are used on the Swift-O-Plane, and the height requirement is the same as that of the Enterprise. Variations: In the 1980s, British amusement manufacturer David Ward developed the Super Trooper, in which the wheel rises horizontally up a central column. Once at the top, the wheel slants up to 45 degrees in either direction. He built two 12-seat versions and a 10-seat version. In 2018, PWS Rides Ltd. acquired the plans from Ward to build a new version, with the first example due to be delivered in early 2019.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**John G. Bollinger** John G. Bollinger: Dr. John G. Bollinger is the Dean Emeritus, College of Engineering, and Professor Emeritus of Industrial and Systems Engineering, University of Wisconsin-Madison. Education: BS, Mechanical Engineering (1957), UW-Madison; MS, Mechanical Engineering (EE minor) (1958), Cornell University College of Engineering; PhD, Mechanical Engineering (EE minor) (1961), UW-Madison Career: Bollinger was on the faculty of the University of Wisconsin-Madison from 1960 through 2000, and was a Fulbright fellow in Germany at the Machine Tool and Industrial Organization Institute in Aachen (1962–63) and in England, where he was a visiting professor at the Cranfield Institute of Technology (1980–81). Bollinger served as Dean from July 1981 until September 1999. Prior to being Dean, he was Director of the Data Acquisition and Simulation Laboratory and Chairman of the Department of Mechanical Engineering. He was elected a member of the National Academy of Engineering in 1983 for outstanding research on machine tools, sensors, and controls for manufacturing equipment, and leadership in education and the engineering profession. Career: In 1992, he was named a director of Enhanced Imaging Technologies Inc. in Irvine, California.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Combo (video games)** Combo (video games): In video games, a combo (short for combination) is a set of actions performed in sequence, usually with strict timing limitations, that yield a significant benefit or advantage. The term originates from fighting games, where it is based upon the concept of a striking combination. It has since been applied more generally to a wide variety of genres, such as puzzle games, shoot 'em ups, and sports games. Combos are commonly used as an essential gameplay element, but can also serve as a high-score or attack-power modifier, or simply as a way to exhibit a flamboyant playing style. Combo (video games): In fighting games, combo specifically indicates a timed sequence of moves which produce a cohesive series of hits, each of which leaves the opponent unable to block. History: John Szczepaniak of Hardcore Gaming 101 considers Data East's DECO Cassette System arcade title Flash Boy (1981), a scrolling action game based on the manga and anime series Astro Boy, to have a type of combo mechanic: when the player punches an enemy and it explodes, debris can destroy other enemies. The use of combo attacks originated from Technōs Japan's beat 'em up arcade games, Renegade in 1986 and Double Dragon in 1987. In contrast to earlier games that let players knock out enemies with a single blow, the opponents in Renegade and Double Dragon could take much more punishment, requiring a succession of punches, with the first hit temporarily immobilizing the enemy, making him unable to defend himself against successive punches. Combo attacks would later become more dynamic in Capcom's Final Fight, released in 1989. History: Fighting games The earliest known competitive fighting game that used a combo system was Culture Brain's Shanghai Kid in 1985; when the spiked speech balloon that reads "RUSH!" popped up during battle, the player had a chance to rhythmically perform a series of combos called "rush-attacking". The combo notion was introduced to competitive fighting games with Street Fighter II (1991) by Capcom, when skilled players learned that they could combine several attacks that left no time for the computer player to recover if they timed them correctly. Combos were a design accident; lead producer Noritaka Funamizu noticed that extra strikes were possible during a bug check on the car-smashing bonus stage. He thought that the timing required was too difficult to make it a useful game feature, but left it in as a hidden one. Combos have since become a design priority in almost all fighting games, and range from the simplistic to the highly intricate. The first game to count the hits of each combo, and reward the player for performing them, was Super Street Fighter II. History: Rhythm games In rhythm games, a combo measures how many consecutive notes have received at least the second-worst judgment (i.e. any judgment other than the worst). Never receiving the worst judgment in the entire song is called a full combo or a no miss. Receiving the best judgment for all notes in the song is called a full perfect combo or an all perfect. Some rhythm games have an internal judgment that is tighter than the best judgment, e.g. Critical Perfect in Maimai or S-Critical in Sound Voltex. Receiving an internal judgment for all notes in a song is called a 理論値 (literally "theoretical value"). Other uses: Many other types of video games include a combo system involving chains of tricks or other maneuvers, usually in order to build up bonus points to obtain a high score.
Examples include the Tony Hawk's Pro Skater series, the Crazy Taxi series, and Pizza Tower. The first game with score combos was Data East's 1981 DECO Cassette System arcade game Flash Boy. Combos are a main feature in many puzzle games, such as Columns, Snood and Magical Drop. Primarily they are used as a scoring device, but in level-based modes of play they are also used to gain levels more quickly. Shoot 'em ups have increasingly incorporated combo systems, such as in Ikaruga, as have hack-and-slash games, such as Dynasty Warriors.
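To make the generic scoring-combo idea concrete, here is a minimal sketch of a chain tracker of the kind described above. It is not the implementation of any particular title; the class name, the linear multiplier, and the reset-on-miss rule are all illustrative assumptions.

```python
class ComboMeter:
    """Tracks a chain of consecutive successful actions and a score that
    grows with the current chain length; a miss breaks the chain."""

    def __init__(self, base_points: int = 100):
        self.base_points = base_points
        self.chain = 0        # current consecutive-hit count
        self.best_chain = 0   # longest chain so far (a "full combo" check)
        self.score = 0

    def hit(self) -> None:
        self.chain += 1
        self.best_chain = max(self.best_chain, self.chain)
        # Each hit is worth more the longer the chain: 100, 200, 300, ...
        self.score += self.base_points * self.chain

    def miss(self) -> None:
        self.chain = 0        # breaking the combo resets the multiplier


meter = ComboMeter()
for event in ("hit", "hit", "hit", "miss", "hit"):
    meter.hit() if event == "hit" else meter.miss()
print(meter.score, meter.best_chain)  # 700 3
```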
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polsby–Popper test** Polsby–Popper test: The Polsby–Popper test is a mathematical compactness measure of a shape developed to quantify the degree of gerrymandering of political districts. The method was developed by lawyers Daniel D. Polsby and Robert Popper, though it had earlier been introduced in the field of paleontology by E.P. Cox. The formula for calculating a district's Polsby–Popper score is PP(D) = 4πA(D) / P(D)², where D is the district, P(D) is the perimeter of the district, and A(D) is the area of the district. A district's Polsby–Popper score will always fall within the interval [0,1], with a score of 0 indicating complete lack of compactness and a score of 1 indicating maximal compactness. Compared to other measures that use dispersion to measure gerrymandering, the Polsby–Popper test is very sensitive to both physical geography (for instance, convoluted coastal borders) and map resolution. The method was chosen by Arizona's redistricting commission in 2000.
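The formula is simple enough to compute directly. A minimal sketch (the function name and the test shapes are illustrative): a circle scores exactly 1, a unit square scores π/4 ≈ 0.785, and elongated or convoluted districts score lower still.

```python
import math

def polsby_popper(area: float, perimeter: float) -> float:
    """Polsby–Popper compactness: PP(D) = 4*pi*A(D) / P(D)**2."""
    return 4 * math.pi * area / perimeter ** 2

r = 3.0
print(polsby_popper(math.pi * r ** 2, 2 * math.pi * r))  # circle: 1.0
print(polsby_popper(1.0, 4.0))                           # unit square: ~0.785
print(polsby_popper(10.0, 2 * (100 + 0.1)))              # thin 100 x 0.1 strip: ~0.003
```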
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tropone** Tropone: Tropone or 2,4,6-cycloheptatrien-1-one is an organic compound with some importance in organic chemistry as a non-benzenoid aromatic. The compound consists of a ring of seven carbon atoms with three conjugated alkene groups and a ketone group. The related compound tropolone (2-hydroxy-2,4,6-cycloheptatrien-1-one) has an additional alcohol (or an enol, including the double bond) group next to the ketone. Tropones are uncommon in natural products, with the notable exception of the 2-hydroxyl derivatives, which are called tropolones. Tropone: Tropone has been known since 1951 and is also called cycloheptatrienylium oxide. The name tropolone was coined by M. J. S. Dewar in 1945 in connection with perceived aromatic properties. Properties: Dewar in 1945 proposed that tropones could have aromatic properties. The carbonyl group is more polarized as a result of the triene ring, giving a partial positive charge on the carbon atom (A) and a partial negative charge on oxygen. In an extreme case, the carbon atom has a full positive charge (B), forming a tropylium ion ring which is an aromatic 6-electron system (C). Properties: Tropones are also basic (D) as a result of the aromatic stabilization. This property can be observed in the ease of salt formation with acids. The dipole moment for tropone is 4.17 D, compared to a value of only 3.04 D for cycloheptanone. This difference is consistent with stabilization of the dipolar resonance structure. Synthesis: Numerous methods exist for the organic synthesis of tropone and its derivatives. Two selected methods for the synthesis of tropone are selenium dioxide oxidation of cycloheptatriene, and an indirect route from tropinone by a Hofmann elimination and a bromination. Reactions: Tropone undergoes ring contraction to benzoic acid with potassium hydroxide at elevated temperature. Many derivatives also contract to the corresponding arenes. Tropone reacts in electrophilic substitution, for instance with bromine, but the reaction proceeds through the 1,2-addition product and is not an electrophilic aromatic substitution. Tropone derivatives also react in nucleophilic substitution, very much like in nucleophilic aromatic substitution. Tropone is also found to react in an [8+3] annulation with a cinnamic aldehyde. Diene character: Tropone behaves as a diene in Diels–Alder reactions, for instance with maleic anhydride. Similarly, it forms adducts with iron tricarbonyl, akin to (butadiene)iron tricarbonyl. Derivatives: Other tropone derivatives include puberulonic and puberulic acids, roseobacticides, pernambucone, crototropone, and orobanone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Strong interaction** Strong interaction: In nuclear physics and particle physics, the strong interaction, which is also often called the strong force or strong nuclear force, is a fundamental interaction that confines quarks into protons, neutrons, and other hadrons. The strong interaction also binds neutrons and protons to create atomic nuclei, where it is called the nuclear force. Strong interaction: Most of the mass of a common proton or neutron is the result of the strong interaction energy; the individual quarks provide only about 1% of the mass of a proton. At the range of 10⁻¹⁵ m (1 femtometer, slightly more than the radius of a nucleon), the strong force is approximately 100 times as strong as electromagnetism, 10⁶ times as strong as the weak interaction, and 10³⁸ times as strong as gravitation. The strong interaction is observable at two ranges and mediated by two force carriers. On a larger scale (of about 1 to 3 fm), it is the force (carried by mesons) that binds protons and neutrons (nucleons) together to form the nucleus of an atom. On the smaller scale (less than about 0.8 fm, the radius of a nucleon), it is the force (carried by gluons) that holds quarks together to form protons, neutrons, and other hadrons. In the latter context, it is often known as the color force. The strong force inherently has such a high strength that hadrons bound by the strong force can produce new massive particles. Thus, if hadrons are struck by high-energy particles, they give rise to new hadrons instead of emitting freely moving radiation (gluons). This property of the strong force is called color confinement, and it prevents the free "emission" of the strong force: instead, in practice, jets of massive particles are produced. Strong interaction: In the context of atomic nuclei, the same strong interaction force (that binds quarks within a nucleon) also binds protons and neutrons together to form a nucleus. In this capacity it is called the nuclear force (or residual strong force). So the residuum from the strong interaction within protons and neutrons also binds nuclei together. As such, the residual strong interaction obeys a distance-dependent behavior between nucleons that is quite different from that when it is acting to bind quarks within nucleons. Additionally, distinctions exist in the binding energies of the nuclear force with regard to nuclear fusion versus nuclear fission. Nuclear fusion accounts for most energy production in the Sun and other stars. Nuclear fission allows for decay of radioactive elements and isotopes, although it is often mediated by the weak interaction. Artificially, the energy associated with the nuclear force is partially released in nuclear power and nuclear weapons, both in uranium or plutonium-based fission weapons and in fusion weapons like the hydrogen bomb. The strong interaction is mediated by the exchange of massless particles called gluons that act between quarks, antiquarks, and other gluons. Gluons are thought to interact with quarks and other gluons by way of a type of charge called color charge. Color charge is analogous to electromagnetic charge, but it comes in three types (±red, ±green, and ±blue) rather than one, which results in a different type of force, with different rules of behavior. These rules are detailed in the theory of quantum chromodynamics (QCD), which is the theory of quark–gluon interactions. History: Before 1971, physicists were uncertain as to how the atomic nucleus was bound together.
It was known that the nucleus was composed of protons and neutrons and that protons possessed positive electric charge, while neutrons were electrically neutral. By the understanding of physics at that time, positive charges would repel one another and the positively charged protons should cause the nucleus to fly apart. However, this was never observed. New physics was needed to explain this phenomenon. History: A stronger attractive force was postulated to explain how the atomic nucleus was bound despite the protons' mutual electromagnetic repulsion. This hypothesized force was called the strong force, which was believed to be a fundamental force that acted on the protons and neutrons that make up the nucleus. History: In 1964, Murray Gell-Mann and George Zweig proposed the quark model, which holds that protons and neutrons (along with other subatomic particles called hadrons and mesons) are actually made up of smaller particles called quarks. The strong attraction between nucleons was the side-effect of a more fundamental force that bound the quarks together into protons and neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a color charge, although it has no relation to visible color. Quarks with unlike color charge attract one another as a result of the strong interaction, and the particle that mediates this was called the gluon. Behavior of the strong force: The word strong is used since the strong interaction is the "strongest" of the four fundamental forces. At a distance of 10⁻¹⁵ m, its strength is around 100 times that of the electromagnetic force, some 10⁶ times as great as that of the weak force, and about 10³⁸ times that of gravitation. The strong force is described by quantum chromodynamics (QCD), a part of the Standard Model of particle physics. Mathematically, QCD is a non-Abelian gauge theory based on a local (gauge) symmetry group called SU(3). Behavior of the strong force: The force carrier particle of the strong interaction is the gluon, a massless gauge boson. Unlike the photon in electromagnetism, which is neutral, the gluon carries a color charge. Quarks and gluons are the only fundamental particles that carry non-vanishing color charge, and hence they participate in strong interactions only with each other. The strong force is the expression of the gluon interaction with other quark and gluon particles. Behavior of the strong force: All quarks and gluons in QCD interact with each other through the strong force. The strength of interaction is parameterized by the strong coupling constant. This strength is modified by the gauge color charge of the particle, a group-theoretical property. Behavior of the strong force: The strong force acts between quarks. Unlike all other forces (electromagnetic, weak, and gravitational), the strong force does not diminish in strength with increasing distance between pairs of quarks. After a limiting distance (about the size of a hadron) has been reached, it remains at a strength of about 10,000 newtons (N), no matter how much farther the distance between the quarks. As the separation between the quarks grows, the energy added to the pair creates new pairs of matching quarks between the original two; hence it is impossible to isolate quarks. The explanation is that the amount of work done against a force of 10,000 newtons is enough to create particle–antiparticle pairs within a very short distance of that interaction.
The very energy added to the system required to pull two quarks apart would create a pair of new quarks that will pair up with the original ones. In QCD, this phenomenon is called color confinement; as a result, only hadrons, not individual free quarks, can be observed. The failure of all experiments that have searched for free quarks is considered to be evidence of this phenomenon. Behavior of the strong force: The elementary quark and gluon particles involved in a high-energy collision are not directly observable. The interaction produces jets of newly created hadrons that are observable. Those hadrons are created, as a manifestation of mass–energy equivalence, when sufficient energy is deposited into a quark–quark bond, as when a quark in one proton is struck by a very fast quark of another impacting proton during a particle accelerator experiment. However, quark–gluon plasmas have been observed. Residual strong force: Contrary to the description above of distance independence, in the post-Big Bang universe it is not the case that every quark in the universe attracts every other quark. Color confinement implies that the strong force acts without distance-diminishment only between pairs of quarks, and that in compact collections of bound quarks (hadrons), the net color-charge of the quarks essentially cancels out, resulting in a limit of the action of the color-forces: from distances approaching or greater than the radius of a proton, compact collections of color-interacting quarks (hadrons) collectively appear to have effectively no color-charge, or to be "colorless", and the strong force is therefore nearly absent between those hadrons. However, the cancellation is not quite perfect, and a residual force (described below) remains. This residual force does diminish rapidly with distance, and is thus very short-range (effectively a few femtometres). It manifests as a force between the "colorless" hadrons, and is known as the nuclear force or residual strong force (and historically as the strong nuclear force). Residual strong force: The nuclear force acts between hadrons, known as mesons and baryons. This "residual strong force", acting indirectly, transmits gluons that form part of the virtual π and ρ mesons, which, in turn, transmit the force between nucleons that holds the nucleus (beyond the hydrogen-1 nucleus) together. The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. This same force is much weaker between neutrons and protons, because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (van der Waals forces) are much weaker than the electromagnetic forces that hold electrons in association with the nucleus, forming the atoms. Unlike the strong force, the residual strong force diminishes with distance, and does so rapidly. The decrease is approximately exponential with distance, though there is no simple expression known for this; see Yukawa potential. The rapid decrease with distance of the attractive residual force, and the less-rapid decrease of the repulsive electromagnetic force acting between protons within a nucleus, causes the instability of larger atomic nuclei, such as all those with atomic numbers larger than 82 (the element lead). Residual strong force: Although the nuclear force is weaker than the strong interaction itself, it is still highly energetic: transitions produce gamma rays.
The mass of a nucleus is significantly different from the summed masses of the individual nucleons. This mass defect is due to the potential energy associated with the nuclear force. Differences between mass defects power nuclear fusion and nuclear fission. Unification: The so-called Grand Unified Theories (GUT) aim to describe the strong interaction and the electroweak interaction as aspects of a single force, similarly to how the electromagnetic and weak interactions were unified by the Glashow–Weinberg–Salam model into electroweak interaction. The strong interaction has a property called asymptotic freedom, wherein the strength of the strong force diminishes at higher energies (or temperatures). The theorized energy where its strength becomes equal to the electroweak interaction is the grand unification energy. However, no Grand Unified Theory has yet been successfully formulated to describe this process, and Grand Unification remains an unsolved problem in physics. Unification: If GUT is correct, after the Big Bang and during the electroweak epoch of the universe, the electroweak force separated from the strong force. Accordingly, a grand unification epoch is hypothesized to have existed prior to this.
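The contrast between the short-range residual force and a long-range 1/r force can be illustrated numerically with the Yukawa form mentioned above. The sketch below is illustrative only: the coupling is set arbitrarily to 1, and the range is taken from the charged-pion mass, which sets the scale of roughly 1.4 fm.

```python
import math

HBAR_C = 197.327   # MeV*fm
M_PION = 139.57    # MeV/c^2, charged pion; sets the range of the residual force

def yukawa(r_fm: float, g2: float = 1.0) -> float:
    """Yukawa potential V(r) = -g^2 * exp(-r / r0) / r, with
    r0 = hbar*c / (m_pi * c^2) ~ 1.41 fm (g2 is an arbitrary coupling)."""
    r0 = HBAR_C / M_PION
    return -g2 * math.exp(-r_fm / r0) / r_fm

def coulomb_like(r_fm: float, g2: float = 1.0) -> float:
    """A plain 1/r potential for comparison."""
    return -g2 / r_fm

# The exponential factor makes the residual force effectively vanish beyond
# a few femtometres, while the 1/r potential decays only slowly.
for r in (0.5, 1.0, 2.0, 5.0, 10.0):
    ratio = yukawa(r) / coulomb_like(r)   # equals exp(-r / r0)
    print(f"r = {r:5.1f} fm   Yukawa/Coulomb ratio = {ratio:.5f}")
```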
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Colony (biology)** Colony (biology): In biology, a colony is composed of two or more conspecific individuals living in close association with, or connected to, one another. This association is usually for mutual benefit, such as stronger defense or the ability to attack bigger prey. Colonies can form in various shapes and ways depending on the organism involved. For instance, the bacterial colony is a cluster of identical cells (clones). These colonies often form and grow on the surface of (or within) a solid medium, usually derived from a single parent cell. Colonies, in the context of development, may be composed of two or more unitary (or solitary) organisms or be modular organisms. Unitary organisms have determinate development (set life stages) from zygote to adult form, and individuals or groups of individuals (colonies) are visually distinct. Modular organisms have indeterminate growth forms (life stages not set) through repeated iteration of genetically identical modules (or individuals), and it can be difficult to distinguish between the colony as a whole and the modules within. In the latter case, modules may have specific functions within the colony. Colony (biology): In contrast, solitary organisms do not associate with colonies; they are ones in which all individuals live independently and have all of the functions needed to survive and reproduce. Colony (biology): Some organisms are primarily independent and form facultative colonies in response to environmental conditions, while others must live in a colony to survive (obligate). For example, some carpenter bees will form colonies when a dominant hierarchy is formed between two or more nest foundresses (facultative colony), while corals are animals that are physically connected by living tissue (the coenosarc) that contains a shared gastrovascular cavity. Colony types: Social colonies Unicellular and multicellular unitary organisms may aggregate to form colonies. For example, protists such as slime molds are many unicellular organisms that aggregate to form colonies when food resources are hard to come by, as together they are more reactive to chemical cues released by preferred prey. Eusocial insects like ants and honey bees are multicellular animals that live in colonies with a highly organized social structure. Colonies of some social insects may be deemed superorganisms. Animals, such as humans and rodents, form breeding or nesting colonies, potentially for more successful mating and to better protect offspring. The Bracken Cave is the summer home to a colony of around 20 million Mexican free-tailed bats, making it the largest known concentration of mammals. Colony types: Modular organisms Modular organisms are those in which a genet (or genetic individual formed from a sexually-produced zygote) asexually reproduces to form genetically identical clones called ramets. A clonal colony is when the ramets of a genet live in close proximity or are physically connected. Ramets may have all of the functions needed to survive on their own or be interdependent on other ramets. For example, some sea anemones go through the process of pedal laceration, in which a genetically identical individual is asexually produced from tissue broken off from the anemone's pedal disc. In plants, clonal colonies are created through the propagation of genetically identical trees by stolons or rhizomes. Colony types: Colonial organisms are clonal colonies composed of many physically connected, interdependent individuals.
The subunits of colonial organisms can be unicellular, as in the alga Volvox (a coenobium), or multicellular, as in the phylum Bryozoa. Colonial organisms may have been the first step toward multicellular organisms. Individuals within a multicellular colonial organism may be called ramets, modules, or zooids. Structural and functional variation (polymorphism), when present, designates ramet responsibilities such as feeding, reproduction, and defense. To that end, being physically connected allows the colonial organism to distribute nutrients and energy obtained by feeding zooids throughout the colony. The hydrozoan Portuguese man o' war is a classic example of a colonial organism, one of many in the taxonomic class. Colony types: Microbial colonies A microbial colony is defined as a visible cluster of microorganisms growing on the surface of or within a solid medium, presumably cultured from a single cell. Because the colony is clonal, with all organisms in it descending from a single ancestor (assuming no contamination), they are genetically identical, except for any mutations (which occur at low frequencies). Obtaining such genetically identical organisms (or pure strains) can be useful; this is done by spreading organisms on a culture plate and starting a new stock from a single resulting colony. A biofilm is a colony of microorganisms often comprising several species, with properties and capabilities greater than the aggregate of capabilities of the individual organisms. Colony ontogeny for eusocial insects: Colony ontogeny refers to the developmental process and progression of a colony. It describes the various stages and changes that occur within a colony from its initial formation to its mature state. The exact duration and dynamics of colony ontogeny can vary greatly depending on the species and environmental conditions. Factors such as resource availability, competition, and environmental cues can influence the progression and outcome of colony development. Colony ontogeny for eusocial insects: During colony ontogeny for eusocial insects such as ants and bees, a colony goes through several distinct phases, each characterised by specific behavioural patterns, division of labor, and structural modifications. While the exact details can vary depending on the species, the general progression typically involves a number of well-defined stages, detailed below. Colony ontogeny for eusocial insects: Founding stage In this initial stage, a single female individual or small group of female individuals, often called the foundress(es), queen(s) (and kings for termites) or primary reproductive(s), establish a new colony. The foundresses build a basic nest structure and begin to lay eggs. The foundresses can also perform non-reproductive tasks at this early stage, such as nursing these first eggs and leaving the nest to gather resources. Colony ontogeny for eusocial insects: Worker emergence This is also known as the ergonomic stage. As the eggs laid by the foundresses develop, they give rise to the first generation of workers. These workers can assume various tasks, such as foraging, brood care, and nest maintenance. Initially, the worker population is relatively small, and their tasks are not as specialised. As the colony grows, more workers emerge, and the division of labor becomes more pronounced. Some individuals may specialise in tasks like foraging, defense, or tending to the brood, while others may take on general tasks within the nest.
These specialised tasks can change throughout the life of a worker. Colony ontogeny for eusocial insects: Reproductive phase At a certain point in the colony ontogeny, usually after a period of growth and maturation, the colony produces reproductives, including new virgin queens (princesses) and males. These individuals have the potential to leave the nest and start new colonies, ensuring the transmission of the gene pool of their natal colony. Colony ontogeny for eusocial insects: Colony death Over time, colonies may go through a senescence phase where the reproductive output declines and the colony's overall vitality diminishes. Eventually, the colony may die off or be replaced by a new generation of reproductives. After the death of the queen in a monogyne colony, possible fates other than colony death include serial polygyny (when a virgin queen of the colony replaces the dead queen as the primary reproductive) or colony inheritance (when a worker takes over as primary reproductive). Life history: Individuals in social colonies and modular organisms benefit from such a lifestyle. For example, it may be easier to seek out food, defend a nesting site, or increase competitive ability against other species. Modular organisms' ability to reproduce asexually in addition to sexually allows them unique benefits that social colonies do not have. The energy required for sexual reproduction varies based on the frequency and length of reproductive activity, number and size of offspring, and parental care. While solitary individuals bear all of those energy costs, individuals in some social colonies share a portion of those costs. Life history: Modular organisms save energy by using asexual reproduction during their life. Energy reserved in this way allows them to put more energy towards colony growth, regenerating lost modules (due to predation or other causes of death), or responding to environmental conditions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Very short-lived substances** Very short-lived substances: Very short-lived substances (VSLS) are ozone-depleting halogen-containing substances found in the stratosphere. These substances have very short lifetimes, typically less than 6 months. VSLS are responsible for atmospheric damage once they enter the stratosphere and are a contributing factor to the destruction of ozone. Description: VSLS are ozone-depleting halogen-containing substances found in the stratosphere. These substances have very short lifetimes, typically less than 6 months. Approximately 90% of VSLS are produced by natural processes, and their rate of production is increasing. “They are bromine compounds produced by seaweed and the ocean's phytoplankton”. Only 10% of ozone-depleting chlorine compounds are man-made. VSLS are responsible for atmospheric damage once they enter the stratosphere and are a contributing factor to the destruction of the ozone layer. In previous decades it was believed that the most significant factor in ozone depletion was the increase in chlorofluorocarbons (CFCs). Currently VSLS are increasing rapidly, mainly due to industrial activities. The primary VSLS is n-propyl bromide (C3H7Br). It has been forecast that brominated VSLS will increase to about 8-10% of the total VSLS emission by the end of the 21st century. There has not been much research in this area, although this is changing as more scientists study these substances to predict their long-term impact on ozone levels. Transport: The most significant route of air entering the stratosphere is through the tropical tropopause layer (TTL) via convection. Air masses shifted by convection are able to rise several kilometers within a few hours. This transport mechanism is fast enough relative to VSLS lifetimes. Effects: Despite their short lifespan, VSLS have been shown to contribute significantly towards the depletion of the ozone layer, particularly in the lower stratosphere above mid-latitude areas. One study has shown that every five additional parts per trillion by volume (pptv) of VSLS in the atmosphere reduces the global ozone column by about 1.3%. Long-term calculations of VSLS injection into the stratosphere reveal a robust correlation between sea surface temperature, convective activity and the number of short-lived source gases in the tropical tropopause layer (TTL), which becomes especially clear during the perturbations induced by El Niño seasons. This trend is attributed to increased activity by bromine-generating marine life under warmer temperatures and indicates that VSLS concentration will likely increase over the 21st century due to climate change. The potential significant increases in the atmospheric abundance of short-lived halogen substances, through changing natural processes or continued anthropogenic emissions, could be important for future climate.
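Taking the quoted sensitivity at face value, the implied ozone-column loss scales linearly with VSLS abundance. A minimal sketch, assuming the 1.3%-per-5-pptv relationship stays linear (which the study does not guarantee):

```python
def ozone_column_loss_percent(vsls_pptv: float) -> float:
    """Linear scaling from the quoted figure: ~1.3% global ozone-column
    loss for every 5 pptv of VSLS (an assumption beyond the study's range)."""
    return 1.3 * vsls_pptv / 5.0

print(ozone_column_loss_percent(5.0))   # 1.3 (% reduction)
print(ozone_column_loss_percent(12.0))  # ~3.1 (% reduction, if linearity holds)
```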
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electrodeless plasma excitation** Electrodeless plasma excitation: Electrodeless plasma excitation methods include helicon plasma sources, inductively coupled plasmas, and surface-wave-sustained discharges. Electrodeless plasma excitation: Electrodeless high-frequency (HF) discharges have two important advantages over plasmas using electrodes, like capacitively coupled plasmas, that are of great interest for contemporary plasma research and applications. Firstly, no sputtering of the electrodes occurs. However, depending on ion energy, sputtering of the reactor walls or the substrate may still occur. Under conditions typical for plasma modification purposes, the ion energy in electrodeless HF discharges is about an order of magnitude lower than in RF discharges. This way, the contamination of the gas phase and damage to the substrate surface can be reduced. Electrodeless plasma excitation: Secondly, the surface-wave effect makes it possible to generate spatially extended plasmas with electron densities that exceed the critical density.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sampa Das** Sampa Das: Sampa Das is an Indian biotechnologist, scientist, and an expert on public sector agricultural biotechnology. She is a fellow of the Indian National Science Academy (FNA) and the National Academy of Sciences, India (FNASc). Currently, she is Senior Professor and Head of the Division of Plant Biology at Bose Institute in Kolkata, which is a multi-disciplinary research institution focused on science and technology. Education: Working under the supervision of Prof S K Sen of Bose Institute, Sampa Das received her doctorate degree in 1981. Das has worked with national and international colleagues studying the mechanisms of plant defence responses against pests and pathogens, with an aim to combat their stress. She did her post-doctoral training at the Friedrich Miescher Institute in Switzerland, where she became interested in plant transformation, including of rice, mustard, and tomatoes. Career: Dr. Das became a faculty member of Bose Institute. She expanded her research on plant transformation to include chickpeas and mung bean, two important sources of protein for India's predominantly vegetarian population. Dr. Das began seeking ways to tweak the genetic constitution of these plants to improve the quality and quantity of the produce. After successfully completing her research at primary levels, she expanded her research to T3 and T4 level plants. Her research at Bose has included the isolation, characterization and monitoring of the functionality of insecticidal proteins from plant sources. She has studied the expression of agronomically important genes in crop plants. She has worked on the development of insect-resistant transgenic rice, chickpea and mustard plants free of antibiotic-resistant selection markers, through the expression of mannose-binding monocot plant lectins and different Bt toxin genes. She has studied the molecular interaction between receptor proteins identified from target insects and insecticidal lectins as well as different Bt proteins. Dr. Das has worked on developing an understanding of the mechanism of defense response in plants when challenged by various fungal and bacterial pathogens, including the isolation and characterization of differentially expressed defense-response-related genes and proteins from rice and chickpea plants, detected at an early stage of infection by Fusarium oxysporum f.sp. ciceris and Xanthomonas oryzae pv. oryzae, respectively. She has worked on the identification, characterisation and purification of a few insecticidal lectins and other proteins from plant sources, and on the isolation and cloning of effective insecticidal lectin and other protein-coding gene(s) from the respective plant genome(s). Dr. Das has worked on the establishment of efficient plant regeneration and transformation protocols for mustard, chickpea and pigeonpea. Other areas of interest are the construction of a number of vectors with different T-DNA border elements for a better understanding of the mechanism of T-DNA integration into host plants, and the construction of chimeric Bt, protease inhibitor gene(s) and other agronomically important gene constructs for their expression in important crops, namely rice and mustard, for increased productivity. Awards and honors: In 2007, she became a fellow of the Indian National Science Academy, and two years later she became a fellow of the National Academy of Sciences, India.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Liver stage antigens** Liver stage antigens: Liver stage antigens (LSA) are a set of peptides from Plasmodium falciparum that are recognized by the body's immune system. The two most studied are LSA-1 and LSA-3.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Szegő kernel** Szegő kernel: In the mathematical study of several complex variables, the Szegő kernel is an integral kernel that gives rise to a reproducing kernel on a natural Hilbert space of holomorphic functions. It is named for its discoverer, the Hungarian mathematician Gábor Szegő. Szegő kernel: Let $\Omega$ be a bounded domain in $\mathbb{C}^n$ with $C^2$ boundary, and let $A(\Omega)$ denote the space of all holomorphic functions in $\Omega$ that are continuous on $\overline{\Omega}$. Define the Hardy space $H^2(\partial\Omega)$ to be the closure in $L^2(\partial\Omega)$ of the restrictions of elements of $A(\Omega)$ to the boundary. The Poisson integral implies that each element $f$ of $H^2(\partial\Omega)$ extends to a holomorphic function $Pf$ in $\Omega$. Furthermore, for each $z \in \Omega$, the map $f \mapsto Pf(z)$ defines a continuous linear functional on $H^2(\partial\Omega)$. By the Riesz representation theorem, this linear functional is represented by a kernel $k_z$, which is to say $Pf(z) = \int_{\partial\Omega} f(\zeta)\,\overline{k_z(\zeta)}\,d\sigma(\zeta)$. Szegő kernel: The Szegő kernel is defined by $S(z,\zeta) = \overline{k_z(\zeta)}$, $z \in \Omega$, $\zeta \in \partial\Omega$. Like its close cousin, the Bergman kernel, the Szegő kernel is holomorphic in $z$. In fact, if $\{\phi_i\}$ is an orthonormal basis of $H^2(\partial\Omega)$ consisting entirely of the restrictions of functions in $A(\Omega)$, then a Riesz–Fischer theorem argument shows that $S(z,\zeta) = \sum_{i=1}^{\infty} \phi_i(z)\,\overline{\phi_i(\zeta)}$.
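As a concrete one-variable example (a standard computation, included here purely for illustration), take $\Omega$ to be the unit disc with arc-length measure on the circle; the basis formula above then sums as a geometric series:

```latex
% Szegő kernel of the unit disc \Omega = \{ |z| < 1 \},
% with d\sigma = arc-length measure on \partial\Omega.
% The normalized monomials form an orthonormal basis of H^2(\partial\Omega):
\[
  \phi_n(z) = \frac{z^n}{\sqrt{2\pi}}, \qquad n = 0, 1, 2, \dots
\]
% Substituting into S(z,\zeta) = \sum_n \phi_n(z)\overline{\phi_n(\zeta)}
% and summing the geometric series (valid since |z\overline{\zeta}| < 1):
\[
  S(z,\zeta)
  = \frac{1}{2\pi} \sum_{n=0}^{\infty} \bigl( z\overline{\zeta} \bigr)^{n}
  = \frac{1}{2\pi \bigl( 1 - z\overline{\zeta} \bigr)},
  \qquad z \in \Omega, \; \zeta \in \partial\Omega.
\]
```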
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ATP1A4** ATP1A4: Sodium/potassium-transporting ATPase subunit alpha-4 is an enzyme that in humans is encoded by the ATP1A4 gene.The protein encoded by this gene belongs to the family of P-type cation transport ATPases, and to the subfamily of Na+/K+ -ATPases. Na+/K+ -ATPase is an integral membrane protein responsible for establishing and maintaining the electrochemical gradients of Na and K ions across the plasma membrane. These gradients are essential for osmoregulation, for sodium-coupled transport of a variety of organic and inorganic molecules, and for electrical excitability of nerve and muscle. This enzyme is composed of two subunits, a large catalytic subunit (alpha) and a smaller glycoprotein subunit (beta). The catalytic subunit of Na+/K+ -ATPase is encoded by multiple genes. This gene encodes an alpha 4 subunit. Alternatively spliced transcript variants encoding different isoforms have been identified.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The European Journal of Lymphology and Related Problems** The European Journal of Lymphology and Related Problems: The European Journal of Lymphology and Related Problems is a quarterly peer-reviewed medical journal published by the European Society of Lymphology. The journal was established in 1990 and covers research in the fields of lymphology and related areas. The editor-in-chief is Francesco Boccardo (University of Genoa). In addition to the printed journal, content is distributed free of cost online in PDF format. Abstracting and indexing: The journal is abstracted and indexed in Embase and Scopus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Multi-stage fitness test** Multi-stage fitness test: The multi-stage fitness test (MSFT), also known as the beep test, bleep test, PACER (Progressive Aerobic Cardiovascular Endurance Run), PACER test, FitnessGram PACER test, or the 20 m Shuttle Run Test (20 m SRT), is a running test used to estimate an athlete's aerobic capacity (VO2 max). Multi-stage fitness test: The test requires participants to run 20 meters back and forth across a marked track, keeping time with beeps. Every minute or so, the next level commences: the time between beeps gets shorter, so participants must run faster. If a participant fails to reach the relevant marker in time, they are cautioned. A second caution ends the test for that runner. The number of shuttles completed is recorded as the score of that runner. The score is recorded in Level.Shuttles format (e.g. 9.5). The maximum number of laps on the PACER test is 247, a level reached only by former Central Middle School student Dennis Mejia. Multi-stage fitness test: The test is used by sporting organizations around the world, along with schools, the military, and others interested in gauging cardiovascular endurance, an important component of overall physical fitness. The multi-stage fitness test is also part of most health-related fitness test batteries for children and adolescents, such as Eurofit, Alpha-fit, FitnessGram and ASSOFTB. The multi-stage fitness test was first described by Luc Léger with the original 1-minute protocol, which starts at a speed of 8.5 km/h and increases by 0.5 km/h each minute. Other variations of the test have also been developed, where the protocol starts at a speed of 8.0 km/h and with either 1 or 2-minute stages, but the original protocol is nevertheless recommended. The test appears to encourage maximal effort by children. Additionally, the test's prediction of aerobic capacity is valid for most individuals, including those who are overweight or obese. Procedure: Prior to the test commencing, runners line up at the 0 m marker, facing the 20 m marker. Following a countdown, a double beep or voice cue signals the start. Procedure: Runners commence running towards the 20 m marker; at or before the following beep, runners must reach the 20 m marker (touching with a single foot is acceptable); at or after, but not before, the same beep, runners commence running back to the 0 m marker; at or before the next beep, runners must reach the 0 m marker; at or after, but not before, the same beep, runners start the next circuit (i.e. back to the first step). Every minute or so, the level changes. This is signaled, usually, by a double beep or, possibly, a voice cue. The required speed at the new level will be 0.5 km/h faster. Procedure: Notes: The distance between the "start" and "turn around" markers is usually 20 m; however, the test can also be carried out using a 15 m track. Shuttle completion times are modified in proportion. Procedure: Léger specified a 1-minute protocol: that is, each level was meant to last approximately 1 minute. However, because speed changes mid-shuttle confuse matters, the algorithm for a change in level is as follows: the next level commences on completion of the current shuttle when the absolute difference between the time spent at the level and 60 seconds is the least. Procedure: Scoring A runner who fails to reach the relevant marker in time is cautioned; if they want to continue, they must touch the marker before turning back. Two consecutive failures terminate their attempt.
Their most recent completed shuttle is marked as their score. Scoring is usually done using "Level.Shuttle" terminology; for example, 10.2, which means "completed 2 shuttles at level 10". Estimating VO2 max: VO2 max, measured in milliliters of oxygen per kilogram of body mass per minute (mL/(kg·min)), is considered an excellent proxy for aerobic fitness. Attempts have been made to correlate MSFT scores with VO2 max. Note that such estimates are fraught with difficulty, as test scores, while substantially dependent on VO2 max, also depend on running efficiency, test familiarity, anaerobic capacity, personal drive, ambient temperature, running equipment (floor, shoes) and other factors. Estimating VO2 max: A paper by Flouris, et al. (2005) determined the following: VO2 max = (maximal attained speed in km/h × 6.55) − 35.8. An earlier paper by Ramsbottom, et al. (1988) suggested the following: VO2 max = 12.1 + (Level × 3.48). (Both equations are applied numerically in the sketch at the end of this entry.) Variations: Luc Léger, the originator of the multi-stage fitness test, never patented it. Consequently, organizations around the world have been able to incorporate subtle variations into the test. The most common variations are: First level at 8.0 km/h The Léger test requires the first level to be run at 8.5 km/h. Some organizations require it to be run at 8.0 km/h. Note that the second level is always run at 9.0 km/h, and speeds at subsequent levels always increment by 0.5 km/h. The impact of this variation is insignificant, as almost all runners' scores easily exceed level 1. Variations: Time spent at each level All versions of the test evaluate for a change of level only on completion of shuttles. The Léger test's algorithm requires that each level lasts approximately 60 seconds. This means the next level commences when the absolute difference between the time spent at the level and 60 seconds is least. Put simply, some levels may run for a trifle less than 60 seconds, others a little more than 60 seconds, and the odd one exactly 60 seconds. On the other hand, a few non-Léger versions of the test trigger a level change only when the time spent at a level first exceeds 60 seconds. This variation results in one extra shuttle being run at some levels. Variations: In practice, since the speed change at a new level (rather than an extra lap) is most likely to trigger "failure", this variation also has an insignificant effect on one's achievable score. Scoring starts from zero Scoring of the Léger test starts from 1. That is, at the end of the very first shuttle, the participant has scored 1.1. A variation has scoring starting from 0; at the end of the first shuttle, the runner has achieved 0.1. The impact of this variation is purely administrative: just add or subtract 1 to convert scores. World record: Participation The Guinness World Record for the largest group beep test is held by Army Foundation College, in Harrogate, North Yorkshire; 941 people took part. In popular culture: The introductory explanation of one multi-stage fitness test, the FitnessGram PACER test, has been widely spread as a copypasta, meme, and through other comedic means, owing to the test's modern use in schools, primarily in physical education classes.
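To make the protocol timing and the estimation arithmetic concrete, here is a minimal sketch in Python. The function names are hypothetical; the protocol parameters (8.5 km/h start, +0.5 km/h per level, 20 m shuttles, level change nearest to 60 seconds) follow the description above, and the regression coefficients follow the equations as reconstructed in this entry, so treat them as assumptions rather than authoritative values.

```python
# Minimal sketch of the Léger 1-minute protocol schedule and the two
# VO2 max regressions quoted above. Function names are hypothetical;
# coefficients follow the equations as reconstructed in this entry.

def shuttle_time(level: int, track_m: float = 20.0) -> float:
    """Seconds needed to complete one shuttle at the given level."""
    speed_kmh = 8.5 + 0.5 * (level - 1)   # level 1 runs at 8.5 km/h
    return track_m / (speed_kmh / 3.6)    # km/h -> m/s, then distance / speed

def shuttles_per_level(level: int, track_m: float = 20.0) -> int:
    """Whole shuttles per level, chosen so |level duration - 60 s| is least."""
    t = shuttle_time(level, track_m)
    return max(1, round(60.0 / t))        # nearest whole-shuttle count to 60 s

def vo2max_flouris(max_speed_kmh: float) -> float:
    """Flouris et al. (2005), as reconstructed: (speed × 6.55) − 35.8."""
    return max_speed_kmh * 6.55 - 35.8

def vo2max_ramsbottom(level: float) -> float:
    """Ramsbottom et al. (1988), as reconstructed: 12.1 + (level × 3.48)."""
    return 12.1 + 3.48 * level

if __name__ == "__main__":
    for level in (1, 5, 10):
        n = shuttles_per_level(level)
        print(f"Level {level:2d}: {n} shuttles of {shuttle_time(level):.2f} s "
              f"({n * shuttle_time(level):.1f} s at this level)")
    # A runner finishing level 10 has reached 8.5 + 0.5 * 9 = 13.0 km/h:
    print(f"Flouris estimate:    {vo2max_flouris(13.0):.1f} mL/(kg·min)")
    print(f"Ramsbottom estimate: {vo2max_ramsbottom(10):.1f} mL/(kg·min)")
```

The track_m parameter also accommodates the 15 m track variant mentioned in the notes, since shuttle times scale in proportion to track length. This is only an illustration of the published protocol description; an actual test uses the official recorded audio rather than a computed schedule.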
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DTDP-3-amino-3,6-dideoxy-alpha-D-galactopyranose 3-N-acetyltransferase** DTDP-3-amino-3,6-dideoxy-alpha-D-galactopyranose 3-N-acetyltransferase: DTDP-3-amino-3,6-dideoxy-alpha-D-galactopyranose 3-N-acetyltransferase (EC 2.3.1.197, FdtC, dTDP-D-Fucp3N acetylase) is an enzyme with systematic name acetyl-CoA:dTDP-3-amino-3,6-dideoxy-alpha-D-galactopyranose 3-N-acetyltransferase. This enzyme catalyses the following chemical reaction: acetyl-CoA + dTDP-3-amino-3,6-dideoxy-alpha-D-galactopyranose ⇌ CoA + dTDP-3-acetamido-3,6-dideoxy-alpha-D-galactopyranose. dTDP-3-acetamido-3,6-dideoxy-alpha-D-galactose is a component of the glycan chain of the crystalline bacterial cell surface layer protein of Aneurinibacillus thermoaerophilus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spinocerebellar ataxia** Spinocerebellar ataxia: Spinocerebellar ataxia (SCA) is a progressive, degenerative, genetic disease with multiple types, each of which could be considered a neurological condition in its own right. An estimated 150,000 people in the United States have a diagnosis of spinocerebellar ataxia at any given time. SCA is hereditary, progressive, degenerative, and often fatal. There is no known effective treatment or cure. SCA can affect anyone of any age. The disease is caused by either a recessive or dominant gene. In many cases people are not aware that they carry a relevant gene until they have children who begin to show signs of having the disorder. Signs and symptoms: Spinocerebellar ataxia (SCA) is one of a group of genetic disorders characterized by slowly progressive incoordination of gait, often associated with poor coordination of the hands, speech, and eye movements. A review of clinical features among SCA subtypes was recently published, describing the frequency of non-cerebellar features such as parkinsonism, chorea, pyramidal signs, cognitive impairment, peripheral neuropathy, and seizures. As with other forms of ataxia, SCA frequently results in atrophy of the cerebellum and loss of fine coordination of muscle movements, leading to unsteady and clumsy motion, among other symptoms. Ocular deficits can be quantified using the SODA scale. The symptoms of an ataxia vary with the specific type and with the individual patient. In many cases a person with ataxia retains full mental capacity but progressively loses physical control. Cause: The hereditary ataxias are categorized by mode of inheritance and causative gene or chromosomal locus. The hereditary ataxias can be inherited in an autosomal dominant, autosomal recessive, or X-linked manner. Cause: Many types of autosomal dominant cerebellar ataxias for which specific genetic information is available are now known. Synonyms for autosomal-dominant cerebellar ataxias (ADCA) used prior to the current understanding of the molecular genetics were Marie's ataxia, inherited olivopontocerebellar atrophy, cerebello-olivary atrophy, or the more generic term "spinocerebellar degeneration." (Spinocerebellar degeneration is a rare inherited neurological disorder of the central nervous system characterized by the slow degeneration of certain areas of the brain. There are three forms of spinocerebellar degeneration: types 1, 2, and 3. Symptoms begin during adulthood.) There are five typical autosomal-recessive disorders in which ataxia is a prominent feature: Friedreich ataxia, ataxia-telangiectasia, ataxia with vitamin E deficiency, ataxia with oculomotor apraxia (AOA), and spastic ataxia. Disorder subdivisions: Friedreich's ataxia, Spinocerebellar ataxia, Ataxia telangiectasia, Vasomotor ataxia, Vestibulocerebellar ataxia, Ataxiadynamia, Ataxiophemia, and Olivopontocerebellar atrophy. Cause: There have been reported cases in which a polyglutamine expansion lengthens when passed down, which often results in an earlier age of onset and a more severe disease phenotype for individuals who inherit the disease allele; this phenomenon is known as genetic anticipation. Several types of SCA are characterized by repeat expansion of the trinucleotide sequence CAG in DNA, which encodes a polyglutamine repeat tract in the protein. The expansion of CAG repeats over successive generations appears to be due to slipped strand mispairing during DNA replication or DNA repair.
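Since several SCAs are defined by the length of this CAG repeat tract (with a symptom threshold of around 35 repeats for most forms, as discussed in the Diagnosis section below), a minimal sketch may help illustrate the repeat-counting idea. The function name and example sequence are hypothetical illustrations; clinical repeat sizing is done with laboratory assays, not string processing.

```python
# Hypothetical illustration of the "CAG trinucleotide repeat" concept:
# count the longest run of consecutive CAG codons in a coding sequence.

def longest_cag_run(seq: str, frame: int = 0) -> int:
    """Longest run of consecutive 'CAG' codons, read in the given frame."""
    best = run = 0
    for i in range(frame, len(seq) - 2, 3):
        run = run + 1 if seq[i:i + 3] == "CAG" else 0
        best = max(best, run)
    return best

# Illustrative sequence only: a start codon, 40 CAG repeats, a stop codon.
example = "ATG" + "CAG" * 40 + "TAA"
n = longest_cag_run(example)
print(n, "repeats:", "expanded" if n >= 35 else "within typical range")
```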
Diagnosis: Classification A few SCAs remain unspecified and cannot be precisely diagnosed, but in the last decade genetic testing has allowed precise identification of dozens of different SCAs, and more tests are being added each year. In 2008, a genetic ataxia blood test was developed to test for 12 types of SCA, Friedreich's ataxia, and several others. However, since not every SCA has been genetically identified, some SCAs are still diagnosed by neurological examination, which may include a physical exam, family history, MRI scanning of the brain and spine, and spinal tap. Many SCAs below fall under the category of polyglutamine diseases, which are caused when a disease-associated protein (i.e., ataxin-1, ataxin-3, etc.) contains a large number of repeats of glutamine residues, termed a polyQ sequence (after the one-letter designation for glutamine) or a "CAG trinucleotide repeat" disease (after the codon for glutamine). The threshold for symptoms in most forms of SCA is around 35 repeats, though for SCA3 it extends beyond 50. Most polyglutamine diseases are dominant due to the interactions of the resulting polyQ tail. The first ataxia gene was identified in 1993 and called "Spinocerebellar ataxia type 1" (SCA1); later genes were called SCA2, SCA3, etc. Usually, the "type" number of "SCA" refers to the order in which the gene was found. At this time, at least 29 different gene mutations have been found. The following is a list of some of the many types of spinocerebellar ataxia. Diagnosis: Others include SCA18, SCA20, SCA21, SCA23, SCA26, SCA28, and SCA29. Four X-linked types have been described (OMIM 302500, 302600, 301790, 301840), but only the first of these has so far been tied to a gene (SCAX1). Treatment: Medication There is no cure for spinocerebellar ataxia, which is currently considered a progressive and irreversible disease, although not all types cause equally severe disability. In general, treatments are directed towards alleviating symptoms, not the disease itself. Many patients with hereditary or idiopathic forms of ataxia have other symptoms in addition to ataxia. Medications or other therapies might be appropriate for some of these symptoms, which could include tremor, stiffness, depression, spasticity, and sleep disorders, among others. Both onset of initial symptoms and duration of disease are variable. If the disease is caused by a polyglutamine trinucleotide repeat CAG expansion, a longer expansion may lead to an earlier onset and a more radical progression of clinical symptoms. Typically, a person with this disease will eventually be unable to perform activities of daily living (ADLs). However, rehabilitation therapists can help patients maximize their capacity for self-care and delay deterioration to a certain extent. Researchers are exploring multiple avenues for a cure, including RNAi and the use of stem cells. On January 18, 2017, BioBlast Pharma announced completion of Phase 2a clinical trials of their medication, Trehalose, in the treatment of SCA3. BioBlast has received FDA Fast Track status and Orphan Drug status for their treatment. The information provided by BioBlast in their research indicates that they hope this treatment may prove efficacious in other SCA treatments that have similar pathology related to PolyA and PolyQ diseases. In addition, Dr. Beverly Davidson has been working for over two decades on a methodology using RNAi technology to find a potential cure.
Her research began in the mid-1990s, progressed to work with mouse models about a decade later, and most recently has moved to a study with non-human primates. The results from her most recent research "are supportive of clinical application of this gene therapy". Finally, another gene transfer technology discovered in 2011 has also been shown by Boudreau et al. to hold great promise and offers yet another avenue to a potential future cure. Treatment: N-Acetyl-Leucine N-Acetyl-Leucine is an orally administered, modified amino acid that is being developed as a novel treatment for multiple rare and common neurological disorders by IntraBio Inc (Oxford, United Kingdom). N-Acetyl-Leucine has been granted multiple orphan drug designations from the U.S. Food & Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of various genetic diseases, including the spinocerebellar ataxias. N-Acetyl-Leucine has also been granted orphan drug designations by the FDA and the EMA for the related inherited cerebellar ataxia, ataxia-telangiectasia. Published case series studies have demonstrated the effects of acute treatment with N-Acetyl-Leucine for inherited cerebellar ataxias, including the spinocerebellar ataxias. These studies further demonstrated that the treatment is well tolerated, with a good safety profile. Treatment: A multinational clinical trial investigating N-Acetyl-L-Leucine for the treatment of a related inherited cerebellar ataxia, ataxia-telangiectasia, began in 2019. IntraBio is also conducting parallel clinical trials with N-Acetyl-L-Leucine for the treatment of Niemann-Pick disease type C and GM2 gangliosidosis (Tay-Sachs and Sandhoff disease). Future opportunities to develop N-Acetyl-Leucine include Lewy body dementia, amyotrophic lateral sclerosis, restless leg syndrome, multiple sclerosis, and migraine. Treatment: Rehabilitation Physical therapists can assist patients in maintaining their level of independence through therapeutic exercise programmes. One recent research report demonstrated a gain of 2 points on the SARA (Scale for the Assessment and Rating of Ataxia) after physical therapy. In general, physical therapy emphasises postural balance and gait training for ataxia patients. General conditioning such as range-of-motion exercises and muscle strengthening would also be included in therapeutic exercise programmes. Research showed that spinocerebellar ataxia 2 (SCA2) patients with a mild stage of the disease gained significant improvement in static balance and neurological indices after six months of a physical therapy exercise training program. Occupational therapists may assist patients with incoordination or ataxia issues through the use of adaptive devices. Such devices may include a cane, crutches, walker, or wheelchair for those with impaired gait. Other devices are available to assist with writing, feeding, and self-care if hand and arm coordination are impaired. A randomised clinical trial revealed that an intensive rehabilitation program with physical and occupational therapies for patients with degenerative cerebellar diseases can significantly improve functional gains in ataxia, gait, and activities of daily living. Some level of improvement was shown to be maintained 24 weeks post-treatment. Speech-language pathologists may use both behavioral intervention strategies and augmentative and alternative communication devices to help patients with impaired speech.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plasma confinement** Plasma confinement: In plasma physics, plasma confinement refers to the act of maintaining a plasma in a discrete volume. Confining plasma is required in order to achieve fusion power. There are two major approaches to confinement: magnetic confinement and inertial confinement.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cheerleading uniform** Cheerleading uniform: A cheerleading uniform is a standardized outfit worn by cheerleaders during games and other events. These uniforms typically include the official colors and mascots of the school or team and are designed to make the wearer appear physically attractive. Early styles/brief history: Cheerleading uniforms in the early 1900s were a steadfast symbol of the schools they represented, usually depicting the first letter of a high school or the first letter plus the letters "H" and "S", standing for "high school." These letters were normally sewn onto a sweater-type garment, or sometimes onto polo shirts in warm weather. While conducive to showing school spirit and giving the team a uniform look, these sweater-tops were often hot, bulky, and not very functional for any type of athletic movement. The most common type of sweater worn by early cheerleaders was a long cardigan with multiple buttons, normally worn over a turtleneck shirt or collared blouse. The school letters were often sewn in either corner of the sweater, sometimes in the middle of a megaphone shape. Worn with the sweater was a very modest ankle-length wool skirt, often a darker color than the sweater. Some early cheerleading squads chose plaid fabrics for skirts; often these squads were from religious schools and universities, as plaid was the main fabric of their classroom uniforms. Early styles/brief history: Early cheerleading squads wore saddle shoes or flat canvas sneakers with high dress socks with their uniforms. This style of uniform continued through the 1950s and 1960s and is often depicted in movies and photos from that time period. Jean Lee Originals in Goshen, IN, became the first company to widely market cheerleading uniforms in the 1960s and 1970s. Its owner, Jean Marie Harter, designed the pleated skirt and double-stripe sweater styles still in use today. She was one of the first female graduates of the School of Management at Northwestern University and was the daughter of Arthur (Dad) V. Harter, owner of House of Harter Sporting Goods, the largest sporting goods store in Indiana (and much of the Midwest) and supplier to most high schools and colleges in the state, including Notre Dame. All uniforms at Jean Lee Originals were custom made to fit each individual. Early styles/brief history: A larger entity, Cheerleader Supply Company, began copying Jean Lee's styles and mass producing uniforms in standard issue sizes (S, M, L), eventually putting the smaller custom store out of business by the 1980s. The company was founded by Lawrence "Herkie" Herkimer, of Dallas, TX, a former cheerleader at Southern Methodist University, who began by selling pom pom kits to local high schools. Herkie was also the first to organize cheerleading camps and competitions. Modern styles of cheerleading uniforms: As the focus of cheerleading shifted from an auxiliary unit to an athletic pursuit, changes in the uniforms' material, style, and fit were necessary. Modern styles of cheerleading uniforms: 1960s uniforms As fashion styles changed through the 1960s, so did the cheerleading uniform. Gone were the overly long wool skirts, as pleated shorter skirts became more popular. The long skirt was essentially chopped in half, as knee-length cotton skirts made for easier movement and a more comfortable experience for the wearer compared to their wool counterparts.
The sweater top changed dramatically: squads elected to wear short-sleeve crew-neck sweaters in favor of long cardigans; however, the school letters and megaphone emblem remained, now placed in the center of the stylish crew-neck sweaters. Some squads in this time period, in particular high school squads, favored placing an additional embroidered emblem with the squad member's name on the center of the school letter patch. This was a symbol of high school popularity, as it was a huge privilege to be a cheerleader. 1970s uniforms Much changed in uniform fashion from the 1960s. Most squads now wore more athletic footwear such as tennis shoes. Cheerleaders wore sneakers as opposed to saddle shoes or Keds, and the uniform was made less bulky to allow for easier movement. More variety also became available in sweaters, vests or shells, and skirts. The sweater now featured a stripe pattern on the front in alternating school colors. The letter patch became more elaborate as well, often more colorful and unique. Sweaters were also less bulky and had a more flattering fit. This new slimmed style allowed better movement and a more functional fit, without sacrificing modesty or tradition. Sweaters were made to fit close to the body for a tighter fit, and the length was tapered very short to eliminate excess fabric overlapping the skirt. Often this caused the cheerleader's bare abdomen to be exposed during movement; by now most sweaters were worn without any shirt or collared blouse beneath them. Different styles were incorporated to give squads more of a choice. Round-neck and V-neck sweaters were popular with squads seeking greater functionality, as cheerleading was becoming more athletic instead of the standard vocal chant. The new sweater styles allowed squads to eliminate the extra collared blouse beneath the sweater, essentially just wearing the sweater over a bra. While these uniforms provided a functional style, some modest observers viewed these outfits as scandalous and racy. The shorter skirts combined with the shorter and tighter sweaters were viewed by some as "improper." 1985–1995 uniforms These uniforms are similar to current uniforms, except that slouch socks, especially Wigwam slouch socks, were very popular, and Keds Champion sneakers were worn by many cheerleaders. A typical school cheerleading uniform from 1985 does not look much different from a uniform today. The favored tops in this period were a turtleneck worn underneath a sweatshirt or sweater, or a waist-length button-down sleeveless modest-style vest, worn with or without a turtleneck layer underneath. Sometimes a turtleneck bodysuit was worn instead. The choice skirt remained a pleated model, but with added color striping around the bottom hem. The preferred length was shortened to mid-thigh or slightly longer for most squads. The general rule at this time was that the skirt had to reach the fingertips when the arms hung at the sides. Bike shorts were worn underneath some uniforms. Modern styles of cheerleading uniforms: Current uniforms Most uniforms are currently made from a polyester-blend fabric, usually containing spandex as well. Shiny foil-covered stretch fabric is commonly found in competitive cheerleading uniforms. Dye-sublimated uniforms have also become increasingly popular in recent years.
Dye-sublimated uniforms have a design, team name, or logo printed directly on the garment using a dye-sublimation printer and can give a cheer squad a more individual look at a lower cost. The top without sleeves is called a shell; if it has sleeves, it is called a liner. Most American school squads wear a sleeveless top with either a sports bra or an athletic tank top underneath. If the shell lacks sleeves, many teams wear a turtleneck bodysuit under it, although this is not specifically required. Due to guidelines imposed by the National Federation of State High School Associations (NFHS), high school squads must have a top that covers the midriff with the arms at the sides; however, if the arms are raised, most uniforms will show a small section of midriff, which is not against NFHS rules. Most school-sanctioned squads have modest-looking uniform tops that are usually a waist-length fit, covering the whole frontal upper body except at the shoulders and arms when worn sleeveless. Likewise, the back construction of most school cheerleading tops covers the full upper body; however, skin in the lower back area is mostly left uncovered if the cheerleader is sitting or bending, which does not violate NFHS uniform rules. These requirements do not apply to all-star cheerleading organizations, so many have tops that stop at or just below the bottom of the bra line. Another growing trend among all-star teams is having sections of material missing (allowing bare skin to show) across the top at the chest, the shoulders, the top of the back, or portions of the arms. The length of skirts has shortened dramatically, with the average length for skirts at both high school and all-star level being 10 to 13 inches, and lengths are shrinking every year; indeed, some coaches and team sponsors encourage wearing shorter skirts for safety reasons (too much fabric can be dangerous while tumbling). Skirts are worn over the top of colored or metallic spandex/polyester briefs, also called lollies, spankies, or bundies. These briefs are worn over the top of underwear and are sometimes printed with stars, dots, etc. The briefs can also have a team logo, such as a paw print, sewn on the side or across the behind. In colder weather, especially when cheerleading outside, it is common to see leggings worn under the uniform with socks over the leggings. Mittens or gloves and a head wrap over the ears may also be worn. Modern styles of cheerleading uniforms: Piercing/tattoo rules Due to the frequency of midriff exposure with most cheerleading tops, many schools and all-star coaches prohibit navel rings (belly rings) and other piercings while a cheerleader is at a competition. During competition, most federations also enforce such bans. While there is no official ban on tattoos, school-sanctioned squads typically require that tattoos that could be visible in uniform be covered with a bandage or waterproof skin-shade makeup. Due to the popularity of lower back tattoos, this method is becoming increasingly commonplace. Cheerleading uniform terminology: Ribbons/bows/scrunchies: Worn in the cheerleaders' hair, which is often styled to match.
The ribbons, bows, and scrunchies are usually in the school's or team's colors, and can be custom made or ordered through various companies. Bodysuit/bodyliner: A leotard-like undergarment that matches the uniform colors and design, intended to be worn underneath the uniform shell. Normally these are long-sleeved tops that snap at the bottom, but they can be customized to the length of the shell top, in either a waist-length design or a crop-top style. Most squads prefer to wear bodysuits during competition, as they create a uniform look in arm movements. Waist-length shells can be worn with several styles of bodyliner, in contrast to midriff shells, which are limited to cropped stomach-showing liners. A full-coverage bodyliner worn under a waist-length shell eliminates all midriff exposure during cheering. A cropped liner worn under a waist-length shell will cover the arms but leave the stomach uncovered, allowing midriff skin to show just as if the shell were worn alone. Shell/vest: Made of polyester and cotton, this is the main focal point of the uniform. This is the top of the outfit, which includes the design of the uniform stripes, school/team colors, and the mascot or high school insignia or school letters. Example: "Willow High School" would be "WHS". The shell is normally sleeveless with a "V" neckline; however, the neckline style can often be customized by the school or team during the uniform ordering process. The shell can either have a zipper closure in the back or side to aid in dressing, or it can be made of a stretchy material that allows the top to expand slightly when dressing. The shell is designed to fit somewhat snugly, but not tight, on the wearer; it should fit close to the body. The shell can be worn alone over a bra or camisole, or it can be worn over a bodysuit of choice or a simple turtleneck; this is usually up to each squad's preference. A shell top is available in many lengths and even in somewhat intricate styles. Full shell: Usually sleeveless, with a simple style design. It is normally waist length, ending at the top of the skirt, and typically worn alone with a bra underneath. This style of top is worn by most American school squads, as it is very modest and meets most dress codes, with midriff exposure limited to bare lower backs and an inch of frontal skin while the arms are raised. The sleeveless shell, if worn alone, will normally leave the arms and shoulders open. A properly fitted shell top is essential to making the uniform look flattering for everyone on the cheer squad. A shell that is properly sized and fitted will leave enough room around the arm openings for movement, eliminating fabric rubbing against the underarms. Additionally, if the shell is sized with the correct measurements, the bottom of the hem should be long enough to meet the top of the skirt while standing stationary. These tops can be highly customized, from very ordinary to highly detailed designs. Certain cut-outs in sections of the garment are an option, as are varying hemlines for the bottom of the top. For instance, instead of a straight hem along the bottom of the top, some squads will choose a "cut-up V" hem, which leaves an upside-down "V" notch open along the front, exposing slight midriff at the belly button area. This style is selected to allow for an additional color stripe along the bottom hem of the shell. Halter shell: Similar in style and length to a full-fitting shell top, but differing in the upper part of the garment.
This top is normally fully sleeveless and more revealing of the upper back. The halter style normally includes a neck strap on the top that ties in the back. Crop top/midriff: This style of top is mostly used by colleges and out-of-school (all-star) squads. It is very revealing, and rarely worn by high school squads due to dress code violations. The top is typically worn sleeveless with only a bra underneath. The school's or team's logo is featured in the center of the top. The top ends right after the bra line, leaving the entire midriff of the cheerleader showing. The National Federation of State High School Associations (NFHS) instituted a rule against cropped uniform tops at the high school level. Most cheerleading governing bodies outlaw this top from being worn by public schools in competition; however, a few middle and high schools still choose to wear this style of top at their respective non-competing events, such as football games or parades. There has been much debate on whether it is right to have a teenage girl wearing a garment like this at a public school, as much of the body is revealed by this top. Skirt: The bottom of a cheerleading uniform, with colors and logo/stripe design matching the shell. Skirts can be highly customized, with various styles of pleat designs or no pleats at all, such as an "A-line" skirt. The general rule of length for skirts is mid-thigh, and they can be sized simply by using the "fingertip" rule. Modern skirts are offered in various rises as well. Traditionally, the skirt should be worn at the natural waistline, at the navel area. The traditional skirt should be high enough to touch the bottom of the shell slightly. However, some squads elect to wear low-rise skirts, either with longer shells or with a normal-length shell, creating a midriff look. Bundies must be worn under the skirt.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TAS2R10** TAS2R10: Taste receptor type 2 member 10 is a protein that in humans is encoded by the TAS2R10 gene. The protein is responsible for bitter taste recognition in mammals. It serves as a defense mechanism to prevent consumption of toxic substances, which often have a characteristic bitter taste. Function: TAS2R10 is a G-protein-coupled receptor (GPCR) that is part of a large group of eukaryotic membrane receptors. As a G-protein-linked receptor, TAS2R10 helps relay communication across the cell membrane between the extracellular and intracellular environments. Signaling molecules (ligands) bind to GPCRs and cause activation of the G protein, which leads to activation of second messenger systems. These messengers inform cells of the presence or lack of substances in their environment, which signals effectors to carry out biological functions. TAS2R10 specifically acts as a bitter taste receptor. In general, TAS1Rs are receptors for umami and sweet tastes and TAS2Rs are bitter receptors. Bitter taste is mediated by numerous receptors, with TAS2R10 being part of a G-protein-coupled receptor superfamily. Humans have almost 1,000 different and highly specific GPCRs. Each GPCR binds to a specific signaling molecule. Function: TAS2R10, along with several other bitter taste receptors, is expressed in the taste receptor cells of the tongue and palate epithelia and in the smooth muscle of human airways. The receptors are organized in the genome in clusters and are genetically linked to loci that influence bitter taste perception in both mice and humans. The activation of the receptors within cells causes an increase in intracellular calcium ions, which triggers the opening of potassium channels. The cell membrane becomes depolarized and the smooth muscle relaxes. The depolarization stimulates neurotransmitters that send sensory information to the brain. The information is processed in the brain and perceived as a specific taste. Structure: Most GPCRs consist of a single polypeptide with a globular tertiary shape and are made up of three general components: the extracellular domain, the intracellular domain, and the transmembrane domain. The extracellular domain includes the amino terminus and is composed of loops and helices that form binding pockets for ligands. Ligands bind to the receptors, which causes activation. The transmembrane domain consists of seven hydrophobic transmembrane segments dispersed throughout the membrane. They transmit signals received from ligand binding at the extracellular domain to the intracellular domain. The intracellular domain, in the cytoplasm of the cell, includes the carboxyl terminus and is where downstream signaling pathways are initiated as part of G-protein activation. GPCR proteins vary in size, with 25-150 amino acids attached to the C-terminus, and can be 80-480 Å in length. Biological Importance: In mammals, bitter taste is used as a safety mechanism to prevent animals from eating toxic plants or animals. Bitter taste serves as a warning that a substance is potentially lethal. TAS2R10 is one of many bitter taste receptors that allow for the recognition of bitter taste. TAS2R10 receptors are able to detect many toxic substances, such as strychnine. Strychnine is a naturally occurring poisonous alkaloid found in the seeds of trees in the Strychnos genus. Ingestion of or exposure to strychnine can cause involuntary muscle contractions and spasms that can lead to death by asphyxiation when respiratory muscles are involved.
Therapeutic Use: A variety of research and studies are being conducted to investigate how taste receptors like TAS2R10 have additional functions beyond taste recognition. It is known that the activation of GPCR membrane proteins induces smooth muscle relaxation and vasodilation. This mechanism is being further studied in the hope of developing potential treatments for vasoconstricting conditions such as asthma. There is also research being done on how TAS2R receptors play a role in regulatory functions in cancers and in thyroid function regulation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TLX3** TLX3: T-cell leukemia homeobox protein 3 is a protein that in humans is encoded by the TLX3 gene. RNX (HOX11L2, TLX3) belongs to a family of orphan homeobox genes that encode DNA-binding nuclear transcription factors. Members of the HOX11 gene family are characterized by a threonine at position 47 replacing the usual isoleucine in the highly conserved homeodomain (Dear et al., 1993). [supplied by OMIM]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded