**Lunar Saros 135** Lunar Saros 135: Saros cycle series 135 for lunar eclipses occurs at the Moon's descending node and repeats every 18 years and 11⅓ days. It contains 71 events. This lunar saros is linked to Solar Saros 142. The series contains 23 total eclipses: the first was on November 7, 1957, and the last will occur on July 6, 2354. The longest total eclipse will occur on May 12, 2264, with totality lasting 106 minutes.
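As a rough, back-of-the-envelope illustration of the saros arithmetic only (the 6585.32-day value is a common approximation, and real eclipse dates come from ephemeris calculations, not calendar addition):

```python
from datetime import datetime, timedelta

# One saros is roughly 18 years and 11 1/3 days; 6585.32 days is a common
# approximation. This is only a sketch of the cycle arithmetic, not an
# eclipse predictor.
SAROS_DAYS = 6585.32

def approx_next_in_series(eclipse: datetime, cycles: int = 1) -> datetime:
    """Roughly estimate the date `cycles` saros periods after a given eclipse."""
    return eclipse + timedelta(days=SAROS_DAYS * cycles)

# Example: one saros after the November 7, 1957 eclipse in this series lands
# around November 18, 1975, about a third of a day later in the day (which is
# why successive eclipses in a series are seen roughly 120 degrees further west).
print(approx_next_in_series(datetime(1957, 11, 7)))
```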
**Pazurgo** Pazurgo: Pazurgo is a word puzzle which normally takes the form of a rectangular grid with white and shaded squares. Pazurgo includes elements from Crossword puzzles and Word Search puzzles, along with its own unique elements. The goal is to solve each of the clues by finding the solution word in the grid, forming a chain that links the letters of the word together. Once all of the solution word chains have been discovered, the remaining available letters form the solution to the Scramble clue, when those letters are unscrambled in the correct order.

History: The idea for Pazurgo came to Jeremy L Graybill in 2003 while he was playing the piano one day. The first book of Pazurgo puzzles, *Pazurgo The Amazing New Word Puzzle*, was published by Tuttle Publishing, with an international release date of September 10, 2010. A Flash application for solving Pazurgo puzzles was created in 2009, and is available for use with some free puzzles at www.Pazurgo.com. The Pazurgo trademark is held internationally by Graybill, and the puzzle design is patent pending.

Solving Pazurgo Puzzles: Instructions The steps for solving Pazurgo puzzles become intuitive while solving the initial puzzle. Pattern recognition and logic skills used in Word Searches and Sudoku are useful in solving Pazurgo puzzles. The starting letters of each word are indicated within the grid. Each word chain begins with one of the indicated starting letters, and each letter of the solution word chain is discovered in turn in a horizontal or vertical direction from the previous letter. A line is drawn linking two consecutive letters in a word, crossing out any letters of the puzzle which fall in between, and letters which are crossed out may no longer be used in a solution word. Each letter may only be used within a single solution word, and any letters which are circled may not then be crossed out by a solution word chain. A solution word chain may cross over itself or over the chain of another solution, but a chain may not cross through the shaded squares. Once all of the solution word chains have been discovered, the remaining letters which are not circled or crossed out will form the solution to the "Scramble" clue, when those letters are unscrambled in the correct order.

Example In the basic example shown here, there are two clues to solve, and both solution words begin with the letter "H", as indicated by circled letters in the grid.

Solving Pazurgo Puzzles: Solving the first clue yields the five-letter solution "HIPPO". Within the grid, each of the starting letters is examined to discover the valid word chain for "HIPPO". The chain is found by starting with the "H" in the top row, crossing out the "P" below it, and continuing in turn with the "I", "P", "P", and "O" as shown in the solution.

Solving Pazurgo Puzzles: Solving the second clue yields the five-letter solution "HORSE". The valid word chain for "HORSE" begins with the unused starting letter "H" in the second row. The word chain then crosses over the word chain for "HIPPO", crosses out the "I", and continues on to the "O", "R", and "S" as shown in the solution. The word chain then crosses over itself to add the final letter "E".

Solving Pazurgo Puzzles: Once the two words have been solved, the remaining letters which aren't part of a word chain and aren't crossed out by a word chain make up the solution to the Scramble clue, when they are unscrambled. The letters "I", "S", and "H" are unscrambled to form the word "HIS".
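The chain rule described above lends itself to a simple check. The sketch below is a hypothetical illustration based only on the description here (the grid encoding, the `'#'` marker for shaded squares, and the function name are all assumptions, not an official implementation): it verifies that a proposed chain spells the solution word and that each consecutive pair of letters lies in the same row or column.

```python
# Hypothetical sketch of the chain rule described above: each step of a
# solution-word chain must move horizontally or vertically from the previous
# letter, and the letters along the chain must spell the solution word.
Grid = list[list[str]]           # grid[row][col] is a letter, or '#' for a shaded square
Chain = list[tuple[int, int]]    # (row, col) of each letter of the solution word, in order

def chain_spells_word(grid: Grid, chain: Chain, word: str) -> bool:
    if len(chain) != len(word):
        return False
    # Every cell in the chain must hold the matching letter and not be shaded.
    for (r, c), letter in zip(chain, word):
        if grid[r][c] == '#' or grid[r][c] != letter:
            return False
    # Consecutive letters must share a row or a column (a horizontal or vertical step).
    for (r1, c1), (r2, c2) in zip(chain, chain[1:]):
        if r1 != r2 and c1 != c2:
            return False
    return True
```

A full solver would also need to track crossed-out and circled letters, which this sketch deliberately leaves out.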
Types of Puzzles: Themed Puzzles Many Pazurgo puzzles contain a "theme", and all of the clues and words within that puzzle belong to that theme. As an example, the puzzle At the Zoo, available at www.Pazurgo.com, contains a theme of animals seen at the zoo, including CHEETAH, CROCODILE, GORILLA, and POLARBEAR.

Non-Themed Puzzles Many Pazurgo puzzles contain "random" clues and words, and the words do not all belong to a single theme. The clues for this type of puzzle are similar in concept to many "non-themed" Crossword puzzles. Non-Themed puzzles do not generally include a puzzle title.
**Misleading graph** Misleading graph: In statistics, a misleading graph, also known as a distorted graph, is a graph that misrepresents data, constituting a misuse of statistics, with the result that an incorrect conclusion may be derived from it.

Misleading graph: Graphs may be misleading by being excessively complex or poorly constructed. Even when constructed to display the characteristics of their data accurately, graphs can be subject to different interpretations, or unintended kinds of data can seemingly, and ultimately erroneously, be derived. Misleading graphs may be created intentionally to hinder the proper interpretation of data or accidentally due to unfamiliarity with graphing software, misinterpretation of data, or because data cannot be accurately conveyed. Misleading graphs are often used in false advertising. One of the first authors to write about misleading graphs was Darrell Huff, author of the 1954 book *How to Lie with Statistics*.

Misleading graph: The field of data visualization describes ways to present information that avoids creating misleading graphs.

Misleading graph methods: It [a misleading graph] is vastly more effective, however, because it contains no adjectives or adverbs to spoil the illusion of objectivity; there's nothing anyone can pin on you. There are numerous ways in which a misleading graph may be constructed.

Excessive usage The use of graphs where they are not needed can lead to unnecessary confusion or misinterpretation. Generally, the more explanation a graph needs, the less the graph itself is needed. Graphs do not always convey information better than tables.

Biased labeling The use of biased or loaded words in the graph's title, axis labels, or caption may inappropriately prime the reader.

Fabricated trends Similarly, attempting to draw trend lines through uncorrelated data may mislead the reader into believing a trend exists where there is none. This can be both the result of intentionally attempting to mislead the reader or due to the phenomenon of illusory correlation.

Pie chart Comparing pie charts of different sizes could be misleading, as people cannot accurately read the comparative area of circles. The usage of thin slices, which are hard to discern, may be difficult to interpret. The usage of percentages as labels on a pie chart can be misleading when the sample size is small. Making a pie chart 3D or adding a slant will make interpretation difficult due to the distorted effect of perspective. Bar-charted pie graphs in which the height of the slices is varied may confuse the reader.

Comparing pie charts Comparing the data on bar charts is generally much easier. In the image below it is very hard to tell whether the blue sector is bigger than the green sector on the pie charts.

Misleading graph methods: 3D pie chart slice perspective A perspective (3D) pie chart is used to give the chart a 3D look. Often used for aesthetic reasons, the third dimension does not improve the reading of the data; on the contrary, these plots are difficult to interpret because of the distorted effect of perspective associated with the third dimension. The use of superfluous dimensions not used to display the data of interest is discouraged for charts in general, not only for pie charts. In a 3D pie chart, the slices that are closer to the reader appear to be larger than those in the back due to the angle at which they're presented.
This effect impairs readers' ability to judge the relative magnitude of each slice when using 3D rather than 2D. Item C appears to be at least as large as Item A in the misleading pie chart, whereas in actuality it is less than half as large. Item D looks a lot larger than item B, but they are the same size.

Misleading graph methods: Edward Tufte, a prominent American statistician, noted why tables may be preferred to pie charts in *The Visual Display of Quantitative Information*: Tables are preferable to graphics for many small data sets. A table is nearly always better than a dumb pie chart; the only thing worse than a pie chart is several of them, for then the viewer is asked to compare quantities located in spatial disarray both within and between pies. Given their low data-density and failure to order numbers along a visual dimension, pie charts should never be used.

Misleading graph methods: Improper scaling Pictograms in bar graphs should not be scaled uniformly in both dimensions, as this creates a perceptually misleading comparison. The area of the pictogram is interpreted instead of only its height or width, which causes the scaling to make the difference appear to be squared. In the improperly scaled pictogram bar graph, the image for B is actually 9 times as large as A, so the perceived size grows faster than the underlying values when scaling. The effect of improper scaling of pictograms is further exemplified when the pictogram has 3 dimensions, in which case the effect is cubed. The graph of house sales (left) is misleading. It appears that home sales have grown eightfold in 2001 over the previous year, whereas they have actually grown twofold. In addition, the number of sales is not specified. An improperly scaled pictogram may also suggest that the item itself has changed in size. Assuming the pictures represent equivalent quantities, the misleading graph shows that there are more bananas because the bananas occupy the most area and are furthest to the right.

Misleading graph methods: Logarithmic scaling Logarithmic (or log) scales are a valid means of representing data. But when used without being clearly labeled as log scales, or when displayed to a reader unfamiliar with them, they can be misleading. Log scales put the data values in terms of a chosen number (the base of the log) raised to a particular power. The base is often e (2.71828...) or 10. For example, log scales may give a height of 1 for a value of 10 in the data and a height of 6 for a value of 1,000,000 (10^6) in the data. Log scales and variants are commonly used, for instance, for the volcanic explosivity index, the Richter scale for earthquakes, the magnitude of stars, and the pH of acidic and alkaline solutions. Even in these cases, the log scale can make the data less apparent to the eye. Often the reason for the use of log scales is that the graph's author wishes to display vastly different scales on the same axis. Without log scales, comparing quantities such as 10^3 versus 10^9 becomes visually impractical. A graph with a log scale that was not clearly labeled as such, or a graph with a log scale presented to a viewer who did not know logarithmic scales, would generally result in a representation that made data values look of similar size when, in fact, they are of widely differing magnitudes. Misuse of a log scale can make vastly different values (such as 10 and 10,000) appear close together (on a base-10 log scale, they would be plotted at only 1 and 4). Or it can make small values appear to be negative due to how logarithmic scales represent numbers smaller than the base.
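To make the base-10 arithmetic above concrete, here is a minimal sketch (the values are chosen only for illustration) showing how widely different quantities collapse to nearby positions on an unlabeled log scale:

```python
import math

values = [10, 10_000, 1_000_000]
for v in values:
    # On a base-10 log scale the plotted position is log10(v):
    # 10 -> 1, 10,000 -> 4, 1,000,000 -> 6, so hugely different values
    # end up visually close together if the scale is not clearly labeled.
    print(f"{v:>9,} -> log10 position {math.log10(v):.0f}")
```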
Misleading graph methods: Misuse of log scales may also cause relationships between quantities to appear linear whilst those relationships are exponentials or power laws that rise very rapidly towards higher values. It has been stated, although mainly in a humorous way, that "anything looks linear on a log-log plot with thick marker pen". Both graphs show an identical exponential function of f(x) = 2^x. The graph on the left uses a linear scale, showing clearly an exponential trend. The graph on the right, however, uses a logarithmic scale, which generates a straight line. If the graph viewer were not aware of this, the graph would appear to show a linear trend.

Truncated graph A truncated graph (also known as a torn graph) has a y-axis that does not start at 0. These graphs can create the impression of important change where there is relatively little change.

Misleading graph methods: While truncated graphs can be used to exaggerate differences or to save space, their use is often discouraged. Commercial software such as MS Excel will tend to truncate graphs by default if the values are all within a narrow range, as in this example. To show relative differences in values over time, an index chart can be used. Truncated diagrams will always distort the underlying numbers visually. Several studies found that even if people were correctly informed that the y-axis was truncated, they still overestimated the actual differences, often substantially.

Misleading graph methods: These graphs display identical data; however, in the truncated bar graph on the left, the data appear to show significant differences, whereas in the regular bar graph on the right, these differences are hardly visible. There are several ways to indicate y-axis breaks.

Axis changes Changing the y-axis maximum affects how the graph appears. A higher maximum will cause the graph to appear to have less volatility, less growth, and a less steep line than a lower maximum. Changing the ratio of a graph's dimensions will also affect how the graph appears.

No scale The scales of a graph are often used to exaggerate or minimize differences. The lack of a starting value for the y-axis makes it unclear whether the graph is truncated. Additionally, the lack of tick marks prevents the reader from determining whether the graph bars are properly scaled. Without a scale, the visual difference between the bars can be easily manipulated.

Misleading graph methods: Though all three graphs share the same data, and hence the actual slope of the (x, y) data is the same, the way that the data is plotted can change the visual appearance of the angle made by the line on the graph. This is because each plot has a different scale on its vertical axis. Because the scale is not shown, these graphs can be misleading.

Misleading graph methods: Improper intervals or units The intervals and units used in a graph may be manipulated to create or mitigate the appearance of change.

Omitting data Graphs created with omitted data remove information from which to base a conclusion. In the scatter plot with missing categories on the left, the growth appears to be more linear with less variation. In financial reports, negative returns or data that do not correlate with a positive outlook may be excluded to create a more favorable visual impression.

3D The use of a superfluous third dimension, which does not contain information, is strongly discouraged, as it may confuse the reader.

Complexity Graphs are designed to allow easier interpretation of statistical data.
However, graphs with excessive complexity can obfuscate the data and make interpretation difficult.

Poor construction Poorly constructed graphs can make data difficult to discern and thus interpret.

Extrapolation Misleading graphs may be used in turn to extrapolate misleading trends.

Measuring distortion: Several methods have been developed to determine whether graphs are distorted and to quantify this distortion.

Lie factor lie factor = (size of effect shown in graphic) / (size of effect shown in data), where size of effect = |second value − first value| / |first value|. A graph with a high lie factor (>1) would exaggerate change in the data it represents, while one with a small lie factor (>0, <1) would obscure change in the data. A perfectly accurate graph would exhibit a lie factor of 1.

Graph discrepancy index graph discrepancy index (GDI) = 100 × (a/b − 1), where a = percentage change depicted in the graph and b = percentage change in the data.

Measuring distortion: The graph discrepancy index, also known as the graph distortion index (GDI), was originally proposed by Paul John Steinbart in 1998. GDI is calculated as a percentage ranging from −100% to positive infinity, with zero percent indicating that the graph has been properly constructed; anything outside the ±5% margin is considered to be distorted. Research into the usage of GDI as a measure of graphics distortion has found it to be inconsistent and discontinuous, making the usage of GDI as a measurement for comparisons difficult.

Measuring distortion: Data-ink ratio data-ink ratio = ("ink" used to display the data) / (total "ink" used to display the graphic). The data-ink ratio should be relatively high; otherwise, the chart may have unnecessary graphics.

Data density data density = (number of entries in data matrix) / (area of data graphic). The data density should be relatively high; otherwise, a table may be better suited for displaying the data.

Usage in finance and corporate reports: Graphs are useful in the summary and interpretation of financial data. Graphs allow trends in large data sets to be seen while also allowing the data to be interpreted by non-specialists. Graphs are often used in corporate annual reports as a form of impression management. In the United States, graphs do not have to be audited, as they fall under AU Section 550, Other Information in Documents Containing Audited Financial Statements. Several published studies have looked at the usage of graphs in corporate reports for different corporations in different countries and have found frequent usage of improper design, selectivity, and measurement distortion within these reports. The presence of misleading graphs in annual reports has led to requests for standards to be set. Research has found that while readers with poor levels of financial understanding have a greater chance of being misinformed by misleading graphs, even those with financial understanding, such as loan officers, may be misled.

Academia: The perception of graphs is studied in psychophysics, cognitive psychology, and computational vision.
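As a minimal sketch of the two distortion measures defined above (the function names and the example numbers are illustrative only, not from any standard library):

```python
def size_of_effect(first: float, second: float) -> float:
    # |second - first| / |first|, as in the lie factor definition above
    return abs(second - first) / abs(first)

def lie_factor(graphic_first: float, graphic_second: float,
               data_first: float, data_second: float) -> float:
    # (size of effect shown in graphic) / (size of effect shown in data)
    return (size_of_effect(graphic_first, graphic_second)
            / size_of_effect(data_first, data_second))

def graph_discrepancy_index(pct_change_graph: float, pct_change_data: float) -> float:
    # GDI = 100 * (a / b - 1), where a is the percentage change depicted in
    # the graph and b is the percentage change in the underlying data.
    return 100 * (pct_change_graph / pct_change_data - 1)

# Example: bars grow from 10 mm to 40 mm while the data only grow from 100 to 200.
print(lie_factor(10, 40, 100, 200))          # 3.0 -> the graphic exaggerates the change
print(graph_discrepancy_index(300.0, 100.0)) # 200.0 -> far outside the +/-5% margin
```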
**Social neuroscience** Social neuroscience: Social neuroscience is an interdisciplinary field devoted to understanding the relationship between social experiences and biological systems. Humans are fundamentally a social species, rather than solitary. As such, Homo sapiens create emergent organizations beyond the individual: structures that range from dyads, families, and groups to cities, civilizations, and cultures. In this regard, studies indicate that various social influences, including life events, poverty, unemployment and loneliness, can influence health-related biomarkers. The term "social neuroscience" can be traced to a publication entitled "Social Neuroscience Bulletin", which was published quarterly between 1988 and 1994. The term was subsequently popularized in an article by John Cacioppo and Gary Berntson, published in the American Psychologist in 1992. Cacioppo and Berntson are widely regarded as the founders of social neuroscience. Still a young field, social neuroscience is closely related to affective neuroscience and cognitive neuroscience, focusing on how the brain mediates social interactions. The biological underpinnings of social cognition are investigated in social cognitive neuroscience.

Overview: Traditional neuroscience has for many years considered the nervous system as an isolated entity and largely ignored influences of the social environments in which humans and many animal species live. In fact, we now recognize the considerable impact of social structures on the operations of the brain and body. These social factors operate on the individual through a continuous interplay of neural, neuroendocrine, metabolic and immune factors on brain and body, in which the brain is the central regulatory organ and also a malleable target of these factors. Social neuroscience investigates the biological mechanisms that underlie social processes and behavior, widely considered one of the major problem areas for the neurosciences in the 21st century, and applies concepts and methods of biology to develop theories of social processes and behavior in the social and behavioral sciences. Social neuroscience capitalizes on biological concepts and methods to inform and refine theories of social behavior, and it uses social and behavioral constructs and data to advance theories of neural organization and function. Throughout most of the 20th century, social and biological explanations were widely viewed as incompatible. But advances in recent years have led to the development of a new approach synthesized from the social and biological sciences. The new field of social neuroscience emphasizes the complementary relationship between the different levels of organization, spanning the social and biological domains (e.g., molecular, cellular, system, person, relational, collective, societal) and the use of multi-level analyses to foster understanding of the mechanisms underlying the human mind and behavior.

Methods: A number of methods are used in social neuroscience to investigate the confluence of neural and social processes.
These methods draw from behavioral techniques developed in social psychology, cognitive psychology, and neuropsychology, and are associated with a variety of neurobiological techniques including functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), positron emission tomography (PET), facial electromyography (EMG), transcranial magnetic stimulation (TMS), electroencephalography (EEG), event-related potentials (ERPs), electrocardiograms, electromyograms, endocrinology, immunology, galvanic skin response (GSR), single-cell recording, and studies of focal brain lesion patients. In recent years, these methods have been complemented by virtual reality techniques (VR) and hormonal measures. Animal models are also important to investigate the putative role of specific brain structures, circuits, or processes (e.g., the reward system and drug addiction). In addition, quantitative meta-analyses are important to move beyond idiosyncrasies of individual studies, and neurodevelopmental investigations can contribute to our understanding of brain-behavior associations.

The two most popular methods used in social neuroscience are fMRI and EEG. fMRI is relatively cost-efficient and high in spatial resolution. However, it is low in temporal resolution and is therefore best suited to discovering the pathways in the brain that are used during social experiments. fMRI has low temporal resolution (timing) because it reads oxygenated blood levels that pool in the parts of the brain that are activated and need more oxygen. Thus, the blood takes time to travel to the part of the brain being activated, which in turn limits the ability to test for the exact timing of activation during social experiments. EEG is best used when a researcher is trying to map a brain area that correlates with a social construct being studied. EEGs provide high temporal resolution but low spatial resolution: the timing of the activation is very accurate, but it is hard to pinpoint exact areas of the brain; researchers can narrow down locations and areas, but the recordings also contain a lot of "noise". Most recently, researchers have been using TMS, which is the best way to discover the exact location of an area in the process of brain mapping. This machine can turn on and off parts of the brain, which then allows researchers to test what that part of the brain is used for during social events. However, this machine is so expensive that it is rarely used.

Methods: Note: Most of these methods can only provide correlations between brain mapping and social events (apart from TMS). A drawback of social neuroscience is that the research must be interpreted through correlations, which can decrease content validity. For example, during an experiment, when a participant is doing a task to test for a social theory and a part of the brain is activated, it is impossible to establish causality, because anything else in the room, or the thoughts of the person, could have triggered that response. It is very hard to isolate these variables during these experiments. That is why self-reports are very important. This will also help decrease the chances of voodoo correlations (correlations that are implausibly high, over 0.8, and appear to show a relationship between two factors but are actually an error in design and statistical measures). Another way to avoid this drawback is to use tests with hormones that can infer causality.
For example, when people are given oxytocin or a placebo, researchers can test for differences in their social behavior toward other people. Using SCRs will also help separate unconscious from conscious responses, because skin conductance reflects the body's natural sympathetic response to the outside world. All of these tests and devices help social neuroscientists discover the connections in the brain that are used to carry out our everyday social activities.

Methods: Primarily psychological methods include performance-based measures that record response time and/or accuracy, such as the Implicit Association Test; observational measures, such as preferential looking in infant studies; and self-report measures, such as questionnaires and interviews. Neurobiological methods can be grouped into ones that measure more external bodily responses, electrophysiological methods, hemodynamic measures, and lesion methods. Bodily response methods include GSR (also known as skin conductance response (SCR)), facial EMG, and the eyeblink startle response. Electrophysiological methods include single-cell recordings, EEG, and ERPs. Hemodynamic measures, which, instead of directly measuring neural activity, measure changes in blood flow, include PET and fMRI. Lesion methods traditionally study brains that have been damaged via natural causes, such as strokes, traumatic injuries, tumors, neurosurgery, infection, or neurodegenerative disorders. In its ability to create a type of temporary "virtual lesion", TMS may also be included in this category. More specifically, TMS methods involve stimulating one area of the brain to isolate it from the rest of the brain, imitating a brain lesion. This is particularly helpful in brain mapping, a key approach in social neuroscience designed to determine which areas of the brain are activated during certain activities.

Society for Social Neuroscience: A dinner to discuss the challenges and opportunities in the interdisciplinary field of social neuroscience at the Society for Neuroscience meeting (Chicago, November 2009) resulted in a series of meetings led by John Cacioppo and Jean Decety with social neuroscientists, psychologists, neuroscientists, psychiatrists, sociologists and economists in Argentina, Australia, Chile, China, Colombia, Hong Kong, Israel, Japan, the Netherlands, New Zealand, Singapore, South Korea, Taiwan, the United Kingdom and the United States. Social neuroscience was defined broadly as the interdisciplinary study of the neural, hormonal, cellular, and genetic mechanisms underlying the emergent structures that define social species. Thus, among the participants in these meetings were scientists who used a wide variety of methods in studies of animals as well as humans, and patients as well as normal participants. The consensus also emerged that a Society for Social Neuroscience should be established to give scientists from diverse disciplines and perspectives the opportunity to meet, communicate with, and benefit from the work of each other. The international, interdisciplinary Society for Social Neuroscience (http://S4SN.org) was launched at the conclusion of these consultations in Auckland, New Zealand on 20 January 2010, and the inaugural meeting for the Society was held on November 12, 2010, the day prior to the 2010 Society for Neuroscience meeting (San Diego, CA).
**LZTR1** LZTR1: Leucine-zipper-like transcriptional regulator 1 is a protein that in humans is encoded by the LZTR1 gene. The LZTR1 gene provides instructions for making a protein belonging to the broad complex, tramtrack and bric-a-brac / poxvirus and zinc finger (BTB/POZ) superfamily. This superfamily of proteins has a wide range of functions, including chromatin condensation during the cell cycle. Other names associated with the LZTR1 gene are BTBD29, LZTR-1, NS10, NS2, and SWNTS2. This gene encodes a member of the BTB-kelch superfamily. Initially described as a putative transcriptional regulator based on weak homology to members of the basic leucine zipper-like family, the encoded protein has subsequently been shown to localize exclusively to the Golgi network, where it may help stabilize the Golgi complex.

Function: Based on its role in several tumor types, the LZTR1 protein is thought to act as a tumor suppressor. Tumor suppressors are proteins that keep cells from growing and dividing too rapidly or in an uncontrolled way. LZTR1 is a non-specific protein that is found in all cells of the body. It is believed to be a transcriptional regulator that is typically degraded in apoptotic cells. The protein is phosphorylated at tyrosine residues, which targets it for degradation. Intracellularly, LZTR1 protein is found in the Golgi apparatus, and studies suggest that it may help stabilize this structure. The LZTR1 protein may also be associated with the CUL3 (cullin 3) ubiquitin ligase complex, which helps destroy unneeded proteins in the cell. It has also been observed that the LZTR1 protein inhibits Ras signaling by reducing the affinity of Ras for the membrane. Ras belongs to a family of GTPases involved in transcription regulation and in the activation of Raf enzymes; Raf kinases in turn phosphorylate other molecules in a cascade that has a wide impact on the cell. Studies using immunoprecipitation of endogenous LZTR1 followed by Western blotting were used to investigate the function of the LZTR1 gene. By trapping LZTR1 complexes from intact mammalian cells, Steklov et al. (2018) identified the guanosine triphosphatase RAS as a substrate for the LZTR1-CUL3 complex.

Gene: The LZTR1 gene is located on chromosome 22, more specifically on the long arm at 22q11.21. The gene is approximately 16,768 base pairs long.

Mutations: Studies have found mutations in the LZTR1 gene in malignant cells in the tumors of patients with glioblastoma. These mutations were found to be somatic, typically caused by environmental factors, and loss of the LZTR1 gene is seen in the cells that divide uncontrollably.

DiGeorge Syndrome: DiGeorge syndrome (also known as 22q11.2 deletion syndrome) is caused by a deletion in chromosome 22. Some of the typical symptoms associated with DiGeorge syndrome are a characteristic facial structure, congenital heart disease, and developmental delays. The implications of LZTR1 mutations were first recognized in DiGeorge patients. Studies have shown that deletion or mutation of LZTR1 is identified in most patients who have been diagnosed with DiGeorge syndrome. The transcriptional regulation capability of the LZTR1 gene may play an important role in embryogenesis, and its expression is observed in several fetal organs.
Noonan syndrome: Noonan syndrome is an autosomal dominant multisystem disorder characterized by a wide phenotypic spectrum, including distinctive facial dysmorphism, postnatal growth retardation, short stature, ectodermal and skeletal defects, congenital heart anomalies, renal anomalies, lymphatic malformations, bleeding difficulties and variable cognitive deficits.

Noonan syndrome: Studies have identified 163 variants in 29 genes in patients with Noonan syndrome. In one study, using in silico software, the heterozygous missense mutation of the LZTR1 gene at exon 4 was assessed as the most pathogenic. This missense mutation leads to a substitution of alanine with valine in the primary amino acid sequence of the LZTR1 protein.

Schwannomatosis: In patients with schwannomatosis, more than fifty different mutations in the LZTR1 gene have been observed. These mutations themselves are not sufficient to cause the disorder, but are typically associated with it. Somatic changes from environmental factors are also seen in patients with schwannomatosis. When the gene is altered, the LZTR1 protein cannot function properly to regulate the cell cycle by controlling cell growth and division, and this unregulated growth leads to tumor formation along Schwann cells.
**Stop codon** Stop codon: In molecular biology (specifically protein biosynthesis), a stop codon (or termination codon) is a codon (nucleotide triplet within messenger RNA) that signals the termination of the translation process of the current protein. Most codons in messenger RNA correspond to the addition of an amino acid to a growing polypeptide chain, which may ultimately become a protein; stop codons signal the termination of this process by binding release factors, which cause the ribosomal subunits to dissociate, releasing the amino acid chain.

Stop codon: While start codons need nearby sequences or initiation factors to start translation, a stop codon alone is sufficient to initiate termination.

Properties: Standard codons In the standard genetic code, there are three different termination codons: UAG ("amber"), UAA ("ochre"), and UGA ("opal" or "umber").

Alternative stop codons There are variations on the standard genetic code, and alternative stop codons have been found in the mitochondrial genomes of vertebrates, Scenedesmus obliquus, and Thraustochytrium.

Reassigned stop codons The nuclear genetic code is flexible, as illustrated by variant genetic codes that reassign standard stop codons to amino acids.

Properties: Translation In 1986, convincing evidence was provided that selenocysteine (Sec) is incorporated co-translationally. Moreover, the codon partially directing its incorporation into the polypeptide chain was identified as UGA, also known as the opal termination codon. Different mechanisms for overriding the termination function of this codon have been identified in prokaryotes and in eukaryotes. A particular difference between these kingdoms is that cis elements seem restricted to the neighborhood of the UGA codon in prokaryotes, while in eukaryotes this restriction is not present. Instead, such locations seem disfavored, albeit not prohibited. In 2003, a landmark paper described the identification of all known selenoproteins in humans: 25 in total. Similar analyses have been run for other organisms.

Properties: The UAG codon can translate into pyrrolysine (Pyl) in a similar manner.

Properties: Genomic distribution Distribution of stop codons within the genome of an organism is non-random and can correlate with GC-content. For example, the E. coli K-12 genome contains 2705 TAA (63%), 1257 TGA (29%), and 326 TAG (8%) stop codons (GC content 50.8%). The substrates for the stop codon release factors 1 and 2 are also strongly correlated with the abundance of stop codons. A large-scale study of bacteria with a broad range of GC-contents shows that while the frequency of occurrence of TAA is negatively correlated with GC-content and the frequency of occurrence of TGA is positively correlated with GC-content, the frequency of occurrence of the TAG stop codon, which is often the minimally used stop codon in a genome, is not influenced by GC-content.

Properties: Recognition Recognition of stop codons in bacteria has been associated with the so-called 'tripeptide anticodon', a highly conserved amino acid motif in RF1 (PxT) and RF2 (SPF). Even though this is supported by structural studies, it has been shown that the tripeptide anticodon hypothesis is an oversimplification.

Nomenclature: Stop codons were historically given many different names, as they each corresponded to a distinct class of mutants that all behaved in a similar manner. These mutants were first isolated within bacteriophages (T4 and lambda), viruses that infect the bacterium Escherichia coli.
Mutations in viral genes weakened their infectious ability, sometimes creating viruses that were able to infect and grow within only certain varieties of E. coli.

Nomenclature: amber mutations (UAG) These were the first set of nonsense mutations to be discovered, isolated by Richard H. Epstein and Charles Steinberg and named after their friend, Caltech graduate student Harris Bernstein, whose last name means "amber" in German (cf. Bernstein). Viruses with amber mutations are characterized by their ability to infect only certain strains of bacteria, known as amber suppressors. These bacteria carry their own mutation that allows a recovery of function in the mutant viruses. For example, a mutation in the tRNA that recognizes the amber stop codon allows translation to "read through" the codon and produce a full-length protein, thereby recovering the normal form of the protein and "suppressing" the amber mutation.

Nomenclature: Thus, amber mutants are an entire class of virus mutants that can grow in bacteria that contain amber suppressor mutations. Similar suppressors are known for ochre and opal stop codons as well. tRNA molecules carrying unnatural amino acids have been designed to recognize the amber stop codon in bacterial RNA. This technology allows for the incorporation of orthogonal amino acids (such as p-azidophenylalanine) at specific locations of the target protein.

Nomenclature: ochre mutations (UAA) These were the second set of stop codon mutations to be discovered. Reminiscent of the usual yellow-orange-brown color associated with amber, this second stop codon was given the name "ochre", an orange-reddish-brown mineral pigment. Ochre mutant viruses had a property similar to amber mutants in that they recovered infectious ability within certain suppressor strains of bacteria. The set of ochre suppressors was distinct from amber suppressors, so ochre mutants were inferred to correspond to a different nucleotide triplet. Through a series of mutation experiments comparing these mutants with each other and with other known amino acid codons, Sydney Brenner concluded that the amber and ochre mutations corresponded to the nucleotide triplets "UAG" and "UAA".

Nomenclature: opal or umber mutations (UGA) The third and last stop codon in the standard genetic code was discovered soon after, and corresponds to the nucleotide triplet "UGA". To continue the theme of colored minerals, the third nonsense codon came to be known as "opal", which is a type of silica showing a variety of colors. Nonsense mutations that created this premature stop codon were later called opal mutations or umber mutations.

Mutations and disease: Nonsense Nonsense mutations are changes in DNA sequence that introduce a premature stop codon, causing any resulting protein to be abnormally shortened. This often causes a loss of function in the protein, as critical parts of the amino acid chain are no longer assembled. Because of this terminology, stop codons have also been referred to as nonsense codons.

Mutations and disease: Nonstop A nonstop mutation, also called a stop-loss variant, is a point mutation that occurs within a stop codon. Nonstop mutations cause the continued translation of an mRNA strand into what should be an untranslated region. Most polypeptides resulting from a gene with a nonstop mutation lose their function due to their extreme length and the impact on normal folding. Nonstop mutations differ from nonsense mutations in that they do not create a stop codon but, instead, delete one.
Nonstop mutations also differ from missense mutations, which are point mutations where a single nucleotide is changed, causing one amino acid to be replaced by a different amino acid. Nonstop mutations have been linked with many inherited diseases, including endocrine disorders, eye disease, and neurodevelopmental disorders.

Hidden stops: Hidden stops are non-stop codons that would be read as stop codons if they were frameshifted +1 or −1. These prematurely terminate translation if the corresponding frameshift (such as one due to a ribosomal RNA slip) occurs before the hidden stop. It is hypothesised that this decreases resource wastage on nonfunctional proteins and the production of potential cytotoxins. Researchers at Louisiana State University propose the ambush hypothesis: that hidden stops are selected for. Codons that can form hidden stops are used in genomes more frequently than synonymous codons that would otherwise code for the same amino acid. Unstable rRNA in an organism correlates with a higher frequency of hidden stops.

Hidden stops: However, this hypothesis could not be validated with a larger data set. Stop codons and hidden stops together are collectively referred to as stop signals. Researchers at the University of Memphis found that the ratios of the stop signals on the three reading frames of a genome (referred to as the translation stop-signals ratio, or TSSR) of genetically related bacteria, despite their great differences in gene content, are much alike. This nearly identical genomic TSSR value of genetically related bacteria may suggest that bacterial genome expansion is limited by the unique stop-signal bias of that bacterial species.

Translational readthrough: Stop codon suppression or translational readthrough occurs when, in translation, a stop codon is interpreted as a sense codon, that is, when a (standard) amino acid is "encoded" by the stop codon. Mutated tRNAs can be the cause of readthrough, but so can certain nucleotide motifs close to the stop codon. Translational readthrough is very common in viruses and bacteria, and has also been found as a gene regulatory principle in humans, yeasts, bacteria and Drosophila. This kind of endogenous translational readthrough constitutes a variation of the genetic code, because a stop codon codes for an amino acid. In the case of human malate dehydrogenase, the stop codon is read through with a frequency of about 4%. The amino acid inserted at the stop codon depends on the identity of the stop codon itself: Gln, Tyr, and Lys have been found for the UAA and UAG codons, while Cys, Trp, and Arg for the UGA codon have been identified by mass spectrometry. The extent of readthrough in mammals is widely variable, and readthrough can broadly diversify the proteome and affect cancer progression.

Use as a watermark: In 2010, when Craig Venter unveiled the first fully functioning, reproducing cell controlled by synthetic DNA, he described how his team used frequent stop codons to create watermarks in RNA and DNA to help confirm that the results were indeed synthetic (and not contaminated or otherwise), using them to encode authors' names and website addresses.
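As a rough illustration of the "hidden stop" idea described above, the sketch below scans the +1 and −1 shifted reading frames of a coding sequence for the standard stop triplets (DNA alphabet, standard genetic code only; the sequence and function name are made up for illustration):

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}  # standard genetic code, DNA alphabet

def hidden_stops(cds: str) -> dict[int, list[int]]:
    """Positions of stop triplets in the +1 and -1 shifted reading frames.

    A frameshift before such a position would terminate translation early,
    which is the premise of the ambush hypothesis discussed above.
    """
    hits: dict[int, list[int]] = {}
    for shift in (+1, -1):
        frame_start = shift % 3  # +1 frame starts at offset 1, -1 frame at offset 2
        positions = [i for i in range(frame_start, len(cds) - 2, 3)
                     if cds[i:i + 3] in STOP_CODONS]
        hits[shift] = positions
    return hits

# Made-up example; read in frame it is ATG GTA AGC AGT CTG AGG TGA.
print(hidden_stops("ATGGTAAGCAGTCTGAGGTGA"))  # {1: [4, 13], -1: []}
```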
**Kinetic scheme** Kinetic scheme: In physics, chemistry and related fields, a kinetic scheme is a network of states and connections between them representing the scheme of a dynamical process. Usually a kinetic scheme represents a Markovian process, while for non-Markovian processes generalized kinetic schemes are used. Figure 1 shows an illustration of a kinetic scheme.

A Markovian kinetic scheme: Mathematical description A kinetic scheme is a network (a directed graph) of distinct states (although repetition of states may occur, depending on the system), where each pair of states $i$ and $j$ is associated with directional rates $A_{ij}$ (and $A_{ji}$). It is described with a master equation: a first-order differential equation for the probability vector $\vec{P}$ of a system to occupy each one of its states at time $t$ (element $i$ represents state $i$). Written in matrix form, this states $\frac{d\vec{P}}{dt} = \mathbf{A}\vec{P}$, where $\mathbf{A}$ is the matrix of connections (rates) $A_{ij}$. In a Markovian kinetic scheme the connections are constant with respect to time (and any jumping-time probability density function for state $i$ is an exponential, with a rate equal to the sum of all the exiting connections). When detailed balance exists in a system, the relation $A_{ji}P_i(t\to\infty) = A_{ij}P_j(t\to\infty)$ holds for every pair of connected states $i$ and $j$. The result represents the fact that any closed loop in a Markovian network in equilibrium does not have a net flow.

A Markovian kinetic scheme: Matrix $\mathbf{A}$ can also represent birth and death, meaning that probability is injected into (birth) or taken from (death) the system, in which case the process is not in equilibrium. These terms are different from a birth–death process, where there is simply a linear kinetic scheme.

Specific Markovian kinetic schemes A birth–death process is a linear, one-dimensional Markovian kinetic scheme. Michaelis–Menten kinetics are a type of Markovian kinetic scheme when solved with the steady-state assumption for the creation of intermediates in the reaction pathway.

Generalizations of Markovian kinetic schemes: A kinetic scheme with time-dependent rates: When the connections depend on the actual time (i.e. matrix $\mathbf{A}$ depends on time, $\mathbf{A} \to \mathbf{A}(t)$), the process is not Markovian, and the master equation obeys $\frac{d\vec{P}}{dt} = \mathbf{A}(t)\vec{P}$. The reason for time-dependent rates is, for example, a time-dependent external field applied to a Markovian kinetic scheme (thus making the process non-Markovian).

Generalizations of Markovian kinetic schemes: A semi-Markovian kinetic scheme: When the connections represent multi-exponential jumping-time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation: $\frac{d\vec{P}}{dt} = \int_0^t \mathbf{A}(t-\tau)\,\vec{P}(\tau)\,d\tau$. An example of such a process is a reduced-dimensions form.

The Fokker–Planck equation: when expanding the master equation of the kinetic scheme in a continuous space coordinate, one finds the Fokker–Planck equation.
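As a minimal numerical sketch of the Markovian master equation above (a hypothetical two-state scheme with made-up rates; the variable names and the use of a matrix exponential are illustrative choices, not part of the article):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state Markovian kinetic scheme. A[i][j] is the rate from
# state j to state i; each column sums to zero so total probability is conserved.
k12, k21 = 1.0, 0.5                  # made-up rates: 2 -> 1 and 1 -> 2 (per unit time)
A = np.array([[-k21,  k12],
              [ k21, -k12]])

P0 = np.array([1.0, 0.0])            # all probability starts in state 1

for t in (0.0, 1.0, 10.0):
    Pt = expm(A * t) @ P0            # P(t) = exp(A t) P(0) solves dP/dt = A P
    print(t, Pt)

# As t grows, P(t) approaches [k12, k21] / (k12 + k21) = [2/3, 1/3] here,
# the stationary distribution satisfying detailed balance A21*P1 = A12*P2.
```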
**Lane splitting** Lane splitting: Lane splitting is riding a bicycle or motorcycle between lanes or rows of slow-moving or stopped traffic moving in the same direction. It is sometimes called whitelining, or stripe-riding. This allows riders to save time, bypassing traffic congestion, and may also be safer than stopping behind stationary vehicles. Filtering or filtering forward is to be contrasted with lane splitting. Lane filtering refers to motorcycles moving through traffic that is stopped, such as at a red traffic light.

In the developing world: In population-dense and traffic-congested urban areas, particularly in the developing world, the space between larger vehicles is filled with a wide variety of different kinds of two-wheeled vehicles, as well as pedestrians, and many other human or animal powered conveyances. In places such as Bangkok, Thailand and in Indonesia, the ability of motorcycles to take advantage of the space between cars has led to the growth of a motorcycle taxi industry. In Indonesia, the motorcycle is the most common type of vehicle. Unlike typical developed nations that have only a handful of vehicle types on their roads, many types of transport will share the same roads as cars and trucks; this diversity is extreme in Delhi, India, where more than 40 modes of transportation regularly use the roads. In contrast, New York City, for example, has perhaps five modes, and in parts of America a vast majority of traffic is made up of two types of vehicles on the road: cars and trucks. It has been suggested that highly diverse and adaptive modes of road use are capable of moving very large numbers of people in a given space compared with cars and trucks remaining within the bounds of marked lanes. On roads where modes of transportation are mingled, this can reduce efficiency for all modes.

Safety: Filtering forward, in stopped or extremely slow traffic, requires very slow speed and awareness that in a door zone, vehicle doors may unexpectedly open. Also, unexpected vehicle movements such as lane changes may occur with little warning. Buses and tractor-trailers require extreme care, as the cyclist may be nearly invisible to the drivers, who may not expect someone to be filtering forward. To avoid a hook collision with a turning vehicle at an intersection after filtering forward to the intersection, cyclists are taught to either take a position directly in front of the stopped lead vehicle, or stay behind the lead vehicle. Cyclists should not stop directly at the passenger side of the lead vehicle, as that is a blind spot.

Safety: Research Little safety research in the United States has directly examined the question of lane splitting. The European MAIDS report studied the causes of motorcycle accidents in four countries where it is legal and one where it is not, yet reached no conclusion as to whether it contributed to or prevented accidents. Proponents of lane splitting state that the author of the Hurt Report of 1981, Harry Hurt, implied that lane splitting improves motorcycle safety by reducing rear-end crashes. However, in subsequent interviews, Hurt stated that there is no factual evidence to support this claim. Lane splitting supporters also state that the US DOT FARS database shows that fatalities from rear-end collisions into motorcycles are 30% lower in California than in Florida or Texas, states with similar riding seasons and populations but which do not lane split. No specifics are given about where this conclusion is found in the FARS system.
The database is available online to the public. The NHTSA does say, based on the Hurt Report, that lane splitting "slightly reduces" rear-end accidents and is worthy of further study due to the possible congestion-reduction benefits. Lane splitting is never mentioned anywhere in the Hurt Report, and all of the data was collected in California, so no comparison was made between lane splitting and non-lane splitting. The Hurt Report ends with a list of 55 specific findings, such as "Fuel system leaks and spills are present in 62% of the motorcycle accidents in the post-crash phase. This represents an undue hazard for fire." None of these findings mentions lane splitting or rear-end collisions. The legislative and law enforcement advice that follows this list does not mention lane splitting or suggest laws be changed with regard to lane splitting.

Safety: However, lane-splitting riders who were involved in collisions were also more than twice as likely to rear-end another vehicle (38.4 percent versus 15.7 percent). In Europe, the MAIDS report was conducted using Organisation for Economic Co-operation and Development (OECD) standards in 1999–2000 and collected data on over 900 motorcycle accidents in five countries, along with non-accident exposure data (control cases) to measure the contribution of different factors to accidents, in the same way as the Hurt Report. Four of the five countries where data was collected allow lane splitting, while one does not, yet none of the conclusions contained in the MAIDS final report note any difference in rear-end accidents or accidents during lane splitting. It is notable that the pre-crash motion of the motorcycle or scooter was lane-splitting in only 0.4% of cases, in contrast to the more common accident situations such as "moving in a straight line, constant speed" (49.1%) and "negotiating a bend, constant speed" (12.1%). The motorcyclist was stopped in traffic prior to 2.8% of the accidents.

Preliminary results from a study in the United Kingdom, conducted by the University of Nottingham for the Department for Transport, show that filtering is responsible for around 5% of motorcycle Killed or Seriously Injured (KSI) accidents. It also found that in these KSI cases the motorist is twice as likely to be at fault as the motorcyclist, due to motorists "failing to take into account possible motorcycle riding strategies in heavy traffic".

In a very different form of research, in which the capacity benefits were examined as well, 1,500 powered two-wheelers were video-tracked to calibrate an agent-based model of movement between and along lanes; the study also included a Bayesian model calibrated to determine the choices made to move between lanes. This model provides a basis for measuring the risk levels of such choices, and later applications allowed the determination of the capacity gains (in terms of passenger car equivalent) from such movement once filtered to the front of the queue and in continuing non-intersection movements along stretches of road.

Belgian policy research company Transport & Mobility Leuven published a study in September 2011 investigating the effects that increased motorcycle usage would have on traffic flow and emissions, and found that a 10% modal shift would result in a 40% reduction in commute time and a 6% reduction in total emissions. This calculation assumed that all motorcycles moved between lanes and the space used by them, called a passenger car equivalent (PCE), would be reduced to zero when traffic came to a complete standstill.
It also assumed that motorcycles would overtake cars without hindering them during heavy congestion, and that PCE would be less than 0.5, approaching zero as traffic density increased.

Safety: Debate over safety and benefits Proponents state that the practice relieves congestion by removing commuters from cars and getting them to use the unused roadway space between the cars, and that lane splitting also improves fuel efficiency and motorcyclists' comfort in extreme weather. In the US, transportation engineers have suggested that motorcycles are too few, and will remain too few, to justify any special accommodation or legislative consideration, such as lane splitting. Unless it becomes likely that a very large number of Americans will switch to motorcycles, they will offer no measurable congestion relief, even with lane splitting. Rather, laws and infrastructure should merely incorporate motorcycles into normal traffic with minimal disruption and risk to riders. Potentially, lane splitting can lead to road rage on the part of drivers, who feel frustrated that the motorcyclists are able to filter through the traffic jam. However, the Hurt Report indicates that "deliberate hostile action by a motorist against a motorcycle rider is a rare accident cause." Lane splitting is not recommended for beginning motorcyclists, and riders who do not practice it in their home area are strongly cautioned that it can be risky if they attempt it when traveling to a jurisdiction where it is allowed. Similarly, for drivers new to places where it is done, it can be startling and scary.

Safety: Responsibility and liability issues Another consideration is that lane splitting in the United States, even where legal, can possibly leave the rider legally responsible. According to J.L. Matthews in How to Win Your Personal Injury Claim: "Safely" is always very much a judgment call. The mere fact that an accident happened while a rider was lane splitting is very strong evidence that on that occasion it wasn't safe to do so. If you've been involved in an accident, you will have a hard job convincing an insurance adjuster that the accident was not completely your fault.

Safety: When the 2005 bill to legalize lane splitting in Washington State was defeated, a Washington State Patrol spokesman testified in opposition, saying that "it would be difficult to set and enforce standards for appropriate speeds and conditions for lane splitting." He also said that officials with the California Highway Patrol told him that they wished they had never begun allowing the practice. As of October 2022, the California Highway Patrol has lane splitting tips on its website. Similar guidelines were posted by the California Department of Motor Vehicles, but those guidelines were subsequently removed.

Safety: Safety aspects California's DMV handbook for motorcycles advises caution regarding lane splitting: "Vehicles and motorcycles each need a full lane to operate safely and riding between rows of stopped or moving vehicles in the same lane can leave you vulnerable. A vehicle could turn suddenly or change lanes, a door could open, or a hand could come out the window."
The Oxford Systematics report commissioned by VicRoads, the traffic regulating authority in Victoria, Australia, found that for motorcycles filtering through stationary traffic "[n]o examples have yet been located where such filtering has been the cause of an incident." In the United Kingdom, Motorcycle Roadcraft, the police riding manual, is explicit about the advantages of filtering but also states that the "...advantages of filtering along or between stopped or slow moving traffic have to be weighed against the disadvantages of increased vulnerability while filtering". After discussing the pros and cons at great length, motorcycle safety expert David L. Hough ultimately argues that a rider, given the choice to legally lane split, is probably safer doing so than remaining stationary in a traffic jam. However, Hough has not gone on record as favoring changing the law in jurisdictions where it is not permitted, in contrast to his public education and legislative efforts in favor of rider training courses and helmet use. A literature review of lane-sharing by the Oregon Department of Transportation notes that "a potential safety benefit is increased visibility for the motorcyclist. Splitting lanes allows the motorcyclist to see what the traffic is doing ahead and be able to proactively maneuver." However, the review was limited, and "benefits were often cited in motorcyclist advocacy publications and enthusiast articles."

Legal status: Lane splitting is controversial in the United States, and is sometimes an issue in other countries. This debate includes whether or not it is legal, whether or not it should be legal, and whether or not riders should lane split even where it is permitted. A frequently asked question by motorcyclists is "Is lane splitting legal?"

Legal status in Australia In Australia, a furor erupted when the transport authorities decided to consolidate and clarify the disparate set of laws that collectively made lane splitting illegal. Because of the very opacity of the laws they were attempting to clarify, many Australians had actually believed that lane splitting was legal, and they had been practicing it for as long as they had been riding. They interpreted the action as a move to change the law to make lane splitting illegal. Because of the volume of public comment opposed to this, the authorities decided to take no further action, and so the situation remained as it was until 1 July 2014, when New South Wales made filtering and lane splitting legal under strict conditions. On 1 February 2015, similar relaxations were introduced in Queensland.

Legal status: Legal status in the European Union In most of the European Union lane splitting is legal, and in a number of countries, such as France, Italy, Spain or the Netherlands, it is even expected. Depending on the country there can be some restrictions - for example, in Germany it is legal only when the car traffic is slow or stationary - or it might be outright illegal but tolerated to a degree, such as in Slovakia. In Poland the legal situation is somewhat complex, as lane splitting is not specifically legalised, but not banned either. All of the traffic laws that regulate typical overtaking apply even when lane splitting; notably, it cannot be done in places where overtaking is forbidden by the lane markings (double center line) or other traffic signs, a vehicle (even a single-track one) cannot drive on the center line itself, and it has to keep a safe distance from other road users.
Legal status: Legal status in the Philippines The Land Transportation Office, through Administrative Order No. 15, series of 2008, prohibits motorcycles from lane splitting along public roads and highways in the Philippines. The order, however, does not include provisions to penalize riders for doing so. Meanwhile, there are no laws prohibiting bicycles or other non-motorized vehicles from lane splitting on roads. Legal status: A bill was filed in the 19th Congress by Pangasinan 5th district representative Ramon Guico Jr., who initially filed it in the 18th Congress in September 2019. The bill proposes to ban motorcycles and motorized tricycles from lane splitting except when overtaking, defining lane splitting as a motorcycle or motorized tricycle stopping or passing through vehicles in traffic on a broken white line. The proposed penalties range from ₱1,500 to ₱5,000 in fines, along with revocation of the violator's license. Legal status: Legal status in Taiwan In Taiwan, no local traffic laws prohibit lane splitting for motorcycles under 250 cc unless they ride outside motorcycle lanes or fail to maintain a safe distance. For motorcycles over 250 cc, which are defined as "large heavy motorcycles" (大型重型機車) and are subject to the same regulations as small cars under local traffic laws, lane splitting is illegal and can be penalized with fines from NT$3,000 to NT$6,000. However, court decisions have allowed lane splitting for large heavy motorcycles when filtering. Legal status: Legal status in the US The legal confusion in Australia is not exceptional. In a 2012 California survey, 53 percent of non-motorcycle drivers thought that lane splitting was legal. At the time, there was no specific traffic law in California that addressed lane splitting. No legal prohibition of an action generally means that the action is lawful; however, there are other U.S. states in which there are no traffic laws explicitly prohibiting lane splitting, but officials rely on other laws to regularly interpret lane splitting as unlawful. For example, New Mexico does not address lane splitting by name, but has language requiring turn signals be used continuously for at least 100 ft (30 m) before changing lanes, as well as other codes which may be cited by an officer. Many other states have derived identical codes from the Uniform Vehicle Code. Lane splitting was legally defined for the first time in California by a bill signed into law in August 2016. The new law established a definition of lane splitting while making no mention of whether, or under what circumstances, it is allowed. It also permits the California Highway Patrol, in consultation with government and interest groups, to establish educational guidelines about lane splitting. This essentially gives the CHP permission to bring back the FAQ and advice on lane splitting which they published and rescinded in 2009 after one person complained. Sport Rider magazine predicted that "issues are almost certain" due to the law's ambiguity as to what is and is not legal. Cycle World said that while it "is a step in the right direction, the AB 51 Bill doesn't actually do much to clear anything up." Effective January 1, 2017, section 21658.1 was added to the California Vehicle Code and defines lane splitting, which is now explicitly legal in California.
The California Highway Patrol released new lane splitting safety tips on September 27, 2018. Bills to legalize lane splitting have been introduced in state legislatures around the United States over the last twenty years, but none had been enacted before California's. Utah legalized filtering in 2019 and the law went into effect on May 14, 2019. The Utah Department of Public Safety Highway Safety Office created infographics and videos that demonstrated safe and legal filtering. Montana's governor signed SB9, legalizing filtering, in March 2021. The bill went into effect on October 1, 2021. This law permits filtering at up to 20 mph (32 km/h) between vehicles that are stopped or moving at up to 10 mph (16 km/h). Legal status: Arizona's governor signed SB1273 on March 23, 2022, and it will go into effect 90 days after the end of the second regular session of the 55th Arizona State Legislature. This legislation mirrored Utah's bill, permitting motorcycles to travel at up to 15 mph (24 km/h) between completely stopped vehicles on roads with a speed limit of 45 mph (72 km/h) or slower and at least two adjacent lanes in the same direction of travel, as long as the movement could be made safely. Senator Tyler Pace sponsored the bill and Representative Frank Carroll co-sponsored it. The bill received support from ABATE of Arizona, a state motorcyclists' rights organization.
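The Montana and Arizona thresholds described above differ in a detail that is easy to miss in prose: Montana allows filtering past slow-moving traffic, while Arizona requires the surrounding vehicles to be completely stopped. The following minimal Python sketch is illustrative only; the function names and the simple boolean checks are assumptions, the numeric values are the ones quoted in the text, and the real statutes include further conditions (such as the movement being made safely) that cannot be captured this way.

```python
# Illustrative encoding of the quoted statutory thresholds (not legal advice).
def montana_filtering_ok(rider_mph, traffic_mph):
    # SB9: filter at up to 20 mph past vehicles stopped or moving at up to 10 mph.
    return rider_mph <= 20 and traffic_mph <= 10

def arizona_filtering_ok(rider_mph, traffic_mph, speed_limit_mph, lanes_same_dir):
    # SB1273: up to 15 mph, vehicles completely stopped, speed limit 45 mph or
    # slower, at least two adjacent lanes in the same direction of travel.
    return (rider_mph <= 15 and traffic_mph == 0
            and speed_limit_mph <= 45 and lanes_same_dir >= 2)

print(montana_filtering_ok(15, 5))         # True: traffic moving at 5 mph
print(arizona_filtering_ok(15, 0, 45, 2))  # True: traffic stopped
print(arizona_filtering_ok(15, 5, 45, 2))  # False: vehicles must be stopped
```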
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Monthly Shōnen Jump** Monthly Shōnen Jump: Monthly Shōnen Jump (月刊少年ジャンプ, Gekkan Shōnen Janpu, commonly anglicized as MONTHLY JUMP) was a shōnen manga magazine which was published monthly in Japan by Shueisha from 1970 to 2007 under the Jump line of magazines. It was the sister magazine to Weekly Shōnen Jump. History: The Monthly Shōnen Jump magazine started as a spin-off issue of Weekly Jump called Bessatsu Shōnen Jump.The second spin-off issue was called Monthly Shōnen Jump, which caught on and became its own separate independent manga magazine. History: Shōnen manga magazines in Japan in the 1980s focused on bishōjo characters, and Monthly Shōnen Jump stood out due to the many product and toy tie-ins it had during that period and into the 1990s. An off-shoot, Hobby's Jump, was published for 16 issues from 1983 to 1988. Another spin-off Go!Go! Jump was a collaboration between its sister magazine Weekly Jump and Monthly Jump; it was published in 2005 and was only published once. History: On 22 February 2007, Shueisha announced that Monthly Jump would cease publication as of the July issue (on sale June 6, 2007.) Sales had slumped to a third of the magazine's peak, though a new magazine called Jump SQ. took its place on 2 November.In a letter dated 2 May 2007, Shueisha announced that Claymore takes a month break but it, Gag Manga Biyori, Rosario + Vampire, and Tegami Bachi continued in Weekly Shōnen Jump until the start of the magazine Jump SQ. List of titles: Titles with ☆ were transferred to Shueisha's Jump Square. The magazine's longest running manga were: Kattobi itto (Motoki Monma), Wataru Ga Pyun! (Tsuyoshi Nakaima) and Eleven (Taro Nami, Hiroshi Takahashi) Last series Rosario + Vampire Claymore☆ Tegami Bachi☆ Sheisen no Shachi Gag Manga Biyori Passacaglia Op.7 Étoile Blue Dragon ST Buttobi Itto Kurohime Mr. Perfect DohRan Surebrec -Nora the 2nd- Kuroi Love Letter Mizu Cinema Past series Kia Asamiya Steam Detectives (Moved to Ultra Jump at the magazines start.) Kazunari Kakei Nora: The Last Chronicle of Devildom Koji Kousaka Sutobasu Yarō Shō Yūichi Agarie & Kenichi Sakura Kotokuri ★ Dragon Drive Hiroshi Aro Sherriff Futaba-kun Change! Rin Hirai Legendz Akira Toriyama Neko Majin Z Hiroyuki Asada I'll Takehiko Inoue Buzzer Beater Akio Chiba Captain Kōichi Endo Shinigami-kun Fumihito Higashitani Kuroi Love Letter ★ Daisuke Higuchi Go Ahead Shotaro Ishinomori Cyborg 009 Yūko Ishizuka Anoa no Mori ★ Bibiko Kurowa Gentō Club Gatarō Man Jigoku Kōshien Kōsuke Masuda Gag Manga Biyori ★☆ Takayuki Mizushina Uwa no Sora Chūihō Akira Momozato Guts Ranpei Motoki Monma Kattobi Itto Go Nagai Kekko Kamen Maboroshi Panty (written by Yasutaka Nagai) Keiji Nakazawa I Saw It (published in America by EduComics) Tarō Nami & Hiroshi Takahashi Eleven Riku Sanjo & Koji Inada Beet the Vandel Buster ★ (Moved to Jump Square Crown) Ami Shibata Ayakashi Tenma Yoshihiro Takahashi Shiroi Senshi Yamato Kikuhide Tani & Yoshihiro Kuroiwa Zenki Osamu Tezuka Astro Boy 1985 e no Tabidachi Godfather no Musuko Grotesque e no Shōtai Inai Inai Bā Norihiro Yagi Angel Densetsu Claymore ★☆ Katakura Masanori Kurohime★☆
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Liquid-crystal polymer** Liquid-crystal polymer: Liquid crystal polymers (LCPs) are polymers with liquid-crystalline properties, usually containing aromatic rings as mesogens. Besides uncrosslinked LCPs, polymeric materials like liquid crystal elastomers (LCEs) and liquid crystal networks (LCNs) can exhibit liquid crystallinity as well. They are both crosslinked LCPs but have different crosslink densities. They are widely used in the digital display market. In addition, LCPs have unique properties like thermal actuation, anisotropic swelling, and soft elasticity. Therefore, they can be good actuators and sensors. One of the best-known and earliest applications for LCPs is Kevlar, a strong but light fiber with wide applications, notably bulletproof vests. Background: Liquid crystallinity in polymers may occur either by dissolving a polymer in a solvent (lyotropic liquid-crystal polymers) or by heating a polymer above its glass or melting transition point (thermotropic liquid-crystal polymers). Liquid-crystal polymers are present in melted/liquid or solid form. In solid form the main example of lyotropic LCPs is the commercial aramid known as Kevlar. The chemical structure of this aramid consists of linearly substituted aromatic rings linked by amide groups. In a similar way, several series of thermotropic LCPs have been commercially produced by several companies (e.g., Vectra / Celanese). Background: A high number of LCPs, produced in the 1980s, displayed order in the melt phase analogous to that exhibited by nonpolymeric liquid crystals. Processing of LCPs from liquid-crystal phases (or mesophases) gives rise to fibers and injected materials having high mechanical properties as a consequence of the self-reinforcing properties derived from the macromolecular orientation in the mesophase. Today, LCPs can be melt-processed on conventional equipment at high speeds with excellent replication of mold details. In fact, the ease of forming LCPs is an important competitive advantage against other plastics, as it offsets high raw material costs. The class of polar and bowlic LCPs, with unique properties and important potential applications, remains to be exploited. Mesophases: Like small-molecule liquid crystals, liquid crystal polymers also have different mesophases. The mesogen cores of the polymers will aggregate into different mesophases: nematics, cholesterics, smectics, and compounds with highly polar end groups. More information about the mesophases can be found on the liquid crystal page. Classification: LCPs are categorized by the location of the liquid crystal cores. Main chain liquid crystal polymers (MCLCPs), as the name indicates, have liquid crystal cores in the main chain. In contrast, side chain liquid crystal polymers (SCLCPs) have pendant side chains containing the liquid crystal cores. The basic structures for these two kinds of LCPs are shown in the figure. Classification: Main chain LCP Main chain LCPs have rigid, rod-like mesogens in the polymer backbones, which indirectly leads to the high melting temperatures of this kind of LCP. To make this kind of polymer easier to process, different methods are applied to lower the transition temperature: (1) introducing flexible sequences; (2) introducing bends or kinks; (3) adding substituent groups to the aromatic mesogens. Classification: Side Chain LCP In side chain LCPs, the mesogens are in the polymer side chains.
The mesogens usually are linked to the backbones through flexible spacers (although for a few LCPs, the side chains link directly to the backbones). If the mesogens are directly linked to the backbones, the coil-like conformation of the backbones will impede the mesogens from forming an orientational structure. However, by introducing flexible spacers between the backbones and the mesogens, the ordering of mesogens can be decoupled from the conformation of the backbones. Classification: Because of researchers' efforts, more and more LCPs of different structures have been synthesized. Therefore, Latin letters are used to help classify LCPs. Mechanism: Mesogens in LCPs can self-organize to form liquid crystal regions under different conditions. Based on the mechanism of aggregation and ordering, LCPs can be roughly divided into two subcategories as shown below. However, the distinction is not rigidly defined. LCPs can be transformed into liquid crystals by more than one method. Mechanism: Lyotropic systems Lyotropic main chain LCPs have rigid mesogen cores (like aromatic rings) in the backbones. This kind of LCP forms liquid crystals due to its rigid chain conformation, not merely the aggregation of mesogen cores. Because of the rigid structure, a strong solvent is needed to dissolve the lyotropic main chain polymers. When the concentration of the polymers reaches a critical concentration, the mesophases begin to form and the viscosity of the polymer solution begins to decrease. Lyotropic main chain LCPs have been mainly used to generate high-strength fibers such as Kevlar. Mechanism: Side chain LCPs usually consist of both hydrophobic and hydrophilic segments. Usually, the side chain ends are hydrophilic. When they are dissolved in water, micelles will form due to hydrophobic forces. If the volume fraction of the polymers exceeds the critical volume fraction, the micellar aggregates will pack to form a liquid crystal structure. As the concentration varies above the critical volume fraction, the liquid crystal generated may have different packing arrangements. Temperature, the stiffness of the polymers, and the molecular weight of the polymers can affect the liquid crystal transformation. Lyotropic side chain LCPs, such as alkyl polyoxyethylene surfactants attached to polysiloxane polymers, may be used in personal care products such as liquid soap. Mechanism: Thermotropic systems The study of thermotropic LCPs was inspired by the success of the lyotropic LCPs. This kind of LCP is only processable when the melting temperature is far below the decomposition temperature. Above the melting temperature and glass transition temperature and below the clearing point, the thermotropic LCPs will form liquid crystals. Above the clearing point, the melt becomes isotropic and clear again. What is different from small-molecule liquid crystals is that frozen liquid crystal order can be obtained by quenching the liquid crystal polymers below the glass transition temperature. Moreover, copolymerization can be used to adjust the melting temperature and mesophase temperature. Mechanism: There are other systems, such as phototropic systems. Liquid crystal elastomers (LCEs): Finkelmann first proposed LCEs in 1981. LCEs have attracted attention from researchers and industry. LCEs can be synthesized both from polymeric precursors and from monomers. LCEs can respond to heat, light, and magnetic fields.
Nanomaterials can be introduced into LCE matrices (LCE-based composites) to provide different properties and tailor LCEs' ability to respond to different stimuli. Liquid crystal elastomers (LCEs): Applications LCEs have many applications. For example, LCE films can be used as optical retarders due to their anisotropic structure. Because they can control the polarization state of transmitted light, they are commonly used in 3D glasses, patterned retarders for transflective displays, and flat panel LC displays. Modifying LCEs with azobenzene allows them to show light-responsive properties. This can be applied to controlled wettability, autonomous lenses, and haptic surfaces. Besides the display application, research has focused on other interesting properties such as their thermally and photogenerated macroscale mechanical responses, which means they can be good actuators. LCEs are used to make actuators and artificial muscles for robotics. They have been studied for use as lightweight energy absorbers, with potential applications in helmets, body armor, and vehicle bumpers, using multi-layered, tilted beams of LCE sandwiched between stiff supporting structures. Liquid crystal elastomers (LCEs): Synthesis Polymeric precursors LCEs synthesized from polymeric precursors can be divided into two subcategories: Poly(hydrosiloxane): A two-step crosslinking technique is applied to derive LCEs from poly(hydrosiloxane). Poly(hydrosiloxane) is mixed with a monovinyl-functionalized liquid crystalline monomer, a multifunctional vinyl crosslinker, and a catalyst. This mixture is used to generate a weakly crosslinked gel, in which the monomers are linked to the poly(hydrosiloxane) backbones. During the first crosslinking step or shortly after that, orientation is introduced into the mesogen cores of the gel with mechanical alignment methods. After that, the gel is dehydrated and the crosslinking reaction is completed. Therefore, the orientation is kept in the elastomer by crosslinking. In this way, highly ordered side chain LCEs can be produced, which are also called single-crystal or monodomain LCEs. Liquid crystal elastomers (LCEs): LCPs: With LCPs as precursors, a similar two-step method can be applied. Aligned LCPs mixed with multifunctional crosslinkers directly generate LCEs. The mixture is first heated until isotropic. Fibers are drawn from the mixture and then crosslinked, so the orientation can be trapped in the LCE. However, this route is limited by the difficulty of processing caused by the high viscosity of the starting material. Liquid crystal elastomers (LCEs): Low molar mass monomers Liquid crystal low molar mass monomers are mixed with crosslinkers and catalysts. The monomers can be aligned and then polymerized to keep the orientation. One advantage of this method is that the low molar mass monomers can be aligned not only by mechanical alignment, but also by diamagnetic, dielectric, or surface alignment. For example, thiol-ene radical step-growth polymerization and Michael addition produce well-ordered LCEs. This is also a good way to synthesize moderately to densely crosslinked glassy LCNs. Liquid crystal elastomers (LCEs): The main difference between LCEs and LCNs is the crosslink density. LCNs are primarily synthesized from (meth)acrylate-based multifunctional monomers, while LCEs usually come from crosslinked polysiloxanes.
Properties: A unique class of partially crystalline aromatic polyesters based on p-hydroxybenzoic acid and related monomers, liquid-crystal polymers are capable of forming regions of highly ordered structure while in the liquid phase. However, the degree of order is somewhat less than that of a regular solid crystal. Typically, LCPs have high mechanical strength at high temperatures, extreme chemical resistance, inherent flame retardancy, and good weatherability. Liquid-crystal polymers come in a variety of forms, from sinterable high-temperature materials to injection-moldable compounds. LCPs can be welded, though the lines created by welding are a weak point in the resulting product. LCPs have a high Z-axis coefficient of thermal expansion. Properties: LCPs are exceptionally inert. They resist stress cracking in the presence of most chemicals at elevated temperatures, including aromatic or halogenated hydrocarbons, strong acids, bases, ketones, and other aggressive industrial substances. Hydrolytic stability in boiling water is excellent. Environments that deteriorate the polymers are high-temperature steam, concentrated sulfuric acid, and boiling caustic materials. Polar and bowlic LCPs are ferroelectrics, with response times orders of magnitude shorter than those of conventional LCs, and could be used to make ultrafast switches. Bowlic columnar polymers possess long, hollow tubes; with metal or transition metal atoms added into the tube, they could potentially form ultrahigh-Tc superconductors. Uses: Because of their various properties, LCPs are useful for electrical and mechanical parts, food containers, and any other applications requiring chemical inertness and high strength. LCP is particularly good for microwave frequency electronics due to low relative dielectric constants, low dissipation factors, and commercial availability of laminates. Packaging microelectromechanical systems (MEMS) is another area in which LCP has recently gained more attention. The superior properties of LCPs make them especially suitable for automotive ignition system components, heater plug connectors, lamp sockets, transmission system components, pump components, coil forms, sunlight sensors, and sensors for car safety belts. LCPs are also well-suited for computer fans, where their high tensile strength and rigidity enable tighter design tolerances, higher performance, and less noise, albeit at a significantly higher cost. Trade names: LCP is sold by manufacturers under a variety of trade names. These include Zenite, Vectra, and Laperos. Zenite 5145L is a liquid crystal polymer with 45% glass fiber filler, originally developed by DuPont, which is used for injection molded parts with intricate features. Typical uses include electronic packaging, housings, etc. The heat deflection temperature is 290 °C. The Relative Temperature Index (RTI, considering strength but not impact or flexing) is 130 °C. The density is about 1.76 g/cm3. The typical tensile strength at room temperature is 130 MPa (19 ksi). The melting temperature is 319 °C. The Deflection Temperature Under Load (DTUL) is 275 °C.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Doppler parameter** Doppler parameter: The Doppler parameter, or Doppler broadening parameter, usually denoted as $b$, is a parameter commonly used in astrophysics to characterize the width of observed spectral lines of astronomical objects. It is defined as $b = \sqrt{2}\,\sigma$, where $\sigma$ is the one-dimensional velocity dispersion (Draine 2011, p. 58). Given this parameter, the velocity distribution of the line-emitting/absorbing atoms and ions, approximated by a Gaussian, can be rewritten as $p(v) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(v-v_0)^2/2\sigma^2} = \frac{1}{\sqrt{\pi}\,b}\, e^{-(v-v_0)^2/b^2}$, where $p(v)\,dv$ is the probability of the velocity along the line of sight being in the interval $[v, v+dv]$. The line width is also often specified in terms of the FWHM (full width at half maximum), which is $\mathrm{FWHM} = 2\sqrt{\ln 2}\; b \approx 1.665\, b$. Distribution: The Doppler parameters of Lyman-alpha forest absorption lines are in the range 10–100 km s⁻¹, with a median value around 36 km s⁻¹ that decreases with redshift (Kim et al. 1997). Analyses of the HST/COS dataset of low-redshift quasars give a median b parameter of around 33 km s⁻¹ (Danforth et al. 2016, Gaikwad et al. 2017).
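A short Python sketch of the conversions implied by the definitions above; the function names and the example velocity dispersion are assumptions chosen only for illustration.

```python
import math

def doppler_b(sigma):
    """Doppler parameter b = sqrt(2) * sigma, in the same units as sigma."""
    return math.sqrt(2.0) * sigma

def fwhm_from_b(b):
    """FWHM of the Gaussian line profile: 2 * sqrt(ln 2) * b, about 1.665 * b."""
    return 2.0 * math.sqrt(math.log(2.0)) * b

sigma = 25.0           # km/s, hypothetical one-dimensional velocity dispersion
b = doppler_b(sigma)   # about 35.4 km/s, close to the Lyman-alpha forest median quoted above
print(f"b = {b:.1f} km/s, FWHM = {fwhm_from_b(b):.1f} km/s")
```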
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Molecular conductance** Molecular conductance: Molecular conductance (G = I/V), or the conductance of a single molecule, is a physical quantity in molecular electronics. Molecular conductance is dependent on the surrounding conditions (e.g. pH, temperature, pressure), as well as the properties of the measuring device. Many experimental techniques have been developed in an attempt to measure this quantity directly, but theorists and experimentalists still face many challenges. Recently, a great deal of progress has been made in the development of reliable conductance-measuring techniques. These techniques can be divided into two categories: molecular film experiments, which measure groups of tens of molecules, and single-molecule-measuring experiments. Molecular film experiments: Molecular film experiments generally consist of sandwiching a thin layer of molecules between two electrodes which are used to measure the conductance through the layer. Two of the most successful implementations of this concept have been the bulk electrode approach and the use of nanoelectrodes. In the bulk electrode approach, a molecular film is typically immobilized onto one electrode and an upper electrode is brought into contact with it, allowing for a measure of current flow as a function of applied bias voltage. The nanoelectrode class of experiments, by creatively utilizing equipment such as atomic force microscope tips and small-radius wires, is able to perform the same sorts of current-versus-applied-bias measurements, but on a much smaller number of molecules compared to bulk electrodes. For instance, the tip of an atomic force microscope can be used as a top electrode and, given the nano-scale radius of curvature of the tip, the number of molecules measured is drastically cut. The difficulties encountered in these experiments have come mainly in dealing with such thin layers of molecules, which often results in problems with short-circuiting the electrodes. Single-molecule-measurement: More recently, single-molecule-measurement experiments have been developed that are giving experimenters a better look at molecular conductance. These fall into two categories: scanning probe techniques, which involve a fixed electrode, and mechanically formed junction techniques. One example of a mechanically formed junction experiment involves using a movable electrode to make contact with and then pull away from an electrode surface coated with a single layer of molecules. As the electrode is removed from the surface, the molecules that had bonded between the two electrodes begin to detach until eventually one molecule is connected. The atomic-level geometry of the tip-electrode contact has an effect on the conductance and can change from one run of the experiment to the next, so a histogram approach is required. Forming a junction in which the precise contact geometry is known has been one of the main difficulties with this approach. Applications: An important first step toward the goal of building electronic devices on the molecular level is the ability to measure and control the electric current through an individual molecule.
Based on the anticipated continuation of Moore's Law, which is expected to carry the miniaturization of transistors on integrated circuits into the atomic scale within the next 10 to 20 years, this goal of single-molecule-level circuit design is likely to become widespread throughout the semiconductor industry.Other applications focus on the insight provided by these experiments in the area of charge transport, which is a recurrent phenomenon in many chemical and biological processes. This sort of insight gives researchers the ability to read the chemical information stored in a single molecule electronically, which can then be used in a wide variety of chemical and biosensor applications.
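As a rough illustration of the quantity defined above, the sketch below converts a measured current-voltage pair into a conductance and expresses it in units of the conductance quantum. The current and bias values are hypothetical, and the conductance quantum G0 = 2e²/h is a standard reference scale in this field rather than something taken from the article.

```python
# Minimal sketch: G = I / V, expressed in units of the conductance quantum.
E_CHARGE = 1.602176634e-19    # elementary charge, C
PLANCK_H = 6.62607015e-34     # Planck constant, J s
G0 = 2 * E_CHARGE**2 / PLANCK_H   # conductance quantum, about 7.748e-5 S

def molecular_conductance(current_a, bias_v):
    """Conductance in siemens from a current (A) and bias voltage (V)."""
    return current_a / bias_v

# Hypothetical single-molecule junction measurement: 10 nA at 0.1 V bias.
g = molecular_conductance(10e-9, 0.1)        # 1e-7 S
print(f"G = {g:.2e} S = {g / G0:.2e} G0")    # about 1.3e-3 G0
```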
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nitro Pro** Nitro Pro: Nitro PDF Pro is an application used to create and edit Portable Document Format (PDF) files and digital documents. History: Nitro Software was founded in Melbourne, Australia, by a team of three, as an alternative to Adobe Acrobat's PDF software. In 2015, the company reported 1 million licenses sold. In 2018, it launched the Nitro Productivity and eSigning Suite. In 2018, Nitro PDF Pro was used by more than 650,000 businesses. In June 2021, the company purchased PDFPen, a Mac, iPad, and iPhone PDF editing app. Products: Products include a PDF editor, a browser-based application for electronic signatures, and PDF productivity tools. Subscription services include cloud-based user management, deployment, and analytics tools. Nitro also manages several free document conversion sites. The company sunset its PDF reader, Nitro Reader, in 2017. It was claimed that "users can get the same functionality with an expired free trial of Nitro PDF Pro". However, several functions are disabled in an expired trial, including a simple Save/Save As of an existing opened document. This can create additional work (re-downloading and re-saving) if the document was opened into the reader from an external source, and in some cases (paywalls, limited-article access, sign-up or subscription requirements) it can impose additional costs or restrict access altogether. Products: Nitro's desktop products are available on Windows and Mac. Nitro Cloud is compatible with any web browser on any machine. Nitro PDF Pro is proprietary trialware, while Nitro Reader is freeware for both personal and professional use. Nitro version history
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Desiring-production** Desiring-production: Desiring-production (French: production désirante) is a term coined by the French thinkers Gilles Deleuze and Félix Guattari in their book Anti-Oedipus (1972). Overview: Deleuze and Guattari oppose the Freudian conception of the unconscious as a representational "theater", instead favoring a productive "factory" model: desire is not an imaginary force based on lack, but a real, productive force. They describe the machinic nature of desire as a kind of "desiring-machine" that functions as a circuit breaker in a larger "circuit" of various other machines to which it is connected. Meanwhile, the desiring-machine is also producing a flow of desire from itself. Deleuze and Guattari conceptualize a multi-functional universe composed of such machines all connected to each other: "There are no desiring-machines that exist outside the social machines that they form on a large scale; and no social machines without the desiring machines that inhabit them on a small scale." Desiring-production is explosive: "there is no desiring-machine capable of being assembled without demolishing entire social sectors". Overview: The concept of desiring-production is part of Deleuze and Guattari's more general appropriation of Friedrich Nietzsche's formulation of the will to power. In both concepts, a pleasurable force of appropriation of what is outside oneself, incorporating into oneself what is other than oneself, characterizes the essential process of all life. Similarly, a kind of reverse force of "forgetting" in Nietzsche and the body without organs in Deleuze and Guattari disavows the will to power and desiring-production, attempting to realize the ideal of a hermetic subject. Overview: Thenceforth, while very interested by Wilhelm Reich's fundamental question—why did the masses desire fascism?—they criticized his dualist theory leading to a rational social reality on one side, and an irrational desire reality on the other side. Anti-Œdipus was thus an attempt to think beyond Freudo-Marxism; and Deleuze and Guattari tried to do for Freud what Marx had done for Adam Smith. Overview: In his early writings (Machine et Structure), Guattari discussed a machinic-phylum, and talked of the machinic and evasive character of desire; composed of concepts similar to Deleuzian ones (in particular, to series and repetitions), and other structures which show congruence with the ideas developed later, with his collaborator, Gilles Deleuze (see assemblage theory). The subject for these early writings was situated at the border of determined machines at one end, and a field of undetermined structures on the other side, composed of becomings, and technological advancement and re-emergences. Desiring Machines have nothing to do with the Oedipus, and nothing to do with the discourses or methods of the psychoanalytic neurotic. Desiring production is a primary and transcendental (in the immanent or Kantian sense) and virtual process of the perpetual emergence of corporeal, and incorporeal relations, which develop and emerge from real genetic, organic, and anorganic histories, social machines, and contingent worlds or "modes" of desiring production. Desiring machines are breaks, or sudden stops, in a pool or field of flows. 
However, the desiring-machines are also flows in themselves, which operate at different speeds in comparison to their environment (see dissipative system), and can be considered static in the plane of composition (i.e., in relation to the extraneous flows which it cuts through with the mechanisms of the passive syntheses). However, in the plane of consistency, all the desiring machines are flowing, and all the flows are consistent (and active) and are found as continuous gradients of velocities (or forces) and capacities (or intensities), both relating to Nietzschean metaphysics, and to the physical interlude in Ethics. The desiring-machines have no object, nor subject, and they produce in flows which are beyond systematicity, and thus, reject the double-articulated, representational semiosis of subjectivity, as present in Lacan, and as dormant in Freud, Hegel, and Kant. "The power of the machine, is that one cannot ultimately distinguish the unconscious subject of desire from the order of the machine itself."Desiring-machines participate in events of convergence, where partial objects and BwOs are conjoined upon an amorphous lattice of codes (milieus and strata), and an apparent counter-flow of decoding (deterritorialisation and territories), producing lineages and multiplicities of gears, events, and productive elements, regardless of whether such parts are concordant or discordant; positive or negative. Instead, production is an unyielding and affirmative process of connection (as opposed to teleology, dialectics, essentialism, and hylomorphism, etc.,) and then the resultant divisions and quotients which emerge in relation to BwOs.Deleuze and Guattari also discuss other more resistant machines, such as Bachelor machines, Miraculating machines, Celibate machines, and Paranoiac machines, which all have specific relations to a socius (the Paranoiac and the Miraculating machine), or to a collective apparatus of strata (the Bachelor and Celibate machines, as exemplified in Kafka).Hardt has suggested that Desiring-production is a social or cosmological ontology. Foucault, however, has suggested against using such a model for general and systematic claims.Published in the same year as Anti-Œdipus, Guy Hocquenghem's Homosexual Desire re-articulated desiring-production within the emergent field of queer theory. Sources: Deleuze, Gilles and Félix Guattari. 1972. Anti-Œdipus. Trans. Robert Hurley, Mark Seem and Helen R. Lane. London and New York: Continuum, 2004. Vol. 1 of Capitalism and Schizophrenia. 2 vols. 1972-1980. Trans. of L'Anti-Oedipe. Paris: Les Editions de Minuit. ISBN 0-8264-7695-3. ---. 1980. A Thousand Plateaus. Trans. Brian Massumi. London and New York: Continuum, 2004. Vol. 2 of Capitalism and Schizophrenia. 2 vols. 1972-1980. Trans. of Mille Plateaux. Paris: Les Editions de Minuit. ISBN 0-8264-7694-5. Guattari, Félix. 1984. Molecular Revolution: Psychiatry and Politics. Trans. Rosemary Sheed. Harmondsworth: Penguin. ISBN 0-14-055160-3. ---. 1995. Chaosophy. Ed. Sylvère Lotringer. Semiotext(e) Foreign Agents Ser. New York: Semiotext(e). ISBN 1-57027-019-8. ---. 1996. Soft Subversions. Ed. Sylvère Lotringer. Trans. David L. Sweet and Chet Wiener. Semiotext(e) Foreign Agents Ser. New York: Semiotext(e). ISBN 1-57027-030-9. Hocquenghem, Guy. 1972. Homosexual Desire. Trans. Daniella Dangoor. 2nd ed. Series Q Ser. Durham: Duke UP, 1993. ISBN 0-8223-1384-7. Massumi, Brian. 1992. A User's Guide to Capitalism and Schizophrenia: Deviations from Deleuze and Guattari. Swerve editions. 
Cambridge, USA and London: MIT. ISBN 0-262-63143-1. Holland, Eugene W. 1999. Deleuze and Guattari's Anti-Oedipus: Introduction to Schizoanalysis. New York: Routledge. ISBN 978-0-415-11318-2.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Carlos A. Felippa** Carlos A. Felippa: Carlos A. Felippa is a professor of Aerospace Engineering Sciences at the University of Colorado. His research at Colorado concerns aerospace structures and structural analysis, with special interests in coupled field problems: elastoacoustics, aeroelasticity, control-structure interaction, thermomechanics and electrothermomechanics. Biography: Felippa studied civil engineering at the Universidad Nacional de Córdoba (Ing. Civ. 1963) and the University of California, Berkeley (M.S. 1964 and Ph.D. 1966). After working at Boeing and Lockheed, he took a position at the University of Colorado in 1986, where from 1989 to 1991 he directed the Center for Space Structures and Controls. Selected publications: Professor Felippa has over 150 publications in refereed journals, conference proceedings, and book chapters. C. A. Felippa and K. C. Park, A direct flexibility method, Computer Methods in Applied Mechanics and Engineering, 149, 319–337, 1997. C. A. Felippa, Recent Developments in Parametrized Variational Principles for Mechanics, Computational Mechanics, 18, No. 3, 159–174, 1996. C. A. Felippa, L. A. Crivelli and B. Haugen, A Survey of the Core-Congruential Formulation for Nonlinear Finite Elements, Archives of Computational Methods in Engineering, 1, pp. 1–48, 1994. C. A. Felippa, Parametrized Variational Principles and Applications, Science and Perspectives in Mechanics, ed. by B. Nayroles, J. Etay and D. Renouard, ENS Grenoble, Grenoble, France, 1994, pp. 1–42. K. C. Park and C. A. Felippa, Partitioned Analysis of Coupled Systems, Chapter 3 in Computational Methods for Transient Analysis, T. Belytschko and T. J. R. Hughes, eds., North-Holland, Amsterdam–New York, 1983.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LifeCam** LifeCam: The LifeCam is a lineup of webcams from Microsoft for PC users marketed since 2006. Various models and series of webcams are designed for either laptops or desktops. VX Series: History The VX series of Microsoft LifeCam debuted on 13 June 2006, and was available for sale in August 2006 as Microsoft's first entry into selling webcams, with only two models, the VX-3000 and VX-6000. One of the exclusive features is integration with Windows Live Messenger, through a Windows Live Call button that can be used to easily initiate a video call. VX Series: These models both feature a similar round body design, with a round base, known as the Universal Attachment Base. The entire series was designed for desktop use, as it used a base for attaching it to a desktop monitor. All webcams of this series interface via USB. Specifications NX Series: All webcams of the NX series are designed for notebooks and interface via USB 2.0. Specifications LifeCam Show: 2.0 megapixel sensor 8.0 megapixel still images (interpolated) 21-60" focused depth of field (fixed focus) 71-degree wide angle lens Comes with many attachments: Magnet disc, clip, and its own stand—it can be used on both desktop and laptop computers. Released: Sep 2008 LifeCam Cinema: 720p 16:9 1/4" OmniVision OV9712 Sensor 5.0 megapixel still images (interpolated) Autofocus Built-in microphone 73.5-degree wide angle glass lens 4"-∞ depth of field (autofocus) Released: Aug 2009 LifeCam Studio: 1080p 16:9 Sensor 8.0 megapixel still images (interpolated) Autofocus Built-in microphone 75-degree wide angle glass lens Tripod thread Released: Sep 2010 HD Series: This series debuted in March 2010 and uses Microsoft's TrueColor technology for improved color balance. They all have 720p HD video resolution and have built-in microphones. Specifications
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nicotine poisoning** Nicotine poisoning: Nicotine poisoning describes the symptoms of the toxic effects of nicotine following ingestion, inhalation, or skin contact. Nicotine poisoning can potentially be deadly, though serious or fatal overdoses are rare. Historically, most cases of nicotine poisoning have been the result of use of nicotine as an insecticide. More recent cases of poisoning typically appear to be in the form of Green Tobacco Sickness, or due to unintended ingestion of tobacco or tobacco products or consumption of nicotine-containing plants.Standard textbooks, databases, and safety sheets consistently state that the lethal dose of nicotine for adults is 60 mg or less (30–60 mg), but there is overwhelming data indicating that more than 500 mg of oral nicotine is required to kill an adult.Children may become ill following ingestion of one cigarette; ingestion of more than this may cause a child to become severely ill. The nicotine in the e-liquid of an electronic cigarette can be hazardous to infants and children, through accidental ingestion or skin contact. In some cases children have become poisoned by topical medicinal creams which contain nicotine.People who harvest or cultivate tobacco may experience Green Tobacco Sickness (GTS), a type of nicotine poisoning caused by skin contact with wet tobacco leaves. This occurs most commonly in young, inexperienced tobacco harvesters who do not consume tobacco. Signs and symptoms: Nicotine poisoning tends to produce symptoms that follow a biphasic pattern. The initial symptoms are mainly due to stimulatory effects and include nausea and vomiting, excessive salivation, abdominal pain, pallor, sweating, hypertension, tachycardia, ataxia, tremor, headache, dizziness, muscle fasciculations, and seizures. After the initial stimulatory phase, a later period of depressor effects can occur and may include symptoms of hypotension and bradycardia, central nervous system depression, coma, muscular weakness and/or paralysis, with difficulty breathing or respiratory failure.From September 1, 2010 to December 31, 2014, there were at least 21,106 traditional cigarette calls to US poison control centers. During the same period, the ten most frequent adverse effects to traditional cigarettes reported to US poison control centers were vomiting (80.0%), nausea (9.2%), drowsiness (7.8%), cough (7.2%), agitation (6.6%), pallor (3.0%), tachycardia (2.5%), diaphoresis (1.5%), dizziness (1.5%), and diarrhea (1.4%). 95% of traditional cigarette calls were related to children 5 years old or less. Most of the traditional cigarette calls were a minor effect.Calls to US poison control centers related to e-cigarette exposures involved inhalations, eye exposures, skin exposures, and ingestion, in both adults and young children. Minor, moderate, and serious adverse effects involved adults and young children. Minor effects correlated with e-cigarette liquid poisoning were tachycardia, tremor, chest pain and hypertension. More serious effects were bradycardia, hypotension, nausea, respiratory paralysis, atrial fibrillation and dyspnea. The exact correlation is not fully known between these effects and e-cigarettes. 58% of e-cigarette calls to US poison control centers were related to children 5 years old or less. E-cigarette calls had a greater chance to report an adverse effect and a greater chance to report a moderate or major adverse effect than traditional cigarette calls. 
Most of the e-cigarette calls were a minor effect.From September 1, 2010 to December 31, 2014, there were at least 5,970 e-cigarette calls to US poison control centers. During the same period, the ten most frequent adverse effects to e-cigarettes and e-liquid reported to US poison control centers were vomiting (40.4%), eye irritation or pain (20.3%), nausea (16.8%), red eye or conjunctivitis (10.5%), dizziness (7.5%), tachycardia (7.1%), drowsiness (7.1%), agitation (6.3%), headache (4.8%), and cough (4.5%).E-cigarette exposure cases in the US National Poison Data System increased greatly between 2010 and 2014, peaking at 3,742 in 2014, fell in 2015 though 2017, and then between 2017 and 2018 e-cigarette exposure cases increased from 2,320 to 2,901. The majority of cases (65%) were in children under age five and 15% were in ages 5–24. Approximately 0.1% of cases developed life-threatening symptoms. Toxicology: The LD50 of nicotine is 50 mg/kg for rats and 3 mg/kg for mice. 0.5–1.0 mg/kg can be a lethal dosage for adult humans, and 0.1 mg/kg for children. However the widely used human LD50 estimate of 0.5–1.0 mg/kg was questioned in a 2013 review, in light of several documented cases of humans surviving much higher doses; the 2013 review suggests that the lower limit causing fatal outcomes is 500–1000 mg of ingested nicotine, corresponding to 6.5–13 mg/kg orally. An accidental ingestion of only 6 mg may be lethal to children.It is unlikely that a person would overdose on nicotine through smoking alone. The US Food and Drug Administration (FDA) stated in 2013: "There are no significant safety concerns associated with using more than one [over the counter] OTC [nicotine replacement therapy] NRT at the same time, or using an OTC NRT at the same time as another nicotine-containing product—including a cigarette." Ingestion of nicotine pharmaceuticals, tobacco products, or nicotine containing plants may also lead to poisoning. Smoking excessive amounts of tobacco has also led to poisoning; a case was reported where two brothers smoked 17 and 18 pipes of tobacco in succession and were both fatally poisoned. Spilling an extremely high concentration of nicotine onto the skin can result in intoxication or even death since nicotine readily passes into the bloodstream following skin contact.The recent rise in the use of electronic cigarettes, many forms of which are designed to be refilled with nicotine-containing "e-liquid" supplied in small plastic bottles, has renewed interest in nicotine overdoses, especially the possibility of young children ingesting the liquids. A 2015 Public Health England report noted an "unconfirmed newspaper report of a fatal poisoning of a two-year old child" and two published case reports of children of similar age who had recovered after ingesting e-liquid and vomiting. They also noted case reports of suicides by nicotine, where adults drank liquid containing up to 1,500 mg of nicotine. They recovered (helped by vomiting), but an ingestion apparently of about 10,000 mg was fatal, as was an injection. They commented that "Serious nicotine poisoning seems normally prevented by the fact that relatively low doses of nicotine cause nausea and vomiting, which stops users from further intake." Four adults died in the US and Europe, after intentionally ingesting liquid. 
Two children, one in the US in 2014 and another in Israel in 2013, died after ingesting liquid nicotine.The discrepancy between the historically stated 60-mg dose and published cases of nicotine intoxication has been noted previously (Matsushima et al. 1995; Metzler et al. 2005). Nonetheless, this value is still widely accepted over the 500 mg figure as the basis for safety regulations of tobacco and other nicotine-containing products (such as the EU wide TPD, set at a maximum of 20 mg/ml). Pathophysiology: The symptoms of nicotine poisoning are caused by effects at nicotinic cholinergic receptors. Nicotine is an agonist at nicotinic acetylcholine receptor which are present in the central and autonomic nervous systems, and the neuromuscular junction. At low doses nicotine causes stimulatory effects on these receptors, however, higher doses or more sustained exposures can cause inhibitory effects leading to neuromuscular blockade.It is sometimes reported that people poisoned by organophosphate insecticides experience the same symptoms as nicotine poisoning. Organophosphates inhibit an enzyme called acetylcholinesterase, causing a buildup of acetylcholine, excessive stimulation of all types of cholinergic neurons, and a wide range of symptoms. Nicotine is specific for nicotinic cholinergic receptors only and has some, but not all of the symptoms of organophosphate poisoning. Diagnosis: Increased nicotine or cotinine (the nicotine metabolite) is detected in urine or blood, or serum nicotine concentrations increase. Treatment: The initial treatment of nicotine poisoning may include the administration of activated charcoal to try to reduce gastrointestinal absorption. Treatment is mainly supportive and further care can include control of seizures with the administration of a benzodiazepine, intravenous fluids for hypotension, and administration of atropine for bradycardia. Respiratory failure may necessitate respiratory support with rapid sequence induction and mechanical ventilation. Hemodialysis, hemoperfusion or other extracorporeal techniques do not remove nicotine from the blood and are therefore not useful in enhancing elimination. Acidifying the urine could theoretically enhance nicotine excretion, although this is not recommended as it may cause complications of metabolic acidosis. Prognosis: The prognosis is typically good when medical care is provided and patients adequately treated are unlikely to have any long-term sequelae. However, severely affected patients with prolonged seizures or respiratory failure may have ongoing impairments secondary to the hypoxia. It has been stated that if a patient survives nicotine poisoning during the first 4 hours, they usually recover completely. At least at "normal" levels, as nicotine in the human body is broken down, it has an approximate biological half-life of 1–2 hours. Cotinine is an active metabolite of nicotine that remains in the blood for 18–20 hours, making it easier to analyze due to its longer half-life.
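The per-kilogram estimates discussed in the toxicology section can be turned into total-dose figures with simple arithmetic. The sketch below assumes a hypothetical 70 kg adult for illustration; the mg/kg ranges are the ones quoted above, and the function name is an arbitrary choice.

```python
# Sketch: convert per-kilogram dose estimates into total ingested doses
# for a hypothetical 70 kg adult (body mass is an assumption).
def total_dose_mg(dose_mg_per_kg, body_mass_kg):
    """Total dose in mg corresponding to a per-kg estimate."""
    return dose_mg_per_kg * body_mass_kg

body_mass = 70.0  # kg, hypothetical adult
estimates = [
    ("traditional LD50 estimate", 0.5, 1.0),   # mg/kg, widely quoted range
    ("2013 review estimate", 6.5, 13.0),       # mg/kg, i.e. roughly 500-1000 mg total
]
for label, low, high in estimates:
    print(f"{label}: {total_dose_mg(low, body_mass):.0f}-"
          f"{total_dose_mg(high, body_mass):.0f} mg")
```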
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wieacker syndrome** Wieacker syndrome: First described in 1985, Wieacker-Wolff syndrome is a rare, slowly progressive genetic disorder present at birth and characterized by deformities of the joints of the feet, muscle degeneration, mild intellectual disability, and an impaired ability to move certain muscles of the eyes, face and tongue. Wieacker syndrome is inherited as an X-linked recessive trait. The condition is characterized by contracture of the lower joints, muscle atrophy, impaired facial muscles, mental retardation, and syndromic facies. Additional symptoms include stiffening of the muscles and joints of the feet, intellectual disabilities, droopy eyelids, crossed eyes, farsightedness, and abnormal curvature of the spine. Depending on a person's genotype, the severity of their symptoms will differ. For example, females tend to have milder signs of the disease, especially when heterozygous. Genetics: Wieacker syndrome is caused by a mutation in ZC4H2 on the X chromosome (Xq13-q21). There are five affected families, each with a different mutation to the ZC4H2 gene. Family one has a substitution of Guanine (G) to Cytosine (C) at base pair position 187. This leads to an amino acid change at position 63, where a leucine replaces a valine. This phenotype is the mildest because the mutation only alters the secondary structure of the protein. This leads to a less stable protein that is still fully functional. Family two has a substitution of a Guanine (G) to an Adenine (A) at base pair position 593, which leads to an amino acid change from an arginine to a glutamine. This phenotype is lethal at ages 0.5 to 8.0 years due to a change in the highly conserved zinc-finger domain which compromises the protein's function. Family three also has a lethal phenotype. Family three has a substitution of a Cytosine (C) to a Thymine (T) at base pair position 601. This leads to an amino acid change at position 201 from a proline to a serine. This phenotype is lethal from ages 1.4 weeks to 13.0 weeks because it alters the highly conserved domain of the protein, which compromises its function. Families 4 and 5 have the same substitution of a Cytosine (C) to Thymine (T) at base pair position 637. This leads to an amino acid change from Arginine to Tryptophan at amino acid position 213. This change occurs near the highly conserved Carboxyl terminus of the protein. This leads to an intermediate phenotype, where females are normal except for a developmental delay, which has been hypothesized to be caused by varying X-chromosome inactivation. Diagnosis: In some instances in the history of the family in which the syndrome was first described, the syndrome was present at birth. The mutations were found by various methods, including whole-genome sequencing, X-chromosome exome sequencing, and direct sequencing of the ZC4H2 gene: all mutations were confirmed by Sanger sequencing and segregated with the disorder in the families. There were three missense mutations and one splice site mutation. The mutations were found by exome sequencing in some of the families. Management: Treatment of Wieacker syndrome is typically supportive and symptomatic due to the limited information physicians have about the disease. Therapies such as physical therapy, surgery, speech therapy, and special education are typically used and have been beneficial for the impacted families.
While these management techniques cannot fully treat the disorder, they can help reduce the negative results of the symptoms and aid the families in living a semi-normal life. Clinical trials are in development regarding the disorder as well, looking to better understand Wieacker syndrome and its possible treatments. Epidemiology: Wieacker syndrome has fewer than 30 confirmed cases; it usually affects males, but some carrier females show mild manifestations of the disorder. As of 2015, the syndrome has been reported in five families. Prevalence and incidence rates are not fully known at this time; reported cases are spread across Germany, France, the Netherlands, Australia, and the United States. In four generations of a Missouri kindred, Miles and Carpenter (1991) observed three brothers and a male cousin with mental retardation in association with exotropia, microcephaly, distal muscle wasting, and 10 low digital arches. Six women who might represent heterozygotes were found to have 8 to 10 low digital arches; 5 of these women had exotropia. A second family was found with a similar, but more severe phenotype. Affected individuals presented with neonatal respiratory distress, arthrogryposis multiplex congenita, muscle weakness, and ptosis, suggesting dysfunction of neuromuscular transmission in utero. A third family identified by Hirata et al. (2013) had previously been reported by Hennekam et al. (1991). That family had five affected males in three sibships connected through females. Affected males had severe arthrogryposis and muscle weakness in the pre- and postnatal periods, resulting in death within the first weeks or months of life. The one surviving boy had severely impaired intellectual development. Epidemiology: May et al. (2015) reported three previously unreported families with X-linked syndromic mental retardation. There were 10 affected males and 10 carrier females. There was phenotypic variability between the families, but all male patients had impaired intellectual development. Frints et al. (2019) reported 11 males from 6 unrelated families (families 1, 4-6, 9, and 19) with WRWF. Two additional male patients (families 18 and 24) were sporadic cases with de novo missense variants. Some affected fetuses showed signs of the disorder in utero: these included clubfoot or rocker bottom feet, fetal hypo/akinesia, contractures, AMC, and nuchal edema.
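For reference, the ZC4H2 variants described in the genetics section above can be collected into a small data structure. The sketch below uses only the positions and substitutions stated in the text (no amino acid position is given for family two, so none is shown), and the field and class names are arbitrary choices for illustration.

```python
# Reference-only sketch of the ZC4H2 variants described above.
from dataclasses import dataclass

@dataclass
class Zc4h2Variant:
    families: str
    nucleotide_change: str   # base pair position and substitution, as quoted
    protein_change: str
    reported_phenotype: str

VARIANTS = [
    Zc4h2Variant("1",       "187 G>C", "Val63Leu",  "mild; protein less stable but functional"),
    Zc4h2Variant("2",       "593 G>A", "Arg>Gln",   "lethal at 0.5-8.0 years"),
    Zc4h2Variant("3",       "601 C>T", "Pro201Ser", "lethal at 1.4-13.0 weeks"),
    Zc4h2Variant("4 and 5", "637 C>T", "Arg213Trp", "intermediate"),
]

for v in VARIANTS:
    print(f"Family {v.families}: {v.nucleotide_change} ({v.protein_change}) - {v.reported_phenotype}")
```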
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Laboratory Syrian hamster** Laboratory Syrian hamster: Syrian hamsters (Mesocricetus auratus) are one of several rodents used in animal testing. Syrian hamsters are used to model human medical conditions including various cancers, metabolic diseases, non-cancer respiratory diseases, cardiovascular diseases, infectious diseases, and general health concerns. In 2014, Syrian hamsters accounted for 14.6% of the total animal research participants in the United States covered by the Animal Welfare Act. Use in research: Since 1972 the use of hamsters in animal testing research has declined. In 2014 in the United States, animal research used about 120,000 hamsters, which was 14.6% of the total research animal use (under the Animal Welfare Act which excludes mice, rats, and fish) for that year in that country. According to the Canadian Council for Animal Care, a total of 1,931 hamsters were used for research in 2013 in Canada, making them the sixth-most popular rodent after mice (1,233,196), rats (228,143), guinea pigs (20,687), squirrels (4,446) and voles (2,457). Human medical research: Cancer research Humans get lung cancer from tobacco smoking. Syrian hamsters are a model for researching Non-small-cell lung carcinoma, which is one of the types of human lung cancer. In research, when hamsters are injected with the carcinogen NNK several times over six months, they will develop that sort of cancer. In both Syrian hamsters and humans, this cancer is associated with mutations to the KRAS gene. For various reasons, collecting data on the way that Syrian hamsters develop this lung cancer provides insight on how humans develop it.Oral squamous-cell carcinoma is a common cancer in humans and difficult to treat. Scientists studying this disease broadly accept Syrian hamsters as animal models for researching it. In this research, the hamster is given anesthesia, has its mouth opened to expose the inside of its cheeks, and the researcher brushes the carcinogen DMBA on its cheeks. The scientist can take cell samples from the mouth of the hamster to measure the development of the cancer. This process has good reproducibility. The cancer itself develops tumors in a predictable way starting with hyperkeratosis, then hyperplasia, then dysplasia, then carcinoma. In humans with this cancer there is increased ErbB2 production of receptor tyrosine kinase and Syrian hamsters with this cancer also have increased levels of that kinase. As the tumor develops in the hamster, they also have increased gene expression in p53 and c-myc which is similar to human cancer development. Because hamsters develop this cancer so predictably, researchers are comfortable in using hamsters in research on prevention and treatment.There is scientific and social controversy about the virus SV40 causing cancers in human. Leaving that controversy aside, Syrian hamsters injected with SV40 certainly will develop various cancers in predictable ways depending on how they are exposed to the virus. The hamster has been used as a research model to clarify what SV40 does in humans.The golden hamster can contract contagious reticulum cell sarcoma which can be transmitted from one golden hamster to another by means of the bite of the mosquito Aedes aegypti. Human medical research: Metabolic disorders Syrian hamsters are susceptible to many metabolic disorders which affect humans. 
Because of this, hamsters are an excellent animal model for studying human metabolic disorders. Gallstones may be induced in Syrian hamsters by giving the hamster excess dietary cholesterol or sucrose. Hamsters metabolize cholesterol in a way that is similar to humans. Different sorts of fats are more or less likely to produce gallstones in hamsters. The gender differences in gallstone formation in hamsters are significant. Hamsters of different genetic strains have significant differences in susceptibility to forming gallstones. Diabetes mellitus is studied in various ways using Syrian hamsters. Hamsters which are fed fructose for 7 days get hyperinsulinemia and hyperlipidemia. Such hamsters then have an increase in hepatic lipase and other measurable responses which are useful for understanding diabetes in humans. Streptozotocin or alloxan may be administered to induce chronic diabetes in hamsters. Atherosclerosis may be studied with Syrian hamsters because the two species have similar lipid metabolism. Hamsters develop atherosclerosis as a result of dietary manipulation. Hamsters develop atherosclerotic plaques as humans do. Human medical research: Non-cancer respiratory disease: Smoke inhalation can be studied on Syrian hamsters by putting the hamster in a laboratory smoking machine. Pregnant hamsters have been used to model the effects of smoking on pregnant humans. The emphysema component of COPD may be induced in hamsters by injecting pancreatic elastase into their tracheas. Pulmonary fibrosis may be induced in hamsters by injecting bleomycin into their tracheas. Human medical research: Cardiovascular: Cardiomyopathy in hamsters is an inherited condition, and there are genetic lines of hamsters which are bred to retain this gene so that they may be used to study the disease. Microcirculation may be studied in hamster cheek pouches. The pouches of hamsters are thin, easy to examine without stopping bloodflow, and highly vascular. When examined, the cheek pouch is pulled through the mouth while being grasped with forceps. At this point the cheek is everted and can be pinned onto a mount for examination. Reperfusion injury may be studied with everted hamster pouches also. To simulate reperfusion, one method is to tie a cuff around the pouch to restrict blood flow and cause ischemia. Another method could be to compress the veins and arteries with microvascular clips which do not cause trauma. In either case, after about an hour of restricting the blood, the pressure is removed to study how the pouch recovers. Several inbred strains of hamsters have been developed as animal models for human forms of dilated cardiomyopathy. The gene responsible for hamster cardiomyopathy in a widely studied inbred hamster strain, BIO14.6, has been identified as being delta-sarcoglycan. Pet hamsters are also potentially prone to cardiomyopathy, which is a not infrequent cause of unexpected sudden death in adolescent or young adult hamsters. Human medical research: Infection research: Syrian hamsters have been infected with a range of disease-causing agents to study both the disease and the cause of the disease. Human medical research: Hantavirus pulmonary syndrome is a medical condition in humans caused by any of the Hantavirus species. Syrian hamsters easily contract Hantavirus species, but they do not get the same symptoms as humans, and the same infection that is deadly in humans has effects ranging from none to flu-like illness to death in Syrian hamsters.
Because hamsters become easily infected, they are used to study the pathogenesis of Hantavirus. Andes virus and Maporal viruses infect hamsters and cause pneumonia and edema. The Sin Nombre virus and Choclo virus will infect hamsters but not cause any disease. SARS coronavirus causes severe acute respiratory syndrome in humans. Syrian hamsters may be infected with the virus, and like humans will have viral replication and lesions in the respiratory tract which can be examined with histopathological tests. However, hamsters do not develop clinical symptoms of the disease. Hamsters might be used to study the infection process. Leptospira bacteria cause leptospirosis in humans and similar symptoms in Syrian hamsters. Syrian hamsters are used to test drugs to treat the disease. Bacteria which have been studied by infecting Syrian hamsters with them include Leptospira, Clostridium difficile, Mycoplasma pneumoniae, and Treponema pallidum. Parasites which have been studied by infecting Syrian hamsters with them include Toxoplasma gondii, Babesia microti, Leishmania donovani, Trypanosoma cruzi, Opisthorchis viverrini, Taenia, Ancylostoma ceylanicum, and Schistosoma. Syrian hamsters are infected with scrapie so that they get transmissible spongiform encephalopathy. Human medical research: In March 2020, researchers from the University of Hong Kong showed that Syrian hamsters could be a model organism for COVID-19 research. Other medical conditions: Scientists use male hamsters to study the effects of steroids on male behavior. The behavior of castrated hamsters is compared to typical male hamsters. Castrated hamsters are then given steroids and their behavior noted. Some steroid treatments will cause castrated hamsters to perform behaviors that typical male hamsters do. Poor nutrition may cause female infertility in mammals. Human medical research: When hamsters do not have enough of the right food, they have fewer estrous cycles. Studies in hamsters identify the nutritional needs for maintaining fertility. Syrian hamsters are used to study how NSAIDs can cause reactive gastropathy. One way to study this is to inject hamsters with indometacin, which causes an ulcer within 1–5 hours depending on the dose. If repeatedly given doses, hamsters get severe lesions and die within 5 days from peptic ulcers in their pyloric antrum. A model for creating a chronically ill hamster which will not die from the ulcers is to give naproxen by gavage. When the hamster is chronically ill, it can be used to test anti-ulcer drugs. Syrian hamsters are also widely used in research into alcoholism, by virtue of their large livers and ability to metabolise high doses. Research on Syrian hamsters themselves: In captivity, golden hamsters follow well-defined daily routines of running in their hamster wheel, which has made them popular subjects in circadian rhythms research. For example, Martin Ralph, Michael Menaker, and colleagues used this behavior to provide definitive evidence that the suprachiasmatic nucleus in the brain is the source of mammalian circadian rhythms. Hamsters have a number of fixed action patterns that are readily observed, including scent-marking and body grooming, which is of interest in the study of animal behavior.
Research on Syrian hamsters themselves: Scientific studies of animal welfare concerning captive golden hamsters have shown they prefer to use running wheels of large diameters (35 cm diameter was preferred over 23 cm, and 23 cm over 17.5 cm), and that they prefer bedding material which allows them to build nests if nesting material is not already available. They prefer lived-in bedding (up to two weeks old – longer durations were not tested) over new bedding, suggesting they may prefer bedding changes at two-week intervals rather than weekly or daily. They also prefer opaque tubes closed at one end, 7.6 cm in diameter, to use as shelter in which to nest and sleep.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Slop (clothing)** Slop (clothing): In 16th to 19th century Europe and North America, the slop trade was the manufacture and sale of slop, cheap ready-made clothing that was made by slop-workers and sold in slop-shops by slop-sellers. Slop: The name "slop" was originally naval slang for the cheap ready-made clothing that a naval rating would purchase in lieu of an official uniform (which ratings in the British Royal Navy, at least, did not have until 1857), sometimes from a "slop chest" maintained on board ship by the purser. The trade: The trade originated in government purchases of uniforms for soldiers and sailors; these uniforms were standardized and mass-produced rather than tailored to individuals, made to official specifications with rules about materials and shapes. The rise in the slop trade was particularly spurred on by wartime orders for military clothing, such as during the Nine Years War and the War of the Spanish Succession. The slop trade was flourishing by the 18th century, as slop-sellers realized that they could sell to the general public as well as to the army and navy, and also received a boost from the Napoleonic Wars. The trade: Slop work became organized into a system of large clothing warehouses subcontracting out to small workshops or individuals. The trade: In the 19th century, however, "slop" was to gain a negative connotation, because of an economic conflict with the older bespoke tailoring industry. In the U.K. the rise of industrialization led to a growing workforce of largely female slop-worker labour, working on piecework, paid by the item, from home, which grew to outnumber the largely male workforce of craft tailors who in contrast worked in a master tailor's workshop and were paid by time worked. The trade: In 1824 the ratio of the latter to the former in London was 4:1, but by 1849 it was 3:20. The trade: The gender disparity had been created by exclusionary practices in the craft tailoring trade in the late 18th and early 19th century, as male tailors sought to exclude women. Similar factors were at work elsewhere, such as in Baltimore in the United States, where large tailoring enterprises such as Thomas Sheppard and Nathaniel Childs took to styling themselves "tailor and slop seller". The trade: An increasingly female population with a growing number of female household heads provided a ready workforce of cheaper, lesser-skilled female labour. In London, cheap ready-made clothing gained a wider market through increased middle-class and working-class incomes in the latter part of the century, and a succession of strikes organized by tailors unions (in 1827, 1830, and 1834) largely failed.
The trade: The women slop-workers were seen as, and sometimes used as, strike-breakers, particularly in the London Tailors' Union strike of 1834 (which sought better wages, shorter hours, and a prohibition of the piecework and homework that slop-work involved). Contemporary commentators (such as Henry Mayhew, who interviewed clothes sellers, and Charles Kingsley, in both his Cheap Clothes and Nasty and Alton Locke, Tailor and Poet) painted the traditional tailoring trade's view of the situation as one of "honourable" traditional tradesmen (also known as "Flints") versus "dishonourable" slop-workers (named "Dungs") who worked in sweat-shops, and as the de-skilling of what was once skilled labour. The clothing itself was also criticized for its poor quality, especially those slops made of shoddy, and the trade for its exploitation of the mainly low-skilled women workers whose jobs involved only minute parts of the overall process of producing the clothes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CT Chamaeleontis** CT Chamaeleontis: CT Chamaeleontis (CT Cha) is a T Tauri star, the primary of a star system in the constellation of Chamaeleon. It has an apparent visual magnitude which varies between 12.31 and 12.43. The star is still accreting material at a rate of 6×10⁻¹⁰ M☉/year. CT Chamaeleontis: In 2006 and 2007, a faint companion was observed 2.7 arcseconds away from CT Chamaeleontis, using the Very Large Telescope at the European Southern Observatory. Since the object shares common proper motion with CT Chamaeleontis, it is believed to be physically close to the star, with a projected separation of approximately 440 astronomical units. It is estimated to have a mass of approximately 17 Jupiter masses and is probably a brown dwarf or a planet. The companion has been designated CT Chamaeleontis B. In 2015, the companion was shown to lie most likely in the brown dwarf mass range.
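The quoted projected separation follows from the small-angle relation: an angular separation of one arcsecond seen from a distance of one parsec corresponds to one astronomical unit. A minimal sketch of the arithmetic, assuming a distance to CT Chamaeleontis of roughly 165 parsecs (the distance is not stated in the text above, so this figure is illustrative only):

```python
def projected_separation_au(angular_sep_arcsec: float, distance_pc: float) -> float:
    """Projected separation in AU via the small-angle rule:
    1 arcsecond at 1 parsec subtends 1 astronomical unit."""
    return angular_sep_arcsec * distance_pc

# Assumed distance of ~165 pc (illustrative); 2.7 arcsec then gives ~445 AU,
# the same order as the ~440 AU quoted for CT Chamaeleontis B.
print(projected_separation_au(2.7, 165.0))
```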
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Haagerup property** Haagerup property: In mathematics, the Haagerup property, named after Uffe Haagerup and also known as Gromov's a-T-menability, is a property of groups that is a strong negation of Kazhdan's property (T). Property (T) is considered a representation-theoretic form of rigidity, so the Haagerup property may be considered a form of strong nonrigidity; see below for details. The Haagerup property is interesting to many fields of mathematics, including harmonic analysis, representation theory, operator K-theory, and geometric group theory. Perhaps its most impressive consequence is that groups with the Haagerup property satisfy the Baum–Connes conjecture and the related Novikov conjecture. Groups with the Haagerup property are also uniformly embeddable into a Hilbert space. Definitions: Let G be a second countable locally compact group. The following properties are all equivalent, and any of them may be taken to be a definition of the Haagerup property: (1) there is a proper continuous conditionally negative definite function Ψ: G → R+; (2) G has the Haagerup approximation property, also known as Property C0: there is a sequence of normalized continuous positive-definite functions ϕn which vanish at infinity on G and converge to 1 uniformly on compact subsets of G; (3) there is a strongly continuous unitary representation of G which weakly contains the trivial representation and whose matrix coefficients vanish at infinity on G; (4) there is a proper continuous affine isometric action of G on a Hilbert space. Examples: There are many examples of groups with the Haagerup property, most of which are geometric in origin. The list includes: all compact groups (trivially; note that all compact groups also have property (T), and the converse holds as well: if a group has both property (T) and the Haagerup property, then it is compact); SO(n,1); SU(n,1); groups acting properly on trees or on R-trees; Coxeter groups; amenable groups; and groups acting properly on CAT(0) cubical complexes. Sources: Cherix, Pierre-Alain; Cowling, Michael; Jolissaint, Paul; Julg, Pierre; Valette, Alain (2001), Groups with the Haagerup property. Gromov's a-T-menability., Progress in Mathematics, vol. 197, Basel: Birkhäuser Verlag, doi:10.1007/978-3-0348-8237-8, ISBN 3-7643-6598-6, MR 1852148
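For readers who have not met the first definition before, one commonly used formulation of a proper, continuous, conditionally negative definite function is sketched below. It is assumed from standard usage in this area (the article above does not spell it out):

```latex
% A continuous function \psi : G \to \mathbb{R} is conditionally negative definite if
%   \psi(e) = 0, \qquad \psi(g^{-1}) = \psi(g) \quad \text{for all } g \in G,
% and, for all g_1, \dots, g_n \in G and real c_1, \dots, c_n with \sum_i c_i = 0,
\sum_{i,j=1}^{n} c_i\, c_j\, \psi\!\left(g_j^{-1} g_i\right) \;\le\; 0 .
% It is proper if \psi(g) \to \infty as g leaves every compact subset of G.
```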
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Uniquely colorable graph** Uniquely colorable graph: In graph theory, a uniquely colorable graph is a k-chromatic graph that has only one possible (proper) k-coloring up to permutation of the colors. Equivalently, there is only one way to partition its vertices into k independent sets and there is no way to partition them into k − 1 independent sets. Examples: A complete graph is uniquely colorable, because the only proper coloring is one that assigns each vertex a different color. Examples: Every k-tree is uniquely (k + 1)-colorable. The uniquely 4-colorable planar graphs are known to be exactly the Apollonian networks, that is, the planar 3-trees. Every connected bipartite graph is uniquely 2-colorable. Its 2-coloring can be obtained by choosing a starting vertex arbitrarily, coloring the vertices at even distance from the starting vertex with one color, and coloring the vertices at odd distance from the starting vertex with the other color (see the short sketch following this article). Properties: Some properties of a uniquely k-colorable graph G with n vertices and m edges: m ≥ (k - 1) n - k(k-1)/2. Related concepts: Minimal imperfection: A minimal imperfect graph is an imperfect graph in which every proper induced subgraph is perfect. The deletion of any vertex from a minimal imperfect graph leaves a uniquely colorable subgraph. Related concepts: Unique edge colorability: A uniquely edge-colorable graph is a k-edge-chromatic graph that has only one possible (proper) k-edge-coloring up to permutation of the colors. The only uniquely 2-edge-colorable graphs are the paths and the cycles. For any k, the stars K1,k are uniquely k-edge-colorable. Moreover, Wilson (1976) conjectured and Thomason (1978) proved that, when k ≥ 4, they are also the only members in this family. However, there exist uniquely 3-edge-colorable graphs that do not fit into this classification, such as the graph of the triangular pyramid. Related concepts: If a cubic graph is uniquely 3-edge-colorable, it must have exactly three Hamiltonian cycles, formed by the edges with two of its three colors, but some cubic graphs with only three Hamiltonian cycles are not uniquely 3-edge-colorable. Related concepts: Every simple planar cubic graph that is uniquely 3-edge-colorable contains a triangle, but W. T. Tutte (1976) observed that the generalized Petersen graph G(9,2) is non-planar, triangle-free, and uniquely 3-edge-colorable. For many years it was the only known such graph, and it had been conjectured to be the only such graph, but now infinitely many triangle-free non-planar cubic uniquely 3-edge-colorable graphs are known. Related concepts: Unique total colorability: A uniquely total colorable graph is a k-total-chromatic graph that has only one possible (proper) k-total-coloring up to permutation of the colors. Empty graphs, paths, and cycles of length divisible by 3 are uniquely total colorable graphs. Mahmoodian & Shokrollahi (1995) conjectured that they are also the only members in this family. Some properties of a uniquely k-total-colorable graph G with n vertices: χ″(G) = Δ(G) + 1 unless G = K2. Δ(G) ≤ 2 δ(G). Δ(G) ≤ n/2 + 1. Here χ″(G) is the total chromatic number; Δ(G) is the maximum degree; and δ(G) is the minimum degree.
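The distance-parity procedure for 2-coloring a connected bipartite graph, mentioned in the Examples section above, is easy to mechanize with a breadth-first search. A minimal sketch (the function name and the dictionary-based adjacency representation are illustrative choices, not taken from any particular source):

```python
from collections import deque

def two_coloring(adjacency):
    """Unique (up to swapping the two colors) proper 2-coloring of a connected
    bipartite graph, built by the distance-parity rule described above.

    adjacency: dict mapping each vertex to an iterable of its neighbours.
    Raises ValueError if an odd cycle shows the graph is not bipartite.
    """
    start = next(iter(adjacency))           # arbitrary starting vertex
    color = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in color:
                color[v] = 1 - color[u]     # neighbours get the opposite color
                queue.append(v)
            elif color[v] == color[u]:
                raise ValueError("graph is not bipartite")
    return color

# A path on four vertices: once 'a' is colored 0, every other color is forced.
print(two_coloring({'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}))
```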
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bimatoprost/timolol** Bimatoprost/timolol: Bimatoprost/timolol, sold under the brand name Ganfort, is a medication for the treatment of certain conditions involving high pressure in the eyes, specifically open angle glaucoma and ocular hypertension. It is available as eye drops. It was approved for medical use in the European Union in May 2006. Medical uses: Bimatoprost/timolol is used for the treatment of open angle glaucoma or ocular hypertension in people for whom single-component eye drops such as prostaglandin analogs or beta blockers are insufficient. Contraindications: Because of the timolol component, which is a beta blocker, the drops are contraindicated in people with lung problems such as asthma or severe chronic obstructive pulmonary disease, or with heart problems such as sinus bradycardia (slow heartbeat), sick sinus syndrome, sino-atrial block, or severe atrioventricular block. Adverse effects: The most common side effect is conjunctival hyperaemia (increased bloodflow in the outer layer of the eye), which occurs in over 10% of people taking the drug. Side effects in less than 10% of people include other eye problems such as itching, foreign body sensation or dry eye, as well as headache or hyperpigmentation (darkening) of the skin around the eye. Hyperpigmentation is an adverse effect of bimatoprost, while the others are fairly common for eye drops in general. Interactions: No formal interaction studies have been done with bimatoprost/timolol eye drops. As timolol (in tablet form) can be used to lower blood pressure and heart rate, it might add to the effect of other antihypertensive (pressure lowering) drugs. Also, drugs that block the liver enzyme CYP2D6 may increase the effects of timolol. Pharmacology: Bimatoprost is a prostaglandin analog and lowers pressure in the eye by increasing drainage of aqueous humor via the trabecular meshwork. Timolol is a nonselective beta blocker, which lowers eye pressure by reducing aqueous humor production.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Prism lighting** Prism lighting: Prism lighting is the use of prisms to improve the distribution of light in a space. It is usually used to distribute daylight, and is a form of anidolic lighting. Prism lighting was popular from its introduction in the 1890s through to the 1930s, when cheap electric lights became commonplace and prism lighting became unfashionable. While mass production of prism lighting systems ended around 1940, the 2010s have seen a revival using new materials. How it works: The human eye's response to light is non-linear: halving the light level does not halve the perceived brightness of a space, it makes it look only slightly dimmer. If light is redistributed from the brightest parts of a room to the dimmest, the room therefore appears brighter overall, and more space can be given a useful and comfortable level of illumination (see before and after images from an 1899 article, below). This can reduce the need for artificial lighting. How it works: Refraction and total internal reflection inside optical prisms can bend beams of light. This bending of the light allows it to be redistributed. Many small prisms may be joined at the edges into a sheet. A prism sheet is somewhat like a linear Fresnel lens, but each ridge may be identical. Unlike a Fresnel lens, the light is not intended to be focussed, but used for anidolic lighting. Types: Deck prisms carried light through the upper decks of ships and spread it in the decks below. Similarly, on land, prisms in sidewalk lights were used to light basements and vaults. Types: Prism tiles were used vertically, usually as a transom light above a window or door. They were also built into fixed and movable canopies, sloped glazing, and skylights. They bend light upwards, so that it penetrates more deeply into the room, rather than lighting the floor near the window.Modern prismatic panels are essentially an acrylic version of the old glass prism tiles. Like glass tiles, they can be mounted on adjustable canopies. Channel panels use slits that reflect light internally. Holographic optical elements can also be used to redirect light.Daylight redirecting window film (DRF) is a thin, flexible peel-and-stick sheet, with the optical layer generally made of acrylic. There are two types of film. Some film is moulded with tiny prisms, making a flexible peel-and-stick miniature prismatic panel. Other film is moulded with thin near-horizontal voids protruding into or through the acrylic; the slits reflect light hitting their top surfaces upwards. Refraction is minimized, to avoid colouring the light. The reflection-based films are more transparent (both are translucent), but they tend to send the light up at the ceiling, not deeper into the room. Refraction-based films are translucent rather than transparent, but offer finer control over the direction of the outgoing light beam; the film can be made in a variety of prism shapes to refract light by a variety of angles. Manufacture and repair: Older glass elements were cast, and might be cut and polished. Prism tiles were often made of single prisms joined with zinc, lead, or electroglazed copper strips (rather like the methods used to join traditional European stained glass). Sidewalk prisms were cast in one piece as single or multiple-prism lenses, and inserted into load-bearing frames. Daylight redirecting film is made of acrylic.Damaged prism tiles may be repaired, and as they came in standard designs, there is a salvage market in replacements. 
Replacements for one-piece castings can be commissioned. Weakened prism tiles may be reinforced with hidden bars, much like those used to reinforce stained glass. Architectural design: Sophisticated systems for lighting different sorts of spaces with prism tiles were developed. Generally, the goal was to send the available light across the room nearly horizontally. One company sold tiles with nine prescriptions, giving different angles of refraction. Different prescriptions were often used in different parts of the same window transom, sometimes to disperse the light vertically, and sometimes to bend light horizontally around obstacles like pillars. Prism tiles sometimes have elaborate artistic designs moulded into the outside; Frank Lloyd Wright created over forty prism tile designs. Prism lighting works more effectively in light, open spaces. Some believe that it contributed to the trend away from dark, subdivided Victorian interiors to open-plan, light-coloured ones. The removal or covering of old prism transom lights often leaves characteristically tall signage spaces over shop windows (see pictures). Architectural design: Daylight redirecting window film was initially made of one redirecting film and one glare-reducing diffusing film, often located on different interior surfaces of a double-glazed window, but integrated single films are now available. Some daylight redirecting films reflect incoming light upwards off tiny near-horizontal reflectors, so at high sun angles they bend it sharply, throwing it upwards to the ceiling, where a typical ceiling diffuses the daylight somewhat deeper into the space. Other daylight redirecting films refract light at any specified angle, ideally sending it nearly horizontally into the room. Redirecting films can be used as a substitute for opaque blinds.
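As a rough illustration of the refraction that prism tiles and panels rely on, the sketch below traces a ray through a single idealized prism ridge using Snell's law. The apex angle, angle of incidence, and refractive index of 1.5 are assumed, illustrative values, not the prescription of any particular product:

```python
import math

def prism_deviation(incidence_deg: float, apex_deg: float, n: float = 1.5) -> float:
    """Angular deviation (degrees) of a ray crossing an idealized prism ridge.

    incidence_deg : angle of incidence on the first face
    apex_deg      : apex (refracting) angle of the prism
    n             : refractive index of the glass or acrylic (assumed value)
    """
    i1 = math.radians(incidence_deg)
    a = math.radians(apex_deg)
    r1 = math.asin(math.sin(i1) / n)       # Snell's law at the entry face
    r2 = a - r1                            # internal geometry of the prism
    if abs(n * math.sin(r2)) > 1:
        raise ValueError("total internal reflection at the exit face")
    i2 = math.asin(n * math.sin(r2))       # Snell's law at the exit face
    return math.degrees(i1 + i2 - a)       # total bending of the beam

# Light striking a 30-degree ridge at 45 degrees is redirected by roughly 18 degrees.
print(round(prism_deviation(45, 30), 1))
```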
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Multibook** Multibook: A Multibook or a TACLANE Multibook is a single laptop that combines two to three different classified networks into a single device solution. Currently, most secure computing standards require federal government and military personnel to maintain multiple PCs on different networks in order to give users simultaneous access to unclassified and classified information. Through a complex configuration, a Multibook instead presents separate enclaves and virtual machines through one display. Multibook: A Multibook has no hard drive and uses a cryptographic ignition key to create a virtual hard drive space with a Type 1 COMSEC element found inside the MultiBook’s integrated Suite B security module. Multibook: The security module, known as a HAIPE, protects information stored on the computer, as well as data being sent to and from networks classified Secret and below. Due to the lack of stored collateral data, Multibooks do not have any burdensome COMSEC handling requirements. There is no Data at Rest (DAR) when the equipment is turned off. Some Multibooks are NSA certified to protect information classified Secret and below. They are approved for Suite B information/processing, with data in transit (DIT) encryption protecting information when sent to and from classified networks. Multibook: The Multibook security benefit for the user is that the device is a CHVP device and is not considered CCI like other devices used in collateral processing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mineral Nutrition of Plants: Principles and Perspectives** Mineral Nutrition of Plants: Principles and Perspectives: Mineral Nutrition of Plants: Principles and Perspectives (1972) is a book about plant nutrition by Emanuel Epstein. Reception: F. C. Steward, C. Bould, and Manuel Lerdau have reviewed the book.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Egg in the basket** Egg in the basket: Egg in the basket—also known by many other names—is an egg fried in a hole in a slice of bread. Description: The dish consists of a slice of bread with an egg in the middle, fried with butter or oil. It is commonly prepared by cutting a circular or square hole in the center of a piece of bread, which may be buttered. The bread is fried in a pan with butter, margarine, cooking oil, or other fat. At some point, an egg is cracked into the hole in the bread. When the egg is added to the bread determines how well-done the egg and bread will be relative to each other in the final product. The pan may be covered and the bread flipped while on the heat to obtain even cooking. A waffle or bagel (with a large enough hole) can also be substituted for the slice of bread. Names and appearances in pop culture: There are many names for the dish, including "bullseye eggs", "eggs in a frame", "egg in a hole", "eggs in a nest", "gashouse eggs", "gashouse special", "gasthaus eggs", "hole in one", "one-eyed Jack", "one-eyed Pete", "one-eyed Sam", "pirate's eye", and "popeye". The name "toad in the hole" is sometimes used for this dish, though that name more commonly refers to sausages cooked in Yorkshire pudding batter. Names and appearances in pop culture: The dish is also known as "Guy Kibbee eggs", due to its preparation by actor Guy Kibbee in the 1935 Warner Bros film Mary Jane's Pa. In the film, Kibbee's character refers to the dish as a “one-eyed Egyptian sandwich”. It is also called "Betty Grable eggs", from the actress’ preparation of "gashouse eggs" in the 1941 film Moon Over Miami. It is prepared by both Hugo Weaving and Stephen Fry's characters in the 2005 film V for Vendetta, the latter referring to it as "eggy in the basket". Other film appearances include Moonstruck (1987) and The Meddler (2016). On television, the dish is prepared in a 1987 episode of Sledge Hammer!, with the title character using his revolver to shoot the hole in the bread. In a 1996 episode of Friends, character Joey Tribbiani refers to it as "eggs with the bread with the hole in the middle, à la me!" In a 2016 episode of Lucifer, it is prepared with Hawaiian bread. Other television appearances include Frasier (1993), Once Upon A Time (2013), The Marvelous Mrs. Maisel (2019), Atypical (2019), Search Party (2022), and Resident Alien (2022). Author Roald Dahl wrote numerous times of his fondness for the dish, which he referred to as "hot-house eggs".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Non-innocent ligand** Non-innocent ligand: In chemistry, a (redox) non-innocent ligand is a ligand in a metal complex where the oxidation state is not clear. Typically, complexes containing non-innocent ligands are redox active at mild potentials. The concept assumes that redox reactions in metal complexes are either metal or ligand localized, which is a simplification, albeit a useful one. C.K. Jørgensen first described ligands as "innocent" and "suspect": "Ligands are innocent when they allow oxidation states of the central atoms to be defined. The simplest case of a suspect ligand is NO..." Redox reactions of complexes of innocent vs. non-innocent ligands: Conventionally, redox reactions of coordination complexes are assumed to be metal-centered. The reduction of MnO4− to MnO42− is described by the change in oxidation state of manganese from 7+ to 6+. The oxide ligands do not change in oxidation state, remaining 2-. Oxide is an innocent ligand. Another example of a conventional metal-centered redox couple is [Co(NH3)6]3+/[Co(NH3)6]2+. Ammonia is innocent in this transformation. Redox reactions of complexes of innocent vs. non-innocent ligands: Redox non-innocent behavior of ligands is illustrated by nickel bis(stilbenedithiolate) ([Ni(S2C2Ph2)2]z). As for all bis(1,2-dithiolene) complexes of nd8 metal ions, three oxidation states can be identified: z = 2-, 1-, and 0. If the ligands are always considered to be dianionic (as is done in formal oxidation state counting), then z = 0 requires that nickel has a formal oxidation state of +IV. The formal oxidation state of the central nickel atom therefore ranges from +II to +IV in the above transformations (see Figure). However, the formal oxidation state is different from the real (spectroscopic) oxidation state based on the (spectroscopic) metal d-electron configuration. The stilbene-1,2-dithiolate behaves as a redox non-innocent ligand, and the oxidation processes actually take place at the ligands rather than the metal. This leads to the formation of ligand radical complexes. The charge-neutral complex (z = 0), showing a partial singlet diradical character, is therefore better described as a Ni2+ derivative of the radical anion S2C2Ph2•−. The diamagnetism of this complex arises from anti-ferromagnetic coupling between the unpaired electrons of the two ligand radicals. Redox reactions of complexes of innocent vs. non-innocent ligands: Another example is the higher oxidation states of copper complexes of diamido phenyl ligands that are stabilized by intramolecular multi-center hydrogen bonding. Typical non-innocent ligands: Nitrosyl (NO) binds to metals in one of two extreme geometries - bent, where NO is treated as a pseudohalide (NO−), and linear, where NO is treated as NO+. Typical non-innocent ligands: Dioxygen can be non-innocent, since it exists in two oxidation states, superoxide (O2−) and peroxide (O22−). Ligands with extended pi-delocalization such as porphyrins, phthalocyanines, and corroles and ligands with the generalised formulas [D-CR=CR-D]n− (D = O, S, NR’ and R, R' = alkyl or aryl) are often non-innocent. In contrast, [D-CR=CR-CR=D]− such as NacNac or acac are innocent. Catecholates and related 1,2-dioxolenes. Typical non-innocent ligands: dithiolenes, such as maleonitriledithiolate (see example of [Ni(S2C2Ph2)2]n− above). 1,2-diimines such as derivatives of 1,2-diamidobenzene, 2,2'-bipyridine, and dimethylglyoxime. The complex Cr(2,2'-bipyridine)3 is a derivative of Cr(III) bound to three bipyridine1− ligands.
On the other hand, one-electron oxidation of [Ru(2,2'-bipyridine)3]2+ is localized on Ru and the bipyridine is behaving as a normal, innocent ligand in this case. ligands containing ferrocene can have oxidation events centered on the ferrocene iron center rather than the catalytically active metal center. pyridine-2,6-diimine ligands can be reduced by one and two electrons. Redox non-innocent ligands in biology and homogeneous catalysis: In certain enzymatic processes, redox non-innocent cofactors provide redox equivalents to complement the redox properties of metalloenzymes. Of course, most redox reactions in nature involve innocent systems, e.g. [4Fe-4S] clusters. The additional redox equivalents provided by redox non-innocent ligands are also used as controlling factors to steer homogeneous catalysis. Hemes: Porphyrin ligands can be innocent (2-) or noninnocent (1-). In the enzymes chloroperoxidase and cytochrome P450, the porphyrin ligand sustains oxidation during the catalytic cycle, notably in the formation of Compound I. In other heme proteins, such as myoglobin, ligand-centered redox does not occur and the porphyrin is innocent. Redox non-innocent ligands in biology and homogeneous catalysis: Galactose oxidase: The catalytic cycle of galactose oxidase (GOase) illustrates the involvement of non-innocent ligands. GOase oxidizes primary alcohols into aldehydes using O2 and releasing H2O2. The active site of the enzyme GOase features a tyrosyl coordinated to a CuII ion. In the key steps of the catalytic cycle, a cooperative Brønsted-basic ligand-site deprotonates the alcohol, and subsequently the oxygen atom of the tyrosyl radical abstracts a hydrogen atom from the alpha-CH functionality of the coordinated alkoxide substrate. The tyrosyl radical participates in the catalytic cycle: one 1e oxidation is effected by the Cu(II/I) couple and the other by the tyrosyl radical, giving an overall 2e change. The radical abstraction is fast. Anti-ferromagnetic coupling between the unpaired spins of the tyrosine radical ligand and the d9 CuII center gives rise to the diamagnetic ground state, consistent with synthetic models.
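The oxidation-state bookkeeping used earlier for [Ni(S2C2Ph2)2]z reduces to simple arithmetic: the metal's formal oxidation state is the overall charge of the complex minus the charges assigned to the ligands. A minimal sketch of that counting (the function name is illustrative; the two ligand-charge assignments correspond to the closed-shell and radical-anion descriptions discussed above):

```python
def formal_oxidation_state(complex_charge: int, ligand_charges: list[int]) -> int:
    """Formal metal oxidation state: overall charge minus the sum of ligand charges."""
    return complex_charge - sum(ligand_charges)

# [Ni(S2C2Ph2)2]^0 with both dithiolenes counted as closed-shell dianions:
print(formal_oxidation_state(0, [-2, -2]))   # 4, i.e. the formal Ni(IV) description
# The same neutral complex with each ligand treated as a radical monoanion:
print(formal_oxidation_state(0, [-1, -1]))   # 2, i.e. the spectroscopic Ni(II) picture
```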
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phenylhydroxylamine** Phenylhydroxylamine: Phenylhydroxylamine is the organic compound with the formula C6H5NHOH. It is an intermediate in the redox-related pair C6H5NH2 and C6H5NO. Phenylhydroxylamine should not be confused with its isomer α-phenylhydroxylamine or O-phenylhydroxylamine. Preparation: This compound can be prepared by the reduction of nitrobenzene with zinc in the presence of NH4Cl. Alternatively, it can be prepared by transfer hydrogenation of nitrobenzene using hydrazine as an H2 source over a rhodium catalyst. Reactions: Phenylhydroxylamine is unstable to heating, and in the presence of strong acids easily rearranges to 4-aminophenol via the Bamberger rearrangement. Oxidation of phenylhydroxylamine with dichromate gives nitrosobenzene. The compound condenses with benzaldehyde to form diphenylnitrone, a well-known 1,3-dipole: C6H5NHOH + C6H5CHO → C6H5N(O)=CHC6H5 + H2O. Phenylhydroxylamine is attacked by NO+ sources to give cupferron: C6H5NHOH + C4H9ONO + NH3 → NH4[C6H5N(O)NO] + C4H9OH
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Circaseptan** Circaseptan: A circaseptan rhythm is a cycle consisting of approximately 7 days in which many biological processes of life, such as cellular immune system activity, resolve.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gurrufío** Gurrufío: Gurrufío is the Venezuelan term for a button whirligig or buzzer, a simply constructed traditional children’s toy. It consists of a central disk of wood, plastic or metal (even occasionally a soft drink bottle cap that has been hammered flat), with two holes, drilled or punched with a nail, equidistant from and close to the center. A piece of string is inserted through both holes, leaving a length of about 15 to 30 centimeters on each side, and the loop is closed with a knot. How to play: Hold the ends of the loop with the fingers of each hand, rotate the disk a little to wind the string, then pull the string taut quickly and relax the tension slightly. The disk will spin in one direction, reach its maximum speed, and, helped by another yank, start spinning in the opposite direction.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Degree of start-stop distortion** Degree of start-stop distortion: In telecommunication, the term degree of start-stop distortion has the following meanings: In asynchronous serial communication data transmission, the ratio of (a) the absolute value of the maximum measured difference between the actual and theoretical intervals separating any significant instant of modulation (or demodulation) from the significant instant of the start element immediately preceding it to (b) the unit interval. Degree of start-stop distortion: The highest absolute value of individual distortion affecting the significant instants of a start-stop modulation. The degree of distortion of a start-stop modulation (or demodulation) is usually expressed as a percentage. Distinction can be made between the degree of late (positive) distortion and the degree of early (negative) distortion.
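As a worked illustration of the first definition above, the degree of start-stop distortion is the largest absolute timing error, measured from the preceding start element, divided by the unit interval and expressed as a percentage. All timing values in the sketch below are hypothetical:

```python
def start_stop_distortion(measured_ms, theoretical_ms, unit_interval_ms):
    """Degree of start-stop distortion as a percentage (illustrative only).

    measured_ms / theoretical_ms: actual and theoretical intervals from the
    preceding start element to each significant instant, in milliseconds.
    unit_interval_ms: duration of one unit interval (signal element).
    """
    worst = max(abs(m - t) for m, t in zip(measured_ms, theoretical_ms))
    return 100.0 * worst / unit_interval_ms

# Hypothetical timing: 20 ms unit interval, worst timing error of 1 ms -> 5 %.
print(start_stop_distortion([20.4, 41.0, 59.2], [20.0, 40.0, 60.0], 20.0))
```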
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metabolic rift** Metabolic rift: Metabolic rift is Karl Marx's key conception of ecological crisis tendencies under capitalism, or in Marx's own words, it is the "irreparable rift in the interdependent process of social metabolism".: 949  Marx theorized a rupture in the metabolic interaction between humanity and the rest of nature emanating from capitalist agricultural production and the growing division between town and country. According to John Bellamy Foster, who coined the term, metabolic rift is the development of Marx's earlier work in the Economic and Philosophical Manuscripts on species-being and the relationship between humans and nature. Metabolism is Marx's "mature analysis of the alienation of nature": ix  and presents "a more solid—and scientific—way in which to depict the complex, dynamic interchange between human beings and nature, resulting from human labor." As opposed to those who have attributed to Marx a disregard for nature and responsibility for the environmental problems of the Soviet Union and other purportedly communist states, Foster sees in the theory of metabolic rift evidence of Marx's ecological perspective. The theory of metabolic rift "enable[d] [Marx] to develop a critique of environmental degradation that anticipated much of present-day ecological thought",: 142  including questions of sustainability as well as the limits of agricultural production using concentrated animal feeding operations. Researchers building on the original Marxist concept have developed other similar terms like carbon rift. Origins: Soil exhaustion and agricultural revolutions: Marx's writings on metabolism were developed during England's "second" agricultural revolution (1815–1880), a period which was characterized by the development of soil chemistry and the growth of the use of chemical fertilizer.: 148  The depletion of soil fertility, or "soil exhaustion", had become a key concern for capitalist society, and demand for fertilizer was such that Britain and other powers initiated explicit policies for the importation of bone and guano, including raiding of Napoleonic battlefields: 375  and catacombs,: 150  British monopolization of Peruvian guano supplies,: 377  and, in the United States, "the imperial annexation of any islands thought to be rich in [guano]" through the Guano Islands Act (1856).: 151 : 377  Liebig and soil science: Marx's theory drew heavily on contemporary advances in agricultural chemistry unknown to earlier classical economists such as Ricardo and Malthus. For them, different levels of soil fertility (and thus rent) were attributed "almost entirely to the natural or absolute productivity of the soil,": 374  with improvement (or degradation) playing only a minor role. Origins: German agricultural chemist Justus von Liebig, in his Organic Chemistry in Its Applications to Agriculture and Physiology (1840), presented the first convincing explanation of the role of soil nutrients in the growth of plants.: 376  In 1842, Liebig expanded the use of the term metabolism (Stoffwechsel) from material exchanges in the body to the biochemical processes of natural systems.: 374  Foster argues that Liebig's work became more critical of capitalist agriculture as time went on.
From the standpoint of nutrient cycling, the socio-economic relationship between rural and urban areas was self-evidently contradictory, hindering the possibility of sustainability: If it were practicable to collect, with the least loss, all the solid and fluid excrements of the inhabitants of the town, and return to each farmer the portion arising from produce originally supplied by him to the town, the productiveness of the land might be maintained almost unimpaired for ages to come, and the existing store of mineral elements in every fertile field would be amply sufficient for the wants of increasing populations.: 261 Human labor and nature Marx rooted his theory of social-ecological metabolism in Liebig's analysis but connected it to his understanding of the labor process.: 380  Marx understood that, throughout history, it was through labor that humans appropriated nature to satisfy their needs.: 141  Thus the metabolism, or interaction, of society with nature is "a universal and perpetual condition.": 145 In Capital, Marx integrated his materialist conception of nature with his materialist conception of history.: 141  Fertility, Marx argued, was not a natural quality of the soil, but was rather bound up with the social relations of the time. By conceptualizing the complex, interdependent processes of material exchange and regulatory actions that link human society with non-human nature as "metabolic relations," Marx allowed these processes to be both "nature-imposed conditions" and subject to human agency,: 381  a dynamic largely missed, according to Foster, by the reduction of ecological questions to issues of value.: 11 Writers since Marx: The central contribution of the metabolic rift perspective is to locate socio-ecological contradictions internal to the development of capitalism. Later socialists expanded upon Marx's ideas, including Nikolai Bukharin in Historical Materialism (1921) and Karl Kautsky in The Agrarian Question (1899), which developed questions of the exploitation of the countryside by the town and the "fertilizer treadmill" that resulted from metabolic rift.: 239 Contemporary eco-socialist theorists aside from Foster have also explored these directions, including James O'Connor, who sees capitalist undervaluing of nature as leading to economic crisis, what he refers to as the second contradiction of capitalism.Scholars from a variety of disciplines have drawn on Marx's metabolic approach and the concept of metabolic rift in analyzing the relation of society to the rest of nature. With increasing amounts of carbon dioxide being released into the environment from capitalist production, the theory of a carbon rift has also emerged.The metabolic rift is characterized in different ways by historical materialists. For Jason W. Moore, the distinction between social and natural systems is empirically false and theoretically arbitrary; following a different reading of Marx, Moore views metabolisms as relations of human and extra-human natures. In this view, capitalism's metabolic rift unfolds through the town-country division of labor, itself a "bundle" of relations between humans and the rest of nature. Moore sees it as constitutive of the endless accumulation of capital. Moore's perspective, although also rooted in historical materialism, produces a widely divergent view from that of Foster and others about what makes ecological crisis and how it relates to capital accumulation. 
Writers since Marx: Nine months after Foster's groundbreaking article appeared, Moore argued that the origins of the metabolic rift were not found in the 19th century but in the rise of capitalism during the "long" 16th century. The metabolic rift was not a consequence of industrial agriculture but capitalist relations pivoting on the law of value. Moore consequently focuses attention on the grand movements of primitive accumulation, colonialism, and the globalization of town-country relations that characterized early modern capitalism. There were, in this view, not one but many metabolic rifts; every great phase of capitalist development organized nature in new ways, each one with its own metabolic rift. In place of agricultural revolutions, Moore emphasizes recurrent agro-ecological revolutions, assigned the historical task of providing cheap food and cheap labor, in the history of capitalism, an interpretation that extends the analysis to the food crises of the early 21st century. Environmental contradiction under capitalism: Town and country Up until the 16th or 17th century, cities' metabolic dependency upon surrounding countryside (for resources, etc.), coupled with the technological limitations to production and extraction, prevented extensive urbanization. Early urban centers were bioregionally defined, and had relatively light "footprints," recycling city nightsoils back into the surrounding areas.: 410–411 However, with the rise of capitalism, cities expanded in size and population. Large-scale industry required factories, raw material, workers, and large amounts of food. As urban economic security was dependent upon its metabolic support system,: 411  cities now looked further afield for their resource and waste flows. As spatial barriers were broken down, capitalist society "violated" what were previously "nature-imposed conditions of sustainability.": 153–156 With trade and expansion, food and fiber were shipped longer distances. The nutrients of the soil were sent to cities in the form of agricultural produce, but these same nutrients, in the form of human and animal waste, were not returned to the land. Thus there was a one-way movement, a "robbing of the soil" in order to maintain the socio-economic reproduction of society.: 153–156 Marx thus linked the crisis of pollution in cities with the crisis of soil depletion. The rift was a result of the antagonistic separation of town and country, and the social-ecological relations of production created by capitalism were ultimately unsustainable. From Capital, volume 1, on "Large-scale Industry and Agriculture": Capitalist production collects the population together in great centres, and causes the urban population to achieve an ever-growing preponderance. This has two results. On the one hand it concentrates the historical motive force of society; on the other hand, it disturbs the metabolic interaction between man and the earth, i.e. it prevents the return to the soil of its constituent elements consumed by man in the form of food and clothing; hence it hinders the operation of the eternal natural condition for the lasting fertility of the soil... But by destroying the circumstances surrounding that metabolism... it compels its systematic restoration as a regulative law of social production, and in a form adequate to the full development of the human race... 
All progress in capitalist agriculture is a progress in the art, not only of robbing the worker, but of robbing the soil; all progress in increasing the fertility of the soil for a given time is a progress toward ruining the more long-lasting sources of that fertility... Capitalist production, therefore, only develops the techniques and the degree of combination of the social process of production by simultaneously undermining the original sources of all wealth—the soil and the worker (emphasis added).: 637–638 Future socialist society The concept of metabolic rift captures "the material estrangement of human beings within capitalist society from the natural conditions which formed the basis for their existence.": 163  However, Marx also emphasizes the importance of historical change. It was both necessary and possible to rationally govern human metabolism with nature, but this was something "completely beyond the capabilities of bourgeois society.": 141  In a future society of freely associated producers, however, humans could govern their relations with nature via collective control, rather than through the blind power of market relations.: 159  In Capital, volume 3, Marx states: Freedom, in this sphere...can consist only in this, that socialized man, the associated producers, govern the human metabolism with nature in a rational way, bringing it under their own collective control rather than being dominated by it as a blind power; accomplishing it with the least expenditure of energy and in conditions most worthy and appropriate for their human nature.: 959  However, Marx did not argue that a sustainable relation to the Earth was an automatic result of the transition to socialism.: 386  Rather, there was a need for planning and measures to address the division of labor and population between town and country and for the restoration and improvement of the soil.: 169 : 40–41 Metabolism and environmental governance: Despite Marx's assertion that a concept of ecological sustainability was "of very limited practical relevance to capitalist society," as it was incapable of applying rational scientific methods and social planning due to the pressures of competition,: 164  the theory of metabolic rift may be seen as relevant to, if not explicitly invoked in, many contemporary debates and policy directions of environmental governance. Metabolism and environmental governance: There is a rapidly growing body of literature on social-ecological metabolism. While originally limited to questions of soil fertility—essentially a critique of capitalist agriculture—the concept of metabolic rift has since been taken up in numerous fields and its scope expanded. For example, Clausen and Clark have extended the use of metabolic rift to marine ecology, while Moore uses the concept to discuss the broader concerns of global environmental crises and the viability of capitalism itself. Fischer-Kowalski discusses the application of "the biological concept of metabolism to social systems," tracing it through several contributing scientific traditions, including biology, ecology, social theory, cultural anthropology, and social geography. A social metabolism approach has become "one of the most important paradigms for the empirical analysis of the society-nature-interaction across various disciplines," particularly in the fields of industrial metabolism and material flow analysis. 
Metabolism and environmental governance: Urban political ecology David Harvey points out that much of the environmental movement has held (and in some areas continues to hold) a profound anti-urban sentiment, seeing cities as "the highpoint of plundering and pollution of all that is good and holy on planet earth.": 426  The problem is that such a perspective focuses solely on a particular form of nature, ignoring many people's lived experience of the environment and the importance of cities in ecological processes and as ecological sites in their own right.: 427 In contrast, Erik Swyngedouw and other theorists have conceptualized the city as an ecological space through urban political ecology, which connects material flows within cities and between the urban and non-urban. Metabolism and environmental governance: Sustainable cities In city planning policy circles, there has been a recent movement toward urban sustainability. Hodson and Marvin discuss a "new eco-urbanism" that seeks to integrate environment and infrastructure, "bundling" architecture, ecology and technology in order to "internalize" energy, water, food, waste and other material flows. Unlike previous efforts to integrate nature into the city, which, according to Harvey, were primarily aesthetic and bourgeois in nature,: 427  these new efforts are taking place in the context of climate change, resource constraints and the threat of environmental crises.In contrast to the traditional approach of capitalist urbanization, which sought more and more distant sources for material resources and waste sinks (as seen in the history of Los Angeles water), eco-urban sites would re-internalize their own resources and re-circulate wastes. The goal is autarky and greater ecological and infrastructural self-reliance through "closed-loop systems" that reduce reliance on external networks. Although difficult given the reliance on international supply chains, urban food movements are working to reduce the commodification of food and individual and social forms of alienation from food within cities. This takes place within actually existing conditions of neoliberalization, suggesting that healing metabolic rifts will be a process that requires both social and ecological transformations. However, critics link these efforts to "managerial environmentalism,": 427  and worry that eco-urbanism too closely falls into an "urban ecological security" approach, echoing Mike Davis' analysis of securitization and fortress urbanism. A Marxist critique might also question the feasibility of sustainable cities within the context of a global capitalist system.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Computer addiction** Computer addiction: Computer addiction is a form of behavioral addiction that can be described as the excessive or compulsive use of the computer, which persists despite serious negative consequences for personal, social, or occupational function. Another clear conceptualization is made by Block, who stated that "Conceptually, the diagnosis is a compulsive-impulsive spectrum disorder that involves online and/or offline computer usage and consists of at least three subtypes: excessive gaming, sexual preoccupations, and e-mail/text messaging". Computer addiction is not currently included in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) as an official disorder. The concept of computer addiction is broadly divided into two types, namely offline computer addiction and online computer addiction. Offline computer addiction is normally used when speaking about excessive gaming behavior, which can be practiced both offline and online. Online computer addiction, also known as Internet addiction, gets more attention in general from scientific research than offline computer addiction, mainly because most cases of computer addiction are related to the excessive use of the Internet. Experts on Internet addiction have described this syndrome as involving intense and prolonged work on the Internet, uncontrollable use of the Internet, inability to use the Internet in an efficient, timely manner, lack of interest in the outside world, not spending time with people from the outside world, and an increase in loneliness and dejection. Symptoms: Being drawn to the computer as soon as one wakes up and before one goes to bed; replacing old hobbies with excessive use of the computer and using the computer as one's primary source of entertainment and procrastination; lacking physical exercise and/or outdoor exposure because of constant use of the computer, which could contribute to many health problems such as obesity; backache; headaches; weight gain or loss; disturbances in sleep; carpal tunnel syndrome; blurred or strained vision; depression and marital infidelity. Effects: Excessive computer use may result in, or occur with: lack of face-to-face social interaction; computer vision syndrome. Causes: Kimberly Young indicates that previous research links internet/computer addiction with existing mental health issues, most notably depression. She states that computer addiction has significant social, psychological, and occupational effects, such as low self-esteem, and has led many subjects to academic failure. Causes: According to a Korean study on internet/computer addiction, pathological use of the internet results in negative life impacts such as job loss, marriage breakdown, financial debt, and academic failure. 70% of internet users in Korea are reported to play online games, 18% of whom are diagnosed as game addicts, which relates to internet/computer addiction. The authors of the article conducted a study using Kimberly Young's questionnaire. The study showed that the majority of those who met the requirements of internet/computer addiction experienced interpersonal difficulties and stress and that those addicted to online games specifically responded that they hoped to avoid reality. Types: Computers nowadays rely almost entirely on the internet, and thus relevant research articles relating to internet addiction may also be relevant to computer addiction.
Gaming addiction: a hypothetical behavioral addiction characterized by excessive or compulsive use of computer games or video games, which interferes with a person's everyday life. Video game addiction may present itself as compulsive gaming, social isolation, mood swings, diminished imagination, and hyper-focus on in-game achievements, to the exclusion of other events in life. Types: Social media addiction: Data suggest that participants use social media to fulfill their social needs but are typically dissatisfied. Lonely individuals are drawn to the Internet for emotional support. This could interfere with "real-life socializing" by reducing face-to-face relationships. Some of these views are summed up in an Atlantic article by Stephen Marche entitled Is Facebook Making Us Lonely?, in which the author argues that social media provides more breadth, but not the depth of relationships that humans require, and that users begin to find it difficult to distinguish between the meaningful relationships which we foster in the real world and the numerous casual relationships that are formed through social media. Cyberstalking: According to Prof. Jordana N. Navarro et al., cyberstalking is a behavior that includes, but is not limited to, the use of the internet or other technology to stalk or harass an individual over time and in a menacing fashion. Cyberstalking has been on the rise since the 1990s. These cryptic behaviors are also noticeable. Many cyberstalking cases involve people who do not know each other. Cyberstalkers are not limited by geographical boundaries, and research has suggested various impulses behind cyberstalking aside from exerting power and control over the target. Internet addiction and cyberstalking share several key traits that should lend support to new investigations to further scrutinize the relationship between the two disorders. Studies have shown that cyberstalkers can have different motives, but these results are not necessarily indicative of mental health issues. A cyberstalker is usually an emotionally damaged individual, a loner who seeks attention, gratification, and connection and in the process becomes infatuated with someone (Navarro et al. 2015). Diagnostic Test: Many studies and surveys are being conducted to measure the extent of this type of addiction. Kimberly Young has created a questionnaire based on other disorders to assess the level of addiction. It is called the Internet Addiction Diagnostic Questionnaire, or IADQ. The questionnaire asks users about their online usage habits as well as their feelings about their internet usage. According to the IADQ, Internet addiction resembles a gambling disorder. Answering positively to five out of the eight questions on the IADQ may be indicative of online addiction. According to the article "Validating the Distinction between Computer Addiction and Engagement: Online Game Playing and Personality", the authors introduced a test to help identify the differences between addiction and engagement. Based on similar ideas, here are some ways to distinguish between computer engagement and addiction. Origin of the term and history: Observations about the addictiveness of computers, and more specifically, computer games date back at least to the mid-1970s. Addiction and addictive behavior were common among the users of the PLATO system at the University of Illinois.
British e-learning academic Nicholas Rushby suggested in his 1979 book, An Introduction to Educational Computing, that people can be addicted to computers and experience withdrawal symptoms. The term was also used by M. Shotton in 1989 in her book Computer Addiction. However, Shotton concludes that the 'addicts' are not truly addicted. Dependency on computers, she argues, is better understood as a challenging and exciting pastime that can also lead to a professional career in the field. Computers do not turn gregarious, extroverted people into recluses; instead, they offer introverts a source of inspiration, excitement, and intellectual stimulation. Shotton's work seriously questions the legitimacy of the claim that computers cause addiction. Origin of the term and history: The term became more widespread with the explosive growth of the Internet, as well as the availability of the personal computer. Computers and the Internet both started to take shape as a personal and comfortable medium that could be used by anyone who wanted to make use of it. With that explosive growth of individuals making use of PCs and the Internet, the question started to arise whether or not misuse or excessive use of these new technologies could be possible as well. It was hypothesized that, like any technology aimed specifically at human consumption and use, abuse could have severe consequences for the individual in the short term and for society in the long term. In the late nineties, people who made use of PCs and the internet were already referred to as webaholics or cyberholics. Pratarelli et al. had already suggested at that point labelling "a cluster of behaviors potentially causing problems" as a computer or Internet addiction. There are other examples of computer overuse that date back to the earliest computer games. Press reports have furthermore noted that some Finnish Defence Forces conscripts were not mature enough to meet the demands of military life and were required to interrupt or postpone military service for a year. One reported source of the lack of needed social skills is an overuse of computer games or the Internet. Forbes termed this overuse "Web fixations", and stated that they were responsible for 12 such interruptions or deferrals over the 5 years from 2000 to 2005.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tunisian units of measurement** Tunisian units of measurement: A number of different units of measurement were used in Tunisia to measure length, mass, capacity, etc. In Tunisia, the metric system has been compulsory since 1895. System before the metric system: A number of units were used. Some of these units were still in use as late as the 1920s. System before the metric system: Length Several units were used. The unit, pic, was used depending on the measuring object. Some of the units are given below: 1 pic Arabic = 0.488 m 1 pic Turc = 0.637 m 1 pic endazé = 0.673 m. Dra's hendaseh, for woolen goods, was equal to 26.49 in. Pik for linen was equal to 18.62 in, and pik for silk was equal to 24.83 in. System before the metric system: Mass A number of different units were used. One uckir was equal to 31.495 g. Some other units are given below: 1 rottolo attari = 16 uckir 1 rottolo sucki (for meat, etc.) = 18 uckir 1 rottolo khaddari (for vegetables) = 20 uckir 1 cantaro = 100 rottolo (1 cantaro (attari) = 100 rottolo attari = 1,600 uckir; 1 cantaro (sucki) = 100 rottolo sucki = 1,800 uckir; 1 cantaro (khaddari) = 100 rottolo khaddari = 2,000 uckir). According to some sources, one rottolo (rotl) was equal to 1.1175 lb, rotl sucky for meat, etc. was equal to 1.2582 lb, and one rotl ghredari for vegetables was equal to 1.4098 lb. One metical for gold and silver was equal to 59.7 grains. System before the metric system: Capacity Several units were used. Liquid: One metter (mitre) was equal to 2.6417 gallons. Dry: One cafisso (cafiz) was equal to 496 L, and one millerole (Marseilles) was approximately equal to 64 L. Some other units are given below: 1 saah = 1⁄128 cafisso 1 whiba = 1⁄16 cafisso.
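As a worked illustration of the conversion factors quoted above (and only those; historical values vary between sources and eras), a small sketch in Python might arrange them into a lookup table. The unit names and the helper function below are illustrative choices, not established terminology.

```python
# Conversion factors exactly as quoted in this article; historical values vary by source.
LENGTH_M = {"pic arabic": 0.488, "pic turc": 0.637, "pic endaze": 0.673}   # metres
CAPACITY_L = {"cafisso": 496.0}                                            # litres
CAPACITY_L["whiba"] = CAPACITY_L["cafisso"] / 16    # 1 whiba = 1/16 cafisso
CAPACITY_L["saah"] = CAPACITY_L["cafisso"] / 128    # 1 saah = 1/128 cafisso

def to_metric(value, unit):
    """Convert a quantity in one of the old Tunisian units above to metres or litres."""
    if unit in LENGTH_M:
        return value * LENGTH_M[unit], "m"
    if unit in CAPACITY_L:
        return value * CAPACITY_L[unit], "L"
    raise ValueError(f"unknown unit: {unit}")

print(to_metric(3, "pic arabic"))  # (1.464, 'm')
print(to_metric(2, "whiba"))       # (62.0, 'L')
```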
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anita Layton** Anita Layton: Anita T. Layton is an applied mathematician who applies methods from computational mathematics and partial differential equations to model kidney function. She presently holds a Canada 150 Research Chair in Mathematical Biology and Medicine at the University of Waterloo. She is also a professor in the university's Department of Applied Mathematics. She joined the Waterloo faculty in 2018. Previously, she was the Robert R. & Katherine B. Penn Professor of Mathematics at Duke University, where she also held appointments in the department of biomedical engineering and the department of medicine. Early life and education: Layton was born in Hong Kong, where her father was a secondary school mathematics teacher. She did her undergraduate studies at Duke, entering with the plan of studying physics but eventually switching to computer science and graduating in 1994. She went to the University of Toronto for her graduate studies, and completed a Ph.D. there in 2001. Her dissertation, High-Order Spatial Discretization Methods for the Shallow Water Equations, concerned numerical weather prediction, and was jointly supervised by Kenneth R. Jackson and Christina C. Christara. Research: Layton's main research interest is the application of mathematics to biological systems. She works with physiologists and clinicians to formulate detailed computational models of kidney function, which she uses to understand the impacts of diabetes and hypertension on kidney function, and the effectiveness of novel therapeutic treatments. With Aurélie Edwards, Layton is the author of Mathematical Modeling in Renal Physiology (Springer, Lecture Notes on Mathematical Modelling in the Life Sciences, 2014). Recognition: In 2018, Layton was awarded the Canada 150 Research Chair, and then joined the University of Waterloo, Department of Applied Mathematics. Layton is the 2021 winner of the Krieger–Nelson Prize of the Canadian Mathematical Society, a 2021 winner of the Top 100 Most Powerful Women in Canada by the Women’s Executive Network, and a 2022 Fellow of the Association for Women in Mathematics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Safari Jack** Safari Jack: Safari Jack is a 1998 board game published by Cheapass Games. Gameplay: Safari Jack is a game in which the players use cards to create a safari landscape and then hunt the animals that live there. Reception: The online second version of Pyramid reviewed Safari Jack and commented that "It's a clever, simple design with lots of replay value, lots of animals with silly names, and enough strategy to keep most gamers interested until they break out, well, probably some other Cheapass game."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Antigen presentation** Antigen presentation: Antigen presentation is a vital immune process that is essential for T cell immune response triggering. Because T cells recognize only fragmented antigens displayed on cell surfaces, antigen processing must occur before the antigen fragment, now bound to the major histocompatibility complex (MHC), is transported to the surface of the cell, a process known as presentation, where it can be recognized by a T-cell receptor. If there has been an infection with viruses or bacteria, the cell will present an endogenous or exogenous peptide fragment derived from the antigen by MHC molecules. There are two types of MHC molecules which differ in the source of the antigens they present: MHC class I molecules (MHC-I) bind peptides from the cell cytosol, while peptides generated in the endocytic vesicles after internalisation are bound to MHC class II (MHC-II). Cellular membranes separate these two cellular environments - intracellular and extracellular. Each T cell can only recognize tens to hundreds of copies of a unique sequence of a single peptide among thousands of other peptides presented on the same cell, because an MHC molecule in one cell can bind to quite a large range of peptides. Predicting which (fragments of) antigens will be presented to the immune system by a certain MHC/HLA type is difficult, but the technology involved is improving. Presentation of intracellular antigens: Class I: Cytotoxic T cells (also known as Tc, killer T cells, or cytotoxic T-lymphocytes (CTLs)) express CD8 co-receptors and are a population of T cells that are specialized for inducing programmed cell death of other cells. Cytotoxic T cells regularly patrol all body cells to maintain organismal homeostasis. Whenever they encounter signs of disease, caused for example by the presence of viruses or intracellular bacteria or a transformed tumor cell, they initiate processes to destroy the potentially harmful cell. All nucleated cells in the body (along with platelets) display class I major histocompatibility complex (MHC-I) molecules. Antigens generated endogenously within these cells are bound to MHC-I molecules and presented on the cell surface. This antigen presentation pathway enables the immune system to detect transformed or infected cells displaying peptides from modified-self (mutated) or foreign proteins. In the presentation process, these proteins are mainly degraded into small peptides by cytosolic proteases in the proteasome, but there are also other cytoplasmic proteolytic pathways. Then, peptides are distributed to the endoplasmic reticulum (ER) via the action of heat shock proteins and the transporter associated with antigen processing (TAP), which translocates the cytosolic peptides into the ER lumen in an ATP-dependent transport mechanism. There are several ER chaperones involved in MHC-I assembly, such as calnexin, calreticulin, ERp57, protein disulfide isomerase (PDI), and tapasin. Specifically, the complex of TAP, tapasin, MHC class I, ERp57, and calreticulin is called the peptide-loading complex (PLC). Peptides are loaded into the MHC-I peptide-binding groove between two alpha helices at the bottom of the α1 and α2 domains of the MHC class I molecule. After release from tapasin, peptide-MHC-I complexes (pMHC-I) exit the ER and are transported to the cell surface by exocytic vesicles. Naïve anti-viral T cells (CD8+) cannot directly eliminate transformed or infected cells. They have to be activated by the pMHC-I complexes of antigen-presenting cells (APCs).
Here, antigen can be presented directly (as described above) or indirectly (cross-presentation) from virus-infected and non-infected cells. After the interaction between pMHC-I and TCR, in presence of co-stimulatory signals and/or cytokines, T cells are activated, migrate to the peripheral tissues and kill the target cells (infected or damaged cells) by inducing cytotoxicity.Cross-presentation is a special case in which MHC-I molecules are able to present extracellular antigens, usually displayed only by MHC-II molecules. This ability appears in several APCs, mainly plasmacytoid dendritic cells in tissues that stimulate CD8+ T cells directly. This process is essential when APCs are not directly infected, triggering local antiviral and anti-tumor immune responses immediately without trafficking the APCs in the local lymph nodes. Presentation of extracellular antigens: Class II: Antigens from the extracellular space and sometimes also endogenous ones, are enclosed into endocytic vesicles and presented on the cell surface by MHC-II molecules to the helper T cells expressing CD4 molecule. Only APCs such as dendritic cells, B cells or macrophages express MHC-II molecules on their surface in substantial quantity, so expression of MHC-II molecules is more cell-specific than MHC-I.APCs usually internalise exogenous antigens by endocytosis, but also by pinocytosis, macroautophagy, endosomal microautophagy or chaperone-mediated autophagy. In the first case, after internalisation, the antigens are enclosed in vesicles called endosomes. There are three compartments involved in this antigen presentation pathway: early endosomes, late endosomes or endolysosomes and lysosomes, where antigens are hydrolized by lysosome-associated enzymes (acid-dependent hydrolases, glycosidases, proteases, lipases). This process is favored by gradual reduction of the pH. The main proteases in endosomes are cathepsins and the result is the degradation of the antigens into oligopeptides.MHC-II molecules are transported from the ER to the MHC class II loading compartment together with the protein invariant chain (Ii, CD74). A non classical MHC-II molecule (HLA-DO and HLA-DM) catalyses the exchange of part of the CD74 (CLIP peptide) with the peptide antigen. Peptide-MHC-II complexes (pMHC-II) are transported to the plasma membrane and the processed antigen is presented to the helper T cells in the lymph nodes.APCs undergo a process of maturation while migrating, via chemotactic signals, to lymphoid tissues, in which they lose the phagocytic capacity and develop an increased ability to communicate with T-cells by antigen-presentation. As well as in CD8+ cytotoxic T cells, APCs need pMHC-II and additional costimulatory signals to fully activate naive T helper cells. Presentation of extracellular antigens: Class II: Alternative pathway of endogenous antigen processing and presentation over MHC-II molecules exists in medullary thymic epithelial cells (mTEC) via the process of autophagy. It is important for the process of central tolerance of T cells in particular the negative selection of autoreactive clones. Random gene expression of the whole genome is achieved via the action of AIRE and a self-digestion of the expressed molecules presented on both MHC-I and MHC-II molecules. 
Presentation of native intact antigens to B cells: B-cell receptors on the surface of B cells bind to intact native and undigested antigens of a structural nature, rather than to a linear sequence of a peptide which has been digested into small fragments and presented by MHC molecules. Large complexes of intact antigen are presented in lymph nodes to B cells by follicular dendritic cells in the form of immune complexes. Some APCs expressing comparatively lower levels of lysosomal enzymes are thus less likely to digest the antigen they have captured before presenting it to B cells.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Semantic desktop** Semantic desktop: In computer science, the semantic desktop is a collective term for ideas related to changing a computer's user interface and data handling capabilities so that data are more easily shared between different applications or tasks and so that data that once could not be automatically processed by a computer could be. It also encompasses some ideas about being able to share information automatically between different people. This concept is very much related to the Semantic Web, but is distinct insofar as its main concern is the personal use of information. General description: The vision of the semantic desktop can be considered as a response to the perceived problems of existing user interfaces. Without good metadata, computers cannot easily learn many commonly needed attributes about files. For example, suppose one downloads a document by a particular author on a particular subject - though the document will likely clearly indicate its subject, author, source and possibly copyright information, there may be no easy way for the computer to obtain this information and process it across applications like file managers, desktop search engines, and other services. This means the computer cannot search, filter or otherwise act upon the information as effectively as it otherwise could. This is very much the problem that the Semantic Web is concerned with. General description: Secondly, there is the problem of relating different files with each other. For example, on operating systems such as Unix, e-mails are stored separately from files. Neither has anything to do with tasks, notes or planned activities that may be stored in a calendar program. Contacts might be stored in another program. However, all these forms of information might simultaneously be relevant and necessary for a particular task. General description: Related to this, a user will often access a lot of data from the Internet which are segregated from the data stored locally on the computer and accessed through a browser or other program. As well as accessing data, a user has to share data, often through e-mail or separate file transfer programs. The semantic desktop is an attempt to solve some or all of these problems by extending the operating system's capabilities to handle all data using Semantic Web technologies. Based on this data integration, improved user interfaces (or plugins to existing applications) can give the user an integrated view on stored knowledge. Some operating systems such as BeOS have database filesystems which store metadata about a document natively in the filesystem, which is a move towards a more semantic desktop. General description: A definition of Semantic Desktop was given (Sauermann et al. 2005): A Semantic Desktop is a device in which an individual stores all her digital information like documents, multimedia and messages. These are interpreted as Semantic Web resources, each identified by a Uniform Resource Identifier (URI), and all data is accessible and queryable as a Resource Description Framework (RDF) graph. Resources from the web can be stored and authored content can be shared with others. Ontologies allow the user to express personal mental models and form the semantic glue interconnecting information and systems. Applications respect this and store, read and communicate via ontologies and Semantic Web protocols. The Semantic Desktop is an enlarged supplement to the user’s memory.
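To make the Sauermann et al. definition concrete, the following minimal sketch uses the Python rdflib library (an illustrative choice; the file path, names, and property vocabulary are assumptions of this example, not something prescribed by the projects discussed below). A local document and its author are described as RDF resources identified by URIs, which any cooperating application could then query:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, FOAF, RDF

# Hypothetical local resources, each identified by a URI as in the definition above.
doc = URIRef("file:///home/alice/reports/quarterly.pdf")
author = URIRef("mailto:bob@example.org")

g = Graph()
g.add((doc, DC.title, Literal("Quarterly report")))
g.add((doc, DC.creator, author))
g.add((doc, DC.subject, Literal("sales")))
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Bob Example")))

# Any application sharing the graph can now ask, e.g., "which documents did Bob write?"
query = """
SELECT ?d WHERE {
  ?d <http://purl.org/dc/elements/1.1/creator> ?p .
  ?p <http://xmlns.com/foaf/0.1/name> "Bob Example" .
}"""
for row in g.query(query):
    print(row.d)

print(g.serialize(format="turtle"))  # the same metadata as an exchangeable RDF document
```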
Different interpretations of the semantic desktop: There are various interpretations of the semantic desktop. At its most limited state it might be interpreted as adding mechanisms for relating machine readable metadata to files. In a more extreme way it could be viewed as a complete replacement to existing user interfaces, which unifies all forms of data and provides a consistent single interface. There are many degrees between these two depending on which of the above problems are being dealt with. Standardization effort: To foster interoperability between different implementations and publish standards, the community around the Nepomuk project founded the OSCA Foundation (OSCAF) in 2008. Since June 2009, the developers from the Nepomuk-KDE communities and Xesam collaborate with OSCAF to help standardizing the data formats for KDE, GNOME and freedesktop.org. The Nepomuk/OSCAF standards are taken up by these projects and Nokia's Maemo Platform. Relationship with the Semantic Web: The Semantic Web is mainly concerned with making machine readable metadata to enable computers to process shared information, and the creation of formats and standards related to this. As such the aims of allowing more of a user's data to be processed by a computer and allowing data to more easily be shared could be considered as a subset of those of the Semantic Web, but extended to a user's local computer, rather than just files stored on the Internet. Relationship with the Semantic Web: However the aims of creating a unified interface and allowing data to be accessed in a format independent way are not really the concerns of the Semantic Web. In practice most projects related to the semantic desktop make use of Semantic Web protocols for storing their data. In particular RDF's concepts are used, and the format itself is used.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fineness** Fineness: The fineness of a precious metal object (coin, bar, jewelry, etc.) represents the weight of fine metal therein, in proportion to the total weight which includes alloying base metals and any impurities. Alloy metals are added to increase hardness and durability of coins and jewelry, alter colors, decrease the cost per weight, or avoid the cost of high-purity refinement. For example, copper is added to the precious metal silver to make a more durable alloy for use in coins, housewares and jewelry. Coin silver, which was used for making silver coins in the past, contains 90% silver and 10% copper, by mass. Sterling silver contains 92.5% silver and 7.5% of other metals, usually copper, by mass. Fineness: Various ways of expressing fineness have been used and two remain in common use: millesimal fineness expressed in units of parts per 1,000 and karats or carats used only for gold. Karats measure the parts per 24, so that 18 karat = 18⁄24 = 75% and 24 karat gold is considered 100% gold. Millesimal fineness: Millesimal fineness is a system of denoting the purity of platinum, gold and silver alloys by parts per thousand of pure metal by mass in the alloy. For example, an alloy containing 75% gold is denoted as "750". Many European countries use decimal hallmark stamps (i.e., "585", "750", etc.) rather than "14 k", "18 k", etc., which is used in the United Kingdom and United States. Millesimal fineness: It is an extension of the older karat system of denoting the purity of gold by fractions of 24, such as "18 karat" for an alloy with 75% (18 parts per 24) pure gold by mass. The millesimal fineness is usually rounded to a three figure number, particularly where used as a hallmark, and the fineness may vary slightly from the traditional versions of purity. Here are the most common millesimal finenesses used for precious metals and the most common terms associated with them. Platinum 999.5: what most dealers would buy as if 100% pure; the most common purity for platinum bullion coins and bars 999—three nines fine 950: the most common purity for platinum jewelry 900—one nine fine 850 750 Gold 999.999—six nines fine: The purest gold ever produced. Refined by the Perth Mint in 1957. 999.99—five nines fine: The purest type of gold currently produced; the Royal Canadian Mint regularly produces commemorative coins in this fineness, including the world's largest, at 100 kg. Millesimal fineness: 999.9—four nines fine: Most popular. E.g. ordinary Canadian Gold Maple Leaf and American Buffalo coins 999—24 karat, also occasionally known as three nines fine: e.g., Chinese Gold Panda coins 995: The minimum allowed in Good Delivery gold bars 990—two nines fine 986—Ducat fineness: Formerly used by Venetian and Holy Roman Empire mints; still in use in Austria and Hungary 958.3—23 karat 916—22 karat: Crown gold. Historically the most widely used fineness for gold bullion coins, such as the oldest American Eagle denominations from 1795–1833. Currently used for British Sovereigns, South African Krugerrands, and the modern (1986—present) American Gold Eagles. Millesimal fineness: 900—one nine fine: American Eagle denominations for 1837–1933; currently used in Latin Monetary Union mintage (e.g. 
French and Swiss "Napoleon coin" 20 francs) 899—American Eagles briefly for 1834—1836 834—20 karat 750—18 karat: In Spain oro de primera ley (first law gold) 625—15 karat 585—14 karat 583.3—14 karat: In Spain oro de segunda ley (second law gold) 500—12 karat 417—10 karat: Lowest legal solid gold karat made in the US 375—9 karat 333—8 karat: Minimum standard for gold in Germany after 1884 Silver 999.99—five nines fine: The purest silver ever produced. This was achieved by the Royal Silver Company of Bolivia. Millesimal fineness: 999.9—four nines fine: ultra-fine silver used by the Royal Canadian Mint for their Silver Maple Leaf and other silver coins 999—fine silver or three nines fine: used in Good Delivery bullion bars and most current silver bullion coins. Used in U.S. silver commemorative coins and silver proof coins starting in 2019. Millesimal fineness: 980: common standard used in Mexico ca. 1930–1945 958: (23⁄24) Britannia silver 950: French 1st Standard 947.9: 91 zolotnik Russian silver 935: Swiss standard for watchcases after 1887, to meet the British Merchandise Marks Act and to be of equal grade to 925 sterling. Sometimes claimed to have arisen as a Swiss misunderstanding of the standard required for British sterling. Usually marked with three Swiss bears. Millesimal fineness: 935: used in the Art Deco period in Austria and Germany. Scandinavian silver jewellers used 935 silver after the 2nd World War 925: (37⁄40) Sterling silver The UK has used this alloy from the early 12th century. Equivalent to plata de primera ley in Spain (first law silver) 917: a standard used for the minting of Indian silver (rupees), during the British raj and for some coins during the first Brazilian Republic. Millesimal fineness: 916: 88 zolotnik Russian silver 900: one nine fine, coin-silver , or 90% silver: e.g. Flowing Hair and 1837–1964 U.S. silver coins. Also used in U.S. silver commemorative coins and silver proof coins 1982–2018. 892.4: US coinage 1485⁄1664 fine "standard silver" as defined by the Coinage Act of 1792: e.g. Draped Bust and Capped Bust U.S. silver coins (1795–1836) 875: 84 zolotnik is the most common fineness for Russian silver. Swiss standard, commonly used for export watchcases (also 800 and later 935). Millesimal fineness: 835: A standard predominantly used in Germany after 1884, and for some Dutch silver; and for the minting of coins in countries of the Latin Monetary Union 833: (5⁄6) a common standard for continental silver especially among the Dutch, Swedish, and Germans 830: A common standard used in older Scandinavian silver 800: The minimum standard for silver in Germany after 1884; the French 2nd standard for silver; "plata de segunda ley" in Spain (second law silver); Egyptian silver; Canadian silver circulating coinage from 1920-1966/7 750: An uncommon silver standard found in older German, Swiss and Austro-Hungarian silver 720: Decoplata :many Mexican and Dutch silver coins use this standard, as well as some coins from Portugal's former colonies, Japan, Uruguay, Ecuador, Egypt, and Morocco. Millesimal fineness: 600: Used in some examples of postwar Japanese coins, such as the 1957-1966 100 yen coin 500: Standard used for making British coinage 1920–1946 as well as Canadian coins from 1967-1968, and some coins from Colombia and Brazil. 400: Standard used for US half dollars between 1965 and 1970, and commemorative issue Eisenhower dollars between 1971 and 1978. Also used in some Swedish Krona coins. 
350: Standard used for US Jefferson "war nickels" minted between 1942 and 1945. Karat: The karat (US spelling, symbol k or Kt) or carat (UK spelling, symbol c or Ct) is a fractional measure of purity for gold alloys, in parts fine per 24 parts whole. The karat system is a standard adopted by US federal law. Karat: Mass: K = 24 × (Mg / Mm), where K is the karat rating of the material, Mg is the mass of pure gold in the alloy, and Mm is the total mass of the material. 24-karat gold is pure (while 100% purity is unattainable, this designation is permitted in commerce for 99.95% purity), 18-karat gold is 18 parts gold, 6 parts another metal (forming an alloy with 75% gold), 12-karat gold is 12 parts gold (12 parts another metal), and so forth. In England, the carat was divisible into four grains, and the grain was divisible into four quarts. For example, a gold alloy of 127⁄128 fineness (that is, 99.2% purity) could have been described as being 23-karat, 3-grain, 1-quart gold. Karat: The karat fractional system is increasingly being complemented or superseded by the millesimal system, described above. Karat: Conversion between percentage of pure gold and karats: 58.33–62.50% = 14 k (acclaimed 58.33%) 75.00–79.16% = 18 k (acclaimed 75.00%) 91.66–95.83% = 22 k (acclaimed 91.66%) 95.83–99.95% = 23 k (acclaimed 95.83%) 99.95–100% = 24 k (acclaimed 99.95%) Volume: However, this system of calculation gives only the mass of pure gold contained in an alloy. The term 18-karat gold means that the alloy's mass consists of 75% of gold and 25% of other metals. The quantity of gold by volume in a less-than-24-karat gold alloy differs according to the alloys used. For example, knowing that standard 18-karat yellow gold consists of 75% gold, 12.5% silver and the remaining 12.5% of copper (all by mass), the volume of pure gold in this alloy will be 60% since gold is much denser than the other metals used: 19.32 g/cm3 for gold, 10.49 g/cm3 for silver and 8.96 g/cm3 for copper. Karat: This formula gives the amount of gold in cubic centimeters or in milliliters in an alloy: VAu = Ma × (kt / 24) / 19.32, where VAu is the volume of gold in cubic centimeters or in milliliters, Ma is the total mass of the alloy in grams, and kt is the karat purity of the alloy. To have the percentage of the volume of gold in an alloy, divide the volume of gold in cubic centimeters or in milliliters by the total volume of the alloy in cubic centimeters or in milliliters. Karat: For 10-carat gold, the gold volume in the alloy represents about 26% of the total volume for standard yellow gold. Karat: Etymology Karat is a variant of carat. First attested in English in the mid-15th century, the word carat came from Middle French carat, in turn derived either from Italian carato or Medieval Latin carratus. These were borrowed into Medieval Europe from the Arabic qīrāṭ meaning "fruit of the carob tree", also "weight of 5 grains", (قيراط) and was a unit of mass though it was probably not used to measure gold in classical times. The Arabic term ultimately originates from the Greek kerátion (κεράτιον) meaning carob seed (literally "small horn") (diminutive of κέρας – kéras, "horn"). Karat: In 309 CE, Roman Emperor Constantine I began to mint a new gold coin solidus that was 1⁄72 of a libra (Roman pound) of gold equal to a mass of 24 siliquae, where each siliqua (or carat) was 1⁄1728 of a libra. This is believed to be the origin of the value of the karat.
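The two karat formulas above are easy to check numerically. The following sketch (plain Python, using only the mass fractions and densities quoted in this section) converts karats to millesimal fineness and reproduces the roughly 60% gold-by-volume figure for standard 18-karat yellow gold; the function names are, of course, just illustrative.

```python
GOLD_DENSITY, SILVER_DENSITY, COPPER_DENSITY = 19.32, 10.49, 8.96  # g/cm^3, as quoted above

def karat_to_millesimal(kt):
    # 24 parts = pure metal, so 18 kt -> 750, 14 kt -> 583, etc.
    return round(kt / 24 * 1000)

def gold_volume_fraction(kt, non_gold_split):
    """Volume share of gold in 1 g of alloy of purity `kt` karats.
    `non_gold_split` maps density (g/cm^3) to the share of the non-gold mass."""
    m_gold = kt / 24
    v_gold = m_gold / GOLD_DENSITY        # VAu = Ma * (kt/24) / 19.32 with Ma = 1 g
    v_rest = sum((1 - m_gold) * share / rho for rho, share in non_gold_split.items())
    return v_gold / (v_gold + v_rest)

# Standard 18 kt yellow gold from the text: 75% Au, 12.5% Ag, 12.5% Cu by mass,
# i.e. the non-gold 25% is split equally between silver and copper.
yellow = {SILVER_DENSITY: 0.5, COPPER_DENSITY: 0.5}
print(karat_to_millesimal(18))                      # 750
print(round(gold_volume_fraction(18, yellow), 2))   # 0.6 -> the ~60% by volume noted above
```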
Verifying fineness: While there are many methods of detecting fake precious metals, there are realistically only two options available for verifying the marked fineness of metal as being reasonably accurate: assaying the metal (which requires destroying it), or using X-ray fluorescence (XRF). XRF will measure only the outermost portion of the piece of metal and so may get misled by thick plating. Verifying fineness: That becomes a concern because it would be possible for an unscrupulous refiner to produce precious metal bars that are slightly less pure than marked on the bar. A refiner doing $1 billion of business each year that marked .980 pure bars as .999 fine would make about an extra $20 million in profit. In the United States, the actual purity of gold articles must be no more than .003 less than the marked purity (e.g. .996 fine for gold marked .999 fine), and the actual purity of silver articles must be no more than .004 less than the marked purity. Fine weight: A piece of alloy metal containing a precious metal may also have the weight of its precious component referred to as its "fine weight". For example, 1 troy ounce of 18 karat gold (which is 75% gold) may be said to have a fine weight of 0.75 troy ounces. Most modern government-issued bullion coins specify their fine weight. For example, the American Gold Eagle is embossed One Oz. Fine Gold and weighs 1.091 troy oz. Troy mass of silver content: Fineness of silver in Britain was traditionally expressed as the mass of silver, in troy ounces and pennyweights (1⁄20 troy ounce), in one troy pound (12 troy ounces) of the resulting alloy. Britannia silver has a fineness of 11 ounces, 10 pennyweights, or about (11 + 10⁄20) ⁄ 12 ≈ 95.833% silver, whereas sterling silver has a fineness of 11 ounces, 2 pennyweights, or exactly (11 + 2⁄20) ⁄ 12 = 92.5% silver.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crypt (C)** Crypt (C): crypt is a POSIX C library function. It is typically used to compute the hash of user account passwords. The function outputs a text string which also encodes the salt (usually the first two characters are the salt itself and the rest is the hashed result), and identifies the hash algorithm used (defaulting to the "traditional" one explained below). This output string forms a password record, which is usually stored in a text file. Crypt (C): More formally, crypt provides cryptographic key derivation functions for password validation and storage on Unix systems. Relationship to Unix crypt utility: There is an unrelated crypt utility in Unix, which is often confused with the C library function. To distinguish between the two, writers often refer to the utility program as crypt(1), because it is documented in section 1 of the Unix manual pages, and refer to the C library function as crypt(3), because its documentation is in manual section 3. Details: This same crypt function is used both to generate a new hash for storage and also to hash a proffered password with a recorded salt for comparison. Details: Modern Unix implementations of the crypt library routine support a variety of hash schemes. The particular hash algorithm used can be identified by a unique code prefix in the resulting hashtext, following a de facto standard called the Modular Crypt Format. The crypt() library function is also included in the Perl, PHP, Pike, Python (although it is now deprecated as of 3.11), and Ruby programming languages. Key derivation functions supported by crypt: Over time various algorithms have been introduced. To enable backward compatibility, each scheme started using some convention of serializing the password hashes that was later called the Modular Crypt Format (MCF). Old crypt(3) hashes generated before the de facto MCF standard may vary from scheme to scheme. A well-defined subset of the Modular Crypt Format was created during the Password Hashing Competition. The format is defined as: $<id>[$<param>=<value>(,<param>=<value>)*][$<salt>[$<hash>]], where id: an identifier representing the hashing algorithm (such as 1 for MD5, 5 for SHA-256 etc.); param name and its value: hash complexity parameters, like rounds/iterations count; salt: radix-64 encoded salt; hash: radix-64 encoded result of hashing the password and salt. The radix-64 encoding in crypt is called B64 and uses the alphabet ./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz, which is different from the more common RFC 4648 base64. The PHC subset covers a majority of MCF hashes. A number of extra application-defined methods exist. Key derivation functions supported by crypt: Original implementation using the password as a key: The original implementation of the crypt() library function in Third Edition Unix mimicked the M-209 cipher machine. Rather than encrypting the password with a key, which would have allowed the password to be recovered from the encrypted value and the key, it used the password itself as a key, and the password database contained the result of encrypting the password with this key. Key derivation functions supported by crypt: Traditional DES-based scheme: The original password encryption scheme was found to be too fast and thus subject to brute force enumeration of the most likely passwords. In Seventh Edition Unix, the scheme was changed to a modified form of the DES algorithm. A goal of this change was to make encryption slower.
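On systems whose libcrypt still implements this traditional scheme, it is selected simply by passing a bare two-character salt. A minimal sketch using Python's crypt wrapper, one of the language bindings mentioned above (deprecated since Python 3.11 and removed in 3.13; availability of the DES scheme is platform-dependent), might look as follows:

```python
import crypt  # POSIX-only wrapper around crypt(3); removed in Python 3.13

# A bare two-character salt selects the traditional DES-based scheme where supported.
# The result is 13 characters: the 2-character salt followed by 11 characters of hash.
stored = crypt.crypt("hunter2", "ab")
print(stored)

# Verification simply re-runs crypt() with the stored record as the salt argument:
# the embedded salt (and, for later schemes, the $-prefix) is reused automatically.
assert crypt.crypt("hunter2", stored) == stored
```

The same calling convention carries over to the later schemes described below, where the salt argument instead begins with a $-prefixed scheme identifier.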
In addition, the algorithm incorporated a 12-bit salt in order to ensure that an attacker would be forced to crack each password independently as opposed to being able to target the entire password database simultaneously. Key derivation functions supported by crypt: In detail, the user's password is truncated to eight characters, and those are coerced down to only 7-bits each; this forms the 56-bit DES key. That key is then used to encrypt an all-bits-zero block, and then the ciphertext is encrypted again with the same key, and so on for a total of 25 DES encryptions. A 12-bit salt is used to perturb the encryption algorithm, so standard DES implementations can't be used to implement crypt(). The salt and the final ciphertext are encoded into a printable string in a form of base64. Key derivation functions supported by crypt: This is technically not encryption since the data (all bits zero) is not being kept secret; it's widely known to all in advance. However, one of the properties of DES is that it's very resistant to key recovery even in the face of known plaintext situations. It is theoretically possible that two different passwords could result in exactly the same hash. Thus the password is never "decrypted": it is merely used to compute a result, and the matching results are presumed to be proof that the passwords were "the same." The advantages of this method have been that the hashtext can be stored and copied among Unix systems without exposing the corresponding plaintext password to the system administrators or other users. This portability has worked for over 30 years across many generations of computing architecture, and across many versions of Unix from many vendors. Key derivation functions supported by crypt: Weaknesses of the traditional scheme The traditional DES-based crypt algorithm was originally chosen because DES was resistant to key recovery even in the face of "known plaintext" attacks, and because it was computationally expensive. On the earliest Unix machines it took over a full second to compute a password hash. This also made it reasonably resistant to dictionary attacks in that era. At that time password hashes were commonly stored in an account file (/etc/passwd) which was readable to anyone on the system. (This account file was also used to map user ID numbers into names, and user names into full names, etc.). Key derivation functions supported by crypt: In the three decades since that time, computers have become vastly more powerful. Moore's Law has generally held true, so the computer speed and capacity available for a given financial investment has doubled over 20 times since Unix was first written. This has long since left the DES-based algorithm vulnerable to dictionary attacks, and Unix and Unix-like systems such as Linux have used "shadow" files for a long time, migrating just the password hash values out of the account file (/etc/passwd) and into a file (conventionally named /etc/shadow) which can only be read by privileged processes. Key derivation functions supported by crypt: To increase the computational cost of password breaking, some Unix sites privately started increasing the number of encryption rounds on an ad hoc basis. This had the side effect of making their crypt() incompatible with the standard crypt(): the hashes had the same textual form, but were now calculated using a different algorithm. Some sites also took advantage of this incompatibility effect, by modifying the initial block from the standard all-bits-zero. 
This did not increase the cost of hashing, but meant that precomputed hash dictionaries based on the standard crypt() could not be applied. Key derivation functions supported by crypt: BSDi extended DES-based scheme BSDi used a slight modification of the classic DES-based scheme. BSDi extended the salt to 24 bits and made the number of rounds variable (up to 224-1). The chosen number of rounds is encoded in the stored password hash, avoiding the incompatibility that occurred when sites modified the number of rounds used by the original scheme. These hashes are identified by starting with an underscore (_), which is followed by 4 characters representing the number of rounds then 4 characters for the salt. Key derivation functions supported by crypt: The BSDi algorithm also supports longer passwords, using DES to fold the initial long password down to the eight 7-bit bytes supported by the original algorithm. Key derivation functions supported by crypt: MD5-based scheme Poul-Henning Kamp designed a baroque and (at the time) computationally expensive algorithm based on the MD5 message digest algorithm. MD5 itself would provide good cryptographic strength for the password hash, but it is designed to be quite quick to calculate relative to the strength it provides. The crypt() scheme is designed to be expensive to calculate, to slow down dictionary attacks. The printable form of MD5 password hashes starts with $1$. Key derivation functions supported by crypt: This scheme allows users to have any length password, and they can use any characters supported by their platform (not just 7-bit ASCII). (In practice many implementations limit the password length, but they generally support passwords far longer than any person would be willing to type.) The salt is also an arbitrary string, limited only by character set considerations. Key derivation functions supported by crypt: First the passphrase and salt are hashed together, yielding an MD5 message digest. Then a new digest is constructed, hashing together the passphrase, the salt, and the first digest, all in a rather complex form. Then this digest is passed through a thousand iterations of a function which rehashes it together with the passphrase and salt in a manner that varies between rounds. The output of the last of these rounds is the resulting passphrase hash. Key derivation functions supported by crypt: The fixed iteration count has caused this scheme to lose the computational expense that it once enjoyed and variable numbers of rounds are now favoured. In June 2012, Poul-Henning Kamp declared the algorithm insecure and encouraged users to migrate to stronger password scramblers. Blowfish-based scheme Niels Provos and David Mazières designed a crypt() scheme called bcrypt based on Blowfish, and presented it at USENIX in 1999. The printable form of these hashes starts with $2$, $2a$, $2b$, $2x$ or $2y$ depending on which variant of the algorithm is used: $2$ – Obsolete. Key derivation functions supported by crypt: $2a$ – The current key used to identify this scheme. Since a major security flaw was discovered in 2011 in a non-OpenBSD crypt_blowfish implementation of the algorithm, hashes indicated by this string are now ambiguous and might have been generated by the flawed implementation, or a subsequent fixed, implementation. The flaw may be triggered by some password strings containing non-ASCII (8th-bit-set) characters. 
Key derivation functions supported by crypt: $2b$ – Used by recent OpenBSD implementations to include a mitigation to a wraparound problem. Previous versions of the algorithm have a problem with long passwords. By design, long passwords are truncated at 72 characters, but there is a byte integer wraparound problem with certain password lengths resulting in weak hashes. $2x$ – A flag added after the crypt_blowfish bug discovery. Old hashes can be renamed to be $2x$ to indicate that they were generated with the broken algorithm. These hashes are still weak, but at least it's clear which algorithm was used to generate them. Key derivation functions supported by crypt: $2y$ – A flag in crypt_blowfish to unambiguously use the new, corrected algorithm. On an older implementation suffering from the bug, $2y$ simply won't work. On a newer, fixed implementation, it will produce the same result as using $2b$. Blowfish is notable among block ciphers for its expensive key setup phase. It starts off with subkeys in a standard state, then uses this state to perform a block encryption using part of the key, and uses the result of that encryption (really, a hashing) to replace some of the subkeys. Then it uses this modified state to encrypt another part of the key, and uses the result to replace more of the subkeys. It proceeds in this fashion, using a progressively modified state to hash the key and replace bits of state, until all subkeys have been set. Key derivation functions supported by crypt: The number of rounds of keying is a power of two, which is an input to the algorithm. The number is encoded in the textual hash, e.g. $2y$10... NT hash scheme: FreeBSD implemented support for the NT LAN Manager hash algorithm to provide easier compatibility with NT accounts via MS-CHAP. The NT-Hash algorithm is known to be weak, as it uses the deprecated MD4 hash algorithm without any salting. FreeBSD used the $3$ prefix for this. Its use is not recommended, as it is easily broken. Key derivation functions supported by crypt: SHA2-based scheme: The commonly used MD5 based scheme has become easier to attack as computer power has increased. Although the Blowfish-based system has the option of adding rounds and thus remains a challenging password algorithm, it does not use a NIST-approved algorithm. In light of these facts, Ulrich Drepper of Red Hat led an effort to create a scheme based on the SHA-2 (SHA-256 and SHA-512) hash functions. The printable form of these hashes starts with $5$ (for SHA-256) or $6$ (for SHA-512) depending on which SHA variant is used. Its design is similar to the MD5-based crypt, with a few notable differences: It avoids adding constant data in a few steps.
$argon2d$, $argon2i$, $argon2ds$ These are PHC-assigned names for the Argon2 algorithm, but do not seem to be widely used. Additional formats, if any, are described in the man pages of implementations. Key derivation functions supported by crypt: Archaic Unix schemes BigCrypt is the modified version of DES-Crypt used on HP-UX, Digital Unix, and OSF/1. The main difference between it and DES is that BigCrypt uses all the characters of a password, not just the first 8, and has a variable length hash. Crypt16 is the minor modification of DES, which allows passwords of up to 16 characters. Used on Ultrix and Tru64. Support in operating systems: Linux: The GNU C Library used by almost all Linux distributions provides an implementation of the crypt function which supports the DES, MD5, and (since version 2.7) SHA-2 based hashing algorithms mentioned above. Ulrich Drepper, the glibc maintainer, rejected bcrypt (scheme 2) support since it isn't approved by NIST. A public domain crypt_blowfish library is available for systems without bcrypt. It has been integrated into glibc in SUSE Linux. In addition, the aforementioned libxcrypt is used to replace the glibc crypt() on yescrypt-enabled systems. The musl C library supports schemes 1, 2, 5, and 6, plus the traditional DES scheme. The traditional DES code is based on the BSD FreeSec, with modification to be compatible with the glibc UFC-Crypt. macOS: Darwin's native crypt() provides limited functionality, supporting only DES and BSDi. OS X uses a few systems for its own password hashes, ranging from the old NeXTStep netinfo to the newer directory services (ds) system.
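As an end-to-end sketch of the MCF-based schemes above (again relying on Python's crypt wrapper, and assuming a platform whose libcrypt provides the SHA-512 scheme; the field-splitting helper is an illustrative parser written for this example, not part of any standard library):

```python
import crypt
from hmac import compare_digest

# Generate an MCF-style SHA-512 record ($6$...); crypt.mksalt() has accepted a
# rounds parameter for the SHA schemes since Python 3.7.
salt = crypt.mksalt(crypt.METHOD_SHA512, rounds=10000)
stored = crypt.crypt("correct horse battery staple", salt)
print(stored)   # schematically "$6$rounds=10000$<salt>$<hash>"; actual values vary

def verify(password, stored_hash):
    # Re-hash with the stored record as the salt and compare in constant time.
    return compare_digest(crypt.crypt(password, stored_hash), stored_hash)

def split_mcf(record):
    """Rough split of an MCF record into scheme id, parameters, and salt/hash fields."""
    fields = record.lstrip("$").split("$")
    scheme, rest = fields[0], fields[1:]
    params = [f for f in rest if "=" in f]
    salt_and_hash = [f for f in rest if "=" not in f]
    return scheme, params, salt_and_hash

print(verify("correct horse battery staple", stored))  # True
print(split_mcf(stored))
```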
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Brinkley stick** Brinkley stick: A Brinkley stick is a safety device used to discharge high voltage capacitors and ensure HT (high voltage) electrical circuits are discharged. The tool consists of a hook attached to the end of an insulated rod. The hook is connected by a length of insulated wire to a suitable ground or earth, often via a suitably valued resistor. The device is named after Charles Brinkley, an amputee ferry boatman who carried radar staff across the river Deben. One of the trade test colour films, On the Safe Side, includes a fictionalised sequence in which the life of a technician is preserved by his decision to deploy a Brinkley stick.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paper-and-pencil game** Paper-and-pencil game: Paper-and-pencil games or paper-and-pen games (or some variation on those terms) are games that can be played solely with paper and pencils (or other writing implements), usually without erasing. They may be played to pass the time, as icebreakers, or for brain training. In recent times, they have been supplanted by mobile games. Some popular examples of pencil-and-paper games include Tic-tac-toe, Sprouts, Dots and Boxes, Hangman, MASH, Paper soccer, and Spellbinder. The term is unrelated to the use in role-playing games to differentiate tabletop games from role-playing video games. Paper-and-pencil game: Board games where pieces are never moved or removed from the board once being played, particularly abstract strategy games like Gomoku and Connect Four, can also be played as pencil-and-paper games.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fractional wavelet transform** Fractional wavelet transform: The fractional wavelet transform (FRWT) is a generalization of the classical wavelet transform (WT). This transform was proposed in order to rectify the limitations of the WT and the fractional Fourier transform (FRFT). The FRWT inherits the advantages of multiresolution analysis of the WT and has the capability of signal representations in the fractional domain, similar to the FRFT. Definition: The fractional Fourier transform (FRFT), a generalization of the Fourier transform (FT), serves as a useful and powerful analyzing tool in optics, communications, signal and image processing, etc. This transform, however, has one major drawback due to its use of a global kernel, i.e., the fractional Fourier representation only provides the FRFT spectral content with no indication about the time localization of the FRFT spectral components. Therefore, the analysis of non-stationary signals whose FRFT spectral characteristics change with time requires joint signal representations in both time and FRFT domains, rather than just an FRFT-domain representation. Definition: The first modification to the FRFT to allow analysis of the aforementioned non-stationary signals came as the short-time FRFT (STFRFT). The idea behind the STFRFT was segmenting the signal by using a time-localized window, and performing FRFT spectral analysis for each segment. Definition: Since the FRFT was computed for every windowed segment of the signal, the STFRFT was able to provide a true joint signal representation in both time and FRFT domains. However, the drawback is that the STFRFT has the limitation of a fixed window width which must be chosen a priori; this effectively means that it does not provide the requisite good resolution in both time and FRFT domains. In other words, the efficiency of the STFRFT techniques is limited by the fundamental uncertainty principle, which implies that narrow windows produce good time resolution but poor spectral resolution, whereas wide windows provide good spectral resolution but poor time resolution. Most of the signals of practical interest are such that they have high spectral components for short durations and low spectral components for long durations. Definition: As a generalization of the wavelet transform, Mendlovic and David first introduced the fractional wavelet transform (FRWT) as a way to deal with optical signals, which was defined as a cascade of the FRFT and the ordinary wavelet transform (WT), i.e., $W_{\alpha}(a,b)=\frac{1}{\sqrt{a}}\int_{\mathbb{R}}\int_{\mathbb{R}}K_{\alpha}(u,t)\,f(t)\,\psi^{*}\!\left(\frac{u-b}{a}\right)dt\,du=\frac{1}{\sqrt{a}}\int_{\mathbb{R}}\left(\int_{\mathbb{R}}f(t)K_{\alpha}(u,t)\,dt\right)\psi^{*}\!\left(\frac{u-b}{a}\right)du=\frac{1}{\sqrt{a}}\int_{\mathbb{R}}F_{\alpha}(u)\,\psi^{*}\!\left(\frac{u-b}{a}\right)du$, where the transform kernel $K_{\alpha}(u,t)$ is given by $K_{\alpha}(u,t)=\begin{cases}A_{\alpha}\exp\!\left(j\,\frac{t^{2}+u^{2}}{2}\cot\alpha-j\,ut\csc\alpha\right),&\alpha\neq k\pi\\ \delta(t-u),&\alpha=2k\pi\\ \delta(t+u),&\alpha=(2k-1)\pi\end{cases}$ with $A_{\alpha}=\sqrt{(1-j\cot\alpha)/(2\pi)}$, and $F_{\alpha}(u)$ denotes the FRFT of $f(t)$. But it could not be regarded as a kind of joint time-FRFT-domain representation since time information is lost in this transform. Moreover, Prasad and Mahato expressed the ordinary WT of a signal in terms of the FRFTs of the signal and mother wavelet, and also called the expression the FRWT. That is, they rewrote the WT as a single integral, over the FRFT-domain variable, of $\bar{F}(u\sin\alpha)\,\bar{\Psi}^{*}(au\sin\alpha)$ against the kernel $e^{-jbu}$, together with a $\csc\alpha$-dependent normalization and a chirp factor depending on $\sin 2\alpha$, where $\bar{F}(u\sin\alpha)$ and $\bar{\Psi}(u\sin\alpha)$ denote the FTs (with their arguments scaled by $\sin\alpha$) of $f(t)$ and $\psi(t)$, respectively. Clearly, this so-called FRWT is identical to the ordinary WT. Definition: Recently, Shi et al. proposed a new definition of the FRWT by introducing a new structure of the fractional convolution associated with the FRFT.
Specifically, the FRWT of any function $f(t)\in L^{2}(\mathbb{R})$ is defined as [8] $W_{f}^{\alpha}(a,b)=W_{\alpha}[f(t)](a,b)=\int_{\mathbb{R}}f(t)\,\psi_{\alpha,a,b}^{*}(t)\,dt$, where $\psi_{\alpha,a,b}(t)$ is a continuous affine transformation and chirp modulation of the mother wavelet $\psi(t)$, i.e., $\psi_{\alpha,a,b}(t)=\frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-b}{a}\right)e^{-j\frac{t^{2}-b^{2}}{2}\cot\alpha}$, in which $a\in\mathbb{R}^{+}$ and $b\in\mathbb{R}$ are scaling and translation parameters, respectively. Definition: Conversely, the inverse FRWT is given by $f(t)=\frac{1}{2\pi C_{\psi}}\int_{\mathbb{R}}\int_{\mathbb{R}^{+}}W_{f}^{\alpha}(a,b)\,\psi_{\alpha,a,b}(t)\,\frac{da}{a^{2}}\,db$, where $C_{\psi}$ is a constant that depends on the wavelet used. The success of the reconstruction depends on this constant, called the admissibility constant, satisfying the following admissibility condition: $C_{\psi}=\int_{\mathbb{R}}\frac{|\Psi(\Omega)|^{2}}{|\Omega|}\,d\Omega<\infty$, where $\Psi(\Omega)$ denotes the FT of $\psi(t)$. The admissibility condition implies that $\Psi(0)=0$, that is, $\int_{\mathbb{R}}\psi(t)\,dt=0$. Consequently, continuous fractional wavelets must oscillate and behave as bandpass filters in the fractional Fourier domain. From this viewpoint, the FRWT of $f(t)$ can be expressed in terms of the FRFT-domain representation as $W_{f}^{\alpha}(a,b)=\sqrt{a}\int_{\mathbb{R}}F_{\alpha}(u)\,\bar{\Psi}^{*}(au\csc\alpha)\,K_{\alpha}^{*}(u,b)\,du$, where $F_{\alpha}(u)$ indicates the FRFT of $f(t)$, and $\bar{\Psi}(au\csc\alpha)$ denotes the FT (with its argument scaled by $\csc\alpha$) of $\psi(t)$. Note that when $\alpha=\pi/2$, the FRWT reduces to the classical WT. For more details of this type of the FRWT, see [8]. Multiresolution Analysis (MRA) Associated with Fractional Wavelet Transform: A comprehensive overview of MRA and orthogonal fractional wavelets associated with the FRWT can be found in the paper.
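Because the Shi et al. definition is just an inner product against a chirp-modulated, dilated and translated mother wavelet, a single FRWT coefficient can be approximated numerically by direct quadrature. The sketch below (NumPy, with a complex Morlet mother wavelet chosen purely for illustration; the sampling grid and signal are likewise assumptions of this example) evaluates $W_f^{\alpha}(a,b)$ for a sampled signal; setting $\alpha=\pi/2$ makes the chirp factor vanish and recovers an ordinary WT coefficient.

```python
import numpy as np

def frwt_coefficient(f, t, alpha, a, b, wavelet):
    """W_f^alpha(a,b) = integral of f(t) * conj(psi_{alpha,a,b}(t)) dt, where
    psi_{alpha,a,b}(t) = a**-0.5 * psi((t-b)/a) * exp(-1j*(t**2-b**2)/2 * cot(alpha))."""
    cot_alpha = np.cos(alpha) / np.sin(alpha)
    psi = (a ** -0.5) * wavelet((t - b) / a) * np.exp(-1j * (t**2 - b**2) / 2 * cot_alpha)
    return np.trapz(f * np.conj(psi), t)   # trapezoidal quadrature over the sample grid

def morlet(x, w0=5.0):
    # Complex Morlet mother wavelet (an illustrative choice, not mandated by the theory).
    return np.pi ** -0.25 * np.exp(1j * w0 * x) * np.exp(-x**2 / 2)

t = np.linspace(-10, 10, 4001)
f = np.exp(1j * 0.5 * t**2)           # a linear chirp, the kind of signal FRFT tools target
for alpha in (np.pi / 2, np.pi / 3):  # pi/2 reduces the FRWT to the classical WT
    c = frwt_coefficient(f, t, alpha, a=1.0, b=0.0, wavelet=morlet)
    print(alpha, abs(c))
```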
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Submerged arc welding** Submerged arc welding: Submerged arc welding (SAW) is a common arc welding process. The first SAW patent was taken out in 1935. The process requires a continuously fed consumable solid or tubular (metal cored) electrode. The molten weld and the arc zone are protected from atmospheric contamination by being "submerged" under a blanket of granular fusible flux consisting of lime, silica, manganese oxide, calcium fluoride, and other compounds. When molten, the flux becomes conductive, and provides a current path between the electrode and the work. This thick layer of flux completely covers the molten metal, thus preventing spatter and sparks as well as suppressing the intense ultraviolet radiation and fumes that are a part of the shielded metal arc welding (SMAW) process. SAW is normally operated in the automatic or mechanized mode; however, semi-automatic (hand-held) SAW guns with pressurized or gravity flux feed delivery are available. The process is normally limited to the flat or horizontal-fillet welding positions (although horizontal groove position welds have been done with a special arrangement to support the flux). Deposition rates approaching 45 kg/h (100 lb/h) have been reported; this compares to a maximum of about 5 kg/h (10 lb/h) for shielded metal arc welding. Although currents ranging from 300 to 2000 A are commonly utilized, currents of up to 5000 A have also been used (multiple arcs). Submerged arc welding: Single or multiple (2 to 5) electrode wire variations of the process exist. SAW strip-cladding utilizes a flat strip electrode (e.g. 60 mm wide x 0.5 mm thick). DC or AC power can be used, and combinations of DC and AC are common on multiple electrode systems. Constant voltage welding power supplies are most commonly used; however, constant current systems in combination with a voltage sensing wire-feeder are available. Features: Welding head It feeds flux and filler metal to the welding joint. The electrode (filler metal) gets energized here. Flux hopper It stores the flux and controls the rate of flux deposition on the welding joint. Features: Flux The granulated flux shields and thus protects the molten weld from atmospheric contamination. The flux cleans the weld metal and can also modify its chemical composition. The flux is granulated to a definite size. It may be of fused, bonded or mechanically mixed type. The flux may consist of fluorides of calcium and oxides of calcium, magnesium, silicon, aluminium and manganese compounds. Alloying elements may be added as per requirements. Substances that evolve large amounts of gas during welding are never mixed with the flux. Fluxes with fine and coarse particle sizes are recommended for welding heavier and thinner sections, respectively. Features: Electrode SAW filler material is usually a standard solid wire, although other special forms are also used. This wire normally has a thickness of 1.6 mm to 6 mm (1/16 in. to 1/4 in.). In certain circumstances, twisted wire can be used to give the arc an oscillating movement. This helps fuse the toe of the weld to the base metal. Features: The electrode composition depends upon the material being welded. Alloying elements may be added in the electrodes. Electrodes are available to weld mild steels, high carbon steels, low and special alloy steels, stainless steels and some nonferrous alloys of copper and nickel. Electrodes are generally copper coated to prevent rusting and to increase their electrical conductivity. Electrodes are available in straight lengths and coils.
Their diameters may be 1.6, 2.0, 2.4, 3.2, 4.0, 4.8, and 6.4 mm. The approximate current ranges for welding with 1.6, 3.2 and 6.4 mm diameter electrodes are 150–350, 250–800 and 650–1350 A respectively. Welding Operation: The flux starts depositing on the joint to be welded. Since the flux is not electrically conductive when cold, the arc may be struck either by touching the electrode to the work piece, by placing steel wool between electrode and job before switching on the welding current, or by using a high frequency unit. In all cases the arc is struck under a cover of flux. Flux is otherwise an insulator, but once it melts due to the heat of the arc, it becomes highly conductive and hence the current flow is maintained between the electrode and the workpiece through the molten flux. The upper portion of the flux, which is in contact with the atmosphere and remains visible, stays granular (unchanged) and can be reused. The lower, melted flux becomes slag, which is waste material and must be removed after welding. Welding Operation: The electrode is continuously fed to the joint to be welded at a predetermined speed. In semi-automatic welding sets the welding head is moved manually along the joint. In automatic welding a separate drive moves either the welding head over the stationary job or the job moves/rotates under the stationary welding head. Welding Operation: The arc length is kept constant by using the principle of a self-adjusting arc. If the arc length decreases, the arc voltage decreases; the arc current, and therefore the burn-off rate, increases, causing the arc to lengthen back to normal. The reverse occurs if the arc length increases beyond normal. A backing plate of steel or copper may be used to control penetration and to support large amounts of molten metal associated with the process. Welding Operation: Key SAW process variables Wire feed speed (main factor in welding current control) Arc voltage Travel speed Electrode stick-out (ESO) or contact tip to work (CTTW) Polarity and current type (AC or DC) and variable balance AC current (a rough heat-input calculation combining these variables is sketched at the end of this article) Material applications: Carbon steels (structural and vessel construction) Low alloy steels Stainless steels Nickel-based alloys Surfacing applications (wear-facing, build-up, and corrosion resistant overlay of steels) Advantages: High deposition rates (over 45 kg/h (100 lb/h) have been reported). High operating factors in mechanized applications. Deep weld penetration. Sound welds are readily made (with good process design and control). High speed welding of thin sheet steels up to 5 m/min (16 ft/min) is possible. Minimal welding fume or arc light is emitted. Practically no edge preparation is necessary depending on joint configuration and required penetration. The process is suitable for both indoor and outdoor works. Welds produced are sound, uniform, ductile, corrosion resistant and have good impact value. Single pass welds can be made in thick plates with normal equipment. The arc is always covered under a blanket of flux, thus there is no chance of spatter of weld. 50% to 90% of the flux is recoverable, recycled and reused. Limitations: Limited to ferrous (steel or stainless steels) and some nickel-based alloys. Normally limited to the 1F, 1G, and 2F positions. Normally limited to long straight seams or rotated pipes or vessels. Requires relatively troublesome flux handling systems. Flux and slag residue can present a health and safety concern. Requires inter-pass and post weld slag removal. Requires backing strips for proper root penetration.
Limited to high thickness materials.
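As an illustration of how the key process variables listed above combine in practice, the following sketch computes the conventional arc-energy (heat-input) figure V × I × 60 / (1000 × travel speed). The example voltage, current, travel speed, and thermal-efficiency factor are illustrative assumptions, not values taken from this article.

```python
def saw_heat_input_kj_per_mm(arc_voltage_v, current_a, travel_speed_mm_per_min, efficiency=1.0):
    """Heat input in kJ/mm: (V * I * 60) / (1000 * travel speed in mm/min).

    The efficiency factor is an assumption; values near 1.0 are often quoted
    for SAW because the arc burns under the flux blanket.
    """
    return efficiency * (arc_voltage_v * current_a * 60.0) / (1000.0 * travel_speed_mm_per_min)

# Example (illustrative parameters): 30 V, 600 A, 500 mm/min travel speed.
print(round(saw_heat_input_kj_per_mm(30, 600, 500), 2), "kJ/mm")   # -> 2.16 kJ/mm
```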
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Toe board** Toe board: A roofing toe board is one of the most basic pieces of safety equipment a roofer can use. A toe board is a long piece of 2 inch x 4 inch (a 2x4) wood nailed horizontally along a roof in various places. Toe board: Most roofers work in a variety of weather conditions, sometimes severe heat, and resist wearing an apparatus such as a safety harness. As a result of needing both an unencumbered work environment and the need to stay as cool as possible, roofers prefer the toe board due to its freedom of movement. If an accident happens and a roofer loses his/her footing, the 2x4 would stop the roofer from sliding down and/or off the roof. More deaths occur in falls than for any other reason in the construction profession. More generally, a toe board is a small vertical barrier attached to a raised floor or raised platform. A toe board is like a tiny wall - usually between 4 and 12 inches high - whose purpose is to prevent objects or people from falling over, or rolling over, the side of a raised platform, such as preventing a screwdriver dropped on the floor of elevated construction scaffolding from rolling off the side onto people or objects below.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**View camera** View camera: A view camera is a large-format camera in which the lens forms an inverted image on a ground-glass screen directly at the film plane. The image is viewed and then the glass screen is replaced with the film, and thus the film is exposed to exactly the same image as was seen on the screen. This type of camera was first developed in the era of the daguerreotype (1840s–1850s) and is still in use today, some with various drive mechanisms for movements (rather than loosen-move-tighten), more scale markings, and/or more spirit levels. It comprises a flexible bellows that forms a light-tight seal between two adjustable standards, one of which holds a lens, and the other a ground glass or a photographic film holder or a digital back. There are three general types: the rail camera, the field camera, and others that don't fit into either category. View camera: The bellows is a flexible, accordion-pleated box. It encloses the space between the lens and film, and flexes to accommodate the movements of the standards.: p. 34  The front standard is a frame that holds the lensboard, to which the lens (perhaps with shutter) is attached. View camera: At the other end of the bellows, the rear standard is a frame that holds a ground glass plate, used for focusing and composing the image before exposure, and replaced by a holder containing the light-sensitive film, plate, or image sensor for exposure. The front and rear standards can move in various ways relative to each other, unlike most other camera types. Whereas most cameras today control only the distance of the plane of focus from the camera, the view camera can also provide control over the orientation of the plane of focus, and perspective control. The camera is usually used on a tripod or other support. Types: Several types of view cameras are used for different purposes, and provide different degrees of movement and portability. They include: Rail camera - There is the smaller, more maneuverable monorail camera and the large, stable, immovable multi-rail camera known as the process camera. Types: The monorail camera is the most common type of studio view camera, with front and rear standards mounted to a single rail that is fixed to a camera support. This design gives the greatest range of movements and flexibility, with both front and rear standards able to tilt, shift, rise, fall, and swing in similar proportion. These are generally made of metal with leather or synthetic bellows, and are difficult to pack for travel. Sinar and Toyo are popular manufacturers of monorail view camera systems. ARCA-Swiss produces monorail cameras for field use in addition to models for the more conventional studio applications. Many manufacturers also offer monorail extensions that move the front or rear standards farther away from each other to facilitate focusing on close objects (macrophotography). Types: The stationary process camera is used for copying nearly flat artwork that is held to a copyboard located at the far end of the camera rails of horizontally mounted cameras, or at the base of vertical cameras. The work to be copied and the film are often held in place by vacuum and copying is usually at 1:1 magnification. They use various sizes of film depending upon what is needed for the specific job. Types: Field camera - These have the front and rear standard mounted on sliding rails fixed to a hinged flat bed that can be attached to a camera support, such as a tripod.
These cameras are usually made of wood, or sometimes lightweight and strong composites such as carbon fiber. With the bellows fully retracted, the flat bed folds up, reducing the camera to a relatively small, light, and portable box. The trade off for this portability is that the standards are not as mobile or as adjustable as on a monorail design. The rear standard in particular may be fixed and offer no movement. These large format but transportable cameras are popular with landscape photographers. Tachihara and Wisner are examples of modern field cameras at opposite ends of the price scale. Types: Extremely large field cameras use 11×14 film and larger, or panoramic film sizes such as 4×10 or 8×20. These are sometimes called banquet cameras, and once were commonly used to photograph large, posed groups of people to mark occasions, such as banquets or weddings. Studio and salon cameras are similar to field cameras, but do not fold up for portability. Folding plate cameras with limited movements were often used. An example is the Goerz Taro-Tenax 9x12cm. Types: Press cameras that have a ground glass integral to the film-holder mechanism allow critical focus and use of available movements. More expensive examples had a wide array of movements, as well as focusing and composing aids like rangefinders and viewfinders. They are most often made of metal, designed to fold up quickly for portability, used by press photographers before and during the second world war. Some press cameras have more adjustment capabilities, including some ability to tilt the rear standard, and can be either hand-held or attached to a tripod for support.: p.33  Other view cameras - Many unique view cameras have been built and used for special purposes or for general purpose.View cameras use large format sheet film—one sheet per photograph. Standard sizes in inches are: 4×5, 5×7, 4×10, 5×12, 8×10, 11×14, 7×17, 8×20, 12×20, 20×24, and larger for process cameras. (It is usual to list the short side first in the Americas, and the long side in many other countries, thus 4×5 is the same as 5×4). A similar, but not identical, range of metric sizes is used in many countries; thus 9×12 cm is similar to, but not interchangeable with, 4×5 inches and 13×18 cm is similar to, but not interchangeable with, 5×7 inches. The most widely used format is 4×5, followed by 8×10. Types: A few rollfilm cameras have movements that make them as versatile as a sheet film view camera. Rollfilm and instant film backs are available to use in place of a sheetfilm holder on a single-film camera. Movements: Photographers use view cameras to control focus and convergence of parallel lines. Image control is done by moving the front and/or rear standards. Movements are the ways the front and rear standards can move to alter perspective and focus. The term can also refer to the mechanisms on the standards that control their position. Not all cameras have all movements available to both front and rear standards, and some cameras have more movements available than others. Some cameras have mechanisms that facilitate intricate movement combinations. Some limited view camera–type movements are possible with SLR cameras using various tilt/shift lenses. Also, as use of view cameras declines in favor of digital photography, these movements are simulated using computer software. Movements: Rise and fall Rise and fall are the movements of either the front or rear standard vertically along a line in a plane parallel to the film (or sensor) plane. 
Rise is a very important movement especially in architectural photography. Generally, the lens is moved vertically, either up or down, along the lens plane to change the portion of the image captured on the film. In the 35 mm format, special shift lenses (sometimes called perspective control lenses) emulate the rise or fall of view cameras. Movements: The main effect of rise is to eliminate converging parallels when photographing tall buildings. If a camera without movements is pointed at a tall building, the top is cut off. If the camera is tilted upwards to get it all in, the film plane is not parallel to the building, and the building seems narrower at the top than the bottom: lines that are parallel in the object converge in the image. Movements: To avoid this apparent distortion, a wide-angle lens gets more of the building in, but includes more of the foreground and alters the perspective. A camera with rising front lets a normal lens be raised to include the top of the building without tilting the camera. Movements: This requires that the image circle of the lens be larger than is required to cover the film without use of movements. If the lens can produce a circular image just large enough to cover the film, it can't cover the bottom of the film as it rises. Consequently, lens coverage must be larger to accommodate rise (and fall, tilt and shift); a worked coverage calculation is sketched at the end of this article. Movements: In Figure a) below (images are upside down, as a photographer would see them on the ground glass of a view camera), the lens has been shifted down (fall). Notice that much of the unwanted foreground is included, but not the top of the tower. In Figure b), the lens has been shifted up (rise): the top of the tower is now inside the area captured on film, at the sacrifice of unwanted green foreground. Movements: Shift Moving the front standard left or right from its normal position is called lens shift, or simply shift. This movement is similar to rise and fall, but moves the image horizontally rather than vertically. One use for shift is to remove the image of the camera from the final image when photographing a reflective surface. Movements: Tilt The axis of the lens is normally perpendicular to the film (or sensor). Changing the angle between axis and film by tilting the lens standard backwards or forwards is called lens tilt, or just tilt. Tilt is especially useful in landscape photography. By using the Scheimpflug principle, the "plane of sharp focus" can be changed so that any plane can be brought into sharp focus. When the film plane and lens plane are parallel, as is the case for most 35 mm cameras, the plane of sharp focus is also parallel to these two planes. If, however, the lens plane is tilted with respect to the film plane, the plane of sharp focus is also tilted according to geometrical and optical properties. The three planes intersect in a line below the camera for downward lens tilt. The tilted plane of sharp focus is useful, in that this plane can be made to coincide with a near and far object. Thus, both near and far objects on the plane are in focus. Movements: This effect is often incorrectly thought of as increasing the depth of field. Depth of field depends on the focal length, aperture, and subject distance. As long as the photographer wants sharpness in a plane that is parallel to the film, tilt is of no use. However, tilt has a strong effect on the depth of field by drastically altering its shape, making it asymmetrical.
Without tilt, the limits of near and far acceptable focus are parallel to the plane of sharp focus as well as parallel to the film. With forward tilt, the plane of sharp focus tilts even more and the near and far limits of acceptable focus form a wedge shape (viewed from the side). Thus, the lens still sees a cone shaped portion of whatever is in front of it while the wedge of acceptable focus is now more closely aligned with this cone. Therefore, depending on the shape of the subject, a wider aperture can be used, lessening concerns about camera stability due to slow shutter speed and diffraction due to too-small aperture. Movements: Tilting achieves the desired depth of field using the aperture at which the lens performs best. Too small an aperture risks losing to diffraction and to camera/subject motion what is gained in depth of field. Only testing a given scene, or experience, shows whether tilting is better than leaving the standards neutral and relying on the aperture alone to achieve the desired depth of field. If the scene is sharp enough at f/32 with 2 degrees of tilt but would need f/64 with zero tilt, then tilt is the solution. If another scene would need f/45 with or without tilt, then nothing is gained. See Merklinger and Luong for extensive discussions on determining the optimal tilt (if any) in challenging situations. Movements: With a forward tilt, the shape of the portion of a scene in acceptable focus is a wedge. Thus, the scene most likely to benefit from tilting is short in the front and expands to a greater height or thickness toward the horizon. A scene consisting of tall trees in the near, middle and far distance may not lend itself to tilting unless the photographer is willing to sacrifice either the top of the near trees and/or the bottom of the far trees. Movements: Assuming lens axis front tilt, here are the trade-offs in choosing between a small degree of tilt (say less than 3 degrees) and a larger tilt: A small tilt causes a wider or fatter wedge but one that is far off axis from the cone of light seen by the lens. Conversely, a large tilt (say 10 degrees) makes the wedge more aligned with the lens view, but with a narrower wedge. Thus, a modest tilt is often, or even usually, the best starting point. Movements: Small and medium format cameras have fixed bodies that do not allow for misalignment of the film and lens planes, intentionally or not. Tilt/shift ("TS") or perspective control ("PC") lenses that provide limited movements for these cameras can be purchased from a number of lens makers. High-quality TS or PC lenses are expensive. The price of a new Canon TS-E or Nikon PC-E lens is comparable to that of a good used large-format camera, which offers a much greater range of adjustment. Movements: Swing Altering the angle of the lens standard in relation to the film plane by swiveling it from side to side is called swing. Swing is like tilt, but it changes the angle of the focal plane in the horizontal axis instead of the vertical axis. For example, swing can help achieve sharp focus along the entire length of a picket fence that is not parallel to the film plane. Movements: Back tilt/swing Angular movements of the rear standard change the angle between the lens plane and the film plane just as front standard angular movements do. Though rear standard tilt changes the plane of sharp focus in the same manner as front standard tilt, this is not usually the reason to use rear tilt/swing.
When a lens is a certain distance (its focal length) away from the film, distant objects, such as faraway mountains, are in focus. Moving the lens farther from the film brings closer objects into focus. Tilting or swinging the film plane puts one side of the film farther from the lens than the center is and the opposite point of the film is therefore closer to the lens. Movements: One reason to swing or tilt the rear standard is to keep the film plane parallel to the face of the subject. Another reason to swing or tilt the rear standard is to control apparent convergence of lines when shooting subjects at an angle. Movements: It is often incorrectly stated that rear movements can be used to change perspective. The only thing that truly controls perspective is the location of the camera in relation to the objects in the frame. Rear movements can let a photographer shoot a subject from a perspective that puts the camera at an angle to the subject, yet still achieves parallel lines. Thus, rear movements allow a change of perspective by allowing a different camera location, yet no view camera movement actually alters perspective. Lenses: A view camera lens typically consists of: A front lens element— sometimes referred to as a cell. A shutter—an electronic or spring-actuated mechanism that controls exposure duration. Some early shutters were air-actuated. For long exposures, a lens with no shutter (a barrel lens) can be uncovered for the duration of the exposure by removing a lens cap. Lenses: The aperture diaphragm A lensboard—a flat board, typically square in shape and made of metal or wood, that locks securely into the front standard of a particular view camera, with a central hole of the right size to insert a lens and shutter assembly, usually secured and made light-tight by screwing a ring onto a thread on the rear of the lens assembly. Lensboards, complete with lenses, can be removed and fitted quickly. Lenses: A rear lens element (or cell).Almost any lens of the appropriate coverage area may be used with almost any view camera. All that is required is that the lens be mounted on a lensboard compatible with the camera. Not all lensboards work with all models of view camera, though different cameras may be designed to work with a common lensboard type. Lensboards usually come with a hole sized according to the shutter size, often called the Copal Number. Copal is the most popular maker of leaf shutters for view camera lenses. Lenses: The lens is designed to split into two pieces, the front and rear elements screwed, usually by a trained technician, into the front and back of the shutter assembly, and the whole fitted in a lensboard. Lenses: View camera lenses are designed with both focal length and coverage in mind. A 300 mm lens may give a different angle of view (either over 31° or over 57°), depending on whether it was designed to cover a 4×5 or 8×10 image area. Most lenses are designed to cover more than just the image area to accommodate camera movements. Lenses: Focusing involves moving the entire front standard with the lens assembly closer to or further away from the rear standard, unlike many lenses on smaller cameras in which one group of lens elements is fixed and another moves. Lenses: Very long focus lenses may require that the camera be fitted with special extra-long rails and bellows. Very short focal length wide-angle lenses may require that the standards be closer together than a normal concertina-folded bellows allows. 
Such a situation requires a bag bellows, a simple light-tight flexible bag. Recessed lensboards are also sometimes used to get the rear element of a wide angle lens close enough to the film plane; they may also be of use with telephoto lenses, since these compressed long-focus lenses may also have very small spacing between the back of the lens and the film plane. Lenses: Zoom lenses are not used in view camera photography, as there is no need for rapid and continuous change of focal length with static subjects, and the price, size, weight, and complexity would be excessive. Some lenses are "convertible": the front or rear element only, or both elements, may be used, giving three different focal lengths, though the quality of the single elements is not as good at larger apertures as the combination. These are popular with field photographers who can save weight by carrying one convertible lens rather than two or three lenses of different focal lengths. Lenses: Soft focus lenses introduce spherical aberration deliberately into the optical formula for an ethereal effect considered pleasing, and flattering to subjects with less than perfect complexions. The degree of soft-focus effect is determined by either aperture size or special disks that fit into the lens to modify the aperture shape. Some antique lenses, and some modern SLR soft focus lenses, provide a lever that controls the softening effect by altering the optical formula. Film: View cameras use sheet film but can use roll film (generally 120/220 size) by using special roll film holders. Popular "normal" image formats for the 4×5 camera are 6×6, 6×7, and 6×9 cm. 6×12 and 6×17 cm are suited to panoramic photography. Film: With an inexpensive modification of the darkslide, and no modification to the camera, half a sheet of film can be exposed at a time. While this technique could be used for economy where a larger image is not required, it is almost always used with the intention of obtaining a panoramic format so that, for example, a 4×5 camera can take two 2×5 photos, an 8×10 can take two 4×10s etc. This is popular for landscape photography, and in the past was common for group photographs (hence, half-frame panorama formats such as 4x10 are commonly referred to as "Banquet formats") Digital camera backs are available for view cameras to create digital images instead of using film. Prices are high compared to smaller digital cameras. Operation: The camera must be set up in a suitable position. In some cases the subject can also be manipulated, as in a studio. In others the camera must be positioned to photograph subjects such as landscapes. The camera must be mounted in a way that prevents camera motion for the duration of the exposure. Usually a tripod is used—a camera with a long bellows extension may require two. Operation: To operate the view camera, the photographer opens the shutter on the lens to focus and compose the image on a ground glass plate on the rear standard. The rear standard holds the ground glass in the same plane that the film later occupies—so that an image focused on the ground glass is focused on the film. The ground glass image can be somewhat dim and difficult to view in bright light. Photographers often use a focusing cloth or "dark cloth" over their heads and the rear of the camera. The dark cloth shrouds the viewing area and keeps environmental light from obscuring the image. 
In the dark space created by the dark cloth, the image appears as bright as it can, so the photographer can view, focus, and compose the image. Operation: Often, a photographer uses a magnifying lens, usually a high quality loupe, to critically focus the image. An addition over the ground glass called a Fresnel lens can considerably brighten the ground glass image (with a slight loss of focusing accuracy). The taking lens may be stopped down to help gauge depth of field effects and vignetting, but the photographer generally opens the lens to its widest setting for focusing. Operation: The ground glass and frame assembly, known as the spring back, is held in place by springs that pull and hold the ground glass firmly into the plane of focus during the focusing and composition process. Once focusing is complete, the same springs act as a flexible clamping mechanism to press the film holder into the same plane of focus that the ground glass occupied. Operation: To take the photograph, the photographer pulls back the ground glass and slides the film holder into its place. The shutter is then closed and cocked, the shutter speed and aperture set. The photographer removes the darkslide that covers the sheet of film in the film holder, and triggers the shutter to make the exposure. Finally, the photographer replaces the darkslide and removes the film holder with the exposed film. Operation: Sheet film holders are generally interchangeable between various brands and models of view cameras, adhering to de facto standards. The largest cameras and more uncommon formats are less standardized. Operation: Special film holders and accessories can fit in place of standard film holders for specific purposes. A Grafmatic, for example, can fit six sheets of film in the space of an ordinary two-sheet holder, and some light meters have an attachment that inserts into the film holder slot on the camera back so the photographer can measure light that falls at a specific point on the film plane. The entire film holder/back assembly is often an industry standard Graflex back, removable so accessories like roll-film holders and digital imagers can be used without altering focus. Pros and cons compared to medium and 35mm formats: Advantages The ability to skew the plane of critical focus: In a camera without movements the film plane is always parallel to the lens plane. A camera with tilts and swings lets the photographer skew the plane of focus away from the parallel in any direction, which in many cases can bring the image of a subject that is not parallel to the lens plane into near-to-far focus without stopping down the aperture excessively. Both standards can be tilted through the horizontal or swung through the vertical axes to change the plane of focus. Tilts and swings of the front standard alone do not alter or distort shapes or converging lines in the image; tilts and swings of the rear standard do affect these things, as well as the plane of focus: if the plane of focus must be skewed without altering shapes in the image, front movements alone must be used. The Scheimpflug principle explains the relationship between lens tilts and swings, and the plane of sharp focus. Pros and cons compared to medium and 35mm formats: The ability to distort the shape of the image by skewing the film plane: This is most often to reduce or eliminate, or deliberately exaggerate, convergence of lines that are parallel in the subject. 
If a camera with parallel film and lens planes is pointed at an angle to a plane subject with parallel lines, the lines appear to converge in the image, becoming closer to each other the further away from the camera they are. With a view camera the rear standard can be swung toward the wall to reduce this convergence. If the standard is parallel to the wall, convergence is eliminated. Moving the rear standard this way skews the plane of focus, which can be corrected with a front swing in the same direction as the rear swing. Pros and cons compared to medium and 35mm formats: Improved image quality for a print of a given size: The larger a piece of film is, the less detail is lost at a given print size because the larger film requires less enlargement for the same size print. In other words, the same scene photographed on a large-format camera provides a better-quality image and allows greater enlargement than the same image in a smaller format. Additionally, the larger a piece of film is, the more subtle and varied the tonal palette and gradations are at a given print size. A large film size also allows same-size contact printing. Pros and cons compared to medium and 35mm formats: Shallow depth of field: view cameras require longer focal length lenses than smaller format cameras, especially for the larger sizes, with shallower depth of field, letting the photographer focus solely on the subject. Smaller apertures can be used: much smaller apertures can be used than with smaller format cameras before diffraction becomes significant for a given print size. Low resale value is an advantage for buyers, but not for sellers. A top-of-the-line 8×10 camera that cost $8,000 new can often be bought in excellent condition, with additional accessories, for $1,500. Disadvantages Lack of automation: most view cameras are fully manual, requiring time, and allowing even experienced photographers to make mistakes. Some cameras, such as Sinars, have some degree of automation with self-cocking shutters and film-plane metering. Pros and cons compared to medium and 35mm formats: Steep learning curve: In addition to needing the knowledge required to operate a fully manual camera, view camera operators must understand a large number of technical matters that are not an issue to most small format photographers. They must understand, for example, view camera movements, bellows factors, and reciprocity. A great amount of time and study is needed to master those aspects of large format photography, so learning view camera operation requires a high degree of dedication. Pros and cons compared to medium and 35mm formats: Large size and weight: monorail view cameras are unsuitable for handheld photography and are in most cases difficult to transport. A folding bed field camera like a Linhof Technika with a lens-coupled range finder system even allows action photography. Shallow depth of field: view cameras require longer focal length lenses than smaller format cameras, especially for the larger sizes, with shallower depth of field. Small maximum aperture: it is not feasible to make long focal length lenses with the wide maximum apertures available with shorter focal lengths. Pros and cons compared to medium and 35mm formats: High cost: there is limited demand for view cameras, so that there are no economies of scale and they are much more expensive than mass-produced cameras. Some are handmade. 
Even though the cost of sheet film and processing is much higher than for rollfilm, fewer sheets of film are exposed, which partially offsets the cost. Some of these disadvantages can be viewed as advantages. For example, slow setup and composition time allow the photographer to better visualize the image before making an exposure. The shallow depth of field can be used to emphasize certain details and deemphasize others (in bokeh style, for example), especially combined with camera movements. The high cost of film and processing encourages careful planning. Because view cameras are rather difficult to set up and focus, the photographer must seek the best camera position, perspective, etc. before exposing. Beginning 35 mm photographers are even sometimes advised to use a tripod specifically because it slows down the picture-taking process.
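The coverage requirement noted in the Movements section (the image circle must grow to accommodate rise, fall and shift) can be illustrated with a small geometric sketch. The nominal 4×5 image area and the 20 mm of rise used below are illustrative assumptions, not figures from this article.

```python
import math

def required_image_circle_mm(film_w_mm, film_h_mm, rise_mm=0.0, shift_mm=0.0):
    """Minimum image-circle diameter so the (shifted) film area stays covered.

    The farthest film corner from the lens axis after applying rise/shift sets
    the minimum usable circle radius; doubling it gives the diameter.
    """
    half_w = film_w_mm / 2.0 + abs(shift_mm)
    half_h = film_h_mm / 2.0 + abs(rise_mm)
    return 2.0 * math.hypot(half_w, half_h)

# Nominal 4x5 in image area of roughly 95 mm x 120 mm (an assumption):
print(round(required_image_circle_mm(95, 120), 1))              # ~153 mm, the bare film diagonal
print(round(required_image_circle_mm(95, 120, rise_mm=20), 1))  # ~186 mm with 20 mm of rise
```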
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scouring powder** Scouring powder: Scouring powder is a household cleaning product consisting of an abrasive powder mixed with a dry soap or detergent, soda, and possibly dry bleach. Scouring powder is used to clean encrusted deposits on hard surfaces such as ceramic tiles, pots and pans, baking trays, grills, porcelain sinks, bathtubs, toilet bowls and other bathroom fixtures. It is meant to be rubbed over the surface with a little water. The abrasive removes the dirt by mechanical action, and is eventually washed away, together with the powder, by rinsing with water. Scouring powders are similar to scouring soaps and scouring creams in general composition and mode of action, but differ somewhat in form (dry powder, instead of a bar or paste) and in the primary intended applications. Scouring powders compete in their intended uses with scouring pads and steel wool. Composition: A typical scouring powder consists of an insoluble abrasive powder (about 80%), a soluble base (18%) and a detergent (2%). It may also include perfume and/or a dry bleaching agent. The abrasive can be silica (quartz, SiO2), feldspar (such as orthoclase), pumice, kaolinite, soapstone, talc, calcium carbonate (limestone, chalk), calcite, etc. The particles should have reasonably uniform size, less than 50 μm in diameter. Hard abrasives like silica and pumice can remove tougher stains but may also scratch glass, metal, and glazed ceramics. Composition: The soluble base is meant to break down fatty substances by saponification; it may be sodium carbonate (washing soda, Na2CO3). The detergent is usually an anionic surfactant. Its role is to help remove fatty material (such as grease) as an emulsion, and to keep the removed stain particles suspended in the abrasive paste. The dry bleach is usually a product that releases chlorine (more precisely hypochlorite, the classical household bleaching agent), such as trichloroisocyanuric acid. The Bar Keepers Friend scouring powder has oxalic acid instead of the base, which makes it effective against rust stains rather than grease and other organic dirt. History: Abrasive powders have been used to wash off grease and other hard stains since antiquity. Bathing in ancient Greece and Rome started with rubbing the body with fine sand mixed with oil or other substances, and then scraping it off with a special curved spatula, a strigil. The plants in the genus Equisetum ("horsetails") are also called "scouring rushes" because of their microscopic silica scales (phytoliths). History: An early industrialized scouring powder was Bon Ami, launched in 1886 by the J.T. Robertson Soap Company as a gentler alternative to quartz-based scouring powders then available on store shelves. Another early commercial brand was Vim (1904), one of the first products created by William Lever. The abrasive was obtained from sandstone mined at the Cambrian Quarry at Gwernymynydd between 1905 and about 1950. The brand is still marketed in some countries, but Unilever has been replacing it with Jif and then Cif. Scouring powders have been marketed under many other company and brand names, such as Ajax, Bon Ami, Radium, Comet, Sano, and Zud.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Internet Routing Registry** Internet Routing Registry: An Internet Routing Registry (IRR) is a database of Internet route objects for determining and sharing route and related information used for configuring routers, with a view to avoiding problematic issues between Internet service providers. Internet Routing Registry: The Internet routing registry works by providing an interlinked hierarchy of objects designed to facilitate the organization of IP routing between organizations, and also to provide data in an appropriate format for automatic programming of routers. Network engineers from participating organizations are authorized to modify the Routing Policy Specification Language (RPSL) objects in the registry for their own networks. Then, any network engineer, or member of the public, is able to query the route registry for particular information of interest. Relevant objects: AUT-NUM INET6NUM ROUTE INETNUM ROUTE6 AS-SET Status of implementation: In some RIR regions, objects such as AUT-NUM (representing, for example, an Autonomous System) are only created or updated when the record is first created by the RIR, and as long as nobody complains about issues, the records remain in their original, possibly unreliable state. Most global ASNs provide valid information about their resources in, for example, their AS-SET objects. Peering networks are highly automated, so unreliable registry data can be very harmful for the ASNs involved.
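As a minimal illustration of how such route objects are queried in practice, the sketch below sends a plain WHOIS query (TCP port 43) to an IRR mirror and prints the RPSL text that comes back. The choice of whois.radb.net as the server and the example prefix are assumptions of the sketch, not details taken from this article; any IRR-speaking whois server could be substituted.

```python
import socket

def irr_query(query: str, server: str = "whois.radb.net", port: int = 43) -> str:
    """Send a plain WHOIS query to an IRR mirror and return the raw RPSL text."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Example: look up route objects covering a prefix; the response is RPSL text
# with attributes such as "route:", "origin:", and "mnt-by:".
if __name__ == "__main__":
    print(irr_query("193.0.0.0/21"))
```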
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sigma Ursae Majoris** Sigma Ursae Majoris: The Bayer designation σ Ursae Majoris (Sigma Ursae Majoris, σ UMa) is shared by two star systems in the constellation Ursa Major: σ1 (11 Ursae Majoris) σ2 (13 Ursae Majoris)They are separated by 0.33° in the sky. The two stars, Sigma1 and Sigma2 together, are considered an optical double star. They are not a binary star, in that they are not gravitationally linked, but they are close to each other as seen in the sky.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phenylacetyl-CoA dehydrogenase** Phenylacetyl-CoA dehydrogenase: In enzymology, a phenylacetyl-CoA dehydrogenase (EC 1.17.5.1) is an enzyme that catalyzes the chemical reaction phenylacetyl-CoA + H2O + 2 quinone ⇌ phenylglyoxylyl-CoA + 2 quinolThe 3 substrates of this enzyme are phenylacetyl-CoA, H2O, and quinone, whereas its two products are phenylglyoxylyl-CoA and quinol. This enzyme belongs to the family of oxidoreductases, specifically those acting on CH or CH2 groups with a quinone or similar compound as acceptor. The systematic name of this enzyme class is phenylacetyl-CoA:quinone oxidoreductase. This enzyme is also called phenylacetyl-CoA:acceptor oxidoreductase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**On- and off-hook** On- and off-hook: In telephony, on-hook and off-hook are two states of a communication circuit. On subscriber telephones the states are produced by placing the handset onto or off the hookswitch. Placing the circuit into the off-hook state is also called seizing the line. Off-hook originally referred to the condition that prevailed when telephones had a separate earpiece (receiver), which hung from its switchhook until the user initiated a telephone call by removing it. When off hook the weight of the receiver no longer depresses the spring-loaded switchhook, thereby connecting the instrument to the telephone line. Off-hook: The term off-hook has the following meanings: The condition that exists when a telephone or other user instrument is in use, i.e., during dialing or communicating. A general description of one of two possible signaling states at an interface between telecommunications systems, such as tone or no tone and ground connection versus battery connection. Note that if off-hook pertains to one state, on-hook pertains to the other. Off-hook: The active state (i.e., a closed loop (short circuit between the wires) of a subscriber line or PBX user loop) An operating state of a communications link in which data transmission is enabled either for (a) voice or data communications or (b) network signaling. On an ordinary two-wire telephone line, off-hook status is communicated to the telephone exchange by a resistance short across the pair. When an off-hook condition persists without dialing, for example because the handset has fallen off or the cable has been flooded, it is treated as a permanent loop or permanent signal. Off-hook: The act of going off-hook is also referred to as seizing the line or channel. On-hook: The term on-hook has the following meanings: The condition that exists when a telephone or other user instrument is not in use, i.e., when idle waiting for a call. Note: on-hook originally referred to the storage of an idle telephone receiver, i.e., separate earpiece, on a switchhook. The weight of the receiver depresses the spring-loaded switchhook thereby disconnecting the idle instrument (except its bell) from the telephone line. On-hook: One of two possible signaling states, such as tone or no tone, or ground connection versus battery connection. Note: if on-hook pertains to one state, off-hook pertains to the other. The idle state, i.e., an open loop of a subscriber line or PBX user loop. On-hook: An operating state of a telecommunication circuit in which transmission is disabled and a high impedance, or "open circuit", is presented to the link by the end instrument(s). Note: during the on-hook condition, the link is responsive to ringing signals. The act of going on-hook is also referred to as releasing the line or channel, and may initiate the process of clearing.
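A minimal sketch of the two signaling states as seen from the exchange side: on an ordinary loop-start line, off-hook appears as DC current flowing through the closed loop, and on-hook as an open, high-impedance loop. The current threshold used below is an illustrative assumption; real line cards apply their own detection thresholds.

```python
def hook_state(loop_current_ma: float, threshold_ma: float = 10.0) -> str:
    """Classify the subscriber-loop state from measured DC loop current.

    A closed loop (handset lifted) draws current; an open loop draws
    essentially none. The 10 mA threshold is an assumption for illustration.
    """
    return "off-hook" if loop_current_ma >= threshold_ma else "on-hook"

print(hook_state(0.2))   # -> on-hook (idle, open loop)
print(hook_state(28.0))  # -> off-hook (line seized, closed loop)
```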
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cafestol** Cafestol: Cafestol is a diterpenoid molecule present in coffee beans. It is one of the compounds that may be responsible for proposed biological and pharmacological effects of coffee. Sources: A typical bean of Coffea arabica contains about 0.4% to 0.7% cafestol by weight. Cafestol is present in highest quantity in unfiltered coffee drinks such as French press coffee, Turkish coffee or Greek coffee. In paper-filtered coffee drinks such as drip brewed coffee, it is present in only negligible amounts, as the paper filter in drip filtered coffee retains the diterpenes. Research into biological activity: Coffee consumption has been associated with a number of effects on health and cafestol has been proposed to produce these through a number of biological actions. Studies have shown that regular consumption of boiled coffee increases serum cholesterol whereas filtered coffee does not. Cafestol may act as an agonist ligand for the nuclear receptor farnesoid X receptor and pregnane X receptor, blocking cholesterol homeostasis. Thus cafestol can increase cholesterol synthesis. Cafestol has also shown anticarcinogenic properties in rats. Cafestol also has neuroprotective effects in a Drosophila fruit fly model of Parkinson's disease.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nutrient agar** Nutrient agar: Nutrient agar is a general purpose solid medium supporting growth of a wide range of non-fastidious organisms. It typically contains (mass/volume): 0.5% peptone - this provides organic nitrogen 0.3% beef extract/yeast extract - the water-soluble content of these contribute vitamins, carbohydrates, nitrogen, and salts 1.5% agar - this gives the mixture solidity 0.5% sodium chloride - this gives the mixture proportions similar to those found in the cytoplasm of most organisms distilled water - water serves as a transport medium for the agar's various substances pH adjusted to neutral (6.8) at 25 °C (77 °F). Nutrient broth has the same composition, but lacks agar. These ingredients are combined and boiled for approximately one minute to ensure they are mixed and then sterilized by autoclaving, typically at 121 °C (250 °F) for 15 minutes. Then they are cooled to around 50 °C (122 °F) and poured into Petri dishes which are covered immediately. Once the dishes hold solidified agar, they are stored upside down and are often refrigerated until used. Inoculation takes place on warm dishes rather than cool ones: if refrigerated for storage, the dishes must be rewarmed to room temperature prior to inoculation.
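Since the composition above is given as mass/volume percentages, converting it into weights for a batch is simple arithmetic; the sketch below does so for an arbitrary volume (the 1-litre example is illustrative).

```python
def nutrient_agar_recipe(volume_ml: float) -> dict:
    """Grams of each dry ingredient for a given volume of medium.

    A mass/volume percentage of 0.5% means 0.5 g per 100 ml.
    """
    percent_w_v = {
        "peptone": 0.5,
        "beef/yeast extract": 0.3,
        "agar": 1.5,
        "sodium chloride": 0.5,
    }
    return {name: round(pct * volume_ml / 100.0, 2) for name, pct in percent_w_v.items()}

# For 1 litre: {'peptone': 5.0, 'beef/yeast extract': 3.0, 'agar': 15.0, 'sodium chloride': 5.0}
print(nutrient_agar_recipe(1000))
```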
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Architect** Architect: An architect is a person who plans, designs and oversees the construction of buildings. To practice architecture means to provide services in connection with the design of buildings and the space within the site surrounding the buildings that have human occupancy or use as their principal purpose. Etymologically, the term architect derives from the Latin architectus, which derives from the Greek (arkhi-, chief + tekton, builder), i.e., chief builder. The professional requirements for architects vary from location to location. An architect's decisions affect public safety, and thus the architect must undergo specialized training consisting of advanced education and a practicum (or internship) for practical experience to earn a license to practice architecture. Practical, technical, and academic requirements for becoming an architect vary by jurisdiction, though the formal study of architecture in academic institutions has played a pivotal role in the development of the profession. Origins: Throughout ancient and medieval history, most architectural design and construction was carried out by artisans, such as stone masons and carpenters, who rose to the role of master builder. Until modern times, there was no clear distinction between architect and engineer. In Europe, the titles architect and engineer were primarily geographical variations that referred to the same person, often used interchangeably. "Architect" derives from Greek ἀρχιτέκτων (arkhitéktōn, "master builder", "chief tektōn"). Origins: It is suggested that various developments in technology and mathematics allowed the development of the professional 'gentleman' architect, separate from the hands-on craftsman. Paper was not used in Europe for drawing until the 15th century, but became increasingly available after 1500. Pencils were used for drawing by 1600. The availability of both paper and pencils allowed pre-construction drawings to be made by professionals. Concurrently, the introduction of linear perspective and innovations such as the use of different projections to describe a three-dimensional building in two dimensions, together with an increased understanding of dimensional accuracy, helped building designers communicate their ideas. However, development was gradual and slow going. Until the 18th century, buildings continued to be designed and set out by craftsmen, with the exception of high-status projects. Architecture: In most developed countries, only those qualified with an appropriate license, certification, or registration with a relevant body (often governmental) may legally practice architecture. Such licensure usually requires a university degree, successful completion of exams, and a training period. Representation of oneself as an architect through the use of terms and titles is restricted to licensed individuals by law, although in general, derivatives such as architectural designer are not legally protected. Architecture: To practice architecture implies the ability to practice independently of supervision. The term building design professional (or design professional), by contrast, is a much broader term that includes professionals who practice independently under an alternate profession such as engineering professionals, or those who assist in the practice of architecture under the supervision of a licensed architect such as intern architects.
In many places, independent, non-licensed individuals may perform design services outside the professional restrictions, such as the design of houses or other smaller structures. Practice: In the architectural profession, technical and environmental knowledge, design, and construction management require an understanding of business as well as design. However, design is the driving force throughout the project and beyond. An architect accepts a commission from a client. The commission might involve preparing feasibility reports, building audits, designing a building or several buildings, structures, and the spaces among them. The architect participates in developing the requirements the client wants in the building. Throughout the project (planning to occupancy), the architect coordinates a design team. Structural, mechanical, and electrical engineers are hired by the client or architect, who must ensure that the work is coordinated to construct the design. Practice: Design role The architect, once hired by a client, is responsible for creating a design concept that meets the requirements of that client and provides a facility suitable to the required use. The architect must meet with and put questions to the client, in order to ascertain all the requirements (and nuances) of the planned project. Often the full brief is not clear at the beginning, which involves a degree of risk in the design undertaking. The architect may make early proposals to the client which may rework the terms of the brief. The "program" (or brief) is essential to producing a project that meets all the needs of the owner. This becomes a guide for the architect in creating the design concept. Practice: Design proposal(s) are generally expected to be both imaginative and pragmatic. Much depends upon the time, place, finance, culture, and available crafts and technology in which the design takes place. The extent and nature of these expectations will vary. Foresight is a prerequisite when designing buildings, as it is a very complex and demanding undertaking. Practice: Any design concept during the early stage of its generation must take into account a great number of issues and variables, including qualities of space(s), the end-use and life-cycle of these proposed spaces, connections, relations, and aspects between spaces, including how they are put together and the impact of proposals on the immediate and wider locality. Selection of appropriate materials and technology must be considered, tested, and reviewed at an early stage in the design to ensure there are no setbacks (such as higher-than-expected costs) which could occur later in the project. The site and its surrounding environment, as well as the culture and history of the place, will also influence the design. The design must also balance increasing concerns with environmental sustainability. The architect may introduce (intentionally or not) aspects of mathematics and architecture, new or current architectural theory, or references to architectural history. Practice: A key part of the design is that the architect often must consult with engineers, surveyors and other specialists throughout the design, ensuring that aspects such as structural supports and air conditioning elements are coordinated. The control and planning of construction costs are also a part of these consultations.
Coordination of the different aspects involves a high degree of specialized communication including advanced computer technology such as building information modeling (BIM), computer-aided design (CAD), and cloud-based technologies. Finally, at all times, the architect must report back to the client who may have reservations or recommendations which might introduce further variables into the design. Practice: Architects also deal with local and federal jurisdictions regarding regulations and building codes. The architect might need to comply with local planning and zoning laws such as required setbacks, height limitations, parking requirements, transparency requirements (windows), and land use. Some jurisdictions require adherence to design and historic preservation guidelines. Health and safety risks form a vital part of the current design, and in some jurisdictions, design reports and records are required to include ongoing considerations of materials and contaminants, waste management and recycling, traffic control, and fire safety. Practice: Means of design Previously, architects employed drawings to illustrate and generate design proposals. While conceptual sketches are still widely used by architects, computer technology has now become the industry standard. Furthermore, design may include the use of photos, collages, prints, linocuts, 3D scanning technology, and other media in design production. Increasingly, computer software is shaping how architects work. BIM technology allows for the creation of a virtual building that serves as an information database for the sharing of design and building information throughout the life-cycle of the building's design, construction, and maintenance. Virtual reality (VR) presentations are becoming more common for visualizing structural designs and interior spaces from the point-of-view perspective. Practice: Environmental role Since modern buildings are known to place carbon into the atmosphere, increasing controls are being placed on buildings and associated technology to reduce emissions, increase energy efficiency, and make use of renewable energy sources. Renewable energy sources may be designed into the proposed building by local or national renewable energy providers. As a result, the architect is required to remain abreast of current regulations that are continually being updated. Some new developments exhibit extremely low energy use or passive solar building design. Practice: However, the architect is also increasingly being required to provide initiatives in a wider environmental sense. Examples of this include making provisions for low-energy transport, natural daylighting instead of artificial lighting, natural ventilation instead of air conditioning, pollution, and waste management, use of recycled materials, and employment of materials which can be easily recycled. Construction role As the design becomes more advanced and detailed, specifications and detail designs are made of all the elements and components of the building. Techniques in the production of a building are continually advancing which places a demand on the architect to ensure that he or she remains up to date with these advances. Depending on the client's needs and the jurisdiction's requirements, the spectrum of the architect's services during each construction stage may be extensive (detailed document preparation and construction review) or less involved (such as allowing a contractor to exercise considerable design-build functions). 
Practice: Architects typically put projects to tender on behalf of their clients, advise them on the award of the project to a general contractor, and facilitate and administer a contract of agreement which is often between the client and the contractor. This contract is legally binding and covers a wide range of aspects including the insurance and commitments of all stakeholders, the status of the design documents, provisions for the architect's access, and procedures for the control of the works as they proceed. Depending on the type of contract utilized, provisions for further sub-contract tenders may be required. The architect may require that some elements are covered by a warranty which specifies the expected life and other aspects of the material, product, or work. Practice: In most jurisdictions, prior notification to the relevant authority must be given before commencement of the project, giving the local authority notice to carry out independent inspections. The architect will then review and inspect the progress of the work in coordination with the local authority. Practice: The architect will typically review contractor shop drawings and other submittals, prepare and issue site instructions, and provide Certificates for Payment to the contractor (see also Design-bid-build), which are based on the work done as well as any materials and other goods purchased or hired. In the United Kingdom and other countries, a quantity surveyor is often part of the team to provide cost consulting. With large, complex projects, an independent construction manager is sometimes hired to assist in the design and management of the construction. Practice: In many jurisdictions, mandatory certification or assurance of the completed work or part of works is required. This demand for certification entails a high degree of risk; therefore, regular inspections of the work as it progresses on site are required to ensure that the work complies with the design itself as well as with all relevant statutes and permissions. Alternate practice and specializations Recent decades have seen the rise of specializations within the profession. Many architects and architectural firms focus on certain project types (e.g. healthcare, retail, public housing, and event management), technological expertise, or project delivery methods. Some architects specialize in building code, building envelope, sustainable design, technical writing, historic preservation (US) or conservation (UK), and accessibility. Many architects elect to move into real estate (property) development, corporate facilities planning, project management, construction management, chief sustainability officer roles, interior design, city planning, user experience design, and design research. Professional requirements Although there are variations in each location, most of the world's architects are required to register with the appropriate jurisdiction. Architects are typically required to meet three common requirements: education, experience, and examination. Basic educational requirements generally consist of a university degree in architecture. The experience requirement for degree candidates is usually satisfied by a practicum or internship (usually two to three years). Finally, a Registration Examination or a series of exams is required prior to licensure. 
Practice: Professionals who engaged in the design and supervision of construction projects prior to the late 19th century were not necessarily trained in a separate architecture program in an academic setting. Instead, they often trained under established architects. Prior to modern times, there was no distinction between architects and engineers, and the title used varied depending on geographical location. They often carried the title of master builder or surveyor after serving a number of years as an apprentice (such as Sir Christopher Wren). The formal study of architecture in academic institutions played a pivotal role in the development of the profession as a whole, serving as a focal point for advances in architectural technology and theory. The use of "Architect" or abbreviations such as "Ar." as a title attached to a person's name was regulated by law in some countries. Fees: Architects' fee structure was typically based on a percentage of construction value, a rate per unit area of the proposed construction, hourly rates, or a fixed lump sum fee. Combinations of these structures were also common. Fixed fees were usually based on a project's allocated construction cost and could range between 4 and 12% of new construction cost for commercial and institutional projects, depending on a project's size and complexity. Residential projects ranged from 12 to 20%. Renovation projects typically commanded higher percentages, such as 15-20%. Overall billings for architectural firms range widely, depending on their location and economic climate. Billings have traditionally been dependent on the local economic conditions, but with rapid globalization, this is becoming less of a factor for large international firms. Salaries could also vary depending on experience, position within the firm (e.g. staff architect, partner, or shareholder), and the size and location of the firm. Professional organizations: A number of national professional organizations exist to promote career and business development in architecture. Professional organizations: The International Union of Architects (UIA); The American Institute of Architects (AIA), US; the Royal Institute of British Architects (RIBA), UK; the Architects Registration Board (ARB), UK; The Australian Institute of Architects (AIA), Australia; The South African Institute of Architects (SAIA), South Africa; the Association of Consultant Architects (ACA), UK; the Association of Licensed Architects (ALA), US; The Consejo Profesional de Arquitectura y Urbanismo (CPAU), Argentina; the Indian Institute of Architects (IIA) and Council of Architecture (COA), India; and The National Organization of Minority Architects (NOMA), US. Prizes, awards: A wide variety of prizes is awarded by national professional associations and other bodies, recognizing accomplished architects, their buildings, structures, and professional careers. Prizes, awards: The most lucrative award an architect can receive is the Pritzker Prize, sometimes termed the "Nobel Prize for architecture." The inaugural Pritzker Prize winner was Philip Johnson, who was cited "for 50 years of imagination and vitality embodied in a myriad of museums, theatres, libraries, houses, gardens and corporate structures". The Pritzker Prize has been awarded for forty-two editions without interruption, and there are now 22 countries with at least one winning architect. 
Other prestigious architectural awards are the Royal Gold Medal, the AIA Gold Medal (USA), the AIA Gold Medal (Australia), and the Praemium Imperiale. Architects in the UK who have made contributions to the profession through design excellence or architectural education, or have in some other way advanced the profession, could, until 1971, be elected Fellows of the Royal Institute of British Architects and may write FRIBA after their name if they feel so inclined. Those elected to chartered membership of the RIBA after 1971 may use the initials RIBA but cannot use the old ARIBA and FRIBA. An Honorary Fellow may use the initials Hon. FRIBA, and an International Fellow may use the initials Int. FRIBA. Architects in the US who have made contributions to the profession through design excellence or architectural education, or have in some other way advanced the profession, are elected Fellows of the American Institute of Architects and can write FAIA after their name. Architects in Canada who have made outstanding contributions to the profession through contributions to research, scholarship, public service, or professional standing to the good of architecture in Canada, or elsewhere, may be recognized as a Fellow of the Royal Architectural Institute of Canada and can write FRAIC after their name. In Hong Kong, those elected to chartered membership may use the initials HKIA, and those who have made a special contribution may, after nomination and election by The Hong Kong Institute of Architects (HKIA), be elected as fellow members of HKIA and may use FHKIA after their name. Prizes, awards: Architects in the Philippines and Filipino communities overseas (whether they are Filipinos or not), especially those who also practise other professions at the same time, are addressed and introduced as Architect, rather than Sir/Madam in speech or Mr./Mrs./Ms. (G./Gng./Bb. in Filipino) before surnames. That word is used either by itself or before the given name or surname.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stadial** Stadial: Stadials and interstadials are phases dividing the Quaternary period, or the last 2.6 million years. Stadials are periods of colder climate, and interstadials are periods of warmer climate. Each Quaternary climate phase is associated with a Marine Isotope Stage (MIS) number, which describes the alternation between warmer and cooler temperatures, as measured by oxygen isotope data. Stadials have even MIS numbers, and interstadials have odd MIS numbers. The current Holocene interstadial is MIS 1, and the Last Glacial Maximum stadial is MIS 2. Marine Isotope Stages are sometimes further subdivided into stadials and interstadials by minor climate fluctuations within the overall stadial or interstadial regime, which are indicated by letters. The odd-numbered interstadial MIS 5, also known as the Sangamonian interglacial, contains two periods of relative cooling, and so is subdivided into three interstadials (5a, 5c, 5e) and two stadials (5b, 5d). A stadial isotope stage like MIS 6 would be subdivided by periods of relative warming, and so in that case the first and last subdivisions would be stadials; MIS 6a, 6c and 6e are stadials while 6b and 6d are interstadials. Distinction between stadials and glacials: Generally, stadials endure for a thousand years or less and interstadials for less than ten thousand years, and interglacials last for more than ten thousand and glacials for about one hundred thousand. For a period to be considered an interglacial, it changes from Arctic through sub-Arctic to boreal to temperate conditions and back again. An interstadial reaches only the stage of boreal vegetation.The MIS 1 interstadial encompasses the entirety of the present Holocene interglacial, but the Wisconsin glaciation encompasses MIS 2, 3, and 4. Distinction between stadials and glacials: Glacials and Interglacials refer to the 100,000-year cycles associated with Milankovitch cycles, and stadials and interstadials are defined by the actual oxygen-isotope temperature record. List of stadials and interstadials: Bølling/Allerød interstadial The Bølling oscillation and the Allerød oscillation, where they are not clearly distinguished in the stratigraphy, are taken together to form the Bølling/Allerød interstadial, and dated from about 14,700 to 12,700 years before the present. Dryas Periods The Oldest, Older, and Younger Dryas are three stadials that occurred during the warming since the Last Glacial Maximum. The Older Dryas occurred between the Bølling and Allerød interstadials. All three periods are named for the arctic plant species, Dryas octopetala, which proliferated during these cold periods. Dansgaard-Oeschger events Greenland ice cores show 24 interstadials during the 100,000 years of the Wisconsin glaciation. Referred to as the Dansgaard-Oeschger events, they have been extensively studied, and in their northern European contexts are sometimes named after towns, such as the Brorup, the Odderade, the Oerel, the Glinde, the Hengelo, or the Denekamp.
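The even/odd MIS convention and the lettered sub-stages described above can be captured in a few lines of code. The Python sketch below is only an illustration of that classification rule (the function name and labels are mine, not from any stratigraphic standard): even-numbered stages are stadials, odd-numbered stages are interstadials, and the sub-stage letters alternate regime starting from the parent stage's regime.

```python
def classify_mis(stage: str) -> str:
    """Classify a Marine Isotope Stage label (e.g. '2', '5e', '6b')
    as 'stadial' or 'interstadial' using the even/odd convention."""
    stage = stage.strip().lower()
    letter = stage[-1] if stage[-1].isalpha() else ""
    number = int(stage[:-1] if letter else stage)

    # Even-numbered stages are stadials (cold), odd-numbered are interstadials (warm).
    parent = "stadial" if number % 2 == 0 else "interstadial"
    if not letter:
        return parent

    # Subdivisions alternate: the first (a), third (c), ... sub-stage keeps the
    # parent regime; the second (b), fourth (d), ... is the opposite regime.
    index = ord(letter) - ord("a") + 1
    if index % 2 == 1:
        return parent
    return "interstadial" if parent == "stadial" else "stadial"


if __name__ == "__main__":
    # Matches the examples in the text: 5a/5c/5e interstadials, 5b/5d stadials,
    # 6b/6d interstadials, 6a/6c/6e stadials.
    for label in ["1", "2", "5a", "5b", "5e", "6b", "6c"]:
        print(label, "->", classify_mis(label))
```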
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Batter board** Batter board: Batter boards (or battre boards, sometimes mispronounced as "battle boards") are temporary frames, set beyond the corners of a planned foundation at precise elevations. These batter boards are then used to hold layout lines (construction twine) to indicate the limits (edges and corners) of the foundation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gateway-to-Gateway Protocol** Gateway-to-Gateway Protocol: The Gateway-to-Gateway Protocol (GGP) is an obsolete protocol defined for routing datagrams between Internet gateways. It was first outlined in 1982. The Gateway-to-Gateway Protocol was designed as an Internet Protocol (IP) datagram service similar to the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). However, it is classified as an Internet Layer protocol. GGP uses a minimum hop algorithm, in which it measures distance in router hops. A router is defined to be zero hops from directly connected networks, one hop from networks that are reachable through one other gateway. The protocol implements a distributed shortest-path methodology, and therefore requires global convergence of the routing tables after any change of link connectivity in the network. Gateway-to-Gateway Protocol: Each GGP message has a field header that identifies the message type and the format of the remaining fields. Because only core routers participated in GGP, and because core routers were controlled by a central authority, other routers could not interfere with the exchange.
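The minimum-hop metric described above behaves like a simple distance-vector rule: a gateway's distance to a network is one more than the smallest distance advertised by any neighbour, with directly connected networks at zero hops. The Python sketch below illustrates only that update rule under assumed data structures (the function name, table layout and hop cap are hypothetical; this is not GGP's actual message format or processing).

```python
# Minimal sketch of a minimum-hop distance-vector update, in the spirit of the
# rule described above: directly connected networks are 0 hops, networks
# reachable through one other gateway are 1 hop, and so on.

INFINITY = 16  # hypothetical cap used to mark unreachable networks


def merge_neighbour_update(table, neighbour, advertised):
    """table: {network: (hops, next_gateway)}.
    advertised: {network: hops} as reported by `neighbour`.
    Returns True if any route changed (i.e. the tables have not yet converged)."""
    changed = False
    for network, hops in advertised.items():
        current_hops, current_gw = table.get(network, (INFINITY, None))
        candidate = min(hops + 1, INFINITY)  # one extra hop to reach the neighbour
        # Accept a strictly better route, or refresh a route already via this neighbour.
        if candidate < current_hops or current_gw == neighbour:
            if (candidate, neighbour) != (current_hops, current_gw):
                table[network] = (candidate, neighbour)
                changed = True
    return changed


# Example: a gateway directly attached to net A learns about net B from gateway G2,
# which is directly attached to B (0 hops) and one hop from A.
table = {"A": (0, None)}
merge_neighbour_update(table, "G2", {"A": 1, "B": 0})
print(table)  # {'A': (0, None), 'B': (1, 'G2')}
```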
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Minor planet** Minor planet: According to the International Astronomical Union (IAU), a minor planet is an astronomical object in direct orbit around the Sun that is exclusively classified as neither a planet nor a comet. Before 2006, the IAU officially used the term minor planet, but that year's meeting reclassified minor planets and comets into dwarf planets and small Solar System bodies (SSSBs). Minor planets include asteroids (near-Earth objects, Mars-crossers, main-belt asteroids and Jupiter trojans), as well as distant minor planets (centaurs and trans-Neptunian objects), most of which reside in the Kuiper belt and the scattered disc. As of May 2022, there are 1,131,201 known objects, divided into 611,678 numbered (secured discoveries) and 519,523 unnumbered minor planets, with only five of those officially recognized as dwarf planets. The first minor planet to be discovered was Ceres in 1801. The term minor planet has been used since the 19th century to describe these objects. The term planetoid has also been used, especially for larger, planetary objects such as those the IAU has called dwarf planets since 2006. Historically, the terms asteroid, minor planet, and planetoid have been more or less synonymous. This terminology has been complicated by the discovery of numerous minor planets beyond the orbit of Jupiter, especially trans-Neptunian objects that are generally not considered asteroids. A minor planet seen releasing gas may be dually classified as a comet. Minor planet: Objects are called dwarf planets if their own gravity is sufficient to achieve hydrostatic equilibrium and form an ellipsoidal shape. All other minor planets and comets are called small Solar System bodies. The IAU stated that the term minor planet may still be used, but the term small Solar System body will be preferred. However, for purposes of numbering and naming, the traditional distinction between minor planet and comet is still used. Populations: Hundreds of thousands of minor planets have been discovered within the Solar System and thousands more are discovered each month. The Minor Planet Center has documented over 213 million observations and 794,832 minor planets, of which 541,128 have orbits known well enough to be assigned permanent official numbers. Of these, 21,922 have official names. As of 8 November 2021, the lowest-numbered unnamed minor planet is (4596) 1981 QB, and the highest-numbered named minor planet is 594913 ꞌAylóꞌchaxnim. There are various broad minor-planet populations: Asteroids; traditionally, most have been bodies in the inner Solar System. Near-Earth asteroids, those whose orbits take them inside the orbit of Mars. Further subclassification of these, based on orbital distance, is used: Apohele asteroids orbit inside of Earth's perihelion distance and thus are contained entirely within the orbit of Earth. Populations: Aten asteroids, those that have a semimajor axis of less than Earth's and an aphelion (furthest distance from the Sun) greater than 0.983 AU. Apollo asteroids are those asteroids with a semimajor axis greater than Earth's while having a perihelion distance of 1.017 AU or less. Like Aten asteroids, Apollo asteroids are Earth-crossers. Amor asteroids are those near-Earth asteroids that approach the orbit of Earth from beyond but do not cross it. Amor asteroids are further subdivided into four subgroups, depending on where their semimajor axis falls between Earth's orbit and the asteroid belt. 
Earth trojans, asteroids sharing Earth's orbit and gravitationally locked to it. As of 2022, two Earth trojans are known: 2010 TK7 and 2020 XL5. Mars trojans, asteroids sharing Mars's orbit and gravitationally locked to it. As of 2007, eight such asteroids are known. Asteroid belt, whose members follow roughly circular orbits between Mars and Jupiter. These are the original and best-known group of asteroids. Jupiter trojans, asteroids sharing Jupiter's orbit and gravitationally locked to it. Numerically they are estimated to equal the main-belt asteroids. Distant minor planets, an umbrella term for minor planets in the outer Solar System. Centaurs, bodies in the outer Solar System between Jupiter and Neptune. They have unstable orbits due to the gravitational influence of the giant planets, and therefore must have come from elsewhere, probably outside Neptune. Neptune trojans, bodies sharing Neptune's orbit and gravitationally locked to it. Although only a handful are known, there is evidence that Neptune trojans are more numerous than either the asteroids in the asteroid belt or the Jupiter trojans. Trans-Neptunian objects, bodies at or beyond the orbit of Neptune, the outermost planet. The Kuiper belt, objects inside an apparent population drop-off approximately 55 AU from the Sun. Classical Kuiper belt objects like Makemake, also known as cubewanos, are in primordial, relatively circular orbits that are not in resonance with Neptune. Resonant Kuiper belt objects. Plutinos, bodies like Pluto that are in a 2:3 resonance with Neptune. Scattered disc objects like Eris, with aphelia outside the Kuiper belt. These are thought to have been scattered by Neptune. Resonant scattered disc objects. Detached objects such as Sedna, with both an aphelion and a perihelion outside the Kuiper belt. Sednoids, detached objects with a perihelion greater than 75 AU (Sedna, 2012 VP113, and Leleākūhonua). The Oort cloud, a hypothetical population thought to be the source of long-period comets and that may extend to 50,000 AU from the Sun. Naming conventions: All astronomical bodies in the Solar System need a distinct designation. The naming of minor planets runs through a three-step process. First, a provisional designation is given upon discovery—because the object still may turn out to be a false positive or become lost later on—called a provisionally designated minor planet. After the observation arc is accurate enough to predict its future location, a minor planet is formally designated and receives a number. It is then a numbered minor planet. Finally, in the third step, it may be named by its discoverers. However, only a small fraction of all minor planets have been named. The vast majority are either numbered or have still only a provisional designation. Example of the naming process: 1932 HA – provisional designation upon discovery on 24 April 1932 (1862) 1932 HA – formal designation, receives an official number 1862 Apollo – named minor planet, receives a name, the alphanumeric code is dropped Provisional designation A newly discovered minor planet is given a provisional designation. For example, the provisional designation 2002 AT4 consists of the year of discovery (2002) and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name (e.g. 433 Eros). The formal naming convention uses parentheses around the number, but dropping the parentheses is quite common. 
Informally, it is common to drop the number altogether or to drop it after the first mention when a name is repeated in running text. Naming conventions: Minor planets that have been given a number but not a name keep their provisional designation, e.g. (29075) 1950 DA. Because modern discovery techniques are finding vast numbers of new asteroids, they are increasingly being left unnamed. The earliest discovered to be left unnamed was for a long time (3360) 1981 VA, now 3360 Syrinx. In November 2006 its position as the lowest-numbered unnamed asteroid passed to (3708) 1974 FV1 (now 3708 Socus), and in May 2021 to (4596) 1981 QB. On rare occasions, a small object's provisional designation may become used as a name in itself: the then-unnamed (15760) 1992 QB1 gave its "name" to a group of objects that became known as classical Kuiper belt objects ("cubewanos") before it was finally named 15760 Albion in January 2018.A few objects are cross-listed as both comets and asteroids, such as 4015 Wilson–Harrington, which is also listed as 107P/Wilson–Harrington. Naming conventions: Numbering Minor planets are awarded an official number once their orbits are confirmed. With the increasing rapidity of discovery, these are now six-figure numbers. The switch from five figures to six figures arrived with the publication of the Minor Planet Circular (MPC) of October 19, 2005, which saw the highest-numbered minor planet jump from 99947 to 118161. Naming The first few asteroids were named after figures from Greek and Roman mythology, but as such names started to dwindle the names of famous people, literary characters, discoverers' spouses, children, colleagues, and even television characters were used. Naming conventions: Gender The first asteroid to be given a non-mythological name was 20 Massalia, named after the Greek name for the city of Marseille. The first to be given an entirely non-Classical name was 45 Eugenia, named after Empress Eugénie de Montijo, the wife of Napoleon III. For some time only female (or feminized) names were used; Alexander von Humboldt was the first man to have an asteroid named after him, but his name was feminized to 54 Alexandra. This unspoken tradition lasted until 334 Chicago was named; even then, female names showed up in the list for years after. Naming conventions: Eccentric As the number of asteroids began to run into the hundreds, and eventually, in the thousands, discoverers began to give them increasingly frivolous names. The first hints of this were 482 Petrina and 483 Seppina, named after the discoverer's pet dogs. However, there was little controversy about this until 1971, upon the naming of 2309 Mr. Spock (the name of the discoverer's cat). Although the IAU subsequently discouraged the use of pet names as sources, eccentric asteroid names are still being proposed and accepted, such as 4321 Zero, 6042 Cheshirecat, 9007 James Bond, 13579 Allodd and 24680 Alleven, and 26858 Misterrogers. Naming conventions: Discoverer's name A well-established rule is that, unlike comets, minor planets may not be named after their discoverer(s). One way to circumvent this rule has been for astronomers to exchange the courtesy of naming their discoveries after each other. An exception to this rule is 96747 Crespodasilva, which was named after its discoverer, Lucy d'Escoffier Crespo da Silva, because she died shortly after the discovery, at age 22. Naming conventions: Languages Names were adapted to various languages from the beginning. 
1 Ceres, Ceres being its Anglo-Latin name, was actually named Cerere, the Italian form of the name. German, French, Arabic, and Hindi use forms similar to the English, whereas Russian uses a form, Tserera, similar to the Italian. In Greek, the name was translated to Δήμητρα (Demeter), the Greek equivalent of the Roman goddess Ceres. In the early years, before it started causing conflicts, asteroids named after Roman figures were generally translated into Greek; other examples are Ἥρα (Hera) for 3 Juno, Ἑστία (Hestia) for 4 Vesta, Χλωρίς (Chloris) for 8 Flora, and Πίστη (Pistis) for 37 Fides. In Chinese, the names are not given the Chinese forms of the deities they are named after, but rather typically have a syllable or two for the character of the deity or person, followed by 神 'god(dess)' or 女 'woman' if just one syllable, plus 星 'star/planet', so that most asteroid names are written with three Chinese characters. Thus Ceres is 穀神星 'grain goddess planet', Pallas is 智神星 'wisdom goddess planet', etc. Physical properties of comets and minor planets: Commission 15 of the International Astronomical Union is dedicated to the Physical Study of Comets & Minor Planets. Physical properties of comets and minor planets: Archival data on the physical properties of comets and minor planets are found in the PDS Asteroid/Dust Archive. This includes standard asteroid physical characteristics such as the properties of binary systems, occultation timings and diameters, masses, densities, rotation periods, surface temperatures, albedoes, spin vectors, taxonomy, and absolute magnitudes and slopes. In addition, the European Asteroid Research Node (E.A.R.N.), an association of asteroid research groups, maintains a Data Base of Physical and Dynamical Properties of Near Earth Asteroids. Environmental properties: Environmental characteristics have three aspects: the space environment, the surface environment and the internal environment, including geological, optical, thermal and radiological environmental properties, etc., which are the basis for understanding the basic properties of minor planets and carrying out scientific research, and are also an important reference basis for designing the payload of exploration missions. Radiation environment Without the protection of an atmosphere and its own strong magnetic field, the minor planet's surface is directly exposed to the surrounding radiation environment. In the cosmic space where minor planets are located, the radiation on the surface of the planets can be divided into two categories according to their sources: one comes from the sun, including electromagnetic radiation from the sun, and ionizing radiation from the solar wind and solar energetic particles; the other comes from outside the solar system, that is, galactic cosmic rays, etc. Environmental properties: Optical environment Usually during one rotation period of a minor planet, the albedo of a minor planet will change slightly due to its irregular shape and uneven distribution of material composition. This small change will be reflected in the periodic change of the planet's light curve, which can be observed by ground-based equipment, so as to obtain the planet's magnitude, rotation period, rotation axis orientation, shape, albedo distribution, and scattering properties. Generally speaking, the albedo of minor planets is usually low, and the overall statistical distribution is bimodal, corresponding to C-type (average 0.035) and S-type (average 0.15) minor planets. 
In the minor planet exploration mission, measuring the albedo and color changes of the planet surface is also the most basic method to directly know the difference in the material composition of the planet surface. Environmental properties: Geological environment The geological environment on the surface of minor planets is similar to that of other unprotected celestial bodies, with the most widespread geomorphological feature present being impact craters: however, the fact that most minor planets are rubble pile structures, which are loose and porous, gives the impact action on the surface of minor planets its unique characteristics. On highly porous minor planets, small impact events produce spatter blankets similar to common impact events: whereas large impact events are dominated by compaction and spatter blankets are difficult to form, and the longer the planets receive such large impacts, the greater the overall density. In addition, statistical analysis of impact craters is an important means of obtaining information on the age of a planet surface. Although the Crater Size-Frequency Distribution (CSFD) method of dating commonly used on minor planet surfaces does not allow absolute ages to be obtained, it can be used to determine the relative ages of different geological bodies for comparison. In addition to impact, there are a variety of other rich geological effects on the surface of minor planets, such as mass wasting on slopes and impact crater walls, large-scale linear features associated with graben, and electrostatic transport of dust. By analysing the various geological processes on the surface of minor planets, it is possible to learn about the possible internal activity at this stage and some of the key evolutionary information about the long-term interaction with the external environment, which may lead to some indication of the nature of the parent body's origin. Many of the larger planets are often covered by a layer of soil (regolith) of unknown thickness. Compared to other atmosphere-free bodies in the solar system (e.g. the Moon), minor planets have weaker gravity fields and are less capable of retaining fine-grained material, resulting in a somewhat larger surface soil layer size. Soil layers are inevitably subject to intense space weathering that alters their physical and chemical properties due to direct exposure to the surrounding space environment. In silicate-rich soils, the outer layers of Fe are reduced to nano-phase Fe (np-Fe), which is the main product of space weathering. For some small planets, their surfaces are more exposed as boulders of varying sizes, up to 100 metres in diameter, due to their weaker gravitational pull. These boulders are of high scientific interest, as they may be either deeply buried material excavated by impact action or fragments of the planet's parent body that have survived. The rocks provide more direct and primitive information about the material inside the minor planet and the nature of its parent body than the soil layer, and the different colours and forms of the rocks indicate different sources of material on the surface of the minor planet or different evolutionary processes. Environmental properties: Magnetic environment Usually in the interior of the planet, the convection of the conductive fluid will generate a large and strong magnetic field. 
However, the size of a minor planet is generally small and most of the minor planets have a "crushed stone pile" structure, and there is basically no "dynamo" structure inside, so it will not generate a self-generated dipole magnetic field like the Earth. But some minor planets do have magnetic fields—on the one hand, some minor planets have remanent magnetism: if the parent body had a magnetic field or if the nearby planetary body has a strong magnetic field, the rocks on the parent body will be magnetised during the cooling process and the planet formed by the fission of the parent body will still retain remanence, which can also be detected in extraterrestrial meteorites from the minor planets; on the other hand, if the minor planets are composed of electrically conductive material and their internal conductivity is similar to that of carbon- or iron-bearing meteorites, the interaction between the minor planets and the solar wind is likely to be unipolar induction, resulting in an external magnetic field for the minor planet. In addition, the magnetic fields of minor planets are not static; impact events, weathering in space and changes in the thermal environment can alter the existing magnetic fields of minor planets. At present, there are not many direct observations of minor planet magnetic fields, and the few existing planets detection projects generally carry magnetometers, with some targets such as Gaspra and Braille measured to have strong magnetic fields nearby, while others such as Lutetia have no magnetic field.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cyclogyro** Cyclogyro: The cyclogyro, or cyclocopter, is an aircraft configuration that uses a horizontal-axis cyclorotor as a rotor wing to provide lift and sometimes also propulsion and control. In principle, the cyclogyro is capable of vertical take off and landing and hovering performance like a helicopter, while potentially benefiting from some of the advantages of a fixed-wing aircraft. The cyclogyro is distinct from the Flettner airplane which uses a cylindrical wing rotor to harness the Magnus effect. Principles of operation: The cyclogyro wing resembles a paddle wheel, with airfoil blades replacing the paddles. Like a helicopter, the blade pitch (angle of attack) can be adjusted either collectively all together or cyclically as they move around the rotor's axis. In normal forward flight the blades are given a slight positive pitch at the upper and forward portions of their arc producing lift and, if powered, also forward thrust. They are given flat or negative pitch at the bottom, and are "flat" through the rest of the circle to produce little or no lift in other directions. Blade pitch can be adjusted to change the thrust profile, allowing the cyclogyro to travel in any direction without the need for separate control surfaces. Differential thrust between the two wings (one on either side of the fuselage) can be used to turn the aircraft around its vertical axis, although conventional tail surfaces may be used as well. History: Jonathan Edward Caldwell took out a patent on the cyclogyro which was granted in 1927, but he never followed it up. The Schroeder S1 of 1930 was a full-size prototype which used the cyclogyro for forward thrust only. Adolf Rohrbach of Germany designed a full VTOL version in 1933, which was later developed in the US and featured a tall streamlined fuselage to keep the wings clear of the ground. Another example was built by Rahn Aircraft in 1935, which used two large-chord rotary wings instead of a multi-blade wheel, driven by a 240 hp supercharged Wright Whirlwind. The cyclogyro has been revisited in the twenty-first century, as a possible configuration for unmanned aerial vehicles.
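The collective-plus-cyclic pitch control described under "Principles of operation" can be pictured as a constant pitch term plus a once-per-revolution term that peaks near the top of the blade's arc. The Python sketch below is only an illustration of that idea; the cosine form, the parameter names and the numerical values are assumptions made for the example, not taken from any particular cyclogyro design.

```python
import math


def blade_pitch(azimuth_deg, collective_deg=0.0, cyclic_amplitude_deg=5.0,
                phase_deg=90.0):
    """Illustrative blade pitch (degrees) as a function of azimuth around the rotor.

    collective_deg       -- pitch applied equally to all blades
    cyclic_amplitude_deg -- extra pitch that peaks once per revolution
    phase_deg            -- azimuth at which the cyclic pitch peaks
                            (90 degrees here stands for the top of the arc)
    All parameter names and values are assumed for illustration only.
    """
    azimuth = math.radians(azimuth_deg - phase_deg)
    return collective_deg + cyclic_amplitude_deg * math.cos(azimuth)


# Pitch peaks near the top of the arc and is lowest at the bottom, roughly
# matching the "positive pitch at the upper portion, flat or negative at the
# bottom" behaviour described above.
for az in (0, 90, 180, 270):
    print(az, round(blade_pitch(az), 2))
```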
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Magic polygon** Magic polygon: A magic polygon is a polygonal magic graph with integers on its vertices. Perimeter magic polygon: A magic polygon, also called a perimeter magic polygon, is a polygon with integers on its sides that all add up to a magic constant. It is one where the positive integers (from 1 to N) placed on a k-sided polygon add up to a constant (a concrete triangle example is sketched below). Magic polygons are a generalization of other magic shapes such as magic triangles. Magic polygon with a center point: Victoria Jakicic and Rachelle Bouchat defined magic polygons as n-sided regular polygons with 2n+1 nodes such that the sums of three nodes are equal. In their definition, a 3 × 3 magic square can be viewed as a magic 4-gon. There are no magic odd-gons with this definition. Magic polygons and degenerated magic polygons: Danniel Dias Augusto and Josimar da Silva defined the magic polygon P(n,k) as a set of vertices of k/2 concentric n-gons and a center point. In this definition, magic polygons of Victoria Jakicic and Rachelle Bouchat can be viewed as P(n,2) magic polygons. They also defined degenerated magic polygons.
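As a concrete instance of the perimeter-magic idea referenced above, the Python sketch below brute-forces the classic triangle case: the integers 1 to 6 are placed on three corners and three side midpoints so that each side of three numbers has the same sum. The function name and output format are mine; the search simply enumerates placements and keeps the ones satisfying the definition.

```python
from itertools import permutations


def perimeter_magic_triangles():
    """Yield (corners, midpoints, magic_sum) for placements of 1..6 on a triangle
    such that each side (corner + midpoint + corner) has the same sum."""
    seen = set()
    for a, b, c, ab, bc, ca in permutations(range(1, 7)):
        sides = (a + ab + b, b + bc + c, c + ca + a)
        if len(set(sides)) == 1:
            # Canonical key so rotations/reflections of the same placement
            # are reported only once.
            key = (frozenset([a, b, c]), sides[0])
            if key not in seen:
                seen.add(key)
                yield (a, b, c), (ab, bc, ca), sides[0]


# Prints the solutions for 1..6; the magic sums found are 9, 10, 11 and 12.
for corners, mids, s in perimeter_magic_triangles():
    print("corners", corners, "midpoints", mids, "magic sum", s)
```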
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scalable Link Interface** Scalable Link Interface: Scalable Link Interface (SLI) is the brand name for a now discontinued multi-GPU technology developed by Nvidia for linking two or more video cards together to produce a single output. SLI is a parallel processing algorithm for computer graphics, meant to increase the available processing power. Scalable Link Interface: The initialism SLI was first used by 3dfx for Scan-Line Interleave, which was introduced to the consumer market in 1998 and used in the Voodoo2 line of video cards. After buying out 3dfx, Nvidia acquired the technology but did not use it. Nvidia later reintroduced the SLI name in 2004 and intended for it to be used in modern computer systems based on the PCI Express (PCIe) bus; however, the technology behind the name SLI has changed dramatically. Implementation: SLI allows two, three, or four graphics processing units (GPUs) to share the workload when rendering real-time 3D computer graphics. Ideally, identical GPUs are installed on the motherboard that contains enough PCI Express slots, set up in a master–slave configuration. All graphics cards are given an equal workload to render, but the final output of each card is sent to the master card via a connector called the SLI bridge. For example, in a two graphics card setup, the master works on the top half of the scene, the slave the bottom half. Once the slave is done, it sends its render to the master to combine into one image before sending it to the monitor. Implementation: The SLI bridge is used to reduce bandwidth constraints and send data between both graphics cards directly. It is possible to run SLI without using the bridge connector on a pair of low-end to mid-range graphics cards (e.g., 7100GS or 6600GT) with Nvidia's Forceware drivers 80.XX or later. Since these graphics cards do not use as much bandwidth, data can be relayed through just the chipsets on the motherboard. However, if there are two high-end graphics cards installed and the SLI bridge is omitted, the performance will suffer severely, as the chipset does not have enough bandwidth. Implementation: Configurations currently include: Two-way, three-way, and four-way SLI uses two, three, or four individual graphics cards respectively. Implementation: Two GPUs on one graphics card. Examples include the GeForce GTX 590, the GeForce GTX 690 and the GeForce GTX Titan Z. This configuration has the advantage of implementing two-way SLI, while only occupying one PCI Express slot and (usually) two expansion I/O slots. This also allows for four-way SLI using only two cards (which is referred to as Quad SLI).Nvidia has created a set of custom video game profiles in cooperation with video game publishers that will automatically enable SLI in the mode that gives the largest performance boost. Implementation: Nvidia has three types of SLI bridges: Standard bridge (400 MHz pixel clock and 1 GB/s bandwidth) LED bridge (540 MHz pixel clock) High-bandwidth bridge (650 MHz pixel clock and 2 GB/s bandwidth)The standard bridge is traditionally included with motherboards that support SLI and is recommended for monitors up to 1920×1080 and 2560×1440 at 60 Hz. The LED bridge is sold by Nvidia, EVGA, and others and is recommended for monitors up to 2560×1440 at 120 Hz and above and 4K. The LED bridges can only function at the increased pixel clock if the GPU supports that clock. The high-bandwidth bridge is only sold by Nvidia and is recommended for monitors up to 5K and surround. 
Implementation: The following table provides an overview on the maximum theoretical bandwidth for data transfers depending on bridge type specifications as found on the open market: SLI modes: Split-frame rendering (SFR) This analyzes the rendered image in order to split the workload equally between the two GPUs. To do this, the frame is split horizontally in varying ratios depending on geometry. For example, in a scene where the top half of the frame is mostly empty sky, the dividing line will lower, balancing geometry workload between the two GPUs. SLI modes: Alternate-frame rendering (AFR) Each GPU renders entire frames in sequence. For example, in a two-way setup, one GPU renders the odd frames, the other the even frames, one after the other. Finished outputs are sent to the master for display. Ideally, this would result in the rendering time being cut by the number of GPUs available. In their advertising, Nvidia claims up to 1.9 times the performance of one card with the two-way setup. While AFR may produce higher overall framerates than SFR, it also exhibits the temporal artifact known as micro stuttering, which may affect frame rate perception. It is noteworthy that while the frequency at which frames arrive may be doubled, the time to produce the frame is not reduced – which means that AFR is not a viable method of reducing input lag. SLI modes: SLI antialiasing This is a standalone rendering mode that offers up to double the antialiasing performance by splitting the antialiasing workload between the two graphics cards, offering superior image quality. One GPU performs an antialiasing pattern which is slightly offset to the usual pattern (for example, slightly up and to the right), and the second GPU uses a pattern offset by an equal amount in the opposite direction (down and to the left). Compositing both the results gives higher image quality than is normally possible. This mode is not intended for higher frame rates, and can actually lower performance, but is instead intended for games which are not GPU-bound, offering a clearer image in place of better performance. When enabled, SLI antialiasing offers advanced antialiasing options: SLI 8×, SLI 16×, and SLI 32× (for quad SLI systems only). SLI modes: Hybrid SLI Hybrid SLI is the generic name for two technologies, GeForce Boost and HybridPower.GeForce Boost allows the rendering power of an integrated graphics processor (IGP) and a discrete GPU to be combined in order to increase performance.HybridPower, on the other hand, is another mode that is not for performance enhancement. The setup consists of an IGP as well as a GPU on MXM module. The IGP would assist the GPU to boost performance when the laptop is plugged to a power socket while the MXM module would be shut down when the laptop was unplugged from power socket to lower overall graphics power consumption. Hybrid SLI is also available on desktop Motherboards and PCs with PCI-E discrete video cards. NVIDIA claims that twice the performance can be achieved with a Hybrid SLI capable IGP motherboard and a GeForce 8400 GS video card.HybridPower was later renamed as Nvidia Optimus. SLI HB: In May 2016 Nvidia announced that the GeForce 10 series would feature a new SLI HB (High Bandwidth) bridge; this bridge uses 2 SLI fingers on the PCB of each card and essentially doubles the available bandwidth between them. Currently, only GeForce 10 series cards support SLI HB and only 2-way SLI is supported over this bridge for single-GPU cards. 
The SLI HB interface runs at 650 MHz, while the legacy SLI interface runs at a slower 400 MHz. SLI HB: Electrically there is little difference between the regular SLI bridge and the SLI HB bridge. It is similar to two regular bridges combined in one PCB. The signal quality of the bridge is improved, however, as the SLI HB bridge has adjusted trace lengths to make sure all traces on the bridge have exactly the same length. A PC gaming magazine has used x-ray imaging to compare SLI bridges with their SLI HB successors and found that the signal lengths are adjusted by deliberate meandering of certain wires, so that clock rates can go up from 400 MHz to 650 MHz and the data rates along with them. With the increased bus width, a noticeable bandwidth increase should be expected. Tests with a GTX 1080 GPU board showed that the improvement in gaming performance is quite marginal. It was also determined that LED-illuminated bridges (often back-lighting a logo) mainly result in a noticeably increased market price for comparable base functionality. Caveats: Not all motherboards with multiple PCI-Express x16 slots support SLI. On August 10, 2009, Nvidia announced that Intel and other leading motherboard manufacturers including ASUS, EVGA, Gigabyte and MSI have all licensed Nvidia SLI technology for inclusion on their Intel P55 Express Chipset-based motherboards designed for the upcoming Intel Core i7 and i5 processors in the LGA 1156 socket. Older motherboards using the P55's predecessors Intel P35 or Intel P45 do not support SLI. Recent motherboards as of October 2017 that support it use Intel's Z and X series chipsets (Z68, Z77, Z87, Z97, Z170, Z270, Z370, X79, X99 and X299) along with AMD's 990FX, X370 and X399 chipsets. Earlier chipsets, such as the Intel X58, could support 2-way SLI over 16 lane PCI-e. In order for motherboards of that generation to support more than two GPUs, they were required to implement Nvidia nForce chipsets. Caveats: In an SLI configuration, cards can be of mixed manufacturers, card model names, BIOS revisions or clock speeds. However, they must be of the same GPU series (e.g., 8600, 8800) and GPU model name (e.g., GT, GTS, GTX). There are rare exceptions for "mixed SLI" configurations on some cards that only have a matching core codename (e.g., G70, G73, G80, etc.), but this is otherwise not possible, and only happens when two matched cards differ only very slightly, an example being a differing amount of video memory, stream processors, or clockspeed. In this case, the slower/lesser card becomes dominant, and the other card matches it. Another exception is the GTS 250, which can be paired with the 9800 GTX+, as the GTS 250 GPU is a rebadged 9800 GTX+ GPU. Caveats: In cases where two cards are not identical, the faster card – or the card with more memory – will run at the speed of the slower card or disable its additional memory. (Note that while the FAQ still claims different memory size support, the support has been removed since revision 100.xx of Nvidia's Forceware driver suite.) SLI does not always give a performance benefit – in some extreme cases, it can lower the frame rate due to the particulars of an application's coding. This is also true for AMD's CrossFire, as the problem is inherent in multi-GPU systems. This is often witnessed when running an application at low resolutions. Caveats: Vsync + Triple buffering is not supported in some cases in SLI AFR mode. 
Users with a Hybrid SLI setup must manually change modes between HybridPower and GeForce Boost; automatic mode switching will not be available until future updates. Hybrid SLI currently supports only single link DVI at 1920×1200 screen resolution. When using SLI with AFR, the subjective framerate can often be lower than the framerate reported by benchmarking applications, and may even be poorer than the framerate of its single-GPU equivalent. This phenomenon is known as micro stuttering and also applies to CrossFire since it is inherent to multi-GPU configurations. Caveats: With the new RTX 20xx series of graphics cards, launched in 2018, the interconnect is no longer SLI HB. These newer cards use NVLink as their communication base and require either a three-slot-long or four-slot-long NVLink bridge, partly because of thermal considerations and socket availability. As of now, only two GPU cards can be connected with NVLink; three-way, four-way and quad configurations are not possible using NVLink bridges, even though NVLink is in principle a very versatile interface. Caveats: As of the GeForce RTX 3000-series, SLI has been effectively replaced with NVLink.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**AROS Public License** AROS Public License: The AROS Public License (APL) is a software license which is primarily used as the license for the AROS Research Operating System software project. Version 1.1 is based on the text of Mozilla Public License 1.1, with some definitions added and Netscape-specific texts changed. It has not been officially approved as a free or open source license by the FSF, OSI, or Debian. The license the APL is based on, MPL v1.1, is incompatible with the GPL, unless the developer offers the choice of licensing the program under the GPL or any other GPL-compatible license.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metisazone** Metisazone: Methisazone (USAN) or metisazone (INN) is an antiviral drug that works by inhibiting mRNA and protein synthesis, especially in pox viruses. During trials in the 1960s it showed promising results against smallpox infection, but widespread use was considered logistically impractical in the developing countries facing smallpox cases, and it saw only limited use. In developed countries able to cope with the logistic challenge, treatment of smallpox could be achieved just as effectively with immunoglobulin therapy, without the severe nausea associated with metisazone. Methisazone has been described as being used in prophylaxis since at least 1965. The condensation of N-methylisatin with thiosemicarbazide leads to methisazone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Early effect** Early effect: The Early effect, named after its discoverer James M. Early, is the variation in the effective width of the base in a bipolar junction transistor (BJT) due to a variation in the applied base-to-collector voltage. A greater reverse bias across the collector–base junction, for example, increases the collector–base depletion width, thereby decreasing the width of the charge carrier portion of the base. Explanation: In Figure 1, the neutral (i.e. active) base is green, and the depleted base regions are hashed light green. The neutral emitter and collector regions are dark blue and the depleted regions hashed light blue. Under increased collector–base reverse bias, the lower panel of Figure 1 shows a widening of the depletion region in the base and the associated narrowing of the neutral base region. Explanation: The collector depletion region also increases under reverse bias, more than does that of the base, because the collector is less heavily doped than the base. The principle governing these two widths is charge neutrality. The narrowing of the collector does not have a significant effect as the collector is much longer than the base. The emitter–base junction is unchanged because the emitter–base voltage is the same. Explanation: Base-narrowing has two consequences that affect the current: There is a lesser chance for recombination within the "smaller" base region. Explanation: The charge gradient is increased across the base, and consequently, the current of minority carriers injected across the collector–base junction increases, which net current is called \(I_{CB0}\). Both these factors increase the collector or "output" current of the transistor with an increase in the collector voltage, but only the second is called the Early effect. This increased current is shown in Figure 2. Tangents to the characteristics at large voltages extrapolate backward to intercept the voltage axis at a voltage called the Early voltage, often denoted by the symbol \(V_A\). Large-signal model: In the forward active region the Early effect modifies the collector current (\(I_C\)) and the forward common-emitter current gain (\(\beta_F\)), as typically described by the following equations: \(I_C = I_S \, e^{V_{BE}/V_T} \left(1 + \frac{V_{CE}}{V_A}\right)\) and \(\beta_F = \beta_{F0} \left(1 + \frac{V_{CE}}{V_A}\right)\), where \(V_{CE}\) is the collector–emitter voltage, \(V_{BE}\) is the base–emitter voltage, \(I_S\) is the reverse saturation current, \(V_T\) is the thermal voltage \(kT/q\) (see thermal voltage: role in semiconductor physics), \(V_A\) is the Early voltage (typically 15–150 V; smaller for smaller devices), and \(\beta_{F0}\) is the forward common-emitter current gain at zero bias. Some models base the collector current correction factor on the collector–base voltage \(V_{CB}\) (as described in base-width modulation) instead of the collector–emitter voltage \(V_{CE}\). Using \(V_{CB}\) may be more physically plausible, in agreement with the physical origin of the effect, which is a widening of the collector–base depletion layer that depends on \(V_{CB}\). Computer models such as those used in SPICE use the collector–base voltage \(V_{CB}\). Small-signal model: The Early effect can be accounted for in small-signal circuit models (such as the hybrid-pi model) as a resistor defined as \(r_O = \frac{V_A + V_{CE}}{I_C} \approx \frac{V_A}{I_C}\) in parallel with the collector–emitter junction of the transistor. This resistor can thus account for the finite output resistance of a simple current mirror or an actively loaded common-emitter amplifier. Small-signal model: In keeping with the model used in SPICE and, as discussed above, using \(V_{CB}\), the resistance becomes \(r_O = \frac{V_A + V_{CB}}{I_C}\), which almost agrees with the textbook result. 
In either formulation, \(r_O\) varies with DC reverse bias \(V_{CB}\), as is observed in practice. In the MOSFET the output resistance is given in the Shichman–Hodges model (accurate for very old technology) as \(r_O = \frac{1 + \lambda V_{DS}}{\lambda I_D} = \frac{1}{I_D}\left(\frac{1}{\lambda} + V_{DS}\right)\), where \(V_{DS}\) is the drain-to-source voltage, \(I_D\) is the drain current and \(\lambda\) is the channel-length modulation parameter, usually taken as inversely proportional to channel length L. Small-signal model: Because of the resemblance to the bipolar result, the terminology "Early effect" often is applied to the MOSFET as well. Current–voltage characteristics The expressions are derived for a PNP transistor. For an NPN transistor, n has to be replaced by p, and p has to be replaced by n in all expressions below. The following assumptions are involved when deriving ideal current-voltage characteristics of the BJT: low-level injection, uniform doping in each region with abrupt junctions, one-dimensional current, negligible recombination-generation in the space charge regions, and negligible electric fields outside of the space charge regions. It is important to characterize the minority diffusion currents induced by injection of carriers. With regard to the pn-junction diode, a key relation is the diffusion equation \(\frac{d^2 \Delta p_B(x)}{dx^2} = \frac{\Delta p_B(x)}{L_B^2}\). A solution of this equation is \(\Delta p_B(x) = C_1 e^{x/L_B} + C_2 e^{-x/L_B}\), and two boundary conditions are used to find \(C_1\) and \(C_2\). The following equations apply to the emitter and collector region, respectively, and the origins \(0\), \(0'\), and \(0''\) apply to the base, collector, and emitter: \(\Delta n_E(x'') = A_1 e^{x''/L_E} + A_2 e^{-x''/L_E}\) and \(\Delta n_C(x') = B_1 e^{x'/L_C} + B_2 e^{-x'/L_C}\). A boundary condition of the emitter is \(\Delta n_E(0'') = n_{E0}\left(e^{qV_{EB}/kT} - 1\right)\). The values of the constants \(A_1\) and \(B_1\) are zero due to the following conditions of the emitter and collector regions as \(x'' \to \infty\) and \(x' \to \infty\): \(\Delta n_E(x'') \to 0\) and \(\Delta n_C(x') \to 0\). Because \(A_1 = B_1 = 0\), the values of \(\Delta n_E(0'')\) and \(\Delta n_C(0')\) are \(A_2\) and \(B_2\), respectively, so that \(\Delta n_E(x'') = n_{E0}\left(e^{qV_{EB}/kT} - 1\right) e^{-x''/L_E}\) and \(\Delta n_C(x') = n_{C0}\left(e^{qV_{CB}/kT} - 1\right) e^{-x'/L_C}\). Expressions of \(I_{En}\) and \(I_{Cn}\) can be evaluated: \(I_{En} = \frac{q A D_E}{L_E} n_{E0}\left(e^{qV_{EB}/kT} - 1\right)\) and \(I_{Cn} = \frac{q A D_C}{L_C} n_{C0}\left(e^{qV_{CB}/kT} - 1\right)\). Because insignificant recombination occurs, the second derivative of \(\Delta p_B(x)\) is zero. There is therefore a linear relationship between excess hole density and \(x\): \(\Delta p_B(x) = D_1 x + D_2\). The following are boundary conditions of \(\Delta p_B\): \(\Delta p_B(0) = D_2\) and \(\Delta p_B(W) = D_1 W + \Delta p_B(0)\), with \(W\) the base width. Substituting into the above linear relation gives \(\Delta p_B(x) = -\frac{1}{W}\left[\Delta p_B(0) - \Delta p_B(W)\right] x + \Delta p_B(0)\). With this result, the value of \(I_{Ep}\) is derived: \(I_{Ep}(0) = -q A D_B \left.\frac{d\Delta p_B}{dx}\right|_{x=0} = \frac{q A D_B}{W}\left[\Delta p_B(0) - \Delta p_B(W)\right]\). Using the expressions of \(I_{Ep}\), \(I_{En}\), \(\Delta p_B(0) = p_{B0}\left(e^{qV_{EB}/kT} - 1\right)\), and \(\Delta p_B(W) = p_{B0}\left(e^{qV_{CB}/kT} - 1\right)\), an expression of the emitter current is developed: \(I_E = \frac{q A D_B}{W} p_{B0}\left[\left(e^{qV_{EB}/kT} - 1\right) - \left(e^{qV_{CB}/kT} - 1\right)\right] + \frac{q A D_E}{L_E} n_{E0}\left(e^{qV_{EB}/kT} - 1\right)\). Similarly, an expression of the collector current is derived: \(I_C = \frac{q A D_B}{W} p_{B0}\left[\left(e^{qV_{EB}/kT} - 1\right) - \left(e^{qV_{CB}/kT} - 1\right)\right] - \frac{q A D_C}{L_C} n_{C0}\left(e^{qV_{CB}/kT} - 1\right)\). An expression of the base current is found with the previous results: \(I_B = I_E - I_C = \frac{q A D_E}{L_E} n_{E0}\left(e^{qV_{EB}/kT} - 1\right) + \frac{q A D_C}{L_C} n_{C0}\left(e^{qV_{CB}/kT} - 1\right)\).
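To put numbers on the large-signal and small-signal expressions above, the short Python sketch below evaluates the Early-effect-corrected collector current and the corresponding output resistance. The device values (I_S, V_A and the bias point) are assumed purely for illustration.

```python
import math

V_T = 0.02585                 # thermal voltage kT/q at roughly 300 K, in volts
I_S = 1e-15                   # assumed reverse saturation current, A
V_A = 80.0                    # assumed Early voltage, V


def collector_current(v_be, v_ce):
    """I_C = I_S * exp(V_BE / V_T) * (1 + V_CE / V_A), the large-signal model above."""
    return I_S * math.exp(v_be / V_T) * (1.0 + v_ce / V_A)


v_be, v_ce = 0.65, 5.0        # assumed bias point
i_c = collector_current(v_be, v_ce)

# Small-signal output resistance r_O = (V_A + V_CE) / I_C, approximately V_A / I_C.
r_o = (V_A + v_ce) / i_c
print(f"I_C = {i_c * 1e3:.3f} mA")
print(f"r_O = {r_o / 1e3:.1f} kOhm")
```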
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Onion soup** Onion soup: Onion soup is a type of vegetable soup with sliced onions as the main ingredient. It is prepared in different variations in many different countries, the most famous of which is the French onion soup or Parisian onion soup. Because of the affordable ingredients, it has primarily been a dish for the poor for a long time. Onion soup: Common for all variations of onion soup is the use of thinly sliced or chopped onions soaked in fat, and a liquid base such as water or broth, possibly including white wine, after which the soup is cooked for a while so that the onions lose their strong flavour and the soup gains a sweet, spicy flavour. In many recipes the soup is thickened with flour or egg yolks. French onion soup: An early recipe called soupe à l'oignon could be found in the recipe collection Le Viandier by Guillaume Tirel in the 15th century. François Pierre La Varenne, the chef of Marie de' Medici, described the use of bread as an addition to onion soup in his 1651 cookbook Le Cuisinier François.The French or Parisian onion soup (soupe à l'oignon or soupe d'oignons aux Halles) was already offered as food for merchants, customers and tourists in the Quartier des Halles in the 18th century. French onion soup: The classic way to prepare the dish is to slowly soak thinly sliced onions, sometimes also garlic, in butter or vegetable oil, until they gain a golden yellow colour, then to sprinkle them with flour and soak them in white wine. Then water and vegetable broth or more commonly, meat broth is added and the soup is slowly cooked. It is also common to soak roasted bits of white bread (croutons) in the soup, add shredded cheese and gratinate the whole dish (soupe à l'oignon gratinée or soupe au fromage, see also cheese soup).The Strasbourg onion soup is similar to the Parisian one, the difference being the use of croutons and a raw egg yolk. French onion soup: A further variety is the soupe Soubise, where the onions are puréed and the soup is thickened with Béchamel sauce. Instead of roast bread the soup is served with croutons and bits of choux pastry. Similarly to Soubise sauce, the soup was named after its creator, the field marshal and gourmet Charles, Prince of Soubise. German onion soup: Onion soup similar to the French variety is also known in Germany. The Palatinate onion soup is prepared with cream and wine, spiced with caraway, and served with roast bread without cheese. The south German onion soup is thickened with light roux, run through a sieve, thickened with egg yolk and served with bits of roast bread. Marie Buchmeier's 1885 cookbook Praktischem Kochbuch für die bürgerliche, sowie feine Küche contains an extremely simple recipe for the soup, where sliced dark bread is soaked in a terrine, soaked with boiling water and served with sliced onions turned yellow with butter. Onion soup from the Rhine area contains onions, meat broth, bits of carrot and roasted bratwurst. The soup is run through a sieve and spiced with vinegar, the bratwurst is sliced and served along the soup. Hamburgian onion soup is prepared with shallots, meat broth and sherry and served with croutons and cheese. Italian onion soup: As well as the French onion soup, there is also a variety in Italy, where milk is substituted for the broth and the wine. It is served with croutons and shredded Parmesan cheese, which are not roasted. The Cipollata is a dish of its own, resembling the German Eintopf soup. 
It is prepared with onions, sliced bacon and tomatoes, which are thoroughly boiled and added to the soup with a mix of Parmesan and whipped eggs. If the soup becomes too thick, it is watered down. Cipollata is also served with roast bread. Arabian onion soup: Cherbah, the Arabian onion soup, includes bell peppers, tomatoes and garlic, which are sliced together with the onions. The liquid base is meat broth. The soup is spiced with black pepper, salt, fresh mint and lemon juice, and is finally thickened with egg yolk.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chainline** Chainline: The chainline is the angle of a bicycle chain relative to the centerline of the bicycle frame. A bicycle is said to have perfect chainline if the chain is parallel to the centerline of the frame, which means that the rear sprocket is directly behind the front chainring. Chainline can also refer to the distance between a sprocket and the centerline of the frame. Chainline: Bicycles without a straight chainline are slightly less efficient due to frictional losses incurred by running the chain at an angle between the front chainring and rear sprocket. This is the main reason that a single-speed bicycle can be more efficient than a derailleur geared bicycle. Single-speed bicycles should have the straightest possible chainline.
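As a rough illustration of the second sense of chainline above (the offset of chainring and sprocket from the frame centerline), the small Python sketch below estimates the chain's misalignment angle from the two offsets and the chain span; the function name and the numbers are hypothetical, not a published standard.

```python
import math

def chain_misalignment_deg(front_chainline_mm, rear_chainline_mm, chain_span_mm):
    """Angle between the chain and the frame centerline, from simple geometry:
    a perfect chainline (equal front and rear offsets) gives 0 degrees."""
    offset = front_chainline_mm - rear_chainline_mm
    return math.degrees(math.atan2(offset, chain_span_mm))

# Hypothetical single-speed setup: 50 mm front, 42 mm rear, 420 mm chain span
# -> roughly 1.1 degrees of misalignment.
print(chain_misalignment_deg(50.0, 42.0, 420.0))
```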
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zhu–Takaoka string matching algorithm** Zhu–Takaoka string matching algorithm: In computer science, the Zhu–Takaoka string matching algorithm is a variant of the Boyer–Moore string-search algorithm. It uses two consecutive text characters to compute the bad-character shift. It is faster when the alphabet or pattern is small, but the skip table grows quickly, slowing the pre-processing phase.
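To make the pair-based bad-character idea concrete, here is a small Python sketch; the function names and the simplified handling after a match are my own, and the full algorithm also combines this shift with the Boyer–Moore good-suffix rule, which is omitted here.

```python
def zt_bad_char(pattern):
    """Two-character bad-character rule of the Zhu-Takaoka variant (sketch).
    The shift is looked up with the two text characters that sit under the
    last two pattern positions of the current window."""
    m = len(pattern)
    table = {}
    # Rightmost occurrence of each adjacent pair ending before the last position.
    for i in range(1, m - 1):
        table[(pattern[i - 1], pattern[i])] = m - 1 - i
    def shift(a, b):
        if (a, b) in table:
            return table[(a, b)]
        if b == pattern[0]:
            return m - 1  # align pattern[0] under the last window position
        return m          # the pair cannot overlap the pattern at all
    return shift

def zt_search(text, pattern):
    """Simplified search using only the pair-based bad-character shift; the
    full Zhu-Takaoka algorithm also takes the good-suffix shift into account."""
    n, m = len(text), len(pattern)
    if m < 2 or m > n:
        return [i for i in range(n - m + 1) if text[i:i + m] == pattern]
    shift = zt_bad_char(pattern)
    hits, j = [], 0
    while j <= n - m:
        i = m - 1
        while i >= 0 and pattern[i] == text[j + i]:
            i -= 1
        if i < 0:
            hits.append(j)
            j += 1  # conservative shift after a match
        else:
            j += max(1, shift(text[j + m - 2], text[j + m - 1]))
    return hits

print(zt_search("GCATCGCAGAGAGTATACAGTACG", "GCAGAGAG"))  # [5]
```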
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Product innovation** Product innovation: Product innovation is the creation and subsequent introduction of a good or service that is either new, or an improved version of previous goods or services. This is broader than the normally accepted definition of innovation that includes the invention of new products which, in this context, are still considered innovative. Introduction: Product innovation is defined as: the development of new products, changes in design of established products, or use of new materials or components in the manufacture of established products. Numerous examples of product innovation include introducing new products, enhancing quality and improving overall performance. Product innovation, cost-cutting innovation and process innovation are three different classifications of innovation which aim to develop a company's production methods. Thus product innovation can be divided into two categories of innovation: radical innovation, which aims at developing a new product, and incremental innovation, which aims at improving existing products. Advantages and disadvantages: Advantages of product innovation include: Growth, expansion and gaining a competitive advantage: A business that is capable of differentiating its product from other businesses in the same industry to a large extent will be able to reap profits. This can be applied to how smaller businesses can use product innovation to better differentiate their product from others. Product differentiation can be defined as "A marketing process that showcases the differences between products. Differentiation looks to make a product more attractive by contrasting its unique qualities with other competing products. Successful product differentiation creates a competitive advantage for the seller, as customers view these products as unique or superior." Therefore, small businesses that are able to utilize product innovation effectively will be able to expand and grow into larger businesses, while gaining a competitive advantage over their remaining competitors. Advantages and disadvantages: Brand switching: Businesses that are able to successfully utilize product innovation will thus entice customers from rival brands to buy their product instead as it becomes more attractive to the customer. One example of successful product innovation that has led to brand switching is the introduction of the iPhone to the mobile phone industry (which has caused mobile phone users to switch from Nokia, Motorola, Sony Ericsson, etc. to the Apple iPhone). Disadvantages of product innovation include: Counter effect of product innovation: Businesses/competitors do not always create products/resources from scratch, but rather substitute different resources to create product innovation, and this could have the opposite effect of what the business/competitor is trying to do. Thus, some of these businesses/competitors could be driven out of the industry and will not last long enough to enhance their product during their time in the industry. Advantages and disadvantages: High costs and high risk of failure: When a business attempts to innovate its product, it injects much capital and time into it, which requires extensive experimentation. Constant experimentation could result in failure for the business and will also cause the business to incur significantly higher costs. Furthermore, it could take years for a business to successfully innovate a product, thus resulting in an uncertain return.
Advantages and disadvantages: Disrupting the outside world: For product innovation to occur, the business will have to change the way it runs, and this could lead to the breaking down of relationships between the business and its customers, suppliers and business partners. In addition, changing too much of a business's product could lead to the business gaining a less reputable image due to a loss of credibility and consistency. Theories of product innovation: Popular theories of product innovation - what causes it and how it is achieved - include Outcome-Driven Innovation and "Jobs to be Done" (JTBD). JTBD Theory is used extensively as part of a methodical approach to product innovation postulating that users "hire" a product to do a "job" and that innovation can be achieved by providing a better way of getting a particular job done. Theories of product innovation: Used as a framework, JTBD is very similar to outcome-driven innovation, focusing on the functional, emotional, and social 'jobs' that users want to perform. However, this is one of two main interpretations of the theory known as "Jobs-as-action." The second interpretation is known as "Jobs-as-progress" and focuses on what the user wants to be, stating that the jobs a product user wants to do are secondary to (and a result of) the person they want to be. New product development: New product development is the initial step before the product life cycle can be examined, and plays a vital role in the manufacturing process. To prevent loss of profits or liquidation for businesses in the long term, new products have to be created to replace the old products. Peter Drucker suggests in his book 'Innovation and Entrepreneurship' that product innovation and entrepreneurship are interconnected and must be used in unison for a business to be successful, and this relates to the process of new product development. New product development: Stages: These are the main stages that a business has to undergo when introducing a new product line into the market: Market research: This can be done in the form of primary and secondary market research, where the business will gather as much information as possible about the present tastes and preferences of its potential consumers, and the gaps to be filled in the business's particular industry. Secondary market research involves gathering data that has already been collected by another party, and is primarily based on information found in previous studies. One advantage of secondary market research over primary market research is that it is low-cost, thus enabling the business to invest its time into other more important matters and new potential business ventures. Primary market research involves the business gathering data individually, and this can be done via various sampling methods. Other forms of primary market research include focus groups, interviews, questionnaires, etc. One advantage of primary market research over secondary market research is that it delivers much more specific results, and is only available to the business itself, unlike secondary research, which is made globally available as the data has already been collected. New product development: Product development and testing: This stage involves creating a test product called a prototype. The prototype assures the business that its product is functioning properly, and all the necessary arrangements are made to enhance the product as much as possible.
After the prototype has been devised, the business can use test marketing, where it introduces the product to a small group of individuals to gain insight into the effectiveness of the product from the view of its potential customers. New product development: Feasibility study: The business will now look at the legal and financial restrictions of launching the product into the market. This is where the business will create sales forecasts, establish the price of the product, and estimate the overall costs of production and profitability. The business also has to consider legal aspects in terms of safety and Intellectual Property Rights (IPR). After all these stages have been successfully run through, the business can officially launch the product. Classification of innovation: Product innovation can be classified by degree of technical novelty and by type of novelty in terms of market. Technical product innovations include the use of new materials, the use of new intermediate products, new functional parts, the use of radically new technology and fundamental new functions. Classification by levels of novelty includes new only to the firm, new to the industry in the country or to the operating market of the firm, or new to the world. Existing product development is a process of innovation where products/services are redesigned, refurbished, improved, and manufactured, potentially at a lower cost. This will provide benefits to both the company and the consumer in different ways; for example, increased revenue (benefits the company), lower costs (benefits the company and consumer), or even benefits to the environment through the implementation of 'green' production methods. Measuring innovation: The Oslo Manual recommends certain guidelines for measuring innovation through the measurement of aspects in the innovation process and innovation expenditure. Measurement processes consist of collecting and systemizing qualitative and quantitative data regarding different factors of the innovation process, investment and outcome. Quantitative analysis focuses on investment, impact and life cycle. Examples of key quantitative indicators include product investment, total innovation investment, product sales share that comes from innovation, manpower use, material consumption, energy consumption, time taken to reach the commercialisation phase or the expected cost recovery or payback period. Qualitative data includes benefits of the innovation, sources of information or ideas for the innovation, and diffusion or reach of innovation. Even though similar information can be obtained through quantitative methods, the guidelines argue that due to the qualitative nature of the answers, firms are inclined to provide a richer and different set of data, avoiding duplication. Vs. other forms of innovation: While the difference between different forms of innovation seems intuitively clear, it is not always obvious what kind of innovation is occurring in practice. While there are different dimensions to consider when deciding whether something is a product innovation rather than, e.g., a technology or business model innovation, it is not always possible to clearly differentiate one from the other.
For example, compared to a business model innovation, a product innovation often has: lower strategic importance; lower risk, impact, and uncertainty; lower complexity; more clarity about who is in charge; fewer actors and stakeholders; fewer different disciplines involved; and a smaller set of skills and capabilities necessary. Journals: Journal of Product Innovation Management; Journal of Innovation Management.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ewert Bengtsson** Ewert Bengtsson: Ewert Bengtsson of Uppsala University, Sweden is a biomedical engineer who was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2015 for his contributions to quantitative microscopy and biomedical image analysis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reutericyclin** Reutericyclin: Reutericyclin is a bacteriocin produced by the bacterium Lactobacillus reuteri that has potential use as a food preservative. Reutericyclin is a hydrophobic, negatively charged molecule with the molecular formula C20H31NO4. Reutericyclin disrupts the cell membrane of sensitive bacteria by acting as a proton ionophore. Reutericyclin has a broad spectrum of activity against Gram-positive bacteria, but has no effect on Gram-negative bacteria because the lipopolysaccharide (LPS) in the outer membrane of Gram-negative bacteria prevents access by hydrophobic compounds.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sexual fantasy** Sexual fantasy: A sexual fantasy or erotic fantasy is an autoerotic mental image or pattern of thought that stirs a person's sexuality and can create or enhance sexual arousal. A sexual fantasy can be created by the person's imagination or memory, and may be triggered autonomously or by external stimulation such as erotic literature or pornography, a physical object, or sexual attraction to another person. Anything that may give rise to sexual arousal may also produce a sexual fantasy, and sexual arousal may in turn give rise to fantasies. Sexual fantasy: Sexual fantasies are nearly universal, being reported in many societies across the globe. However, because of the nature of some fantasies, the actual putting of such fantasies into action is far less common, due to cultural, social, moral, and religious constraints. In some cases, even a discussion by a person of sexual fantasies is subject to social taboos and inhibitions. Some people find it convenient to act out fantasies through sexual roleplay. A person may find validation of a sexual fantasy by viewing the depiction or discussion of the fantasy in film, usually of a pornographic nature. A fantasy may be a positive or negative experience, or even both. It may be in response to a past experience and can influence future sexual behavior. A person may not wish to enact a sexual fantasy in real life, and since the process is entirely imaginary, they are not limited to acceptable or practical fantasies, which can provide information on the psychological processes behind sexual behavior. Sexual fantasy: Sexual fantasy can also pertain to a genre of literature, film or work of art. Such works may be appreciated for their aesthetics, though many people may feel uncomfortable with such works. For example, women in prison films may be described as sexual fantasies, as are pornographic films. In the case of films, the term may describe a part of the film, such as a fantasy scene or sequence. Besides pornographic films, a number of mainstream films have included sexual fantasy scenes, such as Business Is Business (1971), Amarcord (1973), American Beauty (1999) and others. In many cases, the use of fantasy scenes enables the inclusion of material into a work indicating the sexualised mental state of a character. Methodology: It is difficult to objectively identify and measure the nature of sexual fantasies, so that many studies deal with conscious fantasies when a person is awake, using one of three techniques: anonymous respondents are provided with a checklist of fantasies and asked to indicate which ones they have experienced, how often, and in what context. This method relies on retrospective recall, which may limit its accuracy. A checklist may not be comprehensive, and as a result may be biased towards some fantasies. Methodology: anonymous respondents are asked to write, in narrative form, their sexual fantasies. This method also relies on retrospective recall. Some studies limit the number of fantasies entered (such as only the most frequent ones), and respondents may not write down all of their fantasies anyway: they may forget infrequent fantasies, not want to write too many down, or be more subject to social desirability bias than with a checklist. Methodology: respondents record the fantasies they experience over a given period of time using checklists or diaries.
This method requires a long period of time to be representative, and may be impractical. To measure the reliability of a person's reporting of fantasies, researchers may compare a person's reported sexual arousal against actual measures of arousal, using techniques such as vaginal photoplethysmography, penile strain gauges, or other tools, such as genital pulse amplitude, genital blood volume, and heart rate. A 1977 study found that males judged arousal based on blood volume far better than females, and that males and females were equal when judging arousal based on pulse amplitude measures. Additionally, females were better at judging low arousal. As with studies of sex in general, samples used in studies may be too small, not be fully random, or not fully representative of a population. This makes similarities between studies especially important. Women may be prone to underreporting the frequency of fantasy because they do not realize that they are becoming aroused, or they will not say that they are; one common problem is that they will imagine romantic imagery and become aroused, but not report the fantasy because it is not sexually explicit. Many studies are modern and are carried out in western society, which, through factors like gender roles and taboo, is not widely representative, raising the need for more studies in different societies and historical eras. With regard to age, there is very little knowledge of sexual fantasies in children aged 5 to 12, and there is a need for longitudinal studies across a life span. Sex is often a taboo topic, so conducting a truly honest and representative sample can be difficult in some areas. For example, a 1997 study on South Asian gay men found that almost 75% were afraid of being "found out", which complicates studies. Purposes: The scenarios for sexual fantasies vary greatly between individuals and are influenced by personal desires and experiences, and can range from the mundane to the bizarre. Fantasies are frequently used to escape real-life sexual restraints by imagining dangerous or illegal scenarios, such as rape, castration, or kidnapping. They allow people to imagine themselves in roles they do not normally have, such as power, innocence and guilt. Fantasies have enormous influence over sexual behaviour and can be the sole cause of an orgasm. While there are several common themes in fantasies, any object or act can be eroticized. Purposes: Sexual fantasies are increasingly viewed as a necessary component to a healthy relationship. Accordingly, theorists have argued that fantasies may be used to encourage and promote sexual pleasure between partners. Researchers have additionally found a positive correlation between instances of sexual fantasising and increased orgasm, arousal, and general contentment. The relative benefits of sexual fantasies are summarised in a statement by Stoller: "sexual fantasies are a private pornography in which we rehearse over and over again needs that are nearly impossible to fulfil in actual sex". Sexual fantasising therefore allows an individual to fulfil desires that cannot be realistically achieved. In this sense, researchers assert that fantasising about extra-marital, or multiple-partner sex is positively correlated with long-term partnerships. As such, sexual fantasies are viewed as a means to combat sexual dissatisfaction. Sexual fantasising may also be used to settle relational hardships, as opposed to sexual dissatisfaction.
For instance, women from disturbed marriages were found to fantasise significantly more often than happily married women. Creating hypothetical scenarios may be used as a coping mechanism, particularly by women, in handling stress and discomfort. As such, fantasies allow individuals to enter a new realm (e.g. experience a position of power, innocence, or guilt) that contrasts the source of anguish, and enhances feelings of self-worth. Purposes: The purpose and function of sexual fantasies are explained rather differently from an evolutionary perspective. Bowlby's (1969/1982) attachment theory asserts that the absence of adequate attachment figures can devastate self-esteem. It is suggested that more anxiously attached individuals use sex to attain emotional security. Accordingly, they might engage in sex through a longing for sexual intimacy, and increase the frequency of sexual behaviour under conditions that challenge the status of their relationship. Contrastingly, the avoidant attachment type is apprehensive about the intimacy posed by sexual relations, and will take active measures to avoid feelings of closeness. Patterns of sexual behaviour include emotion-free sex with casual partners, engaging in sex to promote oneself, and feelings of detachment during intercourse. Sexual fantasies are likely to follow attachment-related themes. It is noted that anxiously attached individuals report significantly more instances of sexual fantasising, and portray the self as feeble, dependable and powerless. Avoidant attachment types report fantasies in which relationships are regarded as cold, unfeeling and impersonal. As such, sexual fantasies serve the primary function of fulfilling interpersonal goals through the mode of mental representation. Purposes: Evolutionary theory provides another interesting explanation as to the purpose and function of gender differences in sexual fantasies. Purposes: Research literature states that women are more likely to prioritise their own physical and emotional sensations, whereas men conjure images of sexual partners. Women are also more likely to fantasise about a single individual with whom they have a shared history, or with whom they wish to pursue a long-term relationship. Throughout the course of time, it has proved advantageous for the male to copulate with young and fertile females. They evolved an ability to decipher "fresh features" of reproductive partners; clear skin, thick hair, fuller lips, and so forth. By comparison, females are driven to reproduce on the basis of parental investment, and a quality gene pool possessed by the male. From a female perspective, the risks of copulating with multiple male partners far outweigh any potential benefits. It is therefore unsurprising that males visualise specific physical features; its origins and purpose can be found in evolution. It also follows that where males project outwardly, viewing women as a means to obtain sexual pleasure, women have become conditioned to remain passive in this role. They do so under close scrutiny of male sexual attention, fantasising about a specific and special partner. Purposes: A person may have no desire to carry out a fantasy; people often use fantasies to help plan out future sexual encounters. Fantasies occur in all individuals and at any time of the day, although it has been suggested that they are more common among frequent daydreamers.
Sexual fantasy is frequent during masturbation, although this may be truer for men than for women. During sexual contact, some people can use their fantasies to "turn off" undesirable aspects of an act. Conversely, a person may use fantasy to focus and maintain arousal, such as a man receiving fellatio ignoring a distraction. Men tend to be aware of only parts of themselves during sex: they are more likely to focus on the physical stimulation of one area, and as such, do not see themselves as a "whole". Many couples share their fantasies to feel closer and gain more intimacy and trust, or simply to become more aroused or effect a more powerful physical response. Some couples share fantasies as a form of outercourse; this has been offered as an explanation for the rise of BDSM during the 1980s: in order to avoid contracting HIV, people turned to BDSM as a safe outlet for sexual fantasy. Couples may also act out their fantasies through sexual roleplay. Purposes: Fantasies may also be used as a part of sex therapy. They can enhance insufficiently exciting sexual acts to promote higher levels of sexual arousal and release. A 1986 study that looked at married women indicated that sexual fantasies helped them achieve arousal and orgasm. As a part of therapy, anorgasmic women are commonly encouraged to use fantasy and masturbation. Common fantasies: The incidence of sexual fantasies is nearly universal, but varies by gender, age, sexual orientation, and society. However, because of a reliance on retrospective recall, as well as response bias and taboo, there is an inherent difficulty in measuring the frequency of types of fantasies. In general, the most common fantasies for men and women are: reliving an exciting sexual experience, imagining sex with a current partner, and imagining sex with a different partner. There is no consistent difference in the popularity of these three categories of fantasies. The next most common fantasies involve oral sex, sex in a romantic location, sexual power or irresistibility, and rape. Common fantasies: According to a 2004 United States survey, the incidence of certain fantasies is higher than the actual performance. Gender differences: Origins of sexual fantasies The sexes have been found to contrast with respect to where their fantasies originate from. Men tend to fantasize about past sexual experiences, whereas women are more likely to conjure an imaginary lover or sexual encounter that they have not experienced previously. Male fantasies tend to focus more on visual imagery and explicit anatomic detail, with men being more interested in visual sexual stimulation and fantasies about casual sex encounters, regardless of sexual orientation. On the other hand, women's fantasies tend to be more focused upon mental sexual stimulation and contain more emotion and connection. Thus, women are more likely to report romantic sexual fantasies that are high in intimacy and affection, for instance associating their male partner with heroism and viewing them as chivalric rescuers. Evolutionary theory offers an explanation for this finding, such that women may be likely to show commitment to their male partner in return for his investment of resources to help raise her offspring, thus increasing the offspring's chance of survival. Gender differences: Types of sexual fantasies Much research has been conducted which has highlighted several gender differences in sexual fantasies.
Some of the patterns which have frequently emerged include men's greater tendency to report sexual fantasies falling in the following categories: exploratory, intimate, impersonal, and sadomasochism. Exploratory fantasies include those of homosexual encounters and group sex, whilst fantasies of watching others engage in sexual intercourse and fetishism are classed as impersonal sexual fantasies. Women are also likely to report fantasies involving the same-sex partner, or those with a famous person, although both sexes have been found to prefer intimate fantasies over the other three types outlined, including fantasies of oral sex and sex outdoors. Gender differences: Frequent themes Another way the sexes differ is that men are much more likely to fantasize about having multiple sexual partners (i.e., having threesomes or orgies) compared to women and seek greater partner variation in their sexual fantasies. Evolutionary theory suggests that this may be due to men's capacity to produce many offspring at any one time by impregnating multiple females, and thus predicts that males will be much more open to the concept of multiple partnerships in order to increase reproductive success and continue their genetic line.The sexes also differ in terms of how much they fantasize about dominance and submission. Men fantasize about dominance much more frequently than submission, whereas women fantasize about submission much more frequently than dominance. Despite these differences, it is important to note that most individuals do not conform to these gendered sexual stereotypes, and that male sexuality is not innately aggressive, nor is female sexuality inherently passive, and that these stereotypes may decline with age.Sexual fantasies may instead vary as a result of individual differences, such as personality or learning experiences, and not gender per se. Indeed, it has been suggested that gender differences in sexual fantasies have actually narrowed over time, and may continue to do so, for example with regard to variety of sexual fantasy and the amount of fantasizing reported by each of the sexes. Gender differences: Age The age of first experiencing a sexual fantasy has also been found to differ between the sexes. Males are likely to report this at a younger age, typically between the years of 11 and 13, and describe these as being more explicit in content. Themes that were common to both genders regarding first sexual fantasies included sex with celebrities (such as movie stars), and also teachers. It has been noted that sexual fantasy preferences of the two genders also change as a function of age. For instance, younger men have been found to endorse more fantasies with multiple partners, a trend which declines with age, whilst homosexual fantasies increase slightly. Meanwhile, for women, fantasies with strangers and same-sex partners remain relatively stable across the lifespan. Gender differences: Paraphilic sexual fantasies Sex differences have also been found with regard to paraphilic fantasies (i.e. those which are considered to be atypical). Examples of paraphilic sexual fantasies include incest, voyeurism, transvestic fetishism, sex with animals (see zoophilia), and pedophilia. One study reported that over 60% of men admitted to a sexual fantasy involving intercourse with an underage partner, and 33% of males reported rape fantasies. 
Along with other sexual fantasies, it is thought that the age of occurrence for paraphilic sexual fantasies is usually before 18 years, although this has been found to vary according to the specific fantasy at hand. Gender differences: Unusual sexual fantasies are more common in men, with fantasies of urinating on their sexual partner and being urinated on being significantly higher among males. The Diagnostic and Statistical Manual of Mental Disorders Fourth Edition (DSM-IV) states that paraphilias are rarely diagnosed in women, with the exception of sexual masochism. Furthermore, sexual arousal has been found to be greater in men than in women when asked to entertain the thought of engaging in paraphilic sexual activity. It may, however, be the case that paraphilias are reported less often in women because they are under-researched in women. Paraphilic sexual fantasies in females include sexual sadism, exhibitionism, and pedophilia. Gender differences: Execution of sexual fantasies Sexual fantasies may be more likely to be executed in contemporary society due to more liberalized attitudes towards the previously taboo topic of sex, and increased awareness of the variety of sexual experiences that now exist. Women are more likely to act upon their sexual fantasies than men since it has been suggested that they fantasize about sexual activities within their range of experience, which therefore makes them more possible to act out. The link between sexual fantasies involving dominance (e.g. rape fantasies) and likelihood of displaying aggressive behavior in real life has been investigated, with connections being found in relation to sexual crimes committed by men and fantasies of sexual coercion. This may be especially more likely if the individual displays high levels of psychopathic traits. Gender differences: Theoretical frameworks Since numerous variables influence sexual fantasy, the differences between gender can be examined through multiple theoretical frameworks. Social constructionism predicts that sexual socialisation is a strong predictor of sexual fantasy and that gender differences are the result of social influences. From this perspective, it is believed that female sexuality is more malleable since it is influenced to a greater extent by cultural views and expectations regarding how women should think and behave. In contrast, evolutionary theory (also known as evolutionary psychology or sociobiology) predicts that sexual fantasy is predisposed to biological factors.[29] For example, some studies have found that women prefer fantasizing about familiar lovers, whereas male sexual fantasies involve anonymous partners.A social constructionist explanation may say that this is because women are raised to be chaste and selective with men, whereas evolutionary theory may state that ancestral women preferred the reproductive security of having one partner, such that being faithful to him will result in a greater likelihood of him investing resources in her and her offspring, an idea which is still ingrained in modern women today. Evolutionary psychology can also help to shed light on the finding that females have a higher proportion of sexual fantasies involving a male celebrity. The theory suggests that this mating strategy may have been advantageous for our female ancestors, such that affiliation with a high status male increases offspring survival rate via protection and provision. 
Sexual orientation: In 1979, Masters and Johnson carried out one of the first studies on sexual fantasy in homosexual men and women, though their data-collection method is unclear. Their sample consisted of 30 gay men and lesbians, and they found that the five most common fantasies for homosexual men were images of sexual anatomy (primarily the penis and buttocks), forced sexual encounters, an idyllic setting for sex, group sex, and sex with women. A 1985 study found that homosexual men preferred unspecified sexual activity with other men, oral sex, and sex with another man not previously involved. In both studies, homosexual and heterosexual men shared similar fantasies, but with genders switched. A 2006 non-representative study looked at homosexual men in India. It found that when compared to heterosexual male fantasies, homosexual males were more focused on exploratory, intimate, and impersonal fantasies. There were no differences in sadomasochistic fantasies. In general, there was little difference in the top fantasies of homosexual versus heterosexual males. At the time of the study, homosexuality was illegal.A 2005 study compared heterosexual and homosexual women in the Los Angeles metropolitan area and found some differences in the content of their fantasies. In gender-specific findings, homosexual women had more fantasies about specific parts of a woman (face, breasts, clitoris, vagina, buttocks, arms or hair), while heterosexual women had more fantasies about specific parts of a man's body (face, penis, buttocks, arms or hair). Homosexual women also had more fantasies of "delighting many women"; there was no significant difference when subjects were asked if they fantasized about delighting many men. There was no significant difference in responses to questions that were not gender-specific. Force: Rape or ravishment is a common sexual fantasy among both men and women, either generically or as an ingredient in a particular sexual scenario. The fantasy may involve the fantasist as either the one being forced or coerced into sexual activity or as the perpetrator. Some studies have found that women tend to fantasize about being forced into sex more commonly than men. A 1974 study by Hariton and Singer found that being "overpowered or forced to surrender" was the second most frequent fantasy in their survey; a 1984 study by Knafo and Jaffe ranked being overpowered as their study's most common fantasy during intercourse; and a 1988 study by Pelletier and Herold found that over half of their female respondents had fantasies of forced sex. Other studies have found the theme, but with lower frequency and popularity. However, these female fantasies in no way imply that the subject desires to be raped in reality—the fantasies often contain romantic images where the woman imagines herself being seduced, and the male that she imagines is desirable. Most importantly, the woman remains in full control of her fantasy. The fantasies do not usually involve the woman getting hurt. Conversely, some women who have been sexually victimized in the past report unwanted sexual fantasies, similar to flashbacks of their victimization. They are realistic, and the woman may recall the physical and psychological pain involved.The most frequently cited hypothesis for why women fantasize of being forced into some sexual activity is that the fantasy avoids societally induced guilt—the woman does not have to admit responsibility for her sexual desires and behavior. 
A 1978 study by Moreault and Follingstad was consistent with this hypothesis, and found that women with high levels of sex guilt were more likely to report fantasy themed around being overpowered, dominated, and helpless. In contrast, Pelletier and Herold used a different measure of guilt and found no correlation. Other research suggests that women who report forced sex fantasies have a more positive attitude towards sexuality, contradicting the guilt hypothesis. A 1998 study by Strassberg and Lockerd found that women who fantasized about force were generally less guilty and more erotophilic, and as a result had more frequent and more varied fantasies. Additionally, it said that force fantasies are clearly not the most common or the most frequent. Social views: Social views on sexual fantasy (and sex in general) differ throughout the world. The privacy of a person's fantasy is influenced greatly by social conditions. Because of the taboo status of sexual fantasies in many places around the world, open discussion—or even acknowledgment—is forbidden, forcing fantasies to stay private. In more lax conditions, a person may share their fantasies with close friends, significant others, or a group of people with whom the person is comfortable. Social views: The moral acceptance and formal study of sexual fantasy in Western culture is relatively new. Prior to their acceptance, sexual fantasies were seen as evil or sinful, and they were commonly seen as horrid thoughts planted into the minds of people by "agents of the devil". Even when psychologists were willing to accept and study fantasies, they showed little understanding and went so far as to diagnose sexual fantasies in females as a sign of hysteria. Prior to the early twentieth century, many experts viewed sexual fantasy (particularly in females) as abnormal. Sigmund Freud suggested that those who experienced sexual fantasies were sexually deprived or frustrated or that they lacked adequate sexual stimulation and satisfaction. Over several decades, sexual fantasies became more acceptable as notable works and compilations, such as "Morality, Sexual Facts and Fantasies", by Dr Patricia Petersen, Alfred Kinsey's Kinsey Reports, Erotic Fantasies: A Study of the Sexual Imagination by Phyllis and Eberhard Kronhausen, and Nancy Friday's My Secret Garden, were published. Today, they are regarded as natural and positive elements of one's sexuality, and are often used to enhance sexual practices, both in normal settings and in therapy. Many Christians believe that the Bible prohibits sexual fantasies about people other than one's spouse in Matthew 5:28. Others believe that St Paul includes fantasy when he condemns works of the flesh such as "immorality" or "uncleanness". Despite the Western World's relatively lax attitudes towards sexual fantasy, many people elsewhere still feel shame and guilt about their fantasies. This may contribute to personal sexual dysfunction, and regularly leads to a decline in the quality of a couple's sex life. Guilt and jealousy: While most people do not feel guilt or disgust about their sexual thoughts or fantasies, a substantial number do. In general, men and women are equally represented in samples of those who felt guilt about their fantasies. The most notable exception was found in a 1991 study that showed that women felt more guilt and disgust about their first sexual fantasies. 
In women, greater guilt about sex was associated with less frequent and less varied sexual fantasies, and in men, it was associated with less sexual arousal during fantasies. Women also reported more intense guilt than men; both sexes reported greater guilt if their arousal and orgasm depended on a fantasy. Studies have also been carried out to examine the direct connection between guilt and sexual fantasy, as opposed to sex and guilt. One study found that in a sample of 160 conservative Christians, 16% of men and women reported guilt after sexual fantasies, 5% were unhappy with themselves, and 45% felt that their fantasies were "morally flawed or unacceptable". Studies that examined guilt about sexual fantasy by age have unclear results: Knoth et al. (1998) and Ellis and Symons (1990) found that younger people tended to feel less guilt about their fantasies, whereas Mosher and White (1980) found the opposite. A 2006 study examined guilt and jealousy in American heterosexual married couples. It associated guilt with an individual's fantasy ("How guilty do you feel when you fantasize about...") and jealousy with the partner's fantasy ("How jealous do you feel when your partner fantasizes about..."). Higher levels of guilt were found among women, couples in the 21–29 age range, shorter relationships and marriages, Republicans, and Roman Catholics; lower levels in men, couples in the 41–76 range, longer relationships, Democrats, and Jews. Higher levels of jealousy were found in women, couples in the 21–29 range, Roman Catholics and non-Jewish religious affiliations; lower levels were found in men, couples in the 41–76 range, and Jews and the non-religious. Sexual crimes: Deviant sexual fantasies Deviant sexual fantasies are sexual fantasies which involve illegal, nonconsensual, and sadistic themes. While people with paraphilia have deviant sexual fantasies, it is important to note that deviant sexual fantasies are not necessarily atypical and/or paraphilic. DSM-5 defines paraphilia as intense and persistent atypical preferences for sexual activities (such as spanking, whipping, or binding) or for erotic targets (such as children, animals, or rubber). While DSM-5 recognizes that paraphilias don't have to be pathological, psychiatrists still find it difficult to differentiate between paraphilic interests and paraphilic disorders, because the concept of what counts as a normal sexual fantasy is subjective. It is based on factors like history, society, culture and politics. For example, masturbation, oral, anal and homosexual sex were once illegal in some American states and even considered to be paraphilic disorders in earlier DSM revisions. Sexual crimes: When a study used statistical analysis and the Wilson sex fantasy questionnaire to investigate atypical fantasies, zoophilic or pedophilic fantasies were found to be rare, and only 7 themes including urination, crossdressing, rape etc. were considered atypical. A lot of studies have also found that "atypical" sexual fantasies are quite common, as indulging in greater varieties of sexual fantasies increases sex-life satisfaction. For example, a 2011 study found that over half of the older men in Berlin had "atypical" sexual fantasies, with 21.8% of them having sadistic fantasies–a prerequisite for sexual murders. Another study found that dominance and submission themes were extremely popular in pornographic searches. Sexual crimes: Sexual crimes Most research into sexual crimes involves men.
Sexual crimes such as sexual homicides are quite rare because most deviant sexual fantasizers never engage in deviant sexual behaviors and are not at risk of engaging in sexual crimes. Some have suggested that the frequency of sexual crimes is underestimated due to the narrowness of the legal definition of sexual homicides. The investigations of sexual crimes face several limitations such as the "definitions of sexual crimes, how and where the crimes are committed, incomplete or inaccurate information due to offender's motive to exaggerate, legal restrictions" and researchers' approaches (the essentialist-descriptive approach or phenomenological descriptive approach). Sexual crimes: Risk factors Deviant and sadistic sexual fantasies are believed to be the underlying risk factors for sexual crimes. 70–85% of sexual offenders extensively engage in deviant sexual fantasies, and certain themes can be attributed to types of sex crimes. For example, serial sexual murderers have more rape fantasies than non-serial sexual murderers, and 82% of offenders that use a weapon engage in violent sexual fantasies. Offenders that report deviant sexual fantasies have also been found to be more dangerous than offenders that do not. Other risk factors that contribute to the likelihood of sex crimes include biological, physiological and psychological factors like mental disorders (especially paranoia and psychosis); violent history, arrests, poor academic performance, substance abuse, financial gain, unemployment, and watching pornography. However, it is usually the combination of childhood sexual abuse and deviant fantasies that facilitates the jump from sexual fantasies to sexual crimes and the nature of the crimes. For example, most rapists report both early traumatic experiences and sexually deviant fantasies, and sex murderers of children reported significantly more pre-crime childhood sexual abuse and deviant sexual fantasies than sexual murderers of women. Sexual crimes: Sadistic sexual fantasies Sadistic themes are consistently present in the sexual fantasies of offenders across various types of sexual crimes and varying risk factors. They typically involve finding victims, causing harm/pain during sexual intercourse and feelings of grandiosity/omnipotence during arousal. They occur in high prevalence alongside other paraphilic fantasies in psychopaths and individuals with dark triad traits. High narcissism correlates strongly with impersonal sexual fantasies, and studies suggested that the deviant and sadistic sexual fantasies serve as a coping mechanism for narcissistic vulnerability. Higher levels of psychopathy are associated with impersonal, unrestricted, deviant, paraphilic and wide-ranging sexual fantasies. However, it has been suggested that this is due to an increased sex drive, which correlates with paraphilic interests. Also, psychopathy increases the effect that porn has on the development of deviant fantasies, such as its contribution to the likelihood of engaging in rape fantasies. The effects of psychopathy go further to increase the likelihood of individuals carrying out their unrestricted deviant fantasies in real life, such as engaging in BDSM/sadomasochism or even rape. However, BDSM fantasies have become quite common among the general population, possibly due to their normalization by the popular Fifty Shades trilogy. The capitalization of the Fifty Shades trilogy changed the perception of BDSM from being extreme, marginalized and dangerous to being fun, fashionable, and exciting.
Mainstreaming Fifty Shades has increased visibility and acceptability of BDSM and has embedded it in everyday life. Sexual crimes: Sadistic sexual fantasies and crime Sadistic sexual fantasy is one of the key factors for understanding serial killers. Their sexual crimes are "tryouts" that maintain and develop their fantasies; i.e. they commit crimes according to their fantasies, then incorporate the crimes into their fantasies to increase arousal and subsequently develop their sadistic content. Many sexual homicides are well planned due to extensive practice in the form of sexual fantasies. The murders involve the infliction of a great deal of pain and terror, and this serves to satisfy the sadistic fantasy, albeit only temporarily. They start trying to replicate their fantasies more accurately with practice and will continue until they are caught, as a fantasy can never be replicated with 100% accuracy. Sexual crimes: Childhood abuse plays a significant role in determining if sadistic fantasies will be tried out in real life. Most sexual offenders that suffered childhood sexual abuse reveal an early onset of rape fantasies, and sexual concerns like sexual conflict, incompetence, inhibitions, ignorance and social dysfunction. These concerns cause stress, and the offender relies on their deviant fantasies as a coping mechanism for their stress. The unsuccessful resolution of the aforementioned issues causes an obsession with their fantasy world, where they feel in control. They become heavily invested in their deviant fantasies, and when their fantasies start to lose their effectiveness due to desensitization or repression, they escalate and start actualizing their fantasies to relieve internal stress. They plan their crimes to feel arousal or commit violent compulsive murders. Violent compulsive crimes are impulsive and occur because the resistance and restrictions that prevent violent and sadistic fantasies from being acted out can lead to anxiety or psychosomatic manifestations. These manifestations then cause uncontrollable desires to act out one's fantasy in order to find relief. Researchers found that the sadistic contents in fantasies began appearing about 1–7 years after the start of masturbation. Due to social awkwardness, most offenders lacked the opportunity to practice their sexual skills with a desired partner or gender, and this contributes significantly to their reliance on their fantasies. Eventually, their fantasies and "tryouts" become their only source of arousal. Some studies suggest that deviant sexual scripts might be learnt through social learning theory due to an early exposure via sexual molestation and reinforcements by orgasms and masturbation. However, not all sexually molested children grow up to be offenders unable to stop themselves from acting out their fantasies. MacCulloch and colleagues have suggested that the early traumatic experiences cause the early development of sadistic fantasies through sensory preconditioning, and this might be the reason offenders find it too difficult to restrain themselves from trying out their sadistic fantasies in real life. While some might argue that cognitive distortions are the cause of sexual crimes such as pedophilia, evidence suggests that cognitive distortions are used to justify actions after being caught and do not motivate them.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Service address** Service address: A service address is an address which can be used as an alternative to a residential address for the purpose of receiving post in the United Kingdom. Service address providers often scan and digitise mail received for the recipient to view online. Service addresses have grown in popularity since 2009 as a method of keeping a residential address off public records and providing better protection for those who operate a business from home. Business: The Companies Act 2006 introduced a relaxation on the type of address a company can provide to Companies House which is then made available to the public. The Act confirmed that from October 2009 a company director is allowed to provide a service address which will be kept on public record, along with their usual residential address which is kept privately. Business: The residential address provided to Companies House can now be protected and only provided to certain approved bodies, including HM Revenue & Customs, the Police and Credit Reference Agencies. This allows company directors of sensitive companies to not have their address publicised by Companies House. Business: In order for a company to comply with the regulation, the service address chosen must be in the same judicial area where the company is registered. For example, if a company is registered in England & Wales the service address must also be within England & Wales. The service address can act as the registered office for the company. Although the relaxation in law allows a service address to be used, a PO Box or DX number can still not be used by a director. Business: The legislation also provides for the Registrar of Companies to ban the use of a service address and place the usual residential address on the public record, if the service address is found to be ineffective. These types of addresses can provide extra protection for owners of businesses with sensitive interests, where company directors and their properties may be at risk from protesters and activists. Personal use: While initially used as a way for company directors to protect their residential address, service addresses are also used by many for personal use. Personal use: An expat can use a service address as a way of continuing to receive post in their home country. It is particularly useful for expats who have sold their property and no longer have a permanent residential address to ensure that post is still viewed by the recipient which arrives after leaving the home country. It is a quicker method of viewing post than other redirection services
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**InterActual Player** InterActual Player: InterActual Player, known originally as PC Friendly, was a media player for Microsoft Windows and Mac OS X, included on some DVDs with movie files. In addition to providing DVD playback control it makes available extra material on some DVDs, including commentaries, pop-up notes, synchronized screenplays and games. It requires existing DVD player software, which it embeds in the interface for playing the actual DVDs. Details: InterActual Player software automatically displays an installation dialog when a user inserts a DVD containing it into the DVD-ROM Drive. If the user chooses to install InterActual Player it becomes the default DVD player and creates a shortcut on the desktop and a link to the InterActual website. It also asks the user to supply information voluntarily and allow usage data to be sent over the Internet. InterActual Player can also be accessed by the internet. Details: As of January 2017, the service has been permanently shut down.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Milk (programming language)** Milk (programming language): Milk is a programming language "that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DARPA TIPSTER Program** DARPA TIPSTER Program: The DARPA TIPSTER Text program was started in 1991 by the Defense Advanced Research Projects Agency (DARPA). It is a 9-year multi-million dollar initiative, which sought to improve HLT for the handling of multilingual corpora that are utilized within the intelligence process. It involved a cluster of joint projects by the government, academia, and private sector.The program supported research to improve informational retrieval and extraction software and worked to deploy these improved technologies to government users. This technology, which was of particular interest to defense and intelligence analysts who must review increasingly large amounts of text. The program had several phases. The first entailed the development of algorithms for information retrieval and extraction while the second phase developed an architecture.The program was considered successful so that it was commercialized under the National Institute of Standards and Technology. An evaluation noted that the third phase of the TIPSTER program, which involved the development of the architecture called GATE (General Architecture for Text Engineering) did not achieve its intended goals due to its short life span as well as the inability of the government to enforce standards imposed by the TIPSTER software architecture.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Code-excited linear prediction** Code-excited linear prediction: Code-excited linear prediction (CELP) is a linear predictive speech coding algorithm originally proposed by Manfred R. Schroeder and Bishnu S. Atal in 1985. At the time, it provided significantly better quality than existing low bit-rate algorithms, such as residual-excited linear prediction (RELP) and linear predictive coding (LPC) vocoders (e.g., FS-1015). Along with its variants, such as algebraic CELP, relaxed CELP, low-delay CELP and vector sum excited linear prediction, it is currently the most widely used speech coding algorithm. It is also used in MPEG-4 Audio speech coding. CELP is commonly used as a generic term for a class of algorithms and not for a particular codec. Background: The CELP algorithm is based on four main ideas: Using the source-filter model of speech production through linear prediction (LP) (see the textbook "speech coding algorithm"); Using an adaptive and a fixed codebook as the input (excitation) of the LP model; Performing a search in closed-loop in a "perceptually weighted domain". Applying vector quantization (VQ)The original algorithm as simulated in 1983 by Schroeder and Atal required 150 seconds to encode 1 second of speech when run on a Cray-1 supercomputer. Since then, more efficient ways of implementing the codebooks and improvements in computing capabilities have made it possible to run the algorithm in embedded devices, such as mobile phones. CELP decoder: Before exploring the complex encoding process of CELP we introduce the decoder here. Figure 1 describes a generic CELP decoder. The excitation is produced by summing the contributions from fixed (a.k.a. stochastic or innovation) and adaptive (a.k.a. pitch) codebooks: e[n]=ef[n]+ea[n] where ef[n] is the fixed (a.k.a. stochastic or innovation) codebook contribution and ea[n] is the adaptive (pitch) codebook contribution. The fixed codebook is a vector quantization dictionary that is (implicitly or explicitly) hard-coded into the codec. This codebook can be algebraic (ACELP) or be stored explicitly (e.g. Speex). The entries in the adaptive codebook consist of delayed versions of the excitation. This makes it possible to efficiently code periodic signals, such as voiced sounds. CELP decoder: The filter that shapes the excitation has an all-pole model of the form 1/A(z) , where A(z) is called the prediction filter and is obtained using linear prediction (Levinson–Durbin algorithm). An all-pole filter is used because it is a good representation of the human vocal tract and because it is easy to compute. CELP encoder: The main principle behind CELP is called analysis-by-synthesis (AbS) and means that the encoding (analysis) is performed by perceptually optimizing the decoded (synthesis) signal in a closed loop. In theory, the best CELP stream would be produced by trying all possible bit combinations and selecting the one that produces the best-sounding decoded signal. This is obviously not possible in practice for two reasons: the required complexity is beyond any currently available hardware and the “best sounding” selection criterion implies a human listener. CELP encoder: In order to achieve real-time encoding using limited computing resources, the CELP search is broken down into smaller, more manageable, sequential searches using a simple perceptual weighting function. Typically, the encoding is performed in the following order: Linear prediction coefficients (LPC) are computed and quantized, usually as line spectral pairs (LSPs). 
The adaptive (pitch) codebook is searched and its contribution removed. The fixed (innovation) codebook is searched. CELP encoder: Noise weighting Most (if not all) modern audio codecs attempt to shape the coding noise so that it appears mostly in the frequency regions where the ear cannot detect it. For example, the ear is more tolerant to noise in parts of the spectrum that are louder and vice versa. That is why, instead of minimizing the simple quadratic error, CELP minimizes the error in the perceptually weighted domain. The weighting filter W(z) is typically derived from the LPC filter by the use of bandwidth expansion: W(z) = A(z/γ1) / A(z/γ2), where γ1 > γ2.
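To make the weighting step concrete, below is a minimal Python sketch (assuming NumPy and SciPy are available) of how a bandwidth-expanded weighting filter W(z) = A(z/γ1)/A(z/γ2) might be applied to an error signal. The filter order, LPC coefficients and γ values are illustrative assumptions, not values taken from any particular CELP codec.

```python
# Minimal sketch (not a full CELP codec): applying the perceptual weighting
# filter W(z) = A(z/gamma1) / A(z/gamma2) to an error signal.
# The LPC coefficients below are hypothetical placeholders.
import numpy as np
from scipy.signal import lfilter

def bandwidth_expand(lpc, gamma):
    """Scale LPC coefficients a_k -> a_k * gamma**k (bandwidth expansion)."""
    return lpc * gamma ** np.arange(len(lpc))

def perceptual_weighting(error, lpc, gamma1=0.9, gamma2=0.6):
    """Filter an error signal by W(z) = A(z/gamma1) / A(z/gamma2).

    `lpc` holds [1, a_1, ..., a_p] of the prediction filter A(z)."""
    num = bandwidth_expand(lpc, gamma1)   # A(z/gamma1): numerator
    den = bandwidth_expand(lpc, gamma2)   # A(z/gamma2): denominator
    return lfilter(num, den, error)

# Toy usage: weight a random "error" with a hypothetical 10th-order LPC filter.
rng = np.random.default_rng(0)
lpc = np.r_[1.0, -0.1 * rng.standard_normal(10)]
weighted = perceptual_weighting(rng.standard_normal(160), lpc)
print(weighted.shape)  # (160,)
```

In practice the same weighting is applied inside the analysis-by-synthesis loop when comparing candidate codebook entries, so that the "best-sounding" entry is chosen in the weighted domain rather than by plain mean-squared error.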
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Large-scale coastal behaviour** Large-scale coastal behaviour: Large-scale coastal behaviour is an attempt to model the morphodynamics of coastal change at time and space scales appropriate to management and prediction. Temporally this is at the decade to century scale, spatially at the scale of tens of kilometers. It was developed by de Vriend. Modelling large-scale coastal behaviour involves some level of parameterisation rather than simply upscaling from process or downscaling from the geological scale. It attempts to recognise patterns occurring at these scales. Cowell and Thom (2005) recognise the need to admit uncertainty in large-scale coastal behaviour given incomplete process knowledge.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neural network quantum states** Neural network quantum states: Neural Network Quantum States (NQS or NNQS) is a general class of variational quantum states parameterized in terms of an artificial neural network. It was first introduced in 2017 by the physicists Giuseppe Carleo and Matthias Troyer to approximate wave functions of many-body quantum systems. Given a many-body quantum state |Ψ⟩ comprising N degrees of freedom and a choice of associated quantum numbers s1…sN, an NQS parameterizes the wave-function amplitudes as ⟨s1…sN|Ψ⟩ = F(s1…sN;W), where F(s1…sN;W) is an artificial neural network with parameters (weights) W, N input variables (s1…sN) and one complex-valued output corresponding to the wave-function amplitude. This variational form is used in conjunction with specific stochastic learning approaches to approximate quantum states of interest. Learning the Ground-State Wave Function: One common application of NQS is to find an approximate representation of the ground state wave function of a given Hamiltonian Ĥ. The learning procedure in this case consists in finding the best neural-network weights that minimize the variational energy E(W) = ⟨Ψ|Ĥ|Ψ⟩ / ⟨Ψ|Ψ⟩. Since, for a general artificial neural network, computing this expectation value is an exponentially costly operation in N, stochastic techniques based, for example, on the Monte Carlo method are used to estimate E(W), analogously to what is done in variational Monte Carlo. More specifically, a set of M samples S(1), S(2) … S(M), with S(i) = s1(i)…sN(i), is generated such that they are distributed according to the Born probability density P(S) ∝ |F(s1…sN;W)|². Then it can be shown that the sample mean of the so-called "local energy" Eloc(S) = ⟨S|Ĥ|Ψ⟩ / ⟨S|Ψ⟩ is a statistical estimate of the quantum expectation value E(W), i.e. E(W) ≈ (1/M) Σi Eloc(S(i)). Learning the Ground-State Wave Function: Similarly, it can be shown that the gradient of the energy with respect to the network weights W is also approximated by a sample mean whose terms involve the local energies Eloc(S(i)) and the log-derivatives ∂ log F(S(i);W) / ∂Wk, which can be efficiently computed, in deep networks, through backpropagation. The stochastic approximation of the gradients is then used to minimize the energy E(W), typically using a stochastic gradient descent approach. When the neural-network parameters are updated at each step of the learning procedure, a new set of samples S(i) is generated, in an iterative procedure similar to what is done in unsupervised learning. Connection with Tensor Networks: Neural-network representations of quantum wave functions share some similarities with variational quantum states based on tensor networks. For example, connections with matrix product states have been established. These studies have shown that NQS support volume-law scaling for the entropy of entanglement. In general, given an NQS with fully connected weights, it corresponds, in the worst case, to a matrix product state of exponentially large bond dimension in N.
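As an illustration of the sampling and local-energy machinery described above, here is a minimal Python sketch of a restricted-Boltzmann-machine NQS for a small spin chain, with a Metropolis estimate of the variational energy of a transverse-field Ising Hamiltonian. The network size, couplings and sampling parameters are illustrative assumptions, and a practical implementation would also include the gradient and stochastic-gradient-descent update of the weights.

```python
# Minimal sketch of a neural-network quantum state (NQS): a small
# restricted-Boltzmann-machine ansatz F(s; W) for N spins, with a crude
# Metropolis estimate of the variational energy of a 1D transverse-field
# Ising model  H = -J sum_i s^z_i s^z_{i+1} - h sum_i s^x_i.
# All sizes, couplings and the sampling length are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, M = 6, 12                               # visible spins, hidden units
a = 0.01 * rng.standard_normal(N)          # visible biases
b = 0.01 * rng.standard_normal(M)          # hidden biases
W = 0.01 * rng.standard_normal((M, N))     # weights

def log_amp(s):
    """log F(s; W) for spins s in {-1, +1}^N (RBM ansatz, real weights)."""
    theta = b + W @ s
    return a @ s + np.sum(np.log(2 * np.cosh(theta)))

def local_energy(s, J=1.0, h=1.0):
    """E_loc(s) = <s|H|Psi> / <s|Psi> for the 1D TFI chain (open ends)."""
    e = -J * np.sum(s[:-1] * s[1:])            # diagonal (z-z) part
    for i in range(N):                         # off-diagonal (x) part
        s_flip = s.copy(); s_flip[i] *= -1
        e += -h * np.exp(log_amp(s_flip) - log_amp(s))
    return e

# Metropolis sampling of the Born distribution |F(s; W)|^2.
s = rng.choice([-1, 1], size=N)
energies = []
for step in range(5000):
    i = rng.integers(N)
    s_new = s.copy(); s_new[i] *= -1
    if rng.random() < np.exp(2 * (log_amp(s_new) - log_amp(s))):
        s = s_new
    if step > 1000:                            # discard burn-in
        energies.append(local_energy(s))
print("estimated E(W):", np.mean(energies))
```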
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rhumb line** Rhumb line: In navigation, a rhumb line, rhumb (), or loxodrome is an arc crossing all meridians of longitude at the same angle, that is, a path with constant bearing as measured relative to true north. Introduction: The effect of following a rhumb line course on the surface of a globe was first discussed by the Portuguese mathematician Pedro Nunes in 1537, in his Treatise in Defense of the Marine Chart, with further mathematical development by Thomas Harriot in the 1590s. Introduction: A rhumb line can be contrasted with a great circle, which is the path of shortest distance between two points on the surface of a sphere. On a great circle, the bearing to the destination point does not remain constant. If one were to drive a car along a great circle one would hold the steering wheel fixed, but to follow a rhumb line one would have to turn the wheel, turning it more sharply as the poles are approached. In other words, a great circle is locally "straight" with zero geodesic curvature, whereas a rhumb line has non-zero geodesic curvature. Introduction: Meridians of longitude and parallels of latitude provide special cases of the rhumb line, where their angles of intersection are respectively 0° and 90°. On a north–south passage the rhumb line course coincides with a great circle, as it does on an east–west passage along the equator. Introduction: On a Mercator projection map, any rhumb line is a straight line; a rhumb line can be drawn on such a map between any two points on Earth without going off the edge of the map. But theoretically a loxodrome can extend beyond the right edge of the map, where it then continues at the left edge with the same slope (assuming that the map covers exactly 360 degrees of longitude). Introduction: Rhumb lines which cut meridians at oblique angles are loxodromic curves which spiral towards the poles. On a Mercator projection the north and south poles occur at infinity and are therefore never shown. However the full loxodrome on an infinitely high map would consist of infinitely many line segments between the two edges. On a stereographic projection map, a loxodrome is an equiangular spiral whose center is the north or south pole. Introduction: All loxodromes spiral from one pole to the other. Near the poles, they are close to being logarithmic spirals (which they are exactly on a stereographic projection, see below), so they wind around each pole an infinite number of times but reach the pole in a finite distance. The pole-to-pole length of a loxodrome (assuming a perfect sphere) is the length of the meridian divided by the cosine of the bearing away from true north. Loxodromes are not defined at the poles. Introduction: Three views of a pole-to-pole loxodrome Etymology and historical description: The word loxodrome comes from Ancient Greek λοξός loxós: "oblique" + δρόμος drómos: "running" (from δραμεῖν drameîn: "to run"). The word rhumb may come from Spanish or Portuguese rumbo/rumo ("course" or "direction") and Greek ῥόμβος rhómbos, from rhémbein. Etymology and historical description: The 1878 edition of The Globe Encyclopaedia of Universal Information describes a loxodrome line as: Loxodrom′ic Line is a curve which cuts every member of a system of lines of curvature of a given surface at the same angle. A ship sailing towards the same point of the compass describes such a line which cuts all the meridians at the same angle. In Mercator's Projection (q.v.) the Loxodromic lines are evidently straight. 
Etymology and historical description: A misunderstanding could arise because the term "rhumb" had no precise meaning when it came into use. It applied equally well to the windrose lines as it did to loxodromes because the term only applied "locally" and only meant whatever a sailor did in order to sail with constant bearing, with all the imprecision that that implies. Therefore, "rhumb" was applicable to the straight lines on portolans when portolans were in use, as well as always applicable to straight lines on Mercator charts. For short distances portolan "rhumbs" do not meaningfully differ from Mercator rhumbs, but these days "rhumb" is synonymous with the mathematically precise "loxodrome" because it has been made synonymous retrospectively. Etymology and historical description: As Leo Bagrow states: "the word ('Rhumbline') is wrongly applied to the sea-charts of this period, since a loxodrome gives an accurate course only when the chart is drawn on a suitable projection. Cartometric investigation has revealed that no projection was used in the early charts, for which we therefore retain the name 'portolan'." Mathematical description: For a sphere of radius 1, the azimuthal angle λ, the polar angle −π/2 ≤ φ ≤ π/2 (defined here to correspond to latitude), and Cartesian unit vectors i, j, and k can be used to write the radius vector r as r = (cos λ cos φ) i + (sin λ cos φ) j + (sin φ) k. Orthogonal unit vectors in the azimuthal and polar directions of the sphere can be written λ̂ = (sec φ) ∂r/∂λ = (−sin λ) i + (cos λ) j and φ̂ = ∂r/∂φ = (−cos λ sin φ) i + (−sin λ sin φ) j + (cos φ) k, which have the scalar products λ̂·φ̂ = λ̂·r = φ̂·r = 0. λ̂ for constant φ traces out a parallel of latitude, while φ̂ for constant λ traces out a meridian of longitude, and together they generate a plane tangent to the sphere. The unit vector β̂ = (sin β) λ̂ + (cos β) φ̂ has a constant angle β with the unit vector φ̂ for any λ and φ, since their scalar product is β̂·φ̂ = cos β. Mathematical description: A loxodrome is defined as a curve on the sphere that has a constant angle β with all meridians of longitude, and therefore must be parallel to the unit vector β̂. As a result, a differential length ds along the loxodrome will produce a differential displacement dr = β̂ ds, whose components give (cos φ) dλ = (sin β) ds and dφ = (cos β) ds. Eliminating ds gives dλ = tan β sec φ dφ, and integrating yields λ = λ0 + tan β (gd⁻¹ φ − gd⁻¹ φ0), or equivalently, with m = cot β, m (λ − λ0) = gd⁻¹ φ − gd⁻¹ φ0, where gd and gd⁻¹ are the Gudermannian function and its inverse, gd ψ = arctan(sinh ψ), gd⁻¹ φ = arsinh(tan φ), and arsinh is the inverse hyperbolic sine. Mathematical description: With this relationship between λ and φ, the radius vector becomes a parametric function of one variable, tracing out the loxodrome on the sphere: r = (cos λ sech ψ) i + (sin λ sech ψ) j + (tanh ψ) k, where ψ = gd⁻¹ φ = (λ − λ0) cot β + gd⁻¹ φ0 is the isometric latitude. In the rhumb line, as the latitude tends to the poles, φ → ±π/2, sin φ → ±1, the isometric latitude arsinh(tan φ) → ±∞, and the longitude λ increases without bound, circling the sphere ever faster in a spiral towards the pole, while tending to a finite total arc length Δs given by Δs = |±π/2 − φ0| |sec β| (on the unit sphere). Connection to the Mercator projection: Let λ be the longitude of a point on the sphere, and φ its latitude. Then, if we define the map coordinates of the Mercator projection as x = λ − λ0 and y = gd⁻¹ φ − gd⁻¹ φ0 = arsinh(tan φ) − arsinh(tan φ0), a loxodrome with constant bearing β from true north will be a straight line, since (using the expression in the previous section) y = m x with a slope m = cot β. Connection to the Mercator projection: Finding the loxodromes between two given points can be done graphically on a Mercator map, or by solving a nonlinear system of two equations in the two unknowns m = cot β and λ0.
There are infinitely many solutions; the shortest one is that which covers the actual longitude difference, i.e. does not make extra revolutions, and does not go "the wrong way around". Connection to the Mercator projection: The distance between two points Δs, measured along a loxodrome, is simply the absolute value of the secant of the bearing (azimuth) times the north–south distance (except for circles of latitude for which the distance becomes infinite): Δs = R |φ2 − φ1| |sec β|, where R is one of the earth average radii. Application: Its use in navigation is directly linked to the style, or projection of certain navigational maps. A rhumb line appears as a straight line on a Mercator projection map. The name is derived from Old French or Spanish respectively: "rumb" or "rumbo", a line on the chart which intersects all meridians at the same angle. On a plane surface this would be the shortest distance between two points. Over the Earth's surface at low latitudes or over short distances it can be used for plotting the course of a vehicle, aircraft or ship. Over longer distances and/or at higher latitudes the great circle route is significantly shorter than the rhumb line between the same two points. However the inconvenience of having to continuously change bearings while travelling a great circle route makes rhumb line navigation appealing in certain instances. The point can be illustrated with an east–west passage over 90 degrees of longitude along the equator, for which the great circle and rhumb line distances are the same, at 10,000 kilometres (5,400 nautical miles). At 20 degrees north the great circle distance is 9,254 km (4,997 nmi) while the rhumb line distance is 9,397 km (5,074 nmi), about 1.5% further. But at 60 degrees north the great circle distance is 4,602 km (2,485 nmi) while the rhumb line is 5,000 km (2,700 nmi), a difference of 8.5%. A more extreme case is the air route between New York City and Hong Kong, for which the rhumb line path is 18,000 km (9,700 nmi). The great circle route over the North Pole is 13,000 km (7,000 nmi), or 5+1⁄2 hours less flying time at a typical cruising speed. Application: Some old maps in the Mercator projection have grids composed of lines of latitude and longitude but also show rhumb lines which are oriented directly towards north, at a right angle from the north, or at some angle from the north which is some simple rational fraction of a right angle. These rhumb lines would be drawn so that they would converge at certain points of the map: lines going in every direction would converge at each of these points. See compass rose. Such maps would necessarily have been in the Mercator projection; therefore not all old maps would have been capable of showing rhumb line markings. Application: The radial lines on a compass rose are also called rhumbs. The expression "sailing on a rhumb" was used in the 16th–19th centuries to indicate a particular compass heading. Early navigators in the time before the invention of the marine chronometer used rhumb line courses on long ocean passages, because the ship's latitude could be established accurately by sightings of the Sun or stars but there was no accurate way to determine the longitude. The ship would sail north or south until the latitude of the destination was reached, and the ship would then sail east or west along the rhumb line (actually a parallel, which is a special case of the rhumb line), maintaining a constant latitude and recording regular estimates of the distance sailed until evidence of land was sighted.
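The Mercator relation and the distance formula above translate directly into a short calculation. The following Python sketch computes the constant bearing and rhumb-line distance between two points on a spherical Earth; the radius value and the sample coordinates are illustrative assumptions.

```python
# Minimal sketch: rhumb-line (loxodrome) bearing and distance between two
# points on a spherical Earth, using the Mercator relation y = arsinh(tan(phi))
# and  distance = R * |delta_phi| * |sec(beta)|.  R and the sample coordinates
# are illustrative assumptions (spherical Earth, R ~ 6371 km).
import math

R = 6371.0  # mean Earth radius in km (assumed spherical)

def rhumb_bearing_distance(lat1, lon1, lat2, lon2):
    """Return (bearing in degrees from true north, distance in km)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # take the shorter way around in longitude (no extra revolutions)
    if abs(dlon) > math.pi:
        dlon -= math.copysign(2 * math.pi, dlon)
    # difference of isometric latitudes, gd^-1(phi) = arsinh(tan(phi))
    dpsi = math.asinh(math.tan(phi2)) - math.asinh(math.tan(phi1))
    beta = math.atan2(dlon, dpsi)                 # constant bearing
    if abs(phi2 - phi1) > 1e-12:
        dist = R * abs(phi2 - phi1) / abs(math.cos(beta))
    else:                                         # east-west track along a parallel
        dist = R * abs(dlon) * math.cos(phi1)
    return math.degrees(beta) % 360.0, dist

# Example from the text: 90 degrees of longitude along the equator ~ 10,000 km.
print(rhumb_bearing_distance(0.0, 0.0, 0.0, 90.0))
```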
Generalizations: On the Riemann sphere The surface of the Earth can be understood mathematically as a Riemann sphere, that is, as a projection of the sphere to the complex plane. In this case, loxodromes can be understood as certain classes of Möbius transformations. Generalizations: Spheroid The formulation above can be easily extended to a spheroid. The course of the rhumb line is found merely by using the ellipsoidal isometric latitude. In formulas above on this page, substitute the conformal latitude on the ellipsoid for the latitude on the sphere. Similarly, distances are found by multiplying the ellipsoidal meridian arc length by the secant of the azimuth.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Diminished rhombicosidodecahedron** Diminished rhombicosidodecahedron: In geometry, the diminished rhombicosidodecahedron is one of the Johnson solids (J76). It can be constructed as a rhombicosidodecahedron with one pentagonal cupola removed. Diminished rhombicosidodecahedron: A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Related Johnson solids are: J80, the parabidiminished rhombicosidodecahedron, with two opposing cupolae removed; J81, the metabidiminished rhombicosidodecahedron, with two non-opposing cupolae removed; and J83, the tridiminished rhombicosidodecahedron, with three cupolae removed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Truncated pentahexagonal tiling** Truncated pentahexagonal tiling: In geometry, the truncated pentahexagonal tiling is a semiregular tiling of the hyperbolic plane. There are one square, one decagon, and one dodecagon on each vertex. It has Schläfli symbol of t0,1,2{6,5}. Its name is somewhat misleading: literal geometric truncation of the pentahexagonal tiling produces rectangles instead of squares. Symmetry: There are four small index subgroups from [6,5] by mirror removal and alternation. In these images fundamental domains are alternately colored black and white, and mirrors exist on the boundaries between colors. Related polyhedra and tilings: From a Wythoff construction there are fourteen hyperbolic uniform tilings that can be based from the regular order-5 hexagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 7 forms with full [6,5] symmetry, and 3 with subsymmetry.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hydrodealkylation** Hydrodealkylation: Hydrodealkylation is a chemical reaction that often involves reacting an aromatic hydrocarbon, such as toluene, in the presence of hydrogen gas to form a simpler aromatic hydrocarbon devoid of functional groups. An example is the conversion of 1,2,4-trimethylbenzene to xylene. This chemical process usually occurs at high temperature, at high pressure, or in the presence of a catalyst; the catalysts used are predominantly transition metals, such as chromium or molybdenum.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Punnet** Punnet: A punnet is a small box or square basket for the gathering, transport and sale of fruit and vegetables, typically for small berries susceptible to bruising, spoiling and squashing that are therefore best kept in small rigid containers. Punnets serve also as a rough measure for a quantity of irregular sized fruits. Etymology: The word is largely confined to Commonwealth countries (but not Canada) and is of uncertain origin, but is thought to be a diminutive of 'pun,' a British dialect word for pound, from the days in which such containers were used as a unit of measurement. The Oxford Dictionary of National Biography, parenthetically in its entry for geneticist R. C. Punnett (1875–1967), credits "a strawberry growing ancestor [who] devised the wooden basket known as a 'punnet.'" History and description: Prior form In the late eighteenth century, strawberries and some soft fruit were sold in pottles, conical woodchip baskets, the tapering shape being thought to reduce damage to fruit at the bottom. The pottle used in England and Scotland at that time contained nominally one Scottish pint. They were stacked, fifty or sixty together, into square hampers for transport to the market, placed upon a woman's head on a small cushion and, over longer distances, in a light carriage of frame work hung on springs. The Saturday Magazine in 1834 records 'pottle baskets' being made by women and children in their homes for six pence a dozen by steeping the cut wood in water, and splitting it into strips of dimensions needed for each part of the basket. The most skilful weavers formed the upright supports of the basket, fixing them in their place by weaving the bottom part. Children wove the sides with pliable strips of fir or willow. History and description: Development Pottles were replaced in the mid-1800s by the more practical rectangular punnet. The terms 'pottle' and 'punnet' were often used interchangeably. As reported in an 1879 issue of The Gentleman's Magazine, the conical pottle had given way to the punnet, being mainly manufactured in Brentford of deal, or the more preferred willow, by hundreds of women and children. History and description: Purpose An 1852 publication lists other produce being sold in punnets in British markets, including sea kale, mushrooms, small salad and tomatoes. Punnets are used for collecting berries as well as for selling them, thus reducing handling of the fragile fruits and the likely damage that it could cause. The process is recorded in a 1948 poem by New Zealand author Mabel Christmas-Harvey. History and description: North America In North America, commercial strawberry horticulture began around 1820, and the fruits were packed in the same manner as that approved by English gardeners; in 1821 it was recommended that Massachusetts strawberry growers carry berries to the Boston markets in "pottles, that is, in inverted cones of basket work.” The English punnet used in the strawberry trade of New York City between 1815 and 1850 was a round shallow basket of woven wickerwork without handles. A handled punnet became more popular in the New York market, as related in the Proceedings of the New Jersey Horticultural Society by Charles W. Idell, who resided in Hoboken and managed a produce market at the foot of Barclay Street, New York: The first strawberries marketed in New York were wild ones from Bergen County, N. J. The negroes there were the first to pick this fruit for the New York market and invented those quaint old-fashioned splint baskets with handles.
The baskets were strung on poles and thus peddled through the city. History and description: Manufacture A 1903 work describes the construction of punnets; "Strawberry punnets or baskets as used by fruiterers are made of thin strips of wood, well soaked before use. The bottom and uprights are comprised of six pieces of 1/16” wood; the bottom and side pieces may be of ash and the lacings, which are 1/32” thick and 1” wide, may be of pine." By 1969, punnets in the United Kingdom were being made out of thinly lathed poplar wood peelers, using a semi-mechanical system. While factory workers still had to interlace the laths, metal staples were used to fix the strips. History and description: Present-day forms Contemporary punnets are generally made in a variety of dimensions of semi-rigid, transparent, lightweight PET plastic with lockable lids, or of clamshell design, and with vents. Their advantage is that they permit visual examination by the consumer but discourage physical contact with the merchandise at point of sale.As early as 1911, cardboard punnets with wire handles were being used, and increasingly, moulded pulp and corrugated fiberboard are being used, as they are perceived to be more sustainable materials. Decorative punnets are often made of felt and seen in flower and craft arrangements.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Binding site** Binding site: In biochemistry and molecular biology, a binding site is a region on a macromolecule such as a protein that binds to another molecule with specificity. The binding partner of the macromolecule is often referred to as a ligand. Ligands may include other proteins (resulting in a protein-protein interaction), enzyme substrates, second messengers, hormones, or allosteric modulators. The binding event is often, but not always, accompanied by a conformational change that alters the protein's function. Binding to protein binding sites is most often reversible (transient and non-covalent), but can also be covalent reversible or irreversible. Function: Binding of a ligand to a binding site on a protein often triggers a change in conformation in the protein and results in altered cellular function. Hence binding sites on proteins are critical parts of signal transduction pathways. Types of ligands include neurotransmitters, toxins, neuropeptides, and steroid hormones. Binding sites incur functional changes in a number of contexts, including enzyme catalysis, molecular pathway signaling, homeostatic regulation, and physiological function. Electric charge, steric shape and geometry of the site selectively allow for highly specific ligands to bind, activating a particular cascade of cellular interactions the protein is responsible for. Function: Catalysis Enzymes achieve catalysis by binding more strongly to transition states than to substrates and products. At the catalytic binding site, several different interactions may act upon the substrate. These include electrostatic catalysis, acid and base catalysis, covalent catalysis, and metal ion catalysis. These interactions decrease the activation energy of a chemical reaction by providing favorable interactions to stabilize the high energy molecule. Enzyme binding allows for closer proximity and exclusion of substances irrelevant to the reaction. Side reactions are also discouraged by this specific binding. Types of enzymes that can perform these actions include oxidoreductases, transferases, hydrolases, lyases, isomerases, and ligases. For instance, the transferase hexokinase catalyzes the phosphorylation of glucose to make glucose-6-phosphate. Active site residues of hexokinase allow for stabilization of the glucose molecule in the active site and spur the onset of an alternative pathway of favorable interactions, decreasing the activation energy. Function: Inhibition Protein inhibition by inhibitor binding may induce obstruction in pathway regulation, homeostatic regulation and physiological function. Competitive inhibitors compete with substrate to bind to free enzymes at active sites and thus impede the production of the enzyme-substrate complex upon binding. For example, carbon monoxide poisoning is caused by the competitive binding of carbon monoxide as opposed to oxygen in hemoglobin. Function: Uncompetitive inhibitors, alternatively, bind concurrently with substrate at active sites. Upon binding to an enzyme substrate (ES) complex, an enzyme substrate inhibitor (ESI) complex is formed. Similar to competitive inhibitors, the rate of product formation is also decreased. Lastly, mixed inhibitors are able to bind to both the free enzyme and the enzyme-substrate complex. However, in contrast to competitive and uncompetitive inhibitors, mixed inhibitors bind to the allosteric site. Allosteric binding induces conformational changes that may increase the protein's affinity for substrate. This phenomenon is called positive modulation.
Conversely, allosteric binding that decreases the protein's affinity for substrate is negative modulation. Types: Active site At the active site, a substrate binds to an enzyme to induce a chemical reaction. Substrates, transition states, and products can bind to the active site, as well as any competitive inhibitors. For example, in the context of protein function, the binding of calcium to troponin in muscle cells can induce a conformational change in troponin. This allows for tropomyosin to expose the actin-myosin binding site to which the myosin head binds to form a cross-bridge and induce a muscle contraction.In the context of the blood, an example of competitive binding is carbon monoxide which competes with oxygen for the active site on heme. Carbon monoxide's high affinity may outcompete oxygen in the presence of low oxygen concentration. In these circumstances, the binding of carbon monoxide induces a conformation change that discourages heme from binding to oxygen, resulting in carbon monoxide poisoning. Types: Allosteric site At the regulatory site, the binding of a ligand may elicit amplified or inhibited protein function. The binding of a ligand to an allosteric site of a multimeric enzyme often induces positive cooperativity, that is the binding of one substrate induces a favorable conformation change and increases the enzyme's likelihood to bind to a second substrate. Regulatory site ligands can involve homotropic and heterotropic ligands, in which single or multiple types of molecule affects enzyme activity respectively.Enzymes that are highly regulated are often essential in metabolic pathways. For example, phosphofructokinase (PFK), which phosphorylates fructose in glycolysis, is largely regulated by ATP. Its regulation in glycolysis is imperative because it is the committing and rate limiting step of the pathway. PFK also controls the amount of glucose designated to form ATP through the catabolic pathway. Therefore, at sufficient levels of ATP, PFK is allosterically inhibited by ATP. This regulation efficiently conserves glucose reserves, which may be needed for other pathways. Citrate, an intermediate of the citric acid cycle, also works as an allosteric regulator of PFK. Types: Single- and multi-chain binding sites Binding sites can be characterized also by their structural features. Single-chain sites (of “monodesmic” ligands, μόνος: single, δεσμός: binding) are formed by a single protein chain, while multi-chain sites (of "polydesmic” ligands, πολοί: many) are frequent in protein complexes, and are formed by ligands that bind more than one protein chain, typically in or near protein interfaces. Recent research shows that binding site structure has profound consequences for the biology of protein complexes (evolution of function, allostery). Types: Cryptic binding sites Cryptic binding sites are the binding sites that are transiently formed in an apo form or that are induced by ligand binding. Considering the cryptic binding sites increases the size of the potentially “druggable” human proteome from ~40% to ~78% of disease-associated proteins. The binding sites have been investigated by: support vector machine applied to "CryptoSite" data set, Extension of "CryptoSite" data set, long timescale molecular dynamics simulation with Markov state model and with biophysical experiments, and cryptic-site index that is based on relative accessible surface area. Binding curves: Binding curves describe the binding behavior of ligand to a protein. 
Curves can be characterized by their shape, sigmoidal or hyperbolic, which reflect whether or not the protein exhibits cooperative or noncooperative binding behavior respectively. Typically, the x-axis describes the concentration of ligand and the y-axis describes the fractional saturation of ligands bound to all available binding sites. The Michaelis Menten equation is usually used when determining the shape of the curve. The Michaelis Menten equation is derived based on steady-state conditions and accounts for the enzyme reactions taking place in a solution. However, when the reaction takes place while the enzyme is bound to a substrate, the kinetics play out differently.Modeling with binding curves are useful when evaluating the binding affinities of oxygen to hemoglobin and myoglobin in the blood. Hemoglobin, which has four heme groups, exhibits cooperative binding. This means that the binding of oxygen to a heme group on hemoglobin induces a favorable conformation change that allows for increased binding favorability of oxygen for the next heme groups. In these circumstances, the binding curve of hemoglobin will be sigmoidal due to its increased binding favorability for oxygen. Since myoglobin has only one heme group, it exhibits noncooperative binding which is hyperbolic on a binding curve. Applications: Biochemical differences between different organisms and humans are useful for drug development. For instance, penicillin kills bacteria by inhibiting the bacterial enzyme DD-transpeptidase, destroying the development of the bacterial cell wall and inducing cell death. Thus, the study of binding sites is relevant to many fields of research, including cancer mechanisms, drug formulation, and physiological regulation. The formulation of an inhibitor to mute a protein's function is a common form of pharmaceutical therapy. Applications: In the scope of cancer, ligands that are edited to have a similar appearance to the natural ligand are used to inhibit tumor growth. For example, Methotrexate, a chemotherapeutic, acts as a competitive inhibitor at the dihydrofolate reductase active site. This interaction inhibits the synthesis of tetrahydrofolate, shutting off production of DNA, RNA and proteins. Inhibition of this function represses neoplastic growth and improves severe psoriasis and adult rheumatoid arthritis.In cardiovascular illnesses, drugs such as beta blockers are used to treat patients with hypertension. Beta blockers (β-Blockers) are antihypertensive agents that block the binding of the hormones adrenaline and noradrenaline to β1 and β2 receptors in the heart and blood vessels. These receptors normally mediate the sympathetic "fight or flight" response, causing constriction of the blood vessels.Competitive inhibitors are also largely found commercially. Botulinum toxin, known commercially as Botox, is a neurotoxin that causes flaccid paralysis in the muscle due to binding to acetylcholine dependent nerves. This interaction inhibits muscle contractions, giving the appearance of smooth muscle. Prediction: A number of computational tools have been developed for the prediction of the location of binding sites on proteins. These can be broadly classified into sequence based or structure based. Sequence based methods rely on the assumption that the sequences of functionally conserved portions of proteins such as binding site are conserved. Structure based methods require the 3D structure of the protein. These methods in turn can be subdivided into template and pocket based methods. 
Template based methods search for 3D similarities between the target protein and proteins with known binding sites. The pocket based methods search for concave surfaces or buried pockets in the target protein that possess features such as hydrophobicity and hydrogen bonding capacity that would allow them to bind ligands with high affinity. Even though the term pocket is used here, similar methods can be used to predict binding sites used in protein-protein interactions that are usually more planar, not in pockets.
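To illustrate the binding-curve discussion above, the following Python sketch evaluates fractional saturation with a Hill-type equation, giving a hyperbolic curve for a single-site, myoglobin-like protein (Hill coefficient 1) and a sigmoidal curve for a cooperative, hemoglobin-like protein. The P50 values and Hill coefficient used are illustrative assumptions, not measured constants.

```python
# Minimal sketch: fractional saturation Y as a function of O2 partial pressure
# for noncooperative binding (hyperbolic, Hill coefficient n = 1, myoglobin-like)
# versus cooperative binding (sigmoidal, n ~ 2.8, hemoglobin-like).
# The P50 values and Hill coefficients below are illustrative assumptions.
import numpy as np

def fractional_saturation(p_o2, p50, n=1.0):
    """Hill equation: Y = p^n / (p50^n + p^n)."""
    return p_o2 ** n / (p50 ** n + p_o2 ** n)

p = np.linspace(0.0, 100.0, 11)                          # O2 partial pressure (torr)
myoglobin  = fractional_saturation(p, p50=3.0, n=1.0)    # hyperbolic curve
hemoglobin = fractional_saturation(p, p50=26.0, n=2.8)   # sigmoidal curve

for pi, ym, yh in zip(p, myoglobin, hemoglobin):
    print(f"pO2={pi:5.1f}  Y_myoglobin={ym:.2f}  Y_hemoglobin={yh:.2f}")
```

Plotting Y against p_o2 for the two parameter sets reproduces the qualitative difference described in the Binding curves section: the single-site curve rises steeply at low pressure and flattens, while the cooperative curve stays low at first and then rises sharply around its P50.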
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bulletin of Materials Science** Bulletin of Materials Science: The Bulletin of Materials Science is a bimonthly peer-reviewed scientific journal that publishes original research articles, review articles and rapid communications in all areas of materials science. It is published by Springer Science+Business Media on behalf of the Indian Academy of Sciences in collaboration with the Materials Research Society of India and the Indian National Science Academy. The editor-in-chief is Prof. Giridhar U Kulkarni (JNCASR). Abstracting and indexing: The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.8.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Software development process** Software development process: In software engineering, a software development process is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming. Software development process: A life-cycle "model" is sometimes considered a more general term for a category of methodologies and a software development "process" a more specific term to refer to a specific process chosen by a specific organization. For example, there are many specific software development processes that fit the spiral life-cycle model. The field is often considered a subset of the systems development life cycle. History: The software development methodology (also known as SDM) framework didn't emerge until the 1960s. According to Elliott (2004), the systems development life cycle (SDLC) can be considered to be the oldest formalized methodology framework for building information systems. The main idea of the SDLC has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially" within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines". History: Requirements Gathering and Analysis: The first phase of the custom software development process involves understanding the client's requirements and objectives. This stage typically involves engaging in thorough discussions and conducting interviews with stakeholders to identify the desired features, functionalities, and overall scope of the software. The development team works closely with the client to analyze existing systems and workflows, determine technical feasibility, and define project milestones. History: Planning and Design: Once the requirements are understood, the custom software development team proceeds to create a comprehensive project plan. This plan outlines the development roadmap, including timelines, resource allocation, and deliverables. The software architecture and design are also established during this phase. User interface (UI) and user experience (UX) design elements are considered to ensure the software's usability, intuitiveness, and visual appeal. History: Development: With the planning and design in place, the development team begins the coding process. This phase involves writing, testing, and debugging the software code. Agile methodologies, such as Scrum or Kanban, are often employed to promote flexibility, collaboration, and iterative development. 
Regular communication between the development team and the client ensures transparency and enables quick feedback and adjustments. History: Testing and Quality Assurance: To ensure the software's reliability, performance, and security, rigorous testing and quality assurance (QA) processes are carried out. Different testing techniques, including unit testing, integration testing, system testing, and user acceptance testing, are employed to identify and rectify any issues or bugs. QA activities aim to validate the software against the predefined requirements, ensuring that it functions as intended. History: Deployment and Implementation: Once the software passes the testing phase, it is ready for deployment and implementation. The development team assists the client in setting up the software environment, migrating data if necessary, and configuring the system. User training and documentation are also provided to ensure a smooth transition and enable users to maximize the software's potential. Maintenance and Support: After the software is deployed, ongoing maintenance and support become crucial to address any issues, enhance performance, and incorporate future enhancements. Regular updates, bug fixes, and security patches are released to keep the software up-to-date and secure. This phase also involves providing technical support to end-users and addressing their queries or concerns. History: Methodologies, processes, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group. In some cases, a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include: 1970s: structured programming, since 1969; Cap Gemini SDM, originally from PANDATA, with the first English translation published in 1974 (SDM stands for System Development Methodology). 1980s: structured systems analysis and design method (SSADM), from 1980 onwards; information requirement analysis/soft systems methodology. 1990s: object-oriented programming (OOP), developed in the early 1960s and a dominant programming approach by the mid-1990s; rapid application development (RAD), since 1991; dynamic systems development method (DSDM), since 1994; Scrum, since 1995; Team Software Process, since 1998; Rational Unified Process (RUP), maintained by IBM since 1998; extreme programming, since 1999. 2000s: Agile Unified Process (AUP), maintained since 2005 by Scott Ambler; disciplined agile delivery (DAD), which supersedes AUP. 2010s: Scaled Agile Framework (SAFe); Large-Scale Scrum (LeSS); DevOps. It is notable that since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies - yet many organizations, especially governments, still use pre-agile processes (often waterfall or similar). Software process and software quality are closely interrelated; some unexpected facets and effects have been observed in practice. Among these, another software development process has been established in open source. The adoption of these best practices, that is, known and established processes, within the confines of a company is called inner source. Prototyping: Software prototyping is about creating prototypes, i.e. incomplete versions of the software program being developed.
The basic principles are: prototyping is not a standalone, complete development methodology, but rather an approach to try out particular features in the context of a full methodology (such as incremental, spiral, or rapid application development (RAD)); it attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process; the client is involved throughout the development process, which increases the likelihood of client acceptance of the final implementation; and while some prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system. A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies. Methodologies: Agile development "Agile software development" refers to a group of software development frameworks based on iterative development, where requirements and solutions evolve via collaboration between self-organizing cross-functional teams. The term was coined in the year 2001 when the Agile Manifesto was formulated. Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system. Methodologies: The Agile model also includes the following software development processes: dynamic systems development method (DSDM), Kanban, Scrum, Crystal, Atern, and lean software development. Continuous integration: Continuous integration is the practice of merging all developer working copies to a shared mainline several times a day. Grady Booch first named and proposed CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day. Methodologies: Incremental development Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process. Methodologies: There are three main variants of incremental development: a series of mini-Waterfalls are performed, where all phases of the Waterfall are completed for a small part of a system, before proceeding to the next increment; or overall requirements are defined before proceeding to evolutionary, mini-Waterfall development of individual increments of a system; or the initial software concept, requirements analysis, and design of architecture and system core are defined via Waterfall, followed by incremental implementation, which culminates in installing the final version, a working system. Methodologies: Rapid application development Rapid application development (RAD) is a software development methodology, which favors iterative development and the rapid construction of prototypes instead of large amounts of up-front planning. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster, and makes it easier to change requirements.
Methodologies: The rapid development process starts with the development of preliminary data models and business process models using structured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems". The term was first used to describe a software development process introduced by James Martin in 1991. According to Whitten (2003), it is a merger of various structured techniques, especially data-driven information technology engineering, with prototyping techniques to accelerate software systems development. The basic principles of rapid application development are: Key objective is for fast development and delivery of a high quality system at a relatively low investment cost. Methodologies: Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process. Aims to produce high quality systems quickly, primarily via iterative prototyping (at any stage of development), active user involvement, and computerized development tools. These tools may include Graphical User Interface (GUI) builders, Computer Aided Software Engineering (CASE) tools, Database Management Systems (DBMS), fourth-generation programming languages, code generators, and object-oriented techniques. Key emphasis is on fulfilling the business need, while technological or engineering excellence is of lesser importance. Project control involves prioritizing development and defining delivery deadlines or “timeboxes”. If the project starts to slip, emphasis is on reducing requirements to fit the timebox, not in increasing the deadline. Generally includes joint application design (JAD), where users are intensely involved in system design, via consensus building in either structured workshops, or electronically facilitated interaction. Active user involvement is imperative. Iteratively produces production software, as opposed to a throwaway prototype. Produces documentation necessary to facilitate future development and maintenance. Standard systems analysis and design methods can be fitted into this framework. Methodologies: Waterfall development The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases, typically: requirements analysis resulting in a software requirements specification; software design; implementation; testing; integration, if there are multiple subsystems; deployment (or installation); and maintenance. The first formal description of the method is often cited as an article published by Winston W. Royce in 1970, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model. The basic principles are: The project is divided into sequential phases, with some overlap and splash back acceptable between phases. Methodologies: Emphasis is on planning, time schedules, target dates, budgets, and implementation of an entire system at one time. Methodologies: Tight control is maintained over the life of the project via extensive written documentation, formal reviews, and approval/signoff by the user and information technology management occurring at the end of most phases before beginning the next phase.
Written documentation is an explicit deliverable of each phase.The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete. This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other more "flexible" models. It has been widely blamed for several large-scale government projects running over budget, over time and sometimes failing to deliver on requirements due to the Big Design Up Front approach. Except when contractually required, the waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development. See Criticism of Waterfall model. Methodologies: Spiral development In 1988, Barry Boehm published a formal software system development "spiral model," which combines some key aspects of the waterfall model and rapid prototyping methodologies, in an effort to combine advantages of top-down and bottom-up concepts. It provided emphasis in a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems. Methodologies: The basic principles are: Focus is on risk assessment and on minimizing project risk by breaking a project into smaller segments and providing more ease-of-change during the development process, as well as providing the opportunity to evaluate risks and weigh consideration of project continuation throughout the life cycle. Methodologies: "Each cycle involves a progression through the same sequence of steps, for each part of the product and for each of its levels of elaboration, from an overall concept-of-operation document down to the coding of each individual program." Each trip around the spiral traverses four basic quadrants: (1) determine objectives, alternatives, and constraints of the iteration, and (2) evaluate alternatives; Identify and resolve risks; (3) develop and verify deliverables from the iteration; and (4) plan the next iteration. Methodologies: Begin each cycle with an identification of stakeholders and their "win conditions", and end each cycle with review and commitment. Methodologies: Shape Up Shape Up is a software development approach introduced by Basecamp in 2018. It is a set of principles and techniques that Basecamp developed internally to overcome the problem of projects dragging on with no clear end. Its primary target audience is remote teams. Shape Up has no estimation and velocity tracking, backlogs, or sprints, unlike Waterfall, Agile, or Scrum. Instead, those concepts are replaced with appetite, betting, and cycles. As of 2022, besides Basecamp, notable organizations that have adopted Shape Up include UserVoice and Block. Methodologies: Cycles Through trials and errors, Basecamp found that the ideal cycle length is 6 weeks. This 6 week period is long enough to build a meaningful feature and still short enough to induce a sense of urgency. Methodologies: Shaping Shaping is the process of preparing work before being handed over to designers and engineers. Shaped work spells out the solution's main UI elements, identifies rabbit holes, and outlines clear scope boundaries. It is meant to be rough and to leave finer details for builders (designers and engineers) to solve, allowing the builders to exercise their creativity and make trade-offs. 
Shaped work is documented in the form of a pitch using an online document solution that supports commenting, allowing team members to contribute technical information asynchronously. Such comments are crucial for uncovering hidden surprises that may derail the project. Methodologies: Before a cycle begins, stakeholders hold a betting table, where pitches are reviewed. For each pitch, a decision is made to either bet on it or drop it. Appetite: The way Shape Up determines how much time is allocated to a project is diametrically opposed to other methodologies. Shape Up starts with an appetite (for example, 6 weeks) and ends with a solution design that can be delivered within this constraint. The appetite becomes a hard deadline for the project's builders. Building: Shape Up is a two-track system where shapers and builders work in parallel. Work that is being shaped in the current cycle may be given to designers and engineers to build in a future cycle. Methodologies: Recognizing the technical uncertainties that come with building, progress is tracked using a chart that visualizes the metaphor of the hill, aptly named the hill chart. The uphill phase is where builders are still working out their approach, while the downhill is where unknowns have been eliminated. Builders proactively and asynchronously self-report progress using an interactive online hill chart on Basecamp or Jira, shifting focus from done or not-done statuses to unknown or solved problems. The use of the hill chart replaces the process of reporting linear statuses in Scrum or Kanban standups. Methodologies: Advanced methodologies Other high-level software project methodologies include: behavior-driven development and business process management; Chaos model - the main rule is to always resolve the most important issue first. Methodologies: Incremental funding methodology - an iterative approach; Lightweight methodology - a general term for methods that only have a few rules and practices; Structured systems analysis and design method - a specific version of waterfall; Slow programming, as part of the larger Slow Movement, emphasizes careful and gradual work without (or with minimal) time pressures. Slow programming aims to avoid bugs and overly quick release schedules. Methodologies: V-Model (software development) - an extension of the waterfall model; Unified Process (UP) is an iterative software development methodology framework, based on the Unified Modeling Language (UML). UP organizes the development of software into four phases, each consisting of one or more executable iterations of the software at that stage of development: inception, elaboration, construction, and transition. Many tools and products exist to facilitate UP implementation. One of the more popular versions of UP is the Rational Unified Process (RUP). Methodologies: Big Bang methodology - an approach for small or undefined projects, generally consisting of little to no planning with high risk. Process meta-models: Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process adopted by an organization. ISO/IEC 12207 is the international standard describing the method to select, implement, and monitor the life cycle for software. The Capability Maturity Model Integration (CMMI) is one of the leading models and is based on best practices. Independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or the software produced. CMMI has replaced CMM.
Process meta-models: ISO 9000 describes standards for a formally organized process to manufacture a product and the methods of managing and monitoring progress. Although the standard was originally created for the manufacturing sector, ISO 9000 standards have been applied to software development as well. Like CMMI, certification with ISO 9000 does not guarantee the quality of the end result, only that formalized business processes have been followed. Process meta-models: ISO/IEC 15504 Information technology—Process assessment is also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes". This standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMMI. It models processes to manage, control, guide and monitors software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team. Process meta-models: ISO/IEC 24744 Software Engineering—Metamodel for Development Methodologies, is a power type-based metamodel for software development methodologies. SPEM 2.0 by the Object Management Group. Soft systems methodology - a general method for improving management processes. Method engineering - a general method for improving information system processes. In practice: A variety of such frameworks have evolved over the years, each with its own recognized strengths and weaknesses. One software development methodology framework is not necessarily suitable for use by all projects. Each of the available methodology frameworks is best suited to specific kinds of projects, based on various technical, organizational, project, and team considerations.Software development organizations implement process methodologies to ease the process of development. Sometimes, contractors may require methodologies employed, an example is the U.S. defense industry, which requires a rating based on process models to obtain contracts. The international standard for describing the method of selecting, implementing, and monitoring the life cycle for software is ISO/IEC 12207. In practice: A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of designing software. Others apply project management techniques to designing software. Large numbers of software projects do not meet their expectations in terms of functionality, cost, or delivery schedule - see List of failed and overbudget custom software projects for some notable examples. In practice: Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement. A particular development team may also agree to program environment details, such as which integrated development environment is used one or more dominant programming paradigms, programming style rules, or choice of specific software libraries or software frameworks. These details are generally not dictated by the choice of model or general methodology.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MyOneLogin** MyOneLogin: myOneLogin was a Secure Cloud Identity Services platform launched by TriCipher in 2008. It allowed users to access many applications with a single secure login, addressing Identity and Access Management challenges. All services were fully integrated and delivered on demand, with no need to deploy hardware or software or to modify applications. TriCipher was founded in 2000 and was headquartered in Los Gatos, California. TriCipher was acquired by VMware, Inc. in September 2010.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kaolin deposits of the Charentes Basin** Kaolin deposits of the Charentes Basin: The Kaolin deposits of the Charentes Basin in France are clay deposits formed sedimentarily and then confined by other geological structures. Overview: The geological unit called the Charentes basin is composed of Eocene and Oligocene deposits, laid down above karstic limestone formations of the Campanian, in the north of the Aquitaine Basin. The Charentes basin is named after the departments of Charente and Charente-Maritime. The kaolin clays of Charentes belong to this mainly continental formation, often referred to as siderolithic, whose principal outcrop is situated in the south of the Charente-Maritime department, 56 kilometres (35 mi) north-east of the city of Bordeaux. The quarries are scattered along a north-south band 32 km (20 mi) long and 11 km (6.8 mi) wide. Overview: The clay concentrations of economic value are composed of a succession of clays, sands and pebbles. This torrential-stream environment, close to braided rivers, led to the deposition of sandy-clayey materials with a variable iron content, derived from lateritic weathering of the granites of the French Massif Central. The presence of numerous lignite-rich levels indicates that deposition took place in the presence of abundant organic matter, allowing significant pedogenetic and diagenetic evolution. These chemical and mineralogical evolutions (dissolution-crystallization) allowed the neo-formation of kaolin and gibbsite, as well as iron sulfide. At their base, the deposits are highly interlaced and channel-shaped, and often fill karstic depressions, leading to the formation of clay wells. The juxtaposition of features sometimes cannot be explained by depositional processes alone, and is probably related to post-sedimentary deformation, possibly linked to collapse of the substratum. In the upper part of the series, the deposits are more regular, with lateral extensions of up to several hundred meters. Overview: These complex geometries, with structures smaller than 20 meters, make the recognition, estimation and exploitation phases particularly difficult. In addition to this complex geometry, there are important variations in lithology. The AGS company uses no fewer than 24 description codes and 8 colour codes for its sample descriptions. These classes are subdivided to take into account the content of organic matter, iron, titanium and potassium, the colour, and the flow behaviour. Geometry of the retaining structures: The uncertainty in estimating the tonnage of mineral resources or ore reserves depends on a number of factors, and the uncertainty in the definition of the deposit boundaries is one of them. In deposits with sharp contacts, the geometry may be relatively simple; nevertheless, there is always uncertainty caused by lack of information and a sparse drill-hole grid. Generally, these boundaries are determined by mineral grade rather than geological properties: deposit boundaries are chosen based on the cut-off grade. By changing the cut-off grade, an important factor, the boundaries of the deposit can be extended or contracted. For this reason, even for deposits with sharp boundaries, a clear definition of the cut-off grade and of the distinction between ore and gangue is essential, given dilution during mining, the presence of intermediate layers and the limits on selective mining.
However, in the case of the exploitation of soft materials, extraction can be done more selectively and it is easier to take the geological and geometrical limits into account. Geometry of the retaining structures: On the other hand, the uncertainty in the estimation of grades is sometimes greater than the uncertainty in the definition of the boundaries. Estimation is then performed inside predefined boundaries. One can consider that the anisotropy and structural complexity of the deposit are due to its geometrical form, while the geometrical dimensions of the deposit give an indication of its economic value. Geometrical features can appear in variographic studies, and they usually affect, or hide, grade distribution structures. The presence of a series of nearly homogeneous kaolin areas, linked together in zones, creates a mosaic effect. This phenomenon is due to the existence of periodic settling regimes of the rivers. The size of these zones can affect the form of the variogram and increase the nugget effect, owing to large differences of values at the edges of the zones. A hole effect is another known phenomenon, caused by the presence of two or more separate lenses with small differences in grade and shape. The distance between these lenses can thus be estimated. Transformation during and after sedimentation: Thiry has noted that the present geological setting of the kaolin deposits cannot be explained by transportation and sedimentation cycles alone. He also stated that the mineralogical sequences cannot be interpreted without local geochemical transformations. Kulbicki has demonstrated the existence of vermicular minerals (kaolinite and dickite) incompatible with normal sedimentary sequences. Transformation during and after sedimentation: Influence of the organic materials Lignite formations are relatively frequent in the Charentes clay deposits. Their thickness ranges from a few decimetres in lenses to a metre scale in continuous beds. These organic materials had some influence on the deposited kaolin layers. Some of the observed influences are as follows: In samples gathered close to these organic materials, the clays generally do not contain mica minerals, and, especially in the neighbourhood of the Cuisian lignite, the kaolinite is very well ordered and the clay does not contain swelling clays (as tested with hydrazine). The occurrence of gibbsite is always associated with these well-ordered kaolinites. The occurrence of hyper-aluminous clays, due to the existence of gibbsite, is one of the interesting subjects in the history of these kaolins, and has caused much discussion about the origin of this mineral. The existence of gibbsite has been mentioned in the studies of Languine and Halm (1951), Caillere and Jourdain (1956), Kulbickie (1956), Dubreuilh et al. (1984) and Delineau (1994). Sandy overburden and intermediate sands: Generally, the kaolin deposits have been covered with coloured sequences of sand. In some quarries, red, green and sometimes black sands can be observed. The black colour might be due to the presence of pyrite and organic materials. Fossil wood (floated branches and trunks of trees) is sometimes found and, together with the coarse pebbles (several millimetres), is evidence of high-energy transport. This type of sand can contribute to the leaching of the underlying kaolin deposits by the mineral and organic acids produced by the pyrite and organic materials. Thiry has found that these kaolins generally contain rather well-ordered kaolinite.
Obviously, the level of crystallization, as well as the structural impurities, can control the technical properties of the kaolinite. Sandy overburden and intermediate sands: The high-energy currents can interrupt the continuity of the settled kaolin layers and make estimation methods less straightforward. Gibbsite: Gibbsite is not stable in the presence of quartz and will be transformed into kaolinite minerals, so the gibbsite formed after deposition and can be called neo-formation gibbsite. Now, the main question is about gibbsite formation in the middle of the kaolin series. Depending on the pH of leaching, dissolution of Al2O3 or SiO2 can occur (podzol or laterite profile). The first theory tries to explain this with podzol profiles: it assumes the leaching of silica from the minerals and, accordingly, the formation of gibbsite from the leached kaolin. We should thus find the hyper-aluminous materials containing gibbsite in the lower part of the kaolin series. On the other hand, a second theory proposes the leaching of aluminium in a very acid medium, in organic materials (lignite) deposited with the clay. The organic materials can accelerate the solubilization and transport of aluminium ions through the intervention of organic complexes. The following scenarios have been proposed for this dissolved aluminium. Gibbsite: Dissolved aluminium can be transported as a complex to a less acidic medium. (1) If there is any quartz in this medium, it can react, and well-ordered kaolinite minerals are obtained. (2) In the absence of quartz, the aluminium will precipitate as a hydroxide mineral: gibbsite. This theory alone cannot explain what is observed in situ in some samples of the "BD" deposit, where gibbsite was found in sandy layers containing quartz.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Richard S. Kayne** Richard S. Kayne: Richard Stanley Kayne is Professor of Linguistics in the Linguistics Department at New York University. Richard S. Kayne: Born in 1944, after receiving an A.B. in mathematics from Columbia College, New York City in 1964, he studied linguistics at the Massachusetts Institute of Technology, receiving his Ph.D. in 1969. He then taught at the University of Paris VIII (1969–1986), MIT (1986–1988) and the City University of New York (1988–1997), becoming Professor at New York University in 1997. He has made prominent contributions to the study of the syntax of English and the Romance languages within the framework of transformational grammar. His theory of Antisymmetry has become part of the canon of the Minimalist syntax literature. Publications: Movement and Silence, Oxford University Press, New York, 2005; (with Thomas Leu & Raffaella Zanuttini) Lasting Insights and Questions: An Annotated Syntax Reader, Wiley/Blackwell, Malden, Mass., 2014; Kayne, Richard S. (1994). The Antisymmetry of Syntax (Linguistic Inquiry Monograph 25). MIT Press.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reality principle** Reality principle: In Freudian psychology and psychoanalysis, the reality principle (German: Realitätsprinzip) is the ability of the mind to assess the reality of the external world, and to act upon it accordingly, as opposed to acting according to the pleasure principle. The reality principle is the governing principle of the actions taken by the ego, after its slow development from a "pleasure-ego" into a "reality-ego". History: Freud argued that “an ego thus educated has become ‘reasonable’; it no longer lets itself be governed by the pleasure principle, but obeys the reality principle, which also, at bottom, seeks to obtain pleasure, but pleasure which is assured through taking account of reality, even though it is pleasure postponed and diminished”.In his introductory lectures of 1915, at the University of Vienna, Freud popularized the concept of the unconscious as the largest and most influential part of the mind, including those drives, instincts and motives humans are often forced to deny except in disguised form. In the 23rd lecture, Freud discussed the conflict between the realm of "Phantasy" and the reality principle, comparing the former to a nature reserve. He argued however that “there is a path that leads back from phantasy to reality - the path, that is, of art”.Jonathan Lear has argued that there was in fact an ethical dimension to Freud's concept of the reality principle, in that it was opposed to a neurotically distorted world-view. Development: In infancy and early childhood, the Id governs behavior predominantly by obeying the pleasure principle. Maturity is the slow process of learning to endure the pain of deferred gratification as and when reality requires it – a process Freud saw as fostered by education and educators. The result is the mature mind's ability to avoid instant gratification in favor of long-term satisfaction. Development: In order to do so, the reality principle does not ignore the id, but strives instead to satisfy its desires in balanced and socially appropriate ways, through awareness of and adjustment to environmental demands. The manner in which it moderates the pleasure principle and assures satisfaction of instinctual needs is by weighing the costs and benefits of an action before deciding to act upon or ignore an impulse. The reality principle forces the mind to consider the risks, requirements and outcomes of various decisions. The ego does not strive to eradicate urges, but instead it temporarily halts the discharge of the id's energy until a more suitable, safe and realistic time and place can be found. This necessary process of delay is accomplished through the so-called secondary process. Development: An example of the reality principle at work is a person who is dieting, but chooses not to give into hunger cravings. He or she knows that satisfying their unhealthy cravings, and thus satisfying the pleasure principle, provides only short-term empty satisfaction that thwarts the objective of the diet.While some of Freud's ideas may be faulty and others not easily testable, he was a peerless observer of the human condition, and enough of what he proposed, particularly concerning the reality principle, manifests itself in daily life. 
Neurotic rebellion and phantasy: Rebellion against the constraints of the reality principle, in favour of a belief in infantile omnipotence, appears as a feature of all neurotic behavior - something perhaps seen most overtly in the actions of gamblers.Psychosis can be seen as the result of the suspension of the reality principle, while sleep and dreaming offer a 'normal' everyday example of its decommissioning.Susan Isaacs argued however that reality thinking in fact depended on the support of unconscious phantasy, rather than being opposed to it. Jacques Lacan similarly maintained that the field of reality required the support of the imaginary world of phantasy for its maintenance. Even the ego psychologists have come to see the perception of reality as taking place through the medium of a greater or lesser veil of infantile fantasy. Consolidation of the reality principle: The reality principle increases its scope in the wake of puberty, expanding the range and maturity of the choices the individual makes. Adolescents are no longer children who must succumb to every need, but must balance what is pleasurable with what is real, even if maintaining this balance happens to be disagreeable. Consolidation of the reality principle: A further change in the reality principle from adolescence to adulthood can be a critical transition in its consolidation; but the impact of certain traumatic experiences may prove to be detrimental from within the unconscious. In the new reality principle, the individual must find themselves to be represented as a strong presence within their own mind and making reasoned decisions, instead of being merely perceived. It is the culmination of the way in which an adolescent learns to experience oneself in the context of their external reality. Vs. pleasure principle: Both the reality principle and pleasure principle pursue personal gratification, but the crucial difference between the two is that the reality principle is more focused on the long-term and is more goal-oriented while the pleasure principle disregards everything except for the immediate fulfillment of its desires. Vs. pleasure principle: The pleasure principle The reality principle and pleasure principle are two competing concepts established by Freud. The pleasure principle is the psychoanalytic concept based on the pleasure drive of the id in which people seek pleasure and avoid suffering in order to satisfy their biological and psychological needs. As people mature, the id's pleasure-seeking is modified by the reality principle. As it succeeds in establishing its dominance as a regulatory principle over the id, the search for satisfaction does not take the most direct routes, but instead postpones attainment of its goal in accordance with conditions imposed by the outside world, or in other words, deferred gratification. These two concepts can be viewed in psychological terms or processes, with the pleasure principle being considered the primary process that is moderated by the secondary process, or the reality principle. From an economic standpoint, the reality principle corresponds to a transformation of free energy into bound energy. Vs. pleasure principle: Impulse control Freud defines impulses as products of two competing forces: the pleasure principle and the reality principle. These two forces clash because impulses encourage action without any premeditated thought or deliberation and little regard to consequences, compromising the role of the reality principle. 
Impulses are often difficult for the mind to overcome because they hold anticipated pleasurable experiences. Freud emphasizes the importance of the development of impulse control because it is socially necessary and human civilization would fail without it. If an individual lacks sufficient impulse control, it represents a defect of repression that may lead to severe psychosocial problems (Kipnis 1971; Reich 1925; Winshie 1977). Vs. pleasure principle: Development of the reality principle The ability to control impulses and delay gratification is one of the hallmarks of a mature personality and the result of a thriving reality principle. Throughout childhood, children learn how to control their urges and behave in ways that are socially appropriate. Researchers have found that children who are better at delaying gratification may have better defined egos, because they tend to be more concerned with things such as social appropriateness and responsibility. Most adults have developed the capacity for the reality principle in their ego. They have learned to override the constant and immediate gratification demands of the id. Vs. pleasure principle: In human development, the transition in dominance from the pleasure principle to the reality principle is one of the most important advances in the development of the ego. The transition is rarely smooth and can lead to interpersonal conflict and ambivalence. If the reality principle fails to develop, a different dynamic takes its place. The super-ego asserts its authority, inflicting guilt on the individual because they do not have the ability to placate both reason and pleasure. The ego becomes trapped in between the “should” of the id and the “should not” of the superego. A person who lives as a slave to their immediate desires and consistently feels regret and guilt afterwards will lead an unhappy and persistently unfulfilled existence. It is not hard to find examples of adults who live this way, such as the alcoholic who drinks then feels guilty for doing so and they go on to perpetuate the vicious cycle. Vs. pleasure principle: Split ego At the failure of the ego to embrace its developing role within the reality principle, it remains under the control of the pleasure principle. This results in a split ego, a condition in which the two principles clash much more severely than when under the temptation of an impulse. The control of the pleasure principle persists as strongly as it does because as the child's self-representation begins to differentiate from the object representation of the mother, they begin to experience depression at the loss of what the mother provides. Yet, at the same time the mother continues to encourage such behavior in the child instead of allowing them to mature. This behavior enforces clinging and denial which promotes the persistence of the pleasure principle in an attempt to avoid the pain of separation or subsequent depression. The pleasure principle denies the reality of separation of mother and child while the reality principle still attempts to pursue it. This path of development creates a break between the growing child's feelings and the reality of his or her behavior as they enter the real world. Vs. pleasure principle: Strengthening the reality principle From a Freudian standpoint, one means of strengthening the reality principle within the ego would be to attain control over the id. 
Through maturity and a better sense of self, individuals can find the strength to gradually develop the reality principle and learn to defer pleasure by making more rational and controlled choices. In a traditional psychoanalytic model, this could take several years of restraint, and even so, many people will make the choice to achieve instant gratification over delayed gratification.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**K2-58** K2-58: K2-58 (also designated EPIC 206026904) is a G-type main-sequence star in the constellation of Aquarius, approximately 596 light-years from the Solar System. The star is metal-rich, having 155% of the Solar abundance of elements heavier than helium. The star is located in a region from which a hypothetical observer in the K2-58 system could see Venus transiting the Sun. Planetary system: The planetary system has three confirmed exoplanets (named K2-58 b, K2-58 c and K2-58 d), discovered in 2016.
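As a rough illustrative conversion (not stated in the source, and assuming the quoted enrichment is expressed on the usual logarithmic metallicity scale), 155% of the Solar abundance corresponds to a metallicity of roughly:

```latex
% Illustrative conversion; assumes the standard [Fe/H] definition
[\mathrm{Fe}/\mathrm{H}] \approx \log_{10}(1.55) \approx 0.19\ \mathrm{dex}
```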
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metameme** Metameme: In the field of memetics, a metameme (or meta-meme) is defined as a meme about a meme. A metaphor or the idea of memetic engineering are, thus, metamemes. The concept of memes has been referred to as "The Metameme". Some other metamemes of interest include the meme tolerance and memeplexes. Initial definitions of meta memes and the lingo surrounding the phenomenon have seen a recent overhaul, as a new perspective is beginning to emerge due to heightened interest from researchers and companies alike. Measuring social evolution: Metamemes may be used to measure the evolution of a given society. It has been proposed that the degree of consciousness a society has about the very memes that form it is correlated with how evolved that society is. The difficulties associated with measuring the "metamemetic content" of a given society, however, render that proposition impractical. This can be viewed (to some extent) as a memetic approach to the American sociologist Gerhard Lenski's view that the more information a given society has, the more advanced it is.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cascade refrigeration** Cascade refrigeration: A cascade refrigeration cycle is a multi-stage thermodynamic cycle; a two-stage process is a typical example. The cascade cycle is often employed for devices such as ULT freezers. In a cascade refrigeration system, two or more vapor-compression cycles with different refrigerants are used. The evaporation-condensation temperatures of each cycle are sequentially lower, with some overlap, to cover the total temperature drop desired, with refrigerants selected to work efficiently in the temperature range they cover. The low-temperature system removes heat from the space to be cooled using an evaporator, and transfers it to a heat exchanger that is cooled by the evaporation of the refrigerant of the high-temperature system. Alternatively, a liquid-to-liquid or similar heat exchanger may be used instead. The high-temperature system transfers heat to a conventional condenser that carries the entire heat output of the system and may be passively, fan-, or water-cooled. Cascade refrigeration: Cascade cycles may be separated either by being sealed in separate loops, or in what is referred to as an "auto-cascade", where the gases are compressed as a mixture but separated as one refrigerant condenses into a liquid while the other continues as a gas through the rest of the cycle. Although an auto-cascade introduces several constraints on the design and operating conditions of the system that may reduce the efficiency, it is often used in small systems because it requires only a single compressor, or in cryogenic systems because it reduces the need for high-efficiency heat exchangers to prevent the compressors leaking heat into the cryogenic cycles. Both types can be used in the same system, generally with the separate cycles as the first stage(s) and the auto-cascade as the last stage. Cascade refrigeration: Peltier coolers may also be cascaded into a multi-stage system to achieve lower temperatures. Here the hot side of the first Peltier cooler is cooled by the cold side of the second Peltier cooler, which is larger in size, whose hot side is in turn cooled by the cold side of an even larger Peltier cooler, and so on. Efficiency drops very rapidly as more stages are added, but for very small heat loads down to near-cryogenic temperatures this can often be an effective solution due to being compact and low cost, such as in mid-range thermographic cameras. A two-stage Peltier cooler can achieve around -30°C, -75°C with three stages, -85°C with four stages, -100°C with six stages, and -123°C with seven stages. Refrigeration power and efficiency are low, but Peltier coolers can be small, and for small cooling loads this results in overall low power consumption for a Peltier cooler with three stages. For a Peltier cooler with seven stages, power consumption can be 65 W with a cooling capacity of 80 mW.
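For a sense of scale, the coefficient of performance implied by the seven-stage figures quoted above can be worked out directly; this is an illustrative back-of-the-envelope calculation, not a value given in the source:

```latex
% COP of the quoted seven-stage Peltier cascade (illustrative)
\mathrm{COP} = \frac{\dot{Q}_{\mathrm{cold}}}{P_{\mathrm{in}}}
             \approx \frac{0.080\ \mathrm{W}}{65\ \mathrm{W}}
             \approx 1.2 \times 10^{-3}
```

This is orders of magnitude below the COP of a typical single-stage vapor-compression cycle, consistent with the low efficiency noted above.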
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OpenEMR** OpenEMR: OpenEMR is a medical practice management software which also supports Electronic Medical Records (EMR). It is ONC Complete Ambulatory EHR certified and features fully integrated electronic medical records, practice management for a medical practice, scheduling, and electronic billing. The server side is written in PHP and can be employed in conjunction with a LAMP "stack", though any operating system with PHP support is supported. OpenEMR is free and open-source software subject to the terms of the GNU General Public License (GPL). It is actively internationalized and localized in multiple languages, and free support is available in online forums around the world. At the time of this writing, commercial support is offered by over 30 vendors in over 10 countries. Features: ONC Complete Ambulatory EHR Certified Patient Demographics Patient Scheduling Electronic Medical Records Prescriptions ePrescribing -requires OpenEMR specific integration by a third party such as what is provided by: WENO Exchange, NewCrop, and Allscripts EPCS (ePrescribe controlled substances) - requires OpenEMR specific integration provided by a third party such as what is provided by WENO Exchange, NewCrop, and Allscripts Medical Billing Clinical Decision Rules Patient Portal Reports Advantages and benefits of free and open-source software Security Multilanguage Support Adoption: In the US, it has been estimated that there are more than 5,000 installations of OpenEMR in physician offices and other small healthcare facilities serving more than 30 million patients. Internationally, it has been estimated that OpenEMR is installed in over 15,000 healthcare facilities, translating into more than 45,000 practitioners using the system which are serving greater than 90 million patients. The Peace Corps plan to incorporate OpenEMR into their EHR system. Siaya District Hospital, a 220-bed hospital in rural Kenya, is using OpenEMR. HP India is planning to utilize OpenEMR for their Mobile Health Centre Project. There are also articles describing single clinician deployments and a free clinic deployment. Internationally, it is known that there are practitioners in Pakistan, Puerto Rico, Australia, Sweden, the Netherlands, Israel, India, Malaysia, Nepal, Indonesia, Bermuda, Armenia, Kenya, and Greece that are either testing or actively using OpenEMR for use as a free electronic medical records program in the respective languages. Awards: OpenEMR has received a Bossie Award in the "Best Open Source Applications" category in both 2012 and 2013. Development: The official OpenEMR code repository was migrated from CVS to git on 20 October 2010. The project's main code repository is on GitHub. There are also official mirrored code repositories on SourceForge, Google Code, Gitorious, Bitbucket, Assembla, CodePlex and Repo.or.cz. OEMR OEMR is a nonprofit entity that was organized in July, 2010 to support the OpenEMR project. OEMR is the entity that holds the ONC EHR Certifications with ICSA and InfoGard Labs. 
Development: Certification OpenEMR versions 4.1.0 (released on 9/23/2011), 4.1.1 (released on 8/31/2012) and 4.1.2 (released on 8/17/2013) have 2011 ONC Complete Ambulatory EHR Certification by ICSA Labs. OpenEMR versions 4.2.0 (released 12/28/2014), 4.2.1 (released 3/25/2016) and 4.2.2 (released on 5/19/2016) have 2014 ONC Modular Ambulatory EHR Certification by InfoGard Laboratories. OpenEMR versions 5.0.0 (released 2/15/2017), 5.0.1 (released 4/23/2018) and 5.0.2 (released 8/4/2019) have 2014 ONC Complete Ambulatory EHR Certification by InfoGard Laboratories. The OEMR organization is a non-profit entity that manages/provides the ONC certifications. History: OpenEMR was originally developed by Synitech and version 1.0 was released in June 2001 as MP Pro (MedicalPractice Professional). Much of the code was then reworked to comply with the Health Insurance Portability and Accountability Act (HIPAA) and to improve security, and the product was reintroduced as OpenEMR version 1.3 a year later, in July 2002. On 13 August 2002 OpenEMR was released to the public under the GNU General Public License (GPL), i.e. it became a free and open-source project and was registered on SourceForge. The project evolved through version 2.0, and the Pennington Firm (Pennfirm) took over as its primary maintainer in 2003. Walt Pennington transferred the OpenEMR software repository to SourceForge in March 2005. Mr. Pennington also established Rod Roark, Andres Paglayan and James Perry, Jr. as administrators of the project. Walt Pennington, Andres Paglayan and James Perry eventually took other directions and were replaced by Brady Miller in August 2009. Robert Down became an administrator of the project in March 2017. Matthew Vita was an administrator of the project from July 2017 until February 2020. Jerry Padgett became an administrator of the project in June 2019. Stephen Waite became an administrator of the project in February 2020. Stephen Nielson became an administrator of the project in January 2022. At this time, Rod Roark, Brady Miller, Robert Down, Jerry Padgett, Stephen Waite, and Stephen Nielson are the project's co-administrators. In 2018, Project Insecurity found almost 30 security flaws in the system, all of which were responsibly addressed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fus3** Fus3: Fus3 is a MAPK protein involved in the mating decision of yeast. The dissociation of Fus3 from the scaffold protein Ste5 results in the switch-like mating decision observed in yeast. During this process, Fus3 competes with a phosphatase, Ptc1, attempting to phosphorylate 4 key phosphorylation sites on Ste5. When all 4 sites on Ste5 have been dephosphorylated by Ptc1, Fus3 dissociates from Ste5 and translocates to the nucleus. One regulator of Fus3 is Ste5. Ste5 causes autophosphorylation of one of two locations modulated by the MAPK kinase Ste7 (the main activator of Fus3). This single phosphorylation causes Fus3 to phosphorylate Ste5, leading to a decrease in signal. However, Ste5 also selectively catalytically unlocks Fus3 for phosphorylation by Ste7. Both the catalytic domain on Ste5 and Ste7 must be present in order to activate Fus3, which helps to explain why Fus3 is only activated during the mating pathway and remains inactive in other situations that use Ste7. When binding to Ste5 is disrupted, Fus3 behaves like its homologue Kss1, and the cells no longer respond to a gradient or mate efficiently with distant partners. Fus3 is activated by Ste7, and its substrates include Ste12, Far1, Bni1, Sst2, Tec1, and Ste5. It can be localized to the cytoplasm, the mating projection tip, the nucleus, and the mitochondrion. Fus3 also serves to phosphorylate the repressor proteins Rst1 and Rst2 (Dig1 and Dig2, respectively), which results in promotion of Ste12-dependent transcription of mating-specific genes. It also activates Far1, which goes on to inhibit the cyclins CLN1 and CLN2, leading to cell-cycle arrest. Kss1: Kss1, a functional homologue of Fus3, is not involved in the production of a shmoo. Kss1 mutants behave similarly to wild-type yeast cells with respect to their ability to shmoo. It has been shown that while Fus3 and Kss1 are functionally redundant, the substrates of the Fus3 protein may or may not be shared with Kss1. Instead, it has been found that the function of Fus3 is to regulate mating, while the function of Kss1 is to regulate filamentation and invasion. In the absence of Fus3, there can be errors in pathway communication which can result in Kss1 being activated by the mating pheromone. Kss1 does not exhibit the ultrasensitivity that Fus3 does, but instead is activated rapidly and has a graded dose-response profile. Biological Processes: Fus3 is involved in the following biological processes: cell cycle arrest; invasive growth in response to glucose limitation; negative regulation of the MAPK cascade; negative regulation of transposition; pheromone-dependent signal transduction involved in conjugation with cellular fusion; positive regulation of protein export from the nucleus; and protein phosphorylation. The closest human homolog of Fus3 is MAPK1 (ERK2).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Overdetermined system** Overdetermined system: In mathematics, a system of equations is considered overdetermined if there are more equations than unknowns. An overdetermined system is almost always inconsistent (it has no solution) when constructed with random coefficients. However, an overdetermined system will have solutions in some cases, for example if some equation occurs several times in the system, or if some equations are linear combinations of the others. Overdetermined system: The terminology can be described in terms of the concept of constraint counting. Each unknown can be seen as an available degree of freedom. Each equation introduced into the system can be viewed as a constraint that restricts one degree of freedom. Therefore, the critical case occurs when the number of equations and the number of free variables are equal. For every variable giving a degree of freedom, there exists a corresponding constraint. The overdetermined case occurs when the system has been overconstrained — that is, when the equations outnumber the unknowns. In contrast, the underdetermined case occurs when the system has been underconstrained — that is, when the number of equations is fewer than the number of unknowns. Such systems usually have an infinite number of solutions. Overdetermined linear systems of equations: An example in two dimensions Consider the system of 3 equations and 2 unknowns (X and Y), which is overdetermined because 3 > 2, and which corresponds to Diagram #1: There is one solution for each pair of linear equations: for the first and second equations (0.2, −1.4), for the first and third (−2/3, 1/3), and for the second and third (1.5, 2.5). However, there is no solution that satisfies all three simultaneously. Diagrams #2 and 3 show other configurations that are inconsistent because no point is on all of the lines. Systems of this variety are deemed inconsistent. Overdetermined linear systems of equations: The only cases where the overdetermined system does in fact have a solution are demonstrated in Diagrams #4, 5, and 6. These exceptions can occur only when the overdetermined system contains enough linearly dependent equations that the number of independent equations does not exceed the number of unknowns. Linear dependence means that some equations can be obtained from linearly combining other equations. For example, Y = X + 1 and 2Y = 2X + 2 are linearly dependent equations because the second one can be obtained by taking twice the first one. Overdetermined linear systems of equations: Matrix form Any system of linear equations can be written as a matrix equation. Overdetermined linear systems of equations: The previous system of equations (in Diagram #1) can be written as follows: Notice that the rows of the coefficient matrix (corresponding to equations) outnumber the columns (corresponding to unknowns), meaning that the system is overdetermined. The rank of this matrix is 2, which corresponds to the number of dependent variables in the system. A linear system is consistent if and only if the coefficient matrix has the same rank as its augmented matrix (the coefficient matrix with an extra column added, that column being the column vector of constants). The augmented matrix has rank 3, so the system is inconsistent. The nullity is 0, which means that the null space contains only the zero vector and thus has no basis. 
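The rank comparison described above is easy to carry out numerically. The following minimal NumPy sketch uses made-up coefficients (not the equations of Diagram #1, which are not reproduced here) purely to illustrate the consistency test:

```python
import numpy as np

# Hypothetical overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])          # coefficient matrix (3 rows, 2 columns)
b = np.array([1.0, 0.0, 3.0])        # right-hand side

rank_A  = np.linalg.matrix_rank(A)                        # rank of coefficient matrix
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))  # rank of augmented matrix

if rank_A < rank_Ab:
    print("inconsistent: no exact solution")              # this branch fires for the data above
elif rank_A == A.shape[1]:
    print("consistent with a unique solution")
else:
    print("consistent with infinitely many solutions")
```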
Overdetermined linear systems of equations: In linear algebra the concepts of row space, column space and null space are important for determining the properties of matrices. The informal discussion of constraints and degrees of freedom above relates directly to these more formal concepts. Homogeneous case The homogeneous case (in which all constant terms are zero) is always consistent (because there is a trivial, all-zero solution). There are two cases, depending on the number of linearly dependent equations: either there is just the trivial solution, or there is the trivial solution plus an infinite set of other solutions. Overdetermined linear systems of equations: Consider the system of linear equations: Li = 0 for 1 ≤ i ≤ M, and variables X1, X2, ..., XN, where each Li is a weighted sum of the Xis. Then X1 = X2 = ⋯ = XN = 0 is always a solution. When M < N the system is underdetermined and there are always an infinitude of further solutions. In fact the dimension of the space of solutions is always at least N − M. Overdetermined linear systems of equations: For M ≥ N, there may be no solution other than all values being 0. There will be an infinitude of other solutions only when the system of equations has enough dependencies (linearly dependent equations) that the number of independent equations is at most N − 1. But with M ≥ N the number of independent equations could be as high as N, in which case the trivial solution is the only one. Overdetermined linear systems of equations: Non-homogeneous case In systems of linear equations, Li=ci for 1 ≤ i ≤ M, in variables X1, X2, ..., XN the equations are sometimes linearly dependent; in fact the number of linearly independent equations cannot exceed N+1. We have the following possible cases for an overdetermined system with N unknowns and M equations (M>N). M = N+1 and all M equations are linearly independent. This case yields no solution. Example: x = 1, x = 2. M > N but only K equations (K < M and K ≤ N+1) are linearly independent. There exist three possible sub-cases of this: K = N+1. This case yields no solutions. Example: 2x = 2, x = 1, x = 2. Overdetermined linear systems of equations: K = N. This case yields either a single solution or no solution, the latter occurring when the coefficient vector of one equation can be replicated by a weighted sum of the coefficient vectors of the other equations but that weighted sum applied to the constant terms of the other equations does not replicate the one equation's constant term. Example with one solution: 2x = 2, x = 1. Example with no solution: 2x + 2y = 2, x + y = 1, x + y = 3. Overdetermined linear systems of equations: K < N. This case yields either infinitely many solutions or no solution, the latter occurring as in the previous sub-case. Example with infinitely many solutions: 3x + 3y = 3, 2x + 2y = 2, x + y = 1. Example with no solution: 3x + 3y + 3z = 3, 2x + 2y + 2z = 2, x + y + z = 1, x + y + z = 4.These results may be easier to understand by putting the augmented matrix of the coefficients of the system in row echelon form by using Gaussian elimination. This row echelon form is the augmented matrix of a system of equations that is equivalent to the given system (it has exactly the same solutions). The number of independent equations in the original system is the number of non-zero rows in the echelon form. 
The system is inconsistent (no solution) if and only if the last non-zero row in echelon form has only one non-zero entry that is in the last column (giving an equation 0 = c where c is a non-zero constant). Otherwise, there is exactly one solution when the number of non-zero rows in echelon form is equal to the number of unknowns, and there are infinitely many solutions when the number of non-zero rows is lower than the number of variables. Overdetermined linear systems of equations: Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there are an infinitude of solutions. Overdetermined linear systems of equations: Exact solutions All exact solutions can be obtained, or it can be shown that none exist, using matrix algebra. See System of linear equations#Matrix solution. Overdetermined linear systems of equations: Approximate solutions The method of ordinary least squares can be used to find an approximate solution to overdetermined systems. For the system Ax = b, the least squares formula is obtained from the minimization problem min over x of ‖Ax − b‖^2, the solution of which can be written with the normal equations x̂ = (A^T A)^(−1) A^T b, where ^T indicates a matrix transpose, provided (A^T A)^(−1) exists (that is, provided A has full column rank). With this formula an approximate solution is found when no exact solution exists, and it gives an exact solution when one does exist. However, to achieve good numerical accuracy, using the QR factorization of A to solve the least squares problem is preferred. Overdetermined nonlinear systems of equations: In finite-dimensional spaces, a system of equations can be written in the form f1(x1, …, xn) = 0, …, fm(x1, …, xn) = 0, or more compactly as f(x) = 0 with f = (f1, …, fm) and 0 = (0, …, 0), where x = (x1, …, xn) is a point in R^n or C^n and f1, …, fm are real or complex functions. The system is overdetermined if m > n. In contrast, the system is underdetermined if m < n. As an effective method for solving overdetermined systems, the Gauss–Newton iteration locally quadratically converges to solutions at which the Jacobian matrices of f(x) are injective. In general use: The concept can also be applied to more general systems of equations, such as systems of polynomial equations or partial differential equations. In the case of the systems of polynomial equations, it may happen that an overdetermined system has a solution, but that no one equation is a consequence of the others and that, when removing any equation, the new system has more solutions. For example, (x − 1)(x − 2) = 0, (x − 1)(x − 3) = 0 has the single solution x = 1, but each equation by itself has two solutions.
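A minimal sketch of the least-squares route in NumPy, again with illustrative data rather than anything taken from the article: it compares the explicit normal-equations formula with numpy.linalg.lstsq, whose SVD-based solver plays the same numerically robust role as the QR factorization mentioned above.

```python
import numpy as np

# Illustrative inconsistent overdetermined system (3 equations, 2 unknowns).
A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
b = np.array([1.0, 0.0, 3.0])

# Normal equations: x_hat = (A^T A)^(-1) A^T b  (valid when A has full column rank).
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Numerically preferred route via a dedicated least-squares solver.
x_lstsq, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)

print(x_normal)   # approximate solution minimizing ||Ax - b||^2
print(x_lstsq)    # agrees with x_normal up to rounding error
```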
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Visual communication** Visual communication: Visual communication is the use of visual elements to convey ideas and information which include (but are not limited to) signs, typography, drawing, graphic design, illustration, industrial design, advertising, animation, and electronic resources. Humans have used visual communication since prehistoric times. Within modern culture, there are several types of characteristics when it comes to visual elements; they consist of objects, models, graphs, diagrams, maps, and photographs. Beyond the different types of characteristics and elements, there are seven components of visual communication: color, shape, tone, texture, figure-ground, balance, and hierarchy. Each of these characteristics, elements, and components plays an important role in daily life. Visual communication holds a specific purpose in aspects such as social media, culture, politics, economics, and science. In each of these aspects, visual elements are used in various ways to convey information. Whether it is advertisements, teaching and learning, or speeches and presentations, they all involve visual aids that communicate a message. The most common visual aids are the following: chalkboard or whiteboard, poster board, handouts, video excerpts, projection equipment, and computer-assisted presentations. Overview: The debate about the nature of visual communication dates back thousands of years. Visual communication relies on a collection of activities, communicating ideas, attitudes, and values via visual resources, i.e. text, graphics, or video. The evaluation of a good visual communication design is mainly based on measuring comprehension by the audience, not on personal aesthetic and/or artistic preference, as there are no universally agreed-upon principles of aesthetics. Visual communication by e-mail, a textual medium, is commonly expressed with ASCII art, emoticons, and embedded digital images. Visual communication has become one of the most important ways in which people communicate and share information. The term 'visual presentation' is used to refer to the actual presentation of information through a visible medium such as text or images. Recent research in the field has focused on web design and graphically-oriented usability. Overview: Important figures Aldous Huxley is regarded as one of the most prominent explorers of visual communication and sight-related theories. Becoming near-blind in his teen years as the result of an illness influenced his approach, and his work includes important writing on the dehumanizing aspects of scientific progress, most famously the novel Brave New World, as well as The Art of Seeing. He described "seeing" as being the sum of sensing, selecting, and perceiving. One of his most famous quotes is "The more you see, the more you know." Max Wertheimer is said to be the father of Gestalt psychology. Gestalt means form or shape in German, and the study of Gestalt psychology shows an emphasis on simplicity, as its properties group visuals by similarity in shape or color, continuity, and proximity. Additional laws, such as closure and the figure-ground principle, are also taught intensively in the study of images. Overview: Image analysis Visual communication contains image aspects. The interpretation of images is subjective, and understanding the depth of meaning, or multiple meanings, communicated in an image requires image analysis.
Images can be analyzed through many perspectives, for example the six major perspectives presented by Paul Martin Lester: personal, historical, technical, ethical, cultural, and critical. Overview: Personal perspective: When a viewer has an opinion about an image based on their personal thoughts. Personal response depends on the viewer's thoughts and values, individually. However, this might sometimes conflict with cultural values. Also, when a viewer has viewed an image from a personal perspective, it is hard to change their view of the image, even though the image can be seen in other ways. Overview: Historical perspective: An image's interpretation can arise from the history of the use of media. Over time, images have changed because of the use of different (new) media. For example, the result of using the computer to edit images (e.g. in Photoshop) is quite different from that of images made and edited by hand. Technical perspective: When the view of an image is influenced by the use of lights, position and the presentation of the image. The right use of light, position and presentation of the image can improve the view of the image, making it look better than reality. Ethical perspective: From this perspective, the maker of the image, the viewer and the image itself must be responsible morally and ethically to the image. This perspective is also categorized in six categories: categorical imperative, utilitarianism, hedonism, golden mean, golden rule, and veil of ignorance. Cultural perspective: Symbolization is an important definition for this perspective. The cultural perspective involves the identity of symbols. The use of words that are related to the image, the use of heroes in the image, etc., are the symbolization of the image. The cultural perspective can also be seen as the semiotic perspective. Critical perspective: The view of images in the critical perspective is when viewers criticize the images, but the criticism is made in the interests of society, even though it is made by an individual. In this way, this perspective differs from the personal perspective. Overview: Visual aid media: Simple to advanced Chalkboard or whiteboard: Chalkboards and whiteboards are very useful visual aids, particularly when more advanced types of media are unavailable. They are cheap and also allow for much flexibility. The use of chalkboards or whiteboards is convenient, but they are not a perfect visual aid. Often, using this medium as an aid can create confusion or boredom. In particular, if a student who is not familiar with how to properly use visual aids attempts to draw on a board while speaking, this detracts time and attention from the actual speech. Overview: Poster board: A poster is a very simple and easy visual aid. Posters can display charts, graphs, pictures, or illustrations. The biggest drawback of using a poster as a visual aid is that often a poster can appear unprofessional. Since poster board paper is relatively flimsy, the paper will often bend or fall over. The best way to present a poster is to hang it up or tape it to a wall. Overview: Handouts: Handouts can also display charts, graphs, pictures, or illustrations. An important aspect of the use of a handout is that a person can keep a handout with them long after the presentation is over. This can help the person better remember what was discussed. Passing out handouts, however, can be extremely distracting.
Once a handout is given out, it might be difficult to bring back the audience's attention. The person who receives the handout might be tempted to read what is on the paper, which will keep them from listening to what the speaker is saying. If using a handout, the speaker should distribute it right before referencing it. Distributing handouts is acceptable in a lecture that is an hour or two, but in a short lecture of five to ten minutes, a handout should not be used. Overview: Video excerpts: A video can be a great visual aid and attention grabber; however, a video is not a replacement for an actual speech. There are several potential drawbacks to playing a video during a speech or lecture. First, if a video is playing that includes audio, the speaker will not be able to talk. Also, if the video is very exciting and interesting, it can make what the speaker is saying appear boring and uninteresting. The key to showing a video during a presentation is to make sure to transition smoothly into the video and to only show very short clips. Overview: Projection equipment: There are several types of projectors. These include slide projectors, overhead projectors, and computer projectors. Slide projectors are the oldest form of projector, and are no longer used. Overhead projectors are still used but are somewhat inconvenient to use. In order to use an overhead projector, a transparency must be made of whatever is being projected onto the screen. This takes time and costs money. Computer projectors are the most technologically advanced projectors. When using a computer projector, pictures and slides are easily taken right from a computer, either online or from a saved file, and are blown up and shown on a large screen. Though computer projectors are technologically advanced, they are not always completely reliable, because technological breakdowns are not uncommon with today's computers. Overview: Computer-assisted presentations: Presentations through presentation software can be an extremely useful visual aid, especially for longer presentations. For five- to ten-minute presentations, it is probably not worth the time or effort to put together a deck of slides. For longer presentations, however, they can be a great way to keep the audience engaged and keep the speaker on track. A potential drawback of using them is that it usually takes a lot of time and energy to put them together. There is also the possibility of a computer malfunction, which can mess up the flow of a presentation. Components: Components of visualization make communicating information more intriguing and compelling. The following components are the foundation for communicating visually. Hierarchy is an important principle because it assists the audience in processing the information by allowing them to follow through the visuals piece by piece. A focal point on a visual aid (e.g. a website, social media post, or poster) can serve as a starting point to guide the audience. In order to achieve hierarchy, we must take into account the other components: color, shape, tone, texture, figure-ground, and balance. Color is the first and most important component when communicating through visuals. Color displays an in-depth connection between emotions and experiences. Additive and subtractive color models help in communicating aesthetically pleasing information visually.
Shape is the next fundamental component; it assists in creating a symbol that builds a connection with the audience. Shapes fall into two categories: organic or biomorphic shapes, and geometric or rectilinear shapes. Organic or biomorphic shapes depict natural forms (including curvy lines), while geometric or rectilinear shapes are man-made (including triangles, rectangles, ovals, and circles). Tone refers to differences in color intensity, that is, lighter or darker. The purpose of achieving a certain tone is to put a spotlight on a graphical presentation and emphasize the information. Similarly, texture can enhance the viewer's perception and creates a more personal feel compared to a corporate feel. Texture refers to the surface of an object, whether 2-D or 3-D, and can amplify a user's content. Figure-ground is the relationship between a figure and the background, in other words, the relationship between shapes, objects, type, etc. and the space they are in. The figure can be seen as the positive space and the ground as the negative space: positive space holds visual dominance, while negative space is the background. In addition to creating a strong contrast in color, texture, and tone, figure-ground can highlight different figures. As for balance, visual communication can be symmetrically or asymmetrically balanced. Symmetrical balance holds a stable composition and is suited to conveying informative visual communication. In asymmetrical balance, the visuals are weighted more to one side; for instance, one color is weighted more heavily than another, whereas in a symmetrical balance all colors are equally weighted.

Prominence and motive: Social media: Social media is one of the most effective ways to communicate. The combination of text and images delivers messages more quickly and simply through social media platforms. A potential drawback is limited access, due to the requirement for internet access and to limits on the number of characters and on image size. Despite this drawback, there has been a shift towards more visual images with the rise of YouTube, Instagram, and Snapchat. Following the rise of these platforms, Facebook and Twitter have followed suit and integrated more visual images into their platforms beyond written posts. Visual images are used in two ways: as additional clarification for spoken or written text, or to create individual meaning (usually incorporating ambiguous meanings). These meanings can assist in creating casual friendships through interactions and can either show or fabricate reality. These major platforms are becoming focused on visual images by growing into multi-modal platforms where users can edit or adjust their pictures or videos on the platform. When analyzing the relationship between visual communication and social media, four themes arise: Emerging genres and practices: The sharing of various visual elements allows for the creation of genres, or new arrangements of socially accepted visual elements (e.g. photographs or GIFs), based on the platforms.
These emerging genres are used for self-expression of identity and to feel a sense of belonging to a particular sub-group of the online community.

Prominence and motive: Identity construction: Similar to genres, users use visuals on social media to express their identities. Visual elements can be changed in meaning over time by the person who shared them, which means that visual elements can be dynamic. This makes visuals uncontrollable, since the person may no longer identify with that specific identity but rather as someone who has evolved.

Prominence and motive: Everyday public/private vernacular practices: This theme presents the difficulty of deciphering what is considered public or private. Users can post in the privacy of their own home; however, their post interacts with users from the online public.

Prominence and motive: Transmedia circulation, appropriation, and control: Transmedia circulation refers to visual elements being circulated through different types of media. Visual elements, such as images, can be taken from one platform, edited, and posted to another platform without acknowledging where they originally came from. This calls the concepts of appropriation and ownership into question, raising the idea that if a user can appropriate another person's work, then that user's own work can be appropriated as well.

Prominence and motive: Culture: Members of different cultures can participate in the exchange of visual imagery based on the idea of universal understandings. The term visual culture allows all cultures to feel equal, making it an inclusive aspect of everyday life. Visual culture in communication is shaped by the values of all cultures, especially regarding the concepts of high and low context. Cultures that are generally more high-context rely heavily on visual elements that carry an implied, implicit meaning, whereas low-context cultures rely on visual elements with a direct meaning and depend more on textual explanations.

Prominence and motive: Politics: Visual communication in politics has become a primary mode of communication, while dialogue and text have become secondary. This may be due to the increased use of television, as viewers become more dependent on visuals. The sound bite has become a popular and perfected art among political figures. Despite it being a favored mode of showcasing a political figure's agenda, research has shown that 25.1% of news coverage displayed image bites: instead of voices, there are images and short videos. Visuals are deemed essential in political communication, and behind these visuals are 10 functions explaining why political figures use them. These functions include: Argument function: Although images do not state any words, this function conveys the idea that images can create an association between objects or ideas. Visuals in politics can make arguments about different aspects of a political figure's character or intentions. When visual imagery is paired with sound, the targeted audience can clarify ambiguous messages that a political figure has made in interviews or news stories.

Prominence and motive: Agenda setting function: Under this function, it is important that political figures produce newsworthy pictures that allow their message to gain coverage.
The reason is agenda-setting theory, in which the importance of the public agenda is taken into consideration when the media determine the importance of a certain story or issue. With that said, if politicians do not provide an interesting and attention-grabbing picture, there will likely be no news coverage. One way for a politician to gain news coverage is to provide exclusivity for what the media can capture at a certain event. Although politicians cannot control whether they receive coverage, they can control whether the media get an interesting and eye-catching visual.

Prominence and motive: Dramatization function: Similar to agenda setting, the dramatization function targets a specific policy that a political figure wants to advocate for. This function can be seen when Michelle Obama promoted nutrition by hosting a media event at which she planted a vegetable garden, or when Martin Luther King Jr. produced visuals from his 1963 campaign against racial injustice. In some cases, these images are used as icons for social movements.

Prominence and motive: Emotional function: Visuals can be used to provoke an emotional response. One study found that motion pictures and video have more of an emotional impact than still images. On the other hand, research has suggested that a viewer's logic and rationality are not blocked by emotion. In fact, logic and emotion are interrelated, meaning that images can not only arouse emotion but also influence viewers to think logically.

Prominence and motive: Image-building function: Imagery gives a viewer a first impression of a candidate running for office. These visuals give voters a sense of who they will be voting for in the elections, regarding their background, personality, or demeanor. Candidates can build their image by appearing to be family-oriented or religiously involved, or by showing a commonality with disadvantaged communities.

Prominence and motive: Identification function: Through the identification function, visuals can create an identification between political figures and audiences. In other words, the audience may perceive a type of similarity with the political figure. When voters find a similarity with a candidate, they are more likely to vote for them; likewise, when voters notice a candidate with whom they perceive no similarities, they are less likely to vote for them.

Prominence and motive: Documentation function: Similar to a stamp in a passport indicating that one has been to a certain country, photographs of a political figure can document that an event happened and that the figure was there. Documenting an event provides evidence and proof for argumentative claims: if a political figure claims one thing, there is evidence to either back it up or disprove it.

Prominence and motive: Societal symbol function: This function is used when political figures draw on the emotional power of iconic symbols. For instance, political figures will stand with American flags, be photographed with military personnel, or attend a sporting event; these three kinds of societal symbols carry a strong sense of patriotism. By comparison, congressional candidates may be pictured with former or current presidents to gain an implied endorsement. Places such as the Statue of Liberty, Mount Rushmore, or the Tomb of the Unknown Soldier can be seen as iconic societal symbols that hold emotional power.
Prominence and motive: Transportation function: The transportation function of visuals is to transport the viewer to a different time or place. Visuals can figuratively bring viewers to the past or to an idealized future. Political figures use this tactic to appeal to the emotional side of their audience and get them to visually relate to the argument at hand.

Prominence and motive: Ambiguity function: Visuals can be open to different interpretations without any words being added. Because no words are added, visuals are often used for controversial arguments, and since visual claims can be controversial, they are held to a less strict standard than other symbols.

Prominence and motive: Economic: Economics has been built on a foundation of visual elements such as graphs and charts. As with the other uses of visual elements, economists use graphs to clarify complex ideas. Graphs simplify the visualization of trends over time and help determine the relationship between two or more variables, for example whether there is a positive or negative correlation between them. A graph that economists rely on heavily is the time-series graph, which measures a particular variable over a period of time, with time on the X-axis and the changing variable on the Y-axis (a minimal plotting sketch appears below, after the Science and medicine paragraph).

Prominence and motive: Science and medicine: Science and medicine have shown a need for visual communication to assist in explaining concepts to non-scientific readers. From Bohr's atomic model to NASA's photographs of Earth, visual elements have served as tools for furthering the understanding of science and medicine. More specifically, elements like graphs and slides portray both data and scientific concepts. Patterns revealed by those graphs are then used together with the data to determine a meaningful correlation. Photographs can be useful for physicians in identifying visible signs of diseases and illnesses. However, using visual elements can also have a negative effect on the understanding of information. Two major obstacles for non-scientific readers are: 1) the lack of integration of visual elements into everyday scientific language, and 2) incorrectly identifying the targeted audience and not adjusting to their level of understanding. To tackle these obstacles, one solution is for science communicators to place the user at the center of the design, an approach called User-Centered Design. This design focuses strictly on the user and on how they can interact with the visual element with minimal stress and maximum efficiency. Another solution could be implemented at the source, in university-based programs: universities need to introduce visual literacy to those in science communication, helping to produce graduates who can accurately interpret, analyze, evaluate, and design visual elements that further the understanding of science and medicine.
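To illustrate the time-series graph described in the Economic paragraph above, here is a minimal sketch in Python using numpy and matplotlib. The figures and variable names (a hypothetical GDP series and an unemployment series) are invented for illustration and do not come from the source; the correlation coefficient at the end shows how its sign indicates a positive or negative relationship between two variables.

```python
# Minimal sketch of a time-series graph and a correlation check.
# All numbers below are invented for illustration only.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2000, 2021)                 # time goes on the X-axis
gdp = 10_000 * 1.02 ** (years - 2000)         # hypothetical rising variable
rng = np.random.default_rng(0)
unemployment = 8.0 - 0.1 * (years - 2000) + rng.normal(0, 0.3, years.size)

# Time-series graph: one variable plotted against time.
plt.plot(years, gdp)
plt.xlabel("Year")
plt.ylabel("GDP (hypothetical units)")
plt.title("Time-series graph (illustrative data)")
plt.show()

# Relationship between two variables: the sign of the correlation
# coefficient indicates a positive or negative correlation.
print(np.corrcoef(gdp, unemployment)[0, 1])   # negative value here -> negative correlation
```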
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Volinanserin** Volinanserin: Volinanserin (INN) (developmental code name MDL-100,907) is a highly selective 5-HT2A receptor antagonist that is frequently used in scientific research to investigate the function of the 5-HT2A receptor. It was also tested in clinical trials as a potential antipsychotic, antidepressant, and treatment for insomnia but was never marketed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded