**Brodmann area 30**
Brodmann area 30, also known as agranular retrolimbic area 30, is a subdivision of the cytoarchitecturally defined retrosplenial region of the cerebral cortex. In the human it is located in the isthmus of cingulate gyrus. Cytoarchitecturally it is bounded internally by the granular retrolimbic area 29, dorsally by the ventral posterior cingulate area 23 and ventrolaterally by the ectorhinal area 36 (Brodmann-1909).
**Attitude change**
Attitudes are associated beliefs and behaviors towards some object. They are not stable, and because of the communication and behavior of other people, are subject to change by social influences, as well as by the individual's motivation to maintain cognitive consistency when cognitive dissonance occurs—when two attitudes or attitude and behavior conflict. Attitudes and attitude objects are functions of affective and cognitive components. It has been suggested that the inter-structural composition of an associative network can be altered by the activation of a single node. Thus, by activating an affective or emotional node, attitude change may be possible, though affective and cognitive components tend to be intertwined.
Bases:
There are three bases for attitude change: compliance, identification, and internalization. These three processes represent the different levels of attitude change.
Compliance:
Compliance refers to a change in behavior based on consequences, such as an individual's hopes to gain rewards or avoid punishment from another group or person. The individual does not necessarily experience changes in beliefs or evaluations of an attitude object, but rather is influenced by the social outcomes of adopting a change in behavior. The individual is also often aware that he or she is being urged to respond in a certain way.
Compliance was demonstrated through a series of laboratory experiments known as the Asch experiments. In experiments led by Solomon Asch of Swarthmore College, groups of students were asked to participate in a "vision test". In reality, all but one of the participants were confederates of the experimenter, and the study was really about how the remaining student would react to the confederates' behavior. Participants were asked to pick, out of three line options, the line with the same length as a sample, and to give the answer out loud. Unbeknownst to the participant, Asch had instructed the confederates to deliberately give the wrong answer before the participant responded. The results showed that 75% of participants conformed at least once, giving the same answers the confederates picked. Variations in the experiments showed that compliance rates increased as the number of confederates increased, reaching a plateau at around 15 confederates. The likelihood of compliance dropped with minority opposition, even if only one confederate gave the correct answer. Compliance rests on the fundamental idea that people want to be accurate and right.
Identification:
Identification explains one's change of beliefs and affect in order to be similar to someone one admires or likes. In this case, the individual adopts the new attitude not due to the specific content of the attitude object, but because it is associated with the desired relationship. Children's attitudes on race, or their political party affiliations, are often adopted from their parents' attitudes and beliefs.
Internalization:
Internalization refers to the change in beliefs and affect when one finds the content of the attitude to be intrinsically rewarding, which leads to actual change in beliefs or evaluations of an attitude object. The new attitude or behavior is consistent with the individual's value system and tends to be merged with the individual's existing values and beliefs. Behaviors adopted through internalization are therefore due to the content of the attitude object. The expectancy-value theory is based on internalization of attitude change. This model states that behavior towards some object is a function of an individual's intent, which is in turn a function of one's overall attitude towards the action.
Emotion-based:
Emotion plays a major role in persuasion, social influence, and attitude change. Much attitude research has emphasised the importance of affective or emotional components. Emotion works hand-in-hand with the cognitive process, or the way we think, about an issue or situation. Emotional appeals are commonly found in advertising, health campaigns and political messages. Recent examples include no-smoking health campaigns (see tobacco advertising) and political campaigns emphasizing the fear of terrorism. There is considerable empirical support for the idea that emotions in the form of fear arousal, empathy, or a positive mood can enhance attitude change under certain conditions.

Important factors that influence the impact of emotional appeals include self-efficacy, attitude accessibility, issue involvement, and message/source features. Attitudes that are central to one's being are highly resistant to change, while those that are less fixed may change with new experiences or information. A new attitude (e.g. to time-keeping, absenteeism or quality) may challenge existing beliefs or norms, creating a feeling of psychological discomfort known as cognitive dissonance. Attitude change is difficult to measure, since attitudes can only be inferred and there may be significant divergence between those publicly declared and those privately held. Self-efficacy is a perception of one's own human agency; in other words, it is the perception of our own ability to deal with a situation. It is an important variable in emotional appeal messages because it dictates a person's ability to deal with both the emotion and the situation. For example, a person who does not feel self-efficacious about their ability to impact the global environment is not likely to change their attitude or behaviour about global warming.
Affective forecasting, otherwise known as intuition or the prediction of emotion, also impacts attitude change. Research suggests that predicting emotions is an important component of decision making, in addition to the cognitive processes. How we feel about an outcome may override purely cognitive rationales.
In terms of research methodology, the challenge for researchers is measuring emotion and subsequent impacts on attitude. Since we cannot see into the brain, various models and measurement tools have been constructed to obtain emotion and attitude information. Measures may include the use of physiological cues like facial expressions, vocal changes, and other body rate measures. For instance, fear is associated with raised eyebrows, increased heart rate and increased body tension. Other methods include concept or network mapping, and using primes or word cues.
Dual models: depth of processing:
Many dual process models are used to explain the affective (emotional) and cognitive processing and interpretations of messages, as well as the different depths of attitude change. These include the heuristic-systematic model of information processing and the elaboration likelihood model.
Heuristic-systematic model of information processing:
The heuristic-systematic model of information processing describes two depths in the processing of attitude change: systematic processing and heuristic processing. In this model, information is either processed in a high-involvement, high-effort systematic way, or through shortcuts known as heuristics. For example, emotions are affect-based heuristics, in which feelings and gut reactions are often used as shortcuts.
Systematic processing:
Systematic processing occurs when individuals are motivated and have the cognitive ability to process a message. Individuals using systematic processing are motivated to pay attention and have the cognitive ability to think deeply about a message; they are persuaded by the content of the message, such as the strength or logic of the argument. Motivation can be determined by many factors, such as how personally relevant the topic is, and cognitive ability can be determined by how knowledgeable an individual is about the message topic, or by whether there is a distraction in the room. Individuals who receive a message through systematic processing usually internalize it, resulting in longer-lasting and more stable attitude change.
According to the heuristic-systematic model of information processing, people are motivated to use systematic processing when they want to achieve a "desired level of confidence" in their judgments. Certain factors have been found to increase the use of systematic processing; these are associated with either decreasing an individual's actual confidence or increasing an individual's perceived confidence, and include framing persuasive messages in an unexpected manner and the self-relevance of the message.
Systematic processing has been shown to be beneficial in social influence settings, producing more valid solutions and greater solution accuracy during group discussions. Shestowsky's (1998) research on dyad discussions revealed that the member of the dyad with high motivation and a high need for cognition had the greater impact on group decisions.
Heuristic processing:
Heuristic processing occurs when individuals have low motivation and/or low cognitive ability to process a message. Instead of focusing on the argument of the message, recipients using heuristic processing focus on more readily accessible information and other unrelated cues, such as the authority or attractiveness of the speaker. Individuals who process a message heuristically do not internalize it, and thus any attitude change resulting from the persuasive message is temporary and unstable.
For example, people are more likely to grant favors if reasons are provided. In one study, when people said, "Excuse me, I have five pages to xerox. May I use the copier?", they received a positive response 60% of the time. The statement "Excuse me, I have five pages to xerox. I am in a rush. May I use the copier?" produced a 95% success rate. Examples of heuristics include social proof, reciprocity, authority, and liking.
Social proof is the means by which we use other people's behaviors to form our own beliefs. Our tendency to follow the majority increases when a situation appears uncertain or ambiguous, when the source is an expert, or when the source is similar to us. The power of crowds was demonstrated in a classic study by Milgram and colleagues in the middle of New York City, in which confederates stopped and looked up: as the number of planted onlookers increased, the percentage of passers-by who also looked up increased as well.
Reciprocity is returning a favor. People are more likely to return a favor when they have a positive attitude towards the other party, and reciprocity also builds interdependence and societal bonds.
Authority plays a role in attitude change in situations involving superior-inferior relationships. We are more likely to obey authorities when the authority's expertise is perceived as high and when we anticipate receiving rewards. A famous demonstration is the Milgram experiment, in which subjects instructed by an authority figure went on "shocking their partner" even though they would not have done so on their own.
Research on liking shows that if one likes another party, one is more inclined to carry out a favor for them. Attitude change depends in part on whether an individual likes an idea or person; someone who does not like the other party may decline the favor, or perform it only out of obligation. Liking can influence one's opinions through factors such as physical attractiveness, similarity, compliments, contact and cooperation.
Elaboration likelihood model:
The elaboration likelihood model is similar in concept to, and shares many ideas with, other dual processing models such as the heuristic-systematic model of information processing. In the elaboration likelihood model, cognitive processing is the central route and affective/emotional processing is often associated with the peripheral route. The central route involves elaborate cognitive processing of information, while the peripheral route relies on cues or feelings. The ELM suggests that true attitude change happens only through the central processing route, which incorporates both cognitive and affective components, as opposed to the more heuristics-based peripheral route. This suggests that motivation through emotion alone will not result in attitude change.
Cognitive dissonance theory:
Cognitive dissonance, a theory originally developed by Festinger (1957), is the idea that people experience a sense of guilt or uneasiness when two linked cognitions are inconsistent, such as when there are two conflicting attitudes about a topic, or an inconsistency between one's attitude and behavior on a certain topic. The basic idea of cognitive dissonance theory as it relates to attitude change is that people are motivated to reduce dissonance, which they can achieve by changing their attitudes and beliefs. Cooper and Fazio (1984) added that cognitive dissonance does not arise from any simple cognitive inconsistency, but rather results from freely chosen behavior that may bring about negative consequences. These negative consequences may be threats to the consistency, stability, predictability, competence, or moral goodness of the self-concept, or a violation of general self-integrity.

Research has suggested multiple routes by which cognitive dissonance can be reduced. Self-affirmation has been shown to reduce dissonance, but it is not always the mode of choice. When multiple routes are available, people prefer to reduce dissonance by directly altering their attitudes and behaviors rather than through self-affirmation. People with high levels of self-esteem, who are postulated to possess the ability to reduce dissonance by focusing on positive aspects of the self, have also been found to prefer modifying cognitions, such as attitudes and beliefs, over self-affirmation.

A simple example of cognitive dissonance resulting in attitude change: when a heavy smoker learns that his sister died young from lung cancer caused by heavy smoking, he experiences conflicting cognitions, namely the desire to smoke on the one hand, and the knowledge that smoking could lead to death together with a desire not to die on the other. To reduce the dissonance, this smoker could change his behavior (stop smoking), change his attitude about smoking (accept that smoking is harmful), or retain his original attitude about smoking and modify the new cognition to be consistent with the first one ("I also work out, so smoking won't be harmful to me"). Attitude change is thus achieved when individuals experience feelings of uneasiness or guilt due to cognitive dissonance and actively reduce the dissonance by changing their attitudes, beliefs, or behavior to restore consistency.
Sorts of studies:
Carl Hovland and his team of persuasion researchers learned a great deal during World War II, and later at Yale, about the process of attitude change.
High-credibility sources lead to more attitude change immediately following the communication, but a sleeper effect occurs: after a period of time the source is forgotten and its credibility advantage fades.
Mild fear appeals lead to more attitude change than strong fear appeals. Propagandists had often used fear appeals, but Hovland's evidence about the effect of such appeals suggested that a source should be cautious in using them, because strong fear messages may interfere with the intended persuasion attempt.
Belief rationalization:
The process of how people change their own attitudes has been studied for years, and belief rationalization has been recognized as an important aspect of this process. The stability of people's past attitudes can be influenced if they hold beliefs that are inconsistent with their own behaviors. The influence of past behavior on current attitudes is stable when little information conflicts with the behavior. Alternatively, people's attitudes may lean more radically toward the prior behavior if the conflict is difficult to ignore and forces them to rationalize their past behavior.

Attitudes are often restructured at the time people are asked to report them. As a result, inconsistencies between the information that enters into the reconstruction and the original attitudes can produce changes in prior attitudes, whereas consistency between these elements often leaves prior attitudes stable. Individuals need to resolve the conflict between their own behaviors and their subsequent beliefs; however, people usually align themselves with their attitudes and beliefs rather than with their behaviors. More importantly, this process of resolving cognitive conflicts cuts across both self-perception and dissonance, even when the associated effect may only be strong in changing prior attitudes.

Comparative processing:
Human judgment is comparative in nature. Departing from identifying people's need to justify their own beliefs in the context of their own behaviors, psychologists also believe that people need to carefully evaluate new messages on the basis of whether these messages support or contradict prior messages, regardless of whether they can recall the prior messages after they reach a conclusion. This comparative processing mechanism is built on information-integration theory and social judgment theory. Both theories have served to model attitude change in the judging of new information, but neither adequately explains the factors that motivate people to integrate the information.
More recent work in the area of persuasion has further explored this comparative processing, focusing on comparisons between different sets of information on a single issue or object rather than comparisons among different issues or objects. As previous research demonstrated, analyzing information on one target product may trigger less impact from comparative information than comparing that product with the same product under competing brands.

When people compare different sets of information on a single issue or object, the effect of their effort to compare new information with prior information appears to depend on the perceived strength of the new information when considered jointly with the initial information; the effects of comparison on judgment change are mediated by changes in that perceived strength. Comparison processes are enhanced when prior evaluations, associated information, or both are accessible. When people cannot retrieve the information from prior messages, they simply construct a current judgment based on the new information or adjust the prior judgment. These findings have a wide range of applications in social marketing, political communication, and health promotion. For example, an advertisement designed to counteract an existing attitude towards a behavior or policy is perhaps most effective if it uses the same format, characters, or music as the ads associated with the initial attitudes.
**Phozon**
Phozon is an arcade game released by Namco in 1983, exclusively in Japan. It is based on the science of chemistry, and it was the company's first game to be confined to Japan since Kaitei Takara Sagashi in 1980.
Gameplay:
The player controls the Chemic, a small black atom with red spikes, which must adhere itself to passing Moleks (which come in four colors: cyan, green, pink and yellow) in order to duplicate the pattern shown in the center of the screen; if a Molek adheres to the Chemic incorrectly, the player can press the button to disconnect the most recently connected Molek. A stage is completed by correctly replicating the Molek formation shown at the center of the screen. The yellow counter at the bottom of the screen indicates how many Moleks remain, decreasing as more Moleks appear on screen. If the bar empties before the player has replicated the formation, the round restarts from the beginning.
The sole enemy in the game is the Atomic, a malevolent clump of balls that moves randomly around the screen and kills the Chemic on contact, costing a life. The Chemic can counter-attack by adhering itself to a Power Molek (slightly larger than the regular Moleks, and first appearing in the game's second world). Once the Chemic has adhered to one, the adhered Moleks spin around rapidly; their speed decreases to signal that the Power Molek's time limit is running out.
The Atomic occasionally initiates attacks to destroy the Chemic, which include splitting up and reforming in order to cover more ground, and shooting Alpha-Rays and Beta-Rays which destroy some of the Chemic's connected Moleks. There are a total of eighteen unique patterns to duplicate in the game, and every fourth stage is a "challenging stage" in which the Chemic can fire yellow Moleks in four directions at the Atomic.
Reception:
In Japan, Game Machine listed Phozon in their December 15, 1983 issue as the second most-popular arcade game at the time. In North America, the game was demonstrated at the Amusement & Music Operators Association (AMOA) show in October 1983, but was not licensed for release in the region. Gene Lewin of Play Meter magazine gave it a favorable review, calling it "a very colorful and challenging game with a different twist" based on chemistry.
Legacy:
Phozon was re-released as part of Namco Museum Volume 3 for the Sony PlayStation, along with Dig Dug, Ms. Pac-Man, Pole Position II and other Namco games. Another port was released for iOS and Android mobile devices as part of the Namco Arcade application. Ports of the game for the Nintendo Switch and PlayStation 4 were released by Hamster as part of the Arcade Archives series on November 25, 2021.
**Polyhedral symbol**
The polyhedral symbol is sometimes used in coordination chemistry to indicate the approximate geometry of the coordinating atoms around the central atom. One or more italicised letters indicating the geometry are followed by a number giving the coordination number of the central atom, e.g. TP-3. The polyhedral symbol can be used in the naming of compounds, in which case it is followed by the configuration index.
Configuration index:
The first step in determining the configuration index is to assign a priority number to each coordinating ligand according to the Cahn-Ingold-Prelog (CIP) priority rules. The preferred ligand takes the lowest priority number. For example, for the ligands acetonitrile, chloride ion and pyridine, the priority numbers assigned are: chloride, 1; acetonitrile, 2; pyridine, 3. Each coordination type has a different procedure for specifying the configuration index; these are outlined below.
T-shaped (TS-3):
The configuration index is a single digit, defined as the priority number of the ligand on the stem of the "T".
Seesaw (SS-4):
The configuration index has two digits: the priority numbers of the ligands separated by the largest angle, with the lower priority number of the pair quoted first.
Square planar (SP-4):
The configuration index is a single digit: the priority number of the ligand trans to the highest-priority ligand. (If there are two possibilities, the principle of trans difference is applied.) As an example, consider an (acetonitrile)dichlorido(pyridine)platinum(II) complex, in which the Cl ligands may be trans or cis to one another.
Applying the CIP rules, the ligand priority numbers are: the two chlorides, priority 1; acetonitrile, priority 2; pyridine, priority 3. In the trans case the configuration index is 1, giving the name (SP-4-1)-(acetonitrile)dichlorido(pyridine)platinum(II).
In the cis case, both of the organic ligands are trans to a chloride, so the trans difference is considered to choose between them: the ligands trans to the priority-1 chlorides have priorities 2 and 3, and the greater, 3, is taken. The name is therefore (SP-4-3)-(acetonitrile)dichlorido(pyridine)platinum(II).
Octahedral (OC-6):
The configuration index has two digits. The first digit is the priority number of the ligand trans to the highest-priority ligand; this pair defines the reference axis of the octahedron. The second digit is the priority number of the ligand trans to the highest-priority ligand in the plane perpendicular to the reference axis.
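As a purely hypothetical worked example (not a complex from the literature): suppose an octahedral complex carries six different ligands with priority numbers 1 to 6, that the ligand trans to the priority-1 ligand has priority 6, and that in the perpendicular plane the ligand trans to the highest-priority ligand there (priority 2) has priority 4. The first digit is then 6, the second is 4, and the stereodescriptor is written (OC-6-64).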
Square pyramidal (SPY-4):
The configuration index is a single digit: the priority number of the ligand trans to the ligand of lowest priority in the plane perpendicular to the fourfold axis. (If there is more than one choice, the highest numerical value is taken.) Note that this procedure gives the same result as SP-4; here, however, the polyhedral symbol specifies that the complex is non-planar.
Square pyramidal (SPY-5):
There are two digits. The first digit is the priority number of the ligand on the fourfold (C4) axis of the idealised pyramid; the second digit is the priority number of the ligand trans to the ligand of lowest priority in the plane perpendicular to the fourfold axis. (If there is more than one choice, the highest numerical value is taken.)

Trigonal bipyramidal (TBPY-5):
The configuration index consists of two digits: the priority numbers of the ligands on the threefold rotation axis, with the lower numerical value cited first.
Other bipyramidal structures (PBPY-7, HBPY-8 and HBPY-9):
The configuration index consists of two segments separated by a hyphen. The first segment consists of two digits: the priority numbers of the ligands on the five-, six- or sevenfold rotation axis, with the lower numerical value cited first.
The second segment consists of 5, 6 or 7 digits respectively. The lowest priority number is the first digit, followed by the digits of the other atoms in the plane. The clockwise and anticlockwise sequences are compared, and the one that yields the lower numerical sequence is chosen.
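A hypothetical PBPY-7 illustration: if the two axial ligands have priorities 1 and 3, the first segment is 13. If the five equatorial ligands, read around the ring in cyclic order, have priorities 2, 4, 6, 5 and 7, then starting from the lowest priority number (2) the clockwise reading gives 24657 and the anticlockwise reading gives 27564; the lower sequence is 24657, so the full configuration index is 13-24657.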
**Structured concurrency**
Structured concurrency is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by using a structured approach to concurrent programming.
The core concept is the encapsulation of concurrent threads of execution (here encompassing kernel and userland threads and processes) by way of control flow constructs that have clear entry and exit points and that ensure all spawned threads have completed before exit. Such encapsulation allows errors in concurrent threads to be propagated to the control structure's parent scope and managed by the native error handling mechanisms of each particular computer language. It allows control flow to remain readily evident by the structure of the source code despite the presence of concurrency. To be effective, this model must be applied consistently throughout all levels of the program – otherwise concurrent threads may leak out, become orphaned, or fail to have runtime errors correctly propagated.
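The discipline can be sketched even in plain C with POSIX threads. In the sketch below (an illustration of the scope-and-join idea only, not a full structured-concurrency implementation, and with all names invented for the example), a scope function spawns workers and does not return until every one of them has been joined, so no thread outlives the construct and the first child error surfaces in the parent:

```c
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4

/* Each worker reports success (0) or an error code through its
 * thread exit value. */
static void *worker(void *arg) {
    long id = (long)arg;
    printf("worker %ld running\n", id);
    return (void *)0;
}

/* A "structured" scope: every thread spawned here is also joined
 * here, so control flow cannot leave while a child is running, and
 * the first child error is propagated to the caller. */
static int concurrent_scope(void) {
    pthread_t tid[NWORKERS];
    int first_error = 0;

    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (int i = 0; i < NWORKERS; i++) {
        void *status;
        pthread_join(tid[i], &status);        /* single, clear exit point */
        if (status != NULL && first_error == 0)
            first_error = (int)(long)status;  /* surface the child's error */
    }
    return first_error;
}

int main(void) {
    return concurrent_scope();
}
```

As the History section notes, a join the language cannot enforce is not yet true structured concurrency; implementations such as Trio's nurseries or Kotlin's coroutine scopes make the construct a first-class, runtime-checked citizen.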
Structured concurrency is analogous to structured programming, which introduced control flow constructs that encapsulated sequential statements and subroutines.
History:
The fork–join model from the 1960s, embodied by multiprocessing tools like OpenMP, is an early example of a system ensuring all threads have completed before exit. However, Smith argues that this model is not true structured concurrency, as the programming language is unaware of the joining behavior and is thus unable to enforce safety.

The concept was formulated in 2016 by Martin Sústrik (creator of ZeroMQ) with his C library libdill, with goroutines as a starting point. It was further refined in 2017 by Nathaniel J. Smith, who introduced a "nursery pattern" in his Python implementation called Trio. Meanwhile, Roman Elizarov independently arrived at the same ideas while developing an experimental coroutine library for the Kotlin language, which later became a standard library.

In 2021, Swift adopted structured concurrency. Later that year, a draft proposal was published to add structured concurrency to Java.
Variations:
A major point of variation is how an error in one member of a concurrent thread tree is handled. Simple implementations will merely wait until the children and siblings of the failing thread run to completion before propagating the error to the parent scope. However, that could take an indefinite amount of time. The alternative is to employ a general cancellation mechanism (typically a cooperative scheme allowing program invariants to be honored) to terminate the children and sibling threads in an expedient manner.
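A cooperative cancellation scheme of the kind described above can be sketched with a shared atomic flag: a failing worker raises the flag, siblings notice it at their next check and exit early, and the enclosing scope still joins every child promptly. This is an illustrative sketch of the general idea (all names invented), not the mechanism of any particular library:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool cancelled = false;      /* shared cancellation flag */

static void *worker(void *arg) {
    long id = (long)arg;
    for (int step = 0; step < 100; step++) {
        if (atomic_load(&cancelled))       /* cooperative check point */
            return (void *)1;              /* a sibling failed: exit early */
        usleep(1000);                      /* stand-in for real work */
        if (id == 2 && step == 10) {       /* simulate a failure in child 2 */
            atomic_store(&cancelled, true);
            return (void *)2;
        }
    }
    return (void *)0;                      /* normal completion */
}

int main(void) {
    pthread_t tid[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++) {          /* scope exit: join every child */
        void *status;
        pthread_join(tid[i], &status);
        printf("worker %d exited with %ld\n", i, (long)status);
    }
    return 0;
}
```

Because the scheme is cooperative, each worker chooses where to poll the flag, which is what lets program invariants be honored before the thread winds down.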
**Fossil collecting**
Fossil collecting (sometimes, in a non-scientific sense, fossil hunting) is the collection of fossils for scientific study, hobby, or profit. Fossil collecting, as practiced by amateurs, is the predecessor of modern paleontology, and many people still collect and study fossils as amateurs. Professionals and amateurs alike collect fossils for their scientific value. A commercial trade in fossils has also long existed, with some of it practised illegally.
Process:
Locating fossils:
Rock type: Fossils are generally found in sedimentary rock with differentiated strata representing a succession of deposited material. The occurrence of fossil-bearing material depends on environmental factors before and after the time of preservation. After death, the first preserving factor is rapid burial in water bodies or terrestrial sediment, which helps preserve the specimen. Such rocks are usually termed clastic rocks, and are further subdivided into fine-, medium- and coarse-grained material. While fossils can be found in all grain types, more detailed specimens occur in fine-grained material. A second type of burial occurs in non-clastic rock, where the rock itself is formed from precipitated or compacted organic material; such rocks include limestone and coal. The third fossil-bearing material is the evaporites, which precipitate out of concentrated dissolved salts to form nodular deposits; examples include rock salt and phosphate concentrations. Evaporites are usually associated with gastropod, algal, vertebrate, and trace fossils. Fossils are not found in igneous rock (except in some beds between lava flows). In rocks which have undergone metamorphism, fossils are generally so distorted that they are difficult to recognize, or they have been destroyed completely.
Preservation:
After burial, various factors endanger the fossil's preserved state. Chemical alteration may change the mineral composition of the fossil, though generally not its appearance; lithification may distort its appearance; and the fossil may be fully or partially dissolved, leaving only a fossil mold.

Exposure:
Areas where sedimentary rocks are being eroded include exposed mountainous areas, river banks and beds, wave-washed sea cliffs, and engineering features like quarries and road cuts. Coal mining operations often yield excellent fossil plants, but the best ones are found not in the coal itself but in the associated sedimentary deposits called coal measures.
Wave-washed sea cliffs and foreshore exposures are often good places to search for fossils, but always be aware of the state of the tides in the area. Never take chances by climbing high cliffs of crumbling rock or clay (many have died attempting it). Dried up natural lake beds and caves in the form of pitfall traps frequently also have high concentrations of fossils (e.g., Cuddie Springs and Naracoorte Caves in Australia).
Generally, a fossil will appear either a different colour from the surrounding rock because of its different mineral content, or with a defining shape and texture, or a combination of both. However, a fossil extracted from its geological environment may also share the colour of the sedimentary formation (the surrounding rock) in which it was naturally embedded.
Collecting techniques:
The techniques used to collect fossils vary depending on the sediment or rock in which the fossils are found. For collecting in rock, a geological hammer, a variety of cold chisels and a mallet are used to split and break rocks to reveal fossils. Since the rock is deposited in layers, these layers may be split apart to reveal fossils. For soft sediments and unconsolidated deposits, such as sands, silts, and clays, a spade, a flat-bladed trowel, and stiff brushes are used. Sieves in a variety of mesh sizes are used to separate fossils from sands and gravels. Sieving is a rougher technique for collecting fossils and can destroy fragile ones. Sometimes, water is run through the sieve to help remove silt and sand; this technique is called wet sieving.

Fossils tend to be very fragile and are generally not extracted entirely from the surrounding rock (the matrix) in the field. Cloth, cotton, small boxes and aluminum foil are frequently used to protect fossils during transport. Occasionally, large fragile specimens may need to be protected and supported with a jacket of plaster before removal from the rock. If a fossil is to be left in situ, a cast may be made using plaster of paris or latex. While not preserving every detail, such a cast is inexpensive, easier to transport, causes less damage to the environment, and leaves the fossil in place for others. Fossilized tracks are frequently documented with casts. Subtle fossils preserved solely as impressions in sandy layers, such as the Ediacaran fossils, are also usually documented by means of a cast, which shows detail more clearly than the rock itself.
Preparation and cleaning:
Sometimes, for smaller fossils, a stiff brush may simply be used to dust off and clean the fossil. For larger fossils, a chisel can be used to remove large amounts of matrix, though this risks damaging the fossil. Running water can cause some types of fossils to dislodge from the rock, or even to crumble and break apart, as they are very fragile. Dental tools are sometimes used to remove small amounts of rock from the fossil.
Documentation:
Knowledge of the precise location where a fossil was found is essential if it is to have any scientific value. Details of the parent rock strata, the location of the find, and other fossil material associated with the find help scientists place the fossil in context, in terms of the time, location and situation in which the organism lived. Data logs, photographs, and sketches may accompany detailed field notes to assist in relocating a fossiliferous outcrop. Individual fossils are ideally cataloged with a locality number and a unique specimen number, allowing a collection to be easily searched and specimens located. Cataloging of collections is almost universal in large institutions like museums.
Collecting ethics:
There are various legal realities that must be observed when collecting fossils. Permission should be sought before collecting begins on private land. Hammering the rocks in national parks and other areas of natural beauty is often discouraged, and in most cases is illegal.
The first expressly worded fossil-collecting code was published from the museum-home of pioneering geologist Hugh Miller at Cromarty, on the Highland east coast of Scotland, on 11 April 2008. It was introduced by Michael Russell, Minister for Environment, Scotland, as part of celebrations honouring the bicentennial of the founding of the Geological Society of London; the code supplements an existing draft drawn up by English Nature. The code advises fossil collectors to seek permission from landowners, to collect responsibly, to record details, to seek advice when an unusual fossil is found, and to label the specimens and care for them. Its principles establish a framework of advice on best practice in the collection, identification, conservation and storage of fossil specimens.
The non-binding code of ethics for this field was drawn up by Scottish Natural Heritage (SNH) following many months of consultation with fossil collectors, landowners, palaeontological researchers, and staff of Scotland's museums.
Fossil trade:
Fossil trading is the practice of buying and selling fossils. This is illegal when it involves stolen fossils, and some important scientific specimens are sold to collectors rather than given to or obtained by museums and institutes of study. Much attention has focused on illegal fossil dealing in China, where many specimens have been stolen. The fossil trade of Morocco has also drawn international attention. The trade is lucrative, and many celebrities collect fossils.

The Society of Vertebrate Paleontology (SVP), an international association of professional and amateur vertebrate paleontologists, believes that scientifically important fossils (especially but not exclusively those found on public lands) should be held in perpetuity in the public trust, preferably in a museum or research institution, where they can benefit the scientific community as a whole as well as future generations. In the United States, the Paleontological Resources Preservation Act (S. 546 and H.R. 2416) was introduced in the US Congress with SVP's full support.
Many commercial fossil collectors and dealers believe that such policies are a breach of their rights. The argument has also been put forth that there are too few professional paleontologists to collect and preserve the fossils currently exposed to the elements, and that it is therefore essential that private citizens be allowed to collect them for the sake of their preservation. Eric Scott, the Curator of Paleontology for the San Bernardino County Museum, argues that (1) private citizens and amateur (not-for-profit) collectors can and do participate frequently in the permitted recovery and preservation of significant vertebrate fossils, and (2) preservation of significant fossils does not require or mandate their sale. According to the ethics by-law of the SVP, "The barter, sale, or purchase of scientifically significant vertebrate fossils is not condoned, unless it brings them into or keeps them within a public trust." Some fossil trade is driven not by collecting but by the use of certain fossils in traditional medicine, mainly in East Asia but also in Europe and other places.
Societies and clubs:
Many geological and lapidary clubs include fossil collectors, and dedicated paleontological societies and fossil clubs also exist. There is some overlap between fossil collecting, mineral collecting, and amateur geology.
Notable fossil collectors:
Mary Anning, Robert Bakker, Edward Drinker Cope, Phil Currie, Othniel Charles Marsh, Paul Sereno, Charles Hazelius Sternberg, Peter Larson, Stan Wood, Fisk Holbrook Day, and Triebold Paleontology Incorporated.
**QNX**
QNX is a commercial Unix-like real-time operating system, aimed primarily at the embedded systems market. QNX was one of the first commercially successful microkernel operating systems. The product was originally developed in the early 1980s by the Canadian company Quantum Software Systems, later renamed QNX Software Systems.
As of 2022, it is used in a variety of devices including cars, medical devices, programmable logic controllers, robots, trains, and more.
History:
Gordon Bell and Dan Dodge, both students at the University of Waterloo in 1980, took a course in real-time operating systems, in which the students constructed a basic real-time microkernel and user programs. Both were convinced there was a commercial need for such a system, and moved to the high-tech planned community Kanata, Ontario, to start Quantum Software Systems that year. In 1982, the first version of QUNIX was released for the Intel 8088 CPU. In 1984, Quantum Software Systems renamed QUNIX to QNX in an effort to avoid any trademark infringement challenges.
One of the first widespread uses of the QNX real-time OS (RTOS) was in the nonembedded world when it was selected as the operating system for the Ontario education system's own computer design, the Unisys ICON. Over the years QNX was used mostly for larger projects, as its 44k kernel was too large to fit inside the one-chip computers of the era. The system garnered a reputation for reliability and became used in running machinery in many industrial applications.
In the late 1980s, Quantum realized that the market was rapidly moving towards the Portable Operating System Interface (POSIX) model and decided to rewrite the kernel to be much more compatible at a low level. The result was QNX 4. During this time Patrick Hayden, while working as an intern, along with Robin Burgener (a full-time employee at the time), developed a new windowing system. This patented concept was developed into the embeddable graphical user interface (GUI) named the QNX Photon microGUI. QNX also provided a version of the X Window System.
To demonstrate the OS's capability and relatively small size, in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, a graphical text editor, TCP/IP networking, a web browser and a web server, all fitting on a bootable 1.44 MB floppy disk for the 386 PC. Toward the end of the 1990s, the company, by then named QNX Software Systems, began work on a new version of QNX, designed from the ground up to be symmetric multiprocessing (SMP) capable, and to support all current POSIX application programming interfaces (APIs) and any new POSIX APIs that could be anticipated, while still retaining the microkernel architecture. This resulted in QNX Neutrino, released in 2001.
Along with the Neutrino kernel, QNX Software Systems became a founding member of the Eclipse (integrated development environment) consortium. The company released a suite of Eclipse plug-ins packaged with the Eclipse workbench in 2002, named the QNX Momentics Tool Suite.
In 2004, the company announced it had been sold to Harman International Industries. Before this acquisition, QNX software was already widely used in the automotive industry for telematics systems. Since the purchase by Harman, QNX software has been designed into over 200 different automobile makes and models, in telematics systems, and in infotainment and navigation units. The QNX CAR Application Platform was running in over 20 million vehicles as of mid-2011. The company has since released several middleware products including the QNX Aviage Multimedia Suite, the QNX Aviage Acoustic Processing Suite and the QNX HMI Suite.
The microkernels of Cisco Systems' IOS-XR (ultra high availability IOS, introduced 2004) and IOS Software Modularity (introduced 2006) are based on QNX.
In September 2007, QNX Software Systems announced the availability of some of its source code. On April 9, 2010, Research In Motion (later renamed BlackBerry Limited) announced it would acquire QNX Software Systems from Harman International Industries; on the same day, QNX source code access was restricted from the public and hobbyists. In September 2010, the company announced a tablet computer, the BlackBerry PlayBook, and a new operating system, BlackBerry Tablet OS, based on QNX to run on the tablet. On October 18, 2011, Research In Motion announced "BBX", which was renamed BlackBerry 10 in December 2011. BlackBerry 10 devices build upon the QNX-based BlackBerry PlayBook operating system for touch devices, but adapt the user interface for smartphones using the Qt-based Cascades Native User-Interface framework.
At the Geneva Motor Show in March 2014, Apple demonstrated CarPlay, which provides an iOS-like user interface to head units in compatible vehicles; once configured by the automaker, QNX can be programmed to hand off its display and some functions to an Apple CarPlay device. On December 11, 2014, Ford Motor Company stated that it would replace Microsoft Auto with QNX. In January 2017, QNX announced the upcoming release of its SDP 7.0, with support for Intel and ARM 32- and 64-bit platforms and support for C++14; it was released in March 2017.
Technology:
As a microkernel-based OS, QNX is based on the idea of running most of the operating system kernel in the form of a number of small tasks, named Resource Managers. This differs from the more traditional monolithic kernel, in which the operating system kernel is one very large program composed of a huge number of parts, with special abilities. In the case of QNX, the use of a microkernel allows users (developers) to turn off any functions they do not need without having to change the OS. Instead, such services will simply not run.
The QNX kernel, procnto (also the name of the binary executable for the QNX Neutrino ('nto') process ('proc') itself), contains only CPU scheduling, interprocess communication, interrupt redirection and timers. Everything else runs as a user process, including a special process known as proc which performs process creation and memory management by operating in conjunction with the microkernel. This is made possible by two key mechanisms: subroutine-call-style interprocess communication, and a boot loader which can load an image containing the kernel and any desired set of user programs and shared libraries. There are no device drivers in the kernel. The network stack is based on NetBSD code. Along with support for its own native device drivers, QNX supports its legacy io-net manager server and network drivers ported from NetBSD.

QNX interprocess communication consists of sending a message from one process to another and waiting for a reply. This is a single operation, called MsgSend. The message is copied by the kernel from the address space of the sending process to that of the receiving process. If the receiving process is waiting for the message, control of the CPU is transferred at the same time, without a pass through the CPU scheduler. Thus, sending a message to another process and waiting for a reply does not result in "losing one's turn" for the CPU. This tight integration between message passing and CPU scheduling is one of the key mechanisms that makes QNX message passing broadly usable. Most Unix and Linux interprocess communication mechanisms lack this tight integration, although a user-space implementation of QNX-type messaging for Linux does exist. Mishandling of this subtle issue is a primary reason for the disappointing performance of some other microkernel systems, such as early versions of Mach. The recipient process need not be on the same physical machine.
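The send/receive/reply cycle described above can be illustrated with the native Neutrino C calls. The following is a minimal sketch assuming the standard <sys/neutrino.h> API, with server and client running as two threads of one process for brevity and error handling omitted; it compiles only with the QNX toolchain:

```c
#include <sys/neutrino.h>   /* ChannelCreate, ConnectAttach, MsgSend, ... */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static int chid;            /* server channel id, shared for brevity */

static void *server(void *arg) {
    char msg[64], reply[64];
    /* Block until a client sends; rcvid identifies that client. */
    int rcvid = MsgReceive(chid, msg, sizeof(msg), NULL);
    printf("server received: %s\n", msg);
    strcpy(reply, "pong");
    MsgReply(rcvid, 0, reply, sizeof(reply));   /* unblocks the client */
    return NULL;
}

int main(void) {
    chid = ChannelCreate(0);
    pthread_t t;
    pthread_create(&t, NULL, server, NULL);

    /* Attach to our own process's channel (nd = 0, pid = 0 means self). */
    int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

    char reply[64];
    /* MsgSend copies the message, blocks the caller, and returns only
     * when the server replies: one combined send-and-wait operation. */
    MsgSend(coid, "ping", 5, reply, sizeof(reply));
    printf("client received: %s\n", reply);

    ConnectDetach(coid);
    pthread_join(t, NULL);
    return 0;
}
```

The client's MsgSend blocks until the server's MsgReply, which is exactly the combined send-and-wait operation the text describes; in a real system the server would typically loop on MsgReceive, and the client would locate the channel through a resource path rather than a shared variable.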
All I/O operations, file system operations, and network operations were meant to work through this mechanism, and the data transferred was copied during message passing. Later versions of QNX reduce the number of separate processes and integrate the network stack and other function blocks into single applications for performance reasons.
Message handling is prioritized by thread priority. Since I/O requests are performed using message passing, high priority threads receive I/O service before low priority threads, an essential feature in a hard real-time system.
The boot loader is the other key component of the minimal microkernel system. Because user programs can be built into the boot image, the set of device drivers and support libraries needed for startup need not be, and are not, in the kernel. Even such functions as program loading are not in the kernel, but instead are in shared user-space libraries loaded as part of the boot image. It is possible to put an entire boot image into ROM, which is used for diskless embedded systems.
Neutrino supports symmetric multiprocessing and processor affinity, called bound multiprocessing (BMP) in QNX terminology. BMP is used to improve cache hit rates and to ease the migration of non-SMP-safe applications to multiprocessor computers.
Neutrino supports strict priority-preemptive scheduling and adaptive partition scheduling (APS). APS guarantees minimum CPU percentages to selected groups of threads, even though others may have higher priority. The adaptive partition scheduler is still strictly priority-preemptive when the system is underloaded. It can also be configured to run a selected set of critical threads strictly real time, even when the system is overloaded.
The QNX operating system also contained a web browser known as Voyager.

Due to its microkernel architecture, QNX is also a distributed operating system. Dan Dodge and Peter van der Veen hold U.S. Patent 6,697,876, "Distributed kernel operating system", based on the QNX operating system's distributed processing features, known commercially as Transparent Distributed Processing. This allows the QNX kernels on separate devices to access each other's system services using effectively the same communication mechanism as is used to access local services.
Uses:
The BlackBerry PlayBook tablet computer designed by BlackBerry uses a version of QNX as the primary operating system. The BlackBerry 10 operating system is also based on QNX.
QNX is also used in car infotainment systems, with many major car makers offering variants that include an embedded QNX architecture. It is supported by popular SSL/TLS libraries such as wolfSSL. In recent years, QNX has been used in automated driving and ADAS systems for automotive projects that require functional safety certification; QNX provides this with its QNX OS for Safety products. QNX Neutrino (2001) has been ported to a number of platforms and now runs on practically any modern central processing unit (CPU) family used in the embedded market, including PowerPC, x86, MIPS, SH-4, and the closely related ARM, StrongARM, and XScale.
Licensing:
QNX offers a license for noncommercial and academic users.
Community:
OpenQNX is an independently established and run QNX community portal, offering an IRC channel and newsgroup access via the web. The developers on the site represent diverse industries.
Foundry27 is a web-based QNX community established by the company. It serves as a hub for QNX Neutrino development, where developers can register, choose a license, and obtain the source code and related toolkits of the RTOS.
**Hypocretin (orexin) receptor 2**
Orexin receptor type 2 (Ox2R or OX2), also known as hypocretin receptor type 2 (HcrtR2), is a protein that in humans is encoded by the HCRTR2 gene.
Structure:
The structure of the receptor has been solved to 2.5 Å resolution as a fusion protein bound to suvorexant using lipid-mediated crystallization.
Function:
OX2 is a G-protein coupled receptor expressed exclusively in the brain. It has 64% identity with OX1. OX2 binds both the orexin A and orexin B neuropeptides, and is involved in the central feedback mechanism that regulates feeding behaviour. Mice with enhanced OX2 signaling are resistant to high-fat-diet-induced obesity. The receptor is activated by hypocretin (orexin), a wake-promoting hypothalamic neuropeptide that acts as a critical regulator of sleep in animals such as zebrafish and mammals. Mutations in this receptor in Astyanax mexicanus reduce the sleep needs of the cavefish.
Ligands:
Agonists:
Danavorexton (TAK-925): selective OX2 receptor agonist
Firazorexton: selective OX2 receptor agonist
Orexins: dual OX1 and OX2 receptor agonists
Orexin-A: approximately equipotent at the OX1 and OX2 receptors
Orexin-B: approximately 5- to 10-fold selective for the OX2 receptor over the OX1 receptor
SB-668875: selective OX2 receptor agonist
Suntinorexton: selective OX2 receptor agonist
TAK-861: selective OX2 receptor agonist
TAK-994: selective OX2 receptor agonist

Antagonists:
Almorexant: dual OX1 and OX2 antagonist
Daridorexant (nemorexant): dual OX1 and OX2 antagonist
EMPA: selective OX2 antagonist
Filorexant: dual OX1 and OX2 antagonist
JNJ-10397049: selective OX2 antagonist (600x selective for OX2 over OX1)
Lemborexant: dual OX1 and OX2 antagonist
MK-1064: selective OX2 antagonist
MK-8133: selective OX2 antagonist
SB-649,868: dual OX1 and OX2 antagonist
Seltorexant: selective OX2 antagonist
Suvorexant: dual OX1 and OX2 antagonist
TCS-OX2-29: selective OX2 antagonist
(3,4-Dimethoxyphenoxy)alkylamino acetamides (e.g. compound 1m): selective OX2 antagonists
**Multiple sexual ornaments**
Many species have multiple sexual ornaments, whereby females select mating partners using several cues instead of only one cue. Whereas this phenomenon is self-evident and hence long recognized, adaptive explanations of why females use several instead of only one signal have been formulated relatively recently. Several hypotheses exist, but mutually exclusive tests are still lacking.
Hypotheses:
There are several hypotheses that attempt to explain why a male would have multiple sexual ornaments.
Multiple messages hypothesis:
The multiple messages hypothesis states that different ornaments signal different properties of an individual's overall quality. Models support the possibility that this hypothesis is evolutionarily stable, but empirical tests are lacking.
Some ornaments represent long-term or short-term changes in overall condition. Elegant plumes in a bird or antlers in a deer grown once a year could signal the overall condition of an animal during the long period of growth; this is thus an example of a long-term change. Secondary characters like the inflatable bare patches of skin on a grouse species or the colorful patches of skin in a primate species could represent short-term changes.
Redundant signals hypothesis:
The redundant signals hypothesis, also known as the back-up signals hypothesis, states that each character can only show a partial representation of overall condition. If each ornament reflected the male's quality with some error, then mate choice based on a single trait could lead a female to select a male in poor condition rather than one in great condition. Thus, a female ought to look at multiple sexual traits of a male if she wants to get an overall view of his quality. The redundant signals hypothesis differs from the multiple messages hypothesis in that the former predicts that different signals reflect the same aspect of mate quality, whereas the latter predicts that different signals reflect different aspects. There has been some empirical support for this hypothesis. However, the majority of studies have shown no correlation, suggesting that redundant signals indicating mate quality are less common than signals consistent with other hypotheses, such as the multiple messages hypothesis.
Hypotheses:
Unreliable signals hypothesis:
The unreliable signals hypothesis suggests that some signals are unreliable indicators of overall male quality. Therefore, a female should look at multiple traits, because one trait could be misleading. There is some support for this hypothesis.
Hypotheses:
Sexual interference hypothesis:
The sexual interference hypothesis proposes that additional male signals evolve to hinder female mate choice by interfering with the propagation and reception of other males' sexual signals. Females respond by evolving the ability to glean meaningful information from signals despite males' attempts at obfuscation. In turn, males respond by improving their interference signals and producing new signals that are not so easily blocked. This iterative co-evolutionary process increases the costs of assessment for females and the costs of signal production for males, and leads to temporary equilibria of honest advertising via multiple signals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Migralepsy**
Migralepsy:
Migralepsy is a rare condition in which a migraine is followed, within one hour, by an epileptic seizure. The two conditions share many signs, symptoms, and treatments, including their neurological basis, associated psychological issues, and the autonomic distress they create, and each increases the likelihood of the other occurring. Because of this sameness, however, the two are often misdiagnosed as one another, as migralepsy itself occurs only rarely.
Signs and symptoms:
General symptoms of migralepsy are: flashes of light, geometric or animate forms, visual hallucinations, vomiting, headache, blindness, loss of consciousness, and convulsions.
Cause:
The connection between migraines and epileptic seizures is still being researched, and not much is known. Patients have been shown to have had migraines long before developing epileptic symptoms, raising the possibility that severe migraines can give rise to epilepsy. However, not every migraine is accompanied by a seizure, and sometimes seizures happen without any migraine involvement. Because of this, the origin of migralepsy is difficult to pin down and lies somewhere in the overlap between the two conditions. Some patients report relatives who also had migraines, and in a few cases migralepsy, raising the possibility that migralepsy is genetic in origin and only rarely manifests as both conditions, generally resulting in one condition or the other.
Diagnosis:
Because epileptic seizures may occur with a side effect that resembles migraine aura, it is complicated to determine whether a patient is having an ordinary epileptic episode or a true migraine that is then followed by a seizure, which would be a true sign of migralepsy. Many neurological symptoms can only be described by the patient, who can confuse different sensations, especially when the symptoms of a migraine are extremely similar to those of a seizure. Thus, many physicians are reluctant to consider migralepsy a true condition, given its rarity, while those who do accept it are prone to over-diagnose it, further complicating efforts to establish the truth of the condition. However, it has been found that EEG scans can differentiate between migraine auras and auras related to epilepsy. EEG scans are generally not as helpful for characterizing migraines as they are for epilepsy, though they can determine the starting and ending points of migraines and the overlap of epileptic episodes during or after them, even if the recordings still lack considerable necessary data and can yield confusing results. EEG scans have been able to observe seizures that occur between the aura and headache phases of a migraine; such occurrences have been termed intercalated seizures.
Treatment:
Since migralepsy is, for all intents and purposes, a combination of migraines and epilepsy, the medications used for each condition individually can be combined to lessen the effects of both. It is also helpful that many antiepileptic drugs also work as antimigraine agents, reducing the number of medications that must be taken. Thus, while neither condition can be cured, both can be treated so that they occur less frequently and allow a patient to live a relatively normal life. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dirty drug**
Dirty drug:
In pharmacology, a dirty drug is an informal term for drugs that may bind to many different molecular targets or receptors in the body, and so tend to have a wide range of effects and possibly adverse drug reactions. Today, pharmaceutical companies try to make new drugs as selective as possible to minimise binding to antitargets and hence reduce the occurrence of side effects and risk of adverse reactions.
Dirty drug:
Examples of compounds often cited as "dirty drugs" include tramadol, chlorpromazine, olanzapine, dextromethorphan, ibogaine, and ethanol, all of which bind to multiple receptors or influence multiple receptor systems. There may be advantages to drugs that exhibit multi-receptor activity, such as the anti-addictive drug ibogaine, which acts within a broad range of neurohormonal systems in which drugs commonly associated with addiction, including opioids, nicotine, and alcohol, are also active. Similarly, chlorpromazine is primarily used as an antipsychotic, but its strong serotonin receptor blocking effects make it useful for treating serotonergic crises such as serotonin syndrome. Dextromethorphan, for its part, is widely used as a cough medication, but its other actions have led to trials for several conditions, such as its use as an adjunct to analgesia and as a potential anti-addictive drug, as well as its occasional recreational use as a dissociative. Kanamycin is an aminoglycoside antibiotic which induces deafness through blockage of the outer hair cells of the cochlea; yet it has many other effects, weakening, for instance, collagen and DNA biosynthesis. It acts by inhibiting the synthesis of proteins in susceptible organisms. Kanamycin requires close clinical supervision because of its potential toxicity and adverse side effects on the auditory and vestibular branches of the eighth cranial nerve and on the renal tubules. Clozapine and latrepirdine are examples of drugs used in the treatment of CNS disorders that have a superior efficacy precisely because of their "multifarious" broad-spectrum mode of activity. Likewise, in cancer chemotherapeutics, it has been recognized that drugs active at more than one target have a higher probability of being efficacious. The antihistamine and anticholinergic effects of atypical and low-potency typical antipsychotics, such as the aforementioned clozapine and chlorpromazine, can also protect against potentially distressing movement disorders such as extrapyramidal symptoms and akathisia associated with dopamine antagonism. In fact, clozapine may even help treat movement problems associated with Parkinson's disease. Examples of "promiscuous" cancer drugs include Sutent, Sorafenib, Zactima, and AG-013736. In the field of drugs used to treat depression, the nonselective MAOIs and the TCAs are sometimes believed to have an efficacy superior to that of the SSRIs. SSRIs are nevertheless usually picked as the first-line agents rather than the (less selective) MAOIs and TCAs, for several reasons. Firstly, SSRIs are safer in overdose than TCAs. Secondly, MAOIs can cause serious side effects when mixed with certain foods, including a life-threatening hypertensive crisis. MAOIs and TCAs also generally have more side effects than SSRIs. TCAs in particular have anticholinergic side effects such as constipation and blurred vision, whereas SSRIs have fewer anticholinergic side effects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Asymmetrical tonic neck reflex**
Asymmetrical tonic neck reflex:
The asymmetrical tonic neck reflex (ATNR) is a primitive reflex found in newborn humans that normally vanishes around 6 months of age.
Asymmetrical tonic neck reflex:
It is also known as the bow and arrow or "fencing reflex" because of the characteristic position of the infant's arms and head, which resembles that of a fencer. When the face is turned to one side, the arm and leg on that side extend, and the arm and leg on the opposite side flex. It is more likely to be seen in premature infants than full-term babies. It is rare in newborns but can be elicited in infants up to 3 months old. It is believed to help develop hand-eye coordination and awareness of both sides of the body. The presence of the ATNR, as well as other primitive reflexes, such as the tonic labyrinthine reflex (TLR), beyond the first six months of life may indicate that the child has developmental delays, at which point the reflex is atypical or abnormal. For example, in children with cerebral palsy, the reflexes may persist and even be more pronounced. As abnormal reflexes, both the ATNR and the TLR can cause problems for the growing child. The ATNR and TLR both hinder functional activities such as rolling, bringing the hands together, or even bringing the hands to the mouth. Over time, both the ATNR and TLR can cause serious damage to the growing child's joints and bones. The ATNR can cause the spine to curve (scoliosis). Both the ATNR and TLR can cause subluxation of the femoral head, or dislocation of the femoral head as it completely moves out of the hip socket. When abnormal reflexes persist in a child, evidence suggests early intervention involving extensive physical therapy as the most beneficial course of treatment.
Asymmetrical tonic neck reflex:
The fencing response occurs in adults as a result of mechanical forces applied to the head, typically associated with contact sports. The fencing response is transient and indicates moderate forces applied to the brainstem, resulting in a traumatic brain injury. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Murnaghan–Nakayama rule**
Murnaghan–Nakayama rule:
In group theory, a branch of mathematics, the Murnaghan–Nakayama rule, named after Francis Murnaghan and Tadashi Nakayama, is a combinatorial method to compute irreducible character values of a symmetric group.
There are several generalizations of this rule beyond the representation theory of symmetric groups, but they are not covered here.
Murnaghan–Nakayama rule:
The irreducible characters of a group are of interest to mathematicians because they concisely summarize important information about the group, such as the dimensions of the vector spaces in which the elements of the group can be represented by linear transformations that “mix” all the dimensions. For many groups, calculating irreducible character values is very difficult; the existence of simple formulas is the exception rather than the rule.
Murnaghan–Nakayama rule:
The Murnaghan–Nakayama rule is a combinatorial rule for computing symmetric group character values $\chi^\lambda_\rho$ using a particular kind of Young tableaux.
Murnaghan–Nakayama rule:
Here λ and ρ are both integer partitions of some integer n, the degree of the symmetric group under consideration. The partition λ specifies the irreducible character, while the partition ρ specifies the conjugacy class on whose group elements the character is evaluated to produce the character value. The partitions are represented as weakly decreasing tuples; for example, two of the partitions of 8 are (5,2,1) and (3,3,1,1).
Murnaghan–Nakayama rule:
There are two versions of the Murnaghan–Nakayama rule, one non-recursive and one recursive.
Non-recursive version:
Theorem: $\chi^\lambda_\rho = \sum_{T \in \mathrm{BST}(\lambda,\rho)} (-1)^{\mathrm{ht}(T)}$, where the sum is taken over the set $\mathrm{BST}(\lambda,\rho)$ of all border-strip tableaux of shape λ and type ρ.
Non-recursive version:
That is, each tableau T is a tableau such that:
- the k-th row of T has $\lambda_k$ boxes;
- the boxes of T are filled with integers, with the integer i appearing $\rho_i$ times;
- the integers in every row and column are weakly increasing;
- the set of squares filled with the integer i forms a border strip, that is, a connected skew shape with no 2×2 square.
The height, $\mathrm{ht}(T)$, is the sum of the heights of the border strips in T. The height of a border strip is one less than the number of rows it touches.
Non-recursive version:
It follows from this theorem that the character values of a symmetric group are integers.
For some combinations of λ and ρ, there are no border-strip tableaux. In this case, there are no terms in the sum and therefore the character value is zero.
Non-recursive version:
Example:
Consider the calculation of one of the character values for the symmetric group $S_8$, when λ is the partition (5,2,1) and ρ is the partition (3,3,1,1). The shape partition λ specifies that the tableau must have three rows, the first having 5 boxes, the second having 2 boxes, and the third having 1 box. The type partition ρ specifies that the tableau must be filled with three 1's, three 2's, one 3, and one 4. There are six such border-strip tableaux. If we call these $T_1, T_2, T_3, T_4, T_5, T_6$, then their heights are
$\mathrm{ht}(T_1)=0+1+0+0=1$
$\mathrm{ht}(T_2)=1+0+0+0=1$
$\mathrm{ht}(T_3)=1+0+0+0=1$
$\mathrm{ht}(T_4)=2+0+0+0=2$
$\mathrm{ht}(T_5)=2+0+0+0=2$
$\mathrm{ht}(T_6)=2+1+0+0=3$
and the character value is therefore
$\chi^{(5,2,1)}_{(3,3,1,1)} = (-1)^1+(-1)^1+(-1)^1+(-1)^2+(-1)^2+(-1)^3 = -1-1-1+1+1-1 = -2.$
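To make the tableau conditions concrete, here is a minimal Python sketch (our own illustration, not from the source; the function names are invented) that checks whether a given filling is a border-strip tableau of a given type, and computes its height:

```python
from collections import Counter

def is_border_strip_tableau(rows, rho):
    """Check the border-strip tableau conditions for a filling `rows`
    (a list of rows of positive integers) against a type partition `rho`."""
    cells = {(r, c): v for r, row in enumerate(rows) for c, v in enumerate(row)}
    # The integer i must appear rho[i-1] times.
    if Counter(cells.values()) != {i + 1: m for i, m in enumerate(rho)}:
        return False
    # Rows and columns must be weakly increasing.
    for (r, c), v in cells.items():
        for nbr in ((r, c + 1), (r + 1, c)):
            if nbr in cells and cells[nbr] < v:
                return False
    # The squares holding each integer must form a border strip:
    # edge-connected, and containing no 2x2 square.
    for i in set(cells.values()):
        strip = {rc for rc, v in cells.items() if v == i}
        if any({(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)} <= strip
               for (r, c) in strip):
            return False                 # contains a 2x2 square
        seen, todo = set(), [next(iter(strip))]
        while todo:                      # flood fill to test connectedness
            r, c = todo.pop()
            if (r, c) not in seen:
                seen.add((r, c))
                todo += [n for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                         if n in strip]
        if seen != strip:
            return False                 # not edge-connected
    return True

def tableau_height(rows):
    """ht(T): for each label, one less than the number of rows it touches."""
    labels = {v for row in rows for v in row}
    return sum(len({r for r, row in enumerate(rows) if v in row}) - 1
               for v in labels)

# One of the six tableaux of shape (5,2,1) and type (3,3,1,1) in the example:
T = [[1, 1, 2, 2, 2],
     [1, 3],
     [4]]
assert is_border_strip_tableau(T, (3, 3, 1, 1))
print(tableau_height(T))   # -> 1, contributing (-1)^1 to the character sum
```

Each of the six tableaux contributes $(-1)^{\mathrm{ht}(T)}$ in this way, reproducing the sum above.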
Recursive version:
Theorem: $\chi^\lambda_\rho = \sum_{\xi \in \mathrm{BS}(\lambda,\rho_1)} (-1)^{\mathrm{ht}(\xi)}\, \chi^{\lambda \setminus \xi}_{\rho \setminus \rho_1}$, where the sum is taken over the set $\mathrm{BS}(\lambda,\rho_1)$ of border strips within the Young diagram of shape λ that have $\rho_1$ boxes and whose removal leaves a valid Young diagram. The notation $\lambda \setminus \xi$ represents the partition that results from removing the border strip ξ from λ. The notation $\rho \setminus \rho_1$ represents the partition that results from removing the first element $\rho_1$ from ρ.
Recursive version:
Note that the right-hand side is a sum of characters of symmetric groups of smaller degree than the symmetric group we started with on the left-hand side. In other words, this version of the Murnaghan–Nakayama rule expresses a character of the symmetric group $S_n$ in terms of the characters of smaller symmetric groups $S_k$ with $k < n$.
Recursive version:
Applying this rule recursively will result in a tree of character value evaluations for smaller and smaller partitions. Each branch stops for one of two reasons: either there are no border strips of the required length within the reduced shape, so the sum on the right is zero, or a border strip occupying the entire reduced shape is removed, leaving a Young diagram with no boxes. At this point we are evaluating $\chi^\lambda_\rho$ when both λ and ρ are the empty partition (), and the rule requires that this terminal case be defined as having character $\chi^{()}_{()} = 1$. This recursive version of the Murnaghan–Nakayama rule is especially efficient for computer calculation when one computes character tables for $S_k$ for increasing values of k and stores all of the previously computed character tables.
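As a rough illustration of how such a computation can be organized (our own sketch, not from the source; the beta-number encoding is a standard combinatorial device, and the function names are invented), the following Python code implements the recursive rule with memoization, enumerating removable border strips via beta-numbers (first-column hook lengths):

```python
from functools import lru_cache

def strip_removals(lam, k):
    """Yield (smaller_partition, height) for every border strip of k boxes
    whose removal from lam leaves a valid Young diagram.

    Removing a k-box border strip corresponds to subtracting k from one
    beta-number while keeping them distinct and non-negative."""
    n = len(lam)
    beta = tuple(lam[i] + (n - 1 - i) for i in range(n))  # first-column hooks
    for i, b in enumerate(beta):
        if b - k >= 0 and (b - k) not in beta:
            # The strip's height equals the number of beta-numbers jumped over.
            height = sum(1 for c in beta if b - k < c < b)
            rest = [c for j, c in enumerate(beta) if j != i] + [b - k]
            rest.sort(reverse=True)
            new_lam = tuple(rest[j] - (n - 1 - j) for j in range(n))
            yield tuple(p for p in new_lam if p > 0), height

@lru_cache(maxsize=None)                     # memoize, as the text suggests
def character(lam, rho):
    """chi^lam_rho by the recursive Murnaghan-Nakayama rule."""
    if not rho:                              # terminal case: chi^()_() = 1
        return 1
    return sum((-1) ** h * character(smaller, rho[1:])
               for smaller, h in strip_removals(lam, rho[0]))

print(character((5, 2, 1), (3, 3, 1, 1)))    # -> -2
```

On the example below, the first level of the recursion finds exactly two removable 3-box border strips, with heights 0 and 1, reproducing the reduction to $\chi^{(2,2,1)}_{(3,1,1)} - \chi^{(5)}_{(3,1,1)}$.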
Example:
We will again compute the character value with λ=(5,2,1) and ρ=(3,3,1,1).
Example:
To begin, consider the Young diagram with shape λ. Since the first part of ρ is 3, look for border strips that consist of 3 boxes. There are two possibilities: in the first case, the border strip has height 0, and removing it produces the reduced shape (2,2,1); in the second case, the border strip has height 1, and removing it produces the reduced shape (5). Therefore, one has
$\chi^{(5,2,1)}_{(3,3,1,1)} = \chi^{(2,2,1)}_{(3,1,1)} - \chi^{(5)}_{(3,1,1)}$
expressing a character value of $S_8$ in terms of two character values of $S_5$.
Example:
Applying the rule again to both terms, one finds $\chi^{(2,2,1)}_{(3,1,1)} = -\chi^{(2)}_{(1,1)}$ and $\chi^{(5)}_{(3,1,1)} = \chi^{(2)}_{(1,1)}$, reducing to a character value of $S_2$.
Applying again, one finds $\chi^{(2)}_{(1,1)} = \chi^{(1)}_{(1)}$, reducing to the only character value of $S_1$.
A final application produces the terminal character $\chi^{()}_{()} = 1$: one has $\chi^{(1)}_{(1)} = \chi^{()}_{()} = 1$. Working backwards from this known character, the result is $\chi^{(5,2,1)}_{(3,3,1,1)} = -2$, as before. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IPO5**
IPO5:
Importin-5 is a protein that in humans is encoded by the IPO5 gene. The protein encoded by this gene is a member of the importin beta family. Structurally, the protein adopts the shape of a right-handed solenoid and is composed of 24 HEAT repeats.
Function:
Nuclear transport, a signal- and energy-dependent process, takes place through nuclear pore complexes embedded in the nuclear envelope. The import of proteins containing a nuclear localization signal (NLS) requires the NLS import receptor, a heterodimer of importin alpha and beta subunits also known as karyopherins. Importin alpha binds the NLS-containing cargo in the cytoplasm, and importin beta docks the complex at the cytoplasmic side of the nuclear pore complex. In the presence of nucleoside triphosphates and the small GTP-binding protein Ran, the complex moves into the nuclear pore complex and the importin subunits dissociate. Importin alpha enters the nucleoplasm with its passenger protein, and importin beta remains at the pore. Interactions between importin beta and the FG repeats of nucleoporins are essential in translocation through the pore complex. IPO5 facilitates cytoplasmic polyadenylation element-binding protein 3 (CPEB3) translocation by binding to the RRM1 motif of CPEB3 in neurons. NMDAR signaling increases RanBP1 expression and reduces the level of cytoplasmic GTP-bound Ran. These changes enhance the CPEB3–IPO5 interaction, which consequently accelerates the nuclear import of CPEB3 and promotes its nuclear function. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tracheal deviation**
Tracheal deviation:
Tracheal deviation is a clinical sign that results from unequal intrathoracic pressure within the chest cavity. It is most commonly associated with traumatic pneumothorax, but can be caused by a number of both acute and chronic health issues, such as pneumonectomy, atelectasis, pleural effusion, fibrothorax (pleural fibrosis), or some cancers (tumors within the bronchi, lung, or pleural cavity) and certain lymphomas associated with the mediastinal lymph nodes.
Tracheal deviation:
In most adults and children, the trachea can be seen and felt directly in the middle of the anterior (front side of the) neck behind the jugular notch of the manubrium, and superior to this point as it extends towards the larynx. However, when tracheal deviation is present, the trachea will be displaced in the direction of less pressure: if one side of the chest cavity has an increase in pressure (such as in the case of a pneumothorax), the trachea will shift towards the opposing side. The trachea is the tube that carries air from the throat to the lungs. It is also commonly referred to as the windpipe. The trachea is one of the most important parts of the respiratory system, and damage to the trachea can indicate a life-threatening emergency. The normal position of the trachea is straight up and down, running along the center of the front side of the throat. Certain conditions can cause the trachea to shift to one side or the other. This is a medical emergency that requires immediate medical attention to discover the cause of the shift and begin an appropriate course of treatment. There are several causes of tracheal deviation, and the condition often presents along with difficulty breathing, coughing, and abnormal breath sounds. The most common cause of tracheal deviation is a pneumothorax, which is a collection of air inside the chest, between the chest cavity and the lung. A pneumothorax can be spontaneous, caused by existing lung disease, or caused by trauma. Treatment varies depending on the severity of the pneumothorax. Smaller pockets of air tend to dissipate on their own, while larger areas can cause complications and are usually vented by a needle in the chest. As soon as the pneumothorax is treated, the tracheal deviation will also resolve itself. Other causes include a congenital lack of one lung, surgical removal of a lung, or pleural fibrosis, an inflammation of the lung membranes caused by infection. Given the wide range of causes of tracheal deviation, it is essential to seek medical attention so that an accurate diagnosis can be obtained.
Normal variation:
In children, whose trachea is more flexible, it can be displaced by the aortic arch by up to 90 degrees. The trachea can also be displaced to the left if the aortic arch lies to the right of the trachea.
Treatment:
Since tracheal deviation is a sign as opposed to a condition, treatment is focused on correcting the cause of the finding. In the case of pneumothorax, thoracentesis or chest tube insertion is performed to relieve the pressure within the affected pleural cavity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Peripheral consonant**
Peripheral consonant:
In Australian linguistics, the peripheral consonants are a natural class encompassing consonants articulated at the extremes of the mouth: labials (lip) and velars (soft palate). That is, they are the non-coronal consonants, in contrast to the coronals (palatal, dental, alveolar, and postalveolar). In Australian languages, these consonants pattern together both phonotactically and acoustically. In Arabic and Maltese philology, the moon letters transcribe non-coronal consonants, but they do not form a natural class.
Phonology:
Australian languages typically favour peripheral consonants word- and syllable-initially; unlike the apicals, they are not allowed, or are uncommon, word- and syllable-finally.
In the extinct Martuthunira, the peripheral stops /p/ and /k/ shared similar allophony. Whereas the other stops could be voiced between vowels or following a nasal, the peripherals were usually voiceless. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**KSnapshot**
KSnapshot:
KDE Gear (also known as the KDE Applications Bundle or KDE Applications) is a set of applications and supporting libraries that are developed by the KDE community, primarily used on Linux-based operating systems though mostly multiplatform, and released on a common release schedule.
The bundle is composed of over 100 applications. Examples of prominent applications in the bundle include the file manager Dolphin, document viewer Okular, text editor Kate, archiving tool Ark and terminal emulator Konsole. Previously the KDE Applications Bundle was part of the KDE Software Compilation.
Extragear:
Software that is not part of the official KDE Applications bundle can be found in the "Extragear" section. These applications release on their own schedule and carry their own version numbers. There are many standalone applications like KTorrent, Krita or Amarok that are mostly designed to be portable between operating systems and deployable independent of a particular workspace or desktop environment. Some brands consist of multiple applications, such as the Calligra Office Suite or KDE Kontact. There are several options for obtaining and installing KDE applications under Linux. Moreover, most of the KDE platform and applications have been ported to OpenBSD and NetBSD. While prior editions of KDE were often seen on other flavors of Unix, such as Solaris, the popularity of the open-source alternatives running on a wide range of hardware (having been ported to nearly every RISC and x86-64 processor) has made KDE projects on such OSs less common.
List of applications part of the bundle:
Development:
Software development:
The KDE SDK is a collection of two dozen distinct applications and components that work with or are part of KDevelop, integrated both within the SDK and with other KDE applications (e.g. many work with Dolphin, the default file manager), and is suitable for general-purpose software development in a range of languages. It provides the tooling used to engineer KDE, and is particularly rich in tools to support Qt and C++ development, as well as the more fashionable Rust, Python, etc.
List of applications part of the bundle:
Most of the KDE SDK is available for Windows and macOS in addition to Linux and BSD.
While created for the KDE desktop, prebuilt software, including nightly releases, is available for macOS and Linux (via AppImage, AppStream or Flathub, as well as Snap), and via most major Linux distributions' package managers, in addition to the source code via KDE GitLab.
Windows installers for production/released versions of Kate, KDevelop and Umbrello are available, including via the store.
List of applications part of the bundle:
Several KDE applications, built using KDevelop with the Kirigami framework, are available for Android, including KDE Connect; KDE Itinerary, a digital travel assistant that integrates train, bus, and air bookings with maps and boarding passes; the KDE Kalendar application; and KAlgebra, a graphing scientific calculator. Various other packages are being built for testing on Android, although plans for some of the core parts of the SDK (e.g. Kate) have not been announced.
List of applications part of the bundle:
Unless noted, KDE applications can use KIO slaves for FTP, HTTP, FTP over SSH (fish), Google Drive, and WebDAV to browse and access files just as they can local files, as well as for Samba (Windows shared files), archives, and man and info pages. E.g., to browse a WebDAV location, use webdav://www.hostname.com/path/ in place of the file path.
The various components can be used on their own (e.g. Kate as a general-purpose text editor) or in combination (e.g. Kate uses KDiff3 internally to compare a cached autorecovery file with the last saved version).
Kate – an advanced text editor for programmers and a general text editor. As of KDE 4, KEdit has been replaced by Kate and/or KWrite.
KDevelop – an integrated development environment for multiple languages, with a plug-in/extension framework (e.g. plug-ins for PHP, Ruby, Python, Markdown documentation authoring/preview, an SVG viewer, etc.) and a control flow viewer.
Supported languages include:
- C/C++ and ObjC (backed by the Clang/LLVM libraries), including some extra features for the Qt Framework and language support for CUDA and OpenCL
- Qt QML and JavaScript
- Python
- PHP
In addition to the "supported" languages, there is syntax highlighting for a wide range of mark-up, configuration, programming, scripting, and data languages.
GUI integration with multiple different version control systems including Git, Bazaar, Subversion, CVS, Mercurial (hg), and Perforce.
Support for CMake and QMake, as well as generic and custom build files.
List of applications part of the bundle:
Cervisia – CVS frontend
KDESvn – graphical Subversion client
KAppTemplate – template-based code project generator
KDiff3 – diff/patch frontend (see Comparison of file comparison tools)
Kommander – dynamic dialog editor
Kompare – diff/patch frontend
Lokalize – a computer-aided translation system
Okteta – a hex editor
Poxml
Swappo
Clazy – Qt-oriented static code analyzer based on the Clang framework
Massif Visualizer – visualizer for Valgrind Massif data files
Umbrello – UML diagram application
ELF Dissector – ELF binary inspector
Fielding – REST API tester
Doxyqml – Doxygen filter to allow generation of API documentation for QML
Heaptrack – traces all memory allocations and annotates these events with stack traces
KDebugSettings
KUIViewer – views UI files (e.g. from Qt Designer)
Dferry – D-Bus library and tools
CuteHMI – open-source HMI (Human Machine Interface) software written in C++ and QML
List of applications part of the bundle:
Web development:
KImageMapEditor – an HTML image map editor
KXSLDbg – an XSLT debugger
Education and science:
Cirkuit – an application to generate publication-ready figures
KBibTeX – an application to manage bibliography databases in the BibTeX format
Semantik – a mindmapping-like tool for document generation
RKWard – an easy to use, transparent frontend to R
KTechLab – an IDE for electronic and PIC microcontroller circuit design and simulation
Games and toys:
AMOR – Amusing Misuse Of Resources, a desktop creature
KTeaTime – a tea cooking timer
KTux
KWeather
(The bundle also has Graphics, Internet, and Multimedia (playback and production) categories, not itemized here.)
Office:
Kontact – personal information management, backed by the Akonadi framework (including Akregator, KNode, KMail, etc.)
The Calligra Suite – an office suite, including:
Calligra Flow – a flowchart and diagram editor
Calligra Plan – a project management tool
Calligra Sheets – a spreadsheet
Calligra Stage – a presentation application
Calligra Words – a word processor
Kexi – a visual database creator
KEuroCalc – a currency converter and calculator
Kile – an integrated LaTeX environment
KMyMoney – a personal finance manager
TaskJuggler – a project management tool
Skrooge – a personal finances manager
LabPlot – a data plotting and analysis tool
LemonPOS – a point-of-sale application for small and mid-size businesses
Tellico – a collection organizer
System, utilities, and accessibility:
KMag – a screen magnifying tool
KMouseTool – automatic mouse click
KMouth – a speech synthesizer frontend
(Discontinued and unmaintained applications are not itemized here.)
Releases:
The KDE Applications Bundle is released every four months, with bugfix releases in each intervening month. A date-based version scheme is used, composed of the year and month; a third digit is used for bugfix releases. With the April 2021 release, the KDE Applications Bundle was renamed KDE Gear. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Terpene synthase C terminal domain**
Terpene synthase C terminal domain:
In molecular biology, this protein domain belongs to the terpene synthase (TPS) family. Its role is to synthesize terpenes, which form part of primary metabolism, such as sterols and carotene, and also part of secondary metabolism. This entry focuses on the C-terminal domain of the TPS protein.
Function:
Terpene synthases have a role in producing important molecules in metabolism; these molecules are part of a large group called terpenoids. In particular, the C-terminal domain catalyzes the cyclization of geranyl diphosphate, orienting and stabilizing multiple reactive carbocation intermediates. In simpler terms, the C-terminal domain aids the synthesis of new molecules.
Structure:
It is thought to have at least two alpha helices.
Conservation:
Sequences containing this protein domain belong to the terpene synthase family. It has been suggested that this gene family be designated tps (for terpene synthase). Sequence comparisons reveal similarities between the monoterpene (C10) synthases, sesquiterpene (C15) synthases and the diterpene (C20) synthases. The family has been split into six subgroups on the basis of phylogeny, called Tpsa–Tpsf.
Tpsa includes vetispiradiene synthase.
Tpsb includes (-)-limonene synthase.
Tpsc includes copalyl diphosphate synthase (kaurene synthase A).
Tpsd includes taxadiene synthase, pinene synthase, and myrcene synthase.
Tpse includes ent-kaurene synthase B.
Tpsf includes linalool synthase. In the fungus Phaeosphaeria sp. (strain L487), the synthesis of ent-kaurene from geranylgeranyl diphosphate is promoted by a single bifunctional protein. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PIGS (gene)**
PIGS (gene):
GPI transamidase component PIG-S is an enzyme that in humans is encoded by the PIGS gene.
PIGS (gene):
This gene encodes a protein that is involved in GPI-anchor biosynthesis. The glycosylphosphatidylinositol (GPI) anchor is a glycolipid found on many blood cells and serves to anchor proteins to the cell surface. This gene encodes an essential component of the multisubunit enzyme, GPI transamidase. GPI transamidase mediates GPI anchoring in the endoplasmic reticulum, by catalyzing the transfer of fully assembled GPI units to proteins. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Immortality**
Immortality:
Immortality is the concept of eternal life. Some modern species may possess biological immortality. Some scientists, futurists, and philosophers have theorized about the immortality of the human body, with some suggesting that human immortality may be achievable in the first few decades of the 21st century with the help of certain technologies such as mind uploading (digital immortality). Other advocates believe that life extension is a more achievable goal in the short term, with immortality awaiting further research breakthroughs. The absence of aging would provide humans with biological immortality, but not invulnerability to death by disease or injury. In the former view, whether bodily immortality is delivered within the coming years depends chiefly on research (and, in the case of immortality through an immortalized cell line, on neuron research in particular); in the latter view, it is a goal still awaited. What form an unending human life would take, or whether an immaterial soul exists and possesses immortality, has been a major point of focus of religion, as well as the subject of speculation and debate. In religious contexts, immortality is often stated to be one of the promises of divinities to human beings who perform virtue or follow divine law.
Definitions:
Scientific:
Life extension technologies claim to be developing a path to complete rejuvenation. Cryonics holds out the hope that the dead can be revived in the future, following sufficient medical advancements. While, as shown with creatures such as hydra and planarian worms, it is indeed possible for a creature to be biologically immortal, these are animals which are physiologically very different from humans, and it is not known if something comparable will ever be possible for humans.
Definitions:
Religious:
Immortality in religion usually refers either to belief in physical immortality or to a more spiritual afterlife. In traditions such as ancient Egyptian beliefs, Mesopotamian beliefs and ancient Greek beliefs, the immortal gods were accordingly considered to have physical bodies. In Mesopotamian and Greek religion, the gods also made certain men and women physically immortal, whereas in Christianity many believe that all true believers will be resurrected to physical immortality. Similar beliefs that physical immortality is possible are held by Rastafarians and Rebirthers.
Physical immortality:
Physical immortality is a state of life that allows a person to avoid death and maintain conscious thought. It can mean the unending existence of a person from a physical source other than organic life, such as a computer.
Pursuit of physical immortality before the advent of modern science included alchemists seeking to create the Philosopher's Stone, and various cultures' legends such as the Fountain of Youth or the Peaches of Immortality inspiring attempts at discovering elixirs of life.
Physical immortality:
Modern scientific trends, such as cryonics, digital immortality, breakthroughs in rejuvenation, or predictions of an impending technological singularity, aim at genuine human physical immortality, but must still overcome all causes of death to succeed.
Causes of death:
There are three main causes of death: aging, disease, and injury.
Physical immortality:
Aging:
Aubrey de Grey, a leading researcher in the field, defines aging as "a collection of cumulative changes to the molecular and cellular structure of an adult organism, which result from essential metabolic processes, but which also, once they progress far enough, increasingly disrupt metabolism, resulting in pathology and death." The current causes of aging in humans are cell loss (without replacement), DNA damage, oncogenic nuclear mutations and epimutations, cell senescence, mitochondrial mutations, lysosomal aggregates, extracellular aggregates, random extracellular cross-linking, immune system decline, and endocrine changes. Eliminating aging would require finding a solution to each of these causes, a program de Grey calls engineered negligible senescence. There is also a huge body of knowledge indicating that aging is characterized by the loss of molecular fidelity.
Physical immortality:
Disease:
Disease is theoretically surmountable by technology. In short, it is an abnormal condition affecting the body of an organism, something the body does not typically have to deal with given its natural makeup. Human understanding of genetics is leading to cures and treatments for a myriad of previously incurable diseases. The mechanisms by which other diseases do damage are becoming better understood. Sophisticated methods of detecting diseases early are being developed. Preventative medicine is becoming better understood. Neurodegenerative diseases like Parkinson's and Alzheimer's may soon be curable with the use of stem cells. Breakthroughs in cell biology and telomere research are leading to treatments for cancer. Vaccines are being researched for AIDS and tuberculosis. Genes associated with type 1 diabetes and certain types of cancer have been discovered, allowing for new therapies to be developed. Artificial devices attached directly to the nervous system may restore sight to the blind. Drugs are being developed to treat a myriad of other diseases and ailments.
Physical immortality:
Trauma:
Physical trauma would remain as a threat to perpetual physical life, as an otherwise immortal person would still be subject to unforeseen accidents or catastrophes. The speed and quality of paramedic response remains a determining factor in surviving severe trauma. A body that could automatically repair itself from severe trauma, such as speculated uses for nanotechnology, would mitigate this factor. The brain cannot be put at risk of trauma if a continuous physical life is to be maintained. This aversion to trauma risk would naturally result in significant behavioral changes that would render physical immortality undesirable for some people.
Physical immortality:
Environmental change:
Organisms otherwise unaffected by these causes of death would still face the problem of obtaining sustenance (whether from currently available agricultural processes or from hypothetical future technological processes) in the face of changing availability of suitable resources as environmental conditions change. After avoiding aging, disease, and trauma, death through resource limitation, such as hypoxia or starvation, would still be possible.
Physical immortality:
If there is no limitation on the degree of gradual mitigation of risk then it is possible that the cumulative probability of death over an infinite horizon is less than certainty, even when the risk of fatal trauma in any finite period is greater than zero. Mathematically, this is an aspect of achieving ‘actuarial escape velocity’.
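To make this claim concrete, here is a short formalization (ours, not the source's): write $p_n$ for the probability of fatal trauma in year $n$; the probability of surviving over an infinite horizon is an infinite product, and it stays positive exactly when the yearly risks shrink fast enough for their sum to converge:

$$\Pr(\text{survive forever}) \;=\; \prod_{n=1}^{\infty} (1 - p_n) \;>\; 0 \quad\Longleftrightarrow\quad \sum_{n=1}^{\infty} p_n < \infty \qquad (0 \le p_n < 1).$$

For instance, if risk were halved every year, $p_n = p \, 2^{-n}$ with $0 < p < 1$, each year would still carry a positive risk of death, yet $\sum_n p_n = p$ is finite, so the probability of surviving forever would be positive.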
Physical immortality:
Biological immortality:
Biological immortality is an absence of aging. Specifically it is the absence of a sustained increase in rate of mortality as a function of chronological age. A cell or organism that does not experience aging, or ceases to age at some point, is biologically immortal. Biologists have chosen the word "immortal" to designate cells that are not limited by the Hayflick limit, where cells no longer divide because of DNA damage or shortened telomeres. The first and still most widely used immortal cell line is HeLa, developed from cells taken from the malignant cervical tumor of Henrietta Lacks without her consent in 1951. Prior to the 1961 work of Leonard Hayflick, there was the erroneous belief fostered by Alexis Carrel that all normal somatic cells are immortal. By preventing cells from reaching senescence one can achieve biological immortality; telomeres, a "cap" at the end of DNA, are thought to be the cause of cell aging. Every time a cell divides the telomere becomes a bit shorter; when it is finally worn down, the cell is unable to split and dies. Telomerase is an enzyme which rebuilds the telomeres in stem cells and cancer cells, allowing them to replicate an infinite number of times. No definitive work has yet demonstrated that telomerase can be used in human somatic cells to prevent healthy tissues from aging. On the other hand, scientists hope to be able to grow organs with the help of stem cells, allowing organ transplants without the risk of rejection, another step in extending human life expectancy. These technologies are the subject of ongoing research, and are not yet realized.
Physical immortality:
Biologically immortal species:
Life defined as biologically immortal is still susceptible to causes of death besides aging, including disease and trauma, as defined above. Notable immortal species include:
Bacteria – Bacteria reproduce through binary fission. A parent bacterium splits itself into two identical daughter cells, which eventually then split themselves in half. This process repeats, thus making the bacterium essentially immortal. A 2005 PLoS Biology paper suggests that after each division the daughter cells can be identified as the older and the younger, and the older is slightly smaller, weaker, and more likely to die than the younger.
Physical immortality:
Turritopsis dohrnii, a jellyfish (phylum Cnidaria, class Hydrozoa, order Anthoathecata), after becoming a sexually mature adult, can transform itself back into a polyp using the cell conversion process of transdifferentiation. Turritopsis dohrnii repeats this cycle, meaning that it may have an indefinite lifespan. Its immortal adaptation has allowed it to spread from its original habitat in the Caribbean to "all over the world".
Physical immortality:
Hydra is a genus belonging to the phylum Cnidaria, the class Hydrozoa and the order Anthomedusae. They are simple fresh-water predatory animals possessing radial symmetry.
Physical immortality:
Evolution of aging:
As the existence of biologically immortal species demonstrates, there is no thermodynamic necessity for senescence: a defining feature of life is that it takes in free energy from the environment and unloads its entropy as waste. Living systems can even build themselves up from seed, and routinely repair themselves. Aging is therefore presumed to be a byproduct of evolution, but why mortality should be selected for remains a subject of research and debate. Programmed cell death and the telomere "end replication problem" are found even in the earliest and simplest of organisms. This may be a tradeoff between selecting for cancer and selecting for aging. Modern theories on the evolution of aging include the following:
Mutation accumulation is a theory formulated by Peter Medawar in 1952 to explain how evolution would select for aging. Essentially, aging is never selected against, as organisms have offspring before the mortal mutations surface in an individual.
Physical immortality:
Antagonistic pleiotropy is a theory proposed as an alternative by George C. Williams, a critic of Medawar, in 1957. In antagonistic pleiotropy, genes carry effects that are both beneficial and detrimental. In essence this refers to genes that offer benefits early in life, but exact a cost later on, i.e. decline and death.
The disposable soma theory was proposed in 1977 by Thomas Kirkwood, which states that an individual body must allocate energy for metabolism, reproduction, and maintenance, and must compromise when there is food scarcity. Compromise in allocating energy to the repair function is what causes the body gradually to deteriorate with age, according to Kirkwood.
Physical immortality:
Immortality of the germ line:
Individual organisms ordinarily age and die, while the germlines which connect successive generations are potentially immortal. The basis for this difference is a fundamental problem in biology. The Russian biologist and historian Zhores A. Medvedev considered that the accuracy of genome replicative and other synthetic systems alone cannot explain the immortality of germ lines. Rather Medvedev thought that known features of the biochemistry and genetics of sexual reproduction indicate the presence of unique information maintenance and restoration processes at the different stages of gametogenesis. In particular, Medvedev considered that the most important opportunities for information maintenance of germ cells are created by recombination during meiosis and DNA repair; he saw these as processes within the germ cells that were capable of restoring the integrity of DNA and chromosomes from the types of damage that cause irreversible aging in somatic cells.
Physical immortality:
Prospects for human biological immortality:
Life-extending substances:
Some scientists believe that boosting the amount or proportion of telomerase in the body, a naturally forming enzyme that helps maintain the protective caps at the ends of chromosomes, could prevent cells from dying and so may ultimately lead to extended, healthier lifespans. A team of researchers at the Spanish National Cancer Centre (Madrid) tested the hypothesis on mice. It was found that those mice which were "genetically engineered to produce 10 times the normal levels of telomerase lived 50% longer than normal mice". In normal circumstances, without the presence of telomerase, if a cell divides repeatedly, at some point all the progeny will reach their Hayflick limit. With the presence of telomerase, each dividing cell can replace the lost bit of DNA, and any single cell can then divide unbounded. While this unbounded growth property has excited many researchers, caution is warranted in exploiting this property, as exactly this same unbounded growth is a crucial step in enabling cancerous growth. If an organism can replicate its body cells faster, then it would theoretically stop aging.
Physical immortality:
Embryonic stem cells express telomerase, which allows them to divide repeatedly and form the individual. In adults, telomerase is highly expressed in cells that need to divide regularly (e.g., in the immune system), whereas most somatic cells express it only at very low levels in a cell-cycle dependent manner.
Physical immortality:
Technological immortality, biological machines, and "swallowing the doctor":
Technological immortality is the prospect for much longer life spans made possible by scientific advances in a variety of fields: nanotechnology, emergency room procedures, genetics, biological engineering, regenerative medicine, microbiology, and others. Contemporary life spans in the advanced industrial societies are already markedly longer than those of the past because of better nutrition, availability of health care, standard of living and bio-medical scientific advances. Technological immortality predicts further progress for the same reasons over the near term. An important aspect of current scientific thinking about immortality is that some combination of human cloning, cryonics or nanotechnology will play an essential role in extreme life extension. Robert Freitas, a nanorobotics theorist, suggests tiny medical nanorobots could be created to go through human bloodstreams, find dangerous things like cancer cells and bacteria, and destroy them. Freitas anticipates that gene-therapies and nanotechnology will eventually make the human body effectively self-sustainable and capable of living indefinitely in empty space, short of severe brain trauma. This supports the theory that we will be able to continually create biological or synthetic replacement parts to replace damaged or dying ones. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and using as yet hypothetical biological machines, in his 1986 book Engines of Creation. Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.
Physical immortality:
Cryonics:
Cryonics, the practice of preserving organisms (either intact specimens or only their brains) for possible future revival by storing them at cryogenic temperatures where metabolism and decay are almost completely stopped, can serve as a 'pause' for those who believe that life extension technologies will not develop sufficiently within their lifetime. Ideally, cryonics would allow clinically dead people to be brought back in the future after cures to the patients' diseases have been discovered and aging is reversible. Modern cryonics procedures use a process called vitrification, which creates a glass-like state rather than freezing as the body is brought to low temperatures. This process reduces the risk of ice crystals damaging the cell structure, which would be especially detrimental to the cell structures in the brain, as their minute arrangement underlies the individual's mind.
Physical immortality:
Mind-to-computer uploading:
One idea that has been advanced involves uploading an individual's habits and memories via direct mind-computer interface. The individual's memory may be loaded to a computer or to a new organic body. Extropian futurists like Moravec and Kurzweil have proposed that, thanks to exponentially growing computing power, it will someday be possible to upload human consciousness onto a computer system, and exist indefinitely in a virtual environment.
Physical immortality:
This could be accomplished via advanced cybernetics, where computer hardware would initially be installed in the brain to help sort memory or accelerate thought processes. Components would be added gradually until the person's entire brain functions were handled by artificial devices, avoiding sharp transitions that would raise issues of identity and the risk of the person being declared dead, and thus not a legitimate owner of his or her property. After this point, the human body could be treated as an optional accessory, and the program implementing the person could be transferred to any sufficiently powerful computer.
Physical immortality:
Another possible mechanism for mind upload is to perform a detailed scan of an individual's original, organic brain and simulate the entire structure in a computer. What level of detail such scans and simulations would need to achieve to emulate awareness, and whether the scanning process would destroy the brain, is still to be determined. It is suggested that achieving immortality through this mechanism would require specific consideration to be given to the role of consciousness in the functions of the mind. An uploaded mind would only be a copy of the original mind, and not the conscious mind of the living entity associated in such a transfer. Without a simultaneous upload of consciousness, the original living entity remains mortal, thus not achieving true immortality.
Physical immortality:
Research on neural correlates of consciousness is as yet inconclusive on this issue. Whatever the route to mind upload, persons in this state could then be considered essentially immortal, short of loss or traumatic destruction of the machines that maintained them.
Physical immortality:
Cybernetics:
Transforming a human into a cyborg can include brain implants or extracting a human processing unit and placing it in a robotic life-support system. Even replacing biological organs with robotic ones could increase life span (e.g. pacemakers), and, depending on the definition, many technological upgrades to the body, like genetic modifications or the addition of nanobots, would qualify an individual as a cyborg. Some people believe that such modifications would make one impervious to aging and disease and theoretically immortal unless killed or destroyed.
Religious views:
As late as 1952, the editorial staff of the Syntopicon found in their compilation of the Great Books of the Western World that "The philosophical issue concerning immortality cannot be separated from issues concerning the existence and nature of man's soul." Thus, the vast majority of speculation on immortality before the 21st century concerned the nature of the afterlife.
Religious views:
Ancient Greek religion:
Immortality in ancient Greek religion originally always included an eternal union of body and soul, as can be seen in Homer, Hesiod, and various other ancient texts. The soul was considered to have an eternal existence in Hades, but without the body the soul was considered dead. Although almost everybody had nothing to look forward to but an eternal existence as a disembodied dead soul, a number of men and women were considered to have gained physical immortality and been brought to live forever in either Elysium, the Islands of the Blessed, heaven, the ocean, or literally right under the ground.
Religious views:
Among those humans made immortal were Amphiaraus, Ganymede, Ino, Iphigenia, Menelaus, Peleus, and a great number of those who fought in the Trojan and Theban wars.
Religious views:
Some were considered to have died and been resurrected before they achieved physical immortality. Asclepius was killed by Zeus only to be resurrected and transformed into a major deity. In some versions of the Trojan War myth, Achilles, after being killed, was snatched from his funeral pyre by his divine mother Thetis, resurrected, and brought to an immortal existence in either Leuce, the Elysian plains, or the Islands of the Blessed. Memnon, who was killed by Achilles, seems to have received a similar fate. Alcmene, Castor, Heracles, and Melicertes were also among the figures sometimes considered to have been resurrected to physical immortality. According to Herodotus' Histories, the 7th century BCE sage Aristeas of Proconnesus was first found dead, after which his body disappeared from a locked room. Later he was found not only to have been resurrected but to have gained immortality.
Religious views:
The parallel between these traditional beliefs and the later resurrection of Jesus was not lost on early Christians, as Justin Martyr argued: "when we say ... Jesus Christ, our teacher, was crucified and died, and rose again, and ascended into heaven, we propose nothing different from what you believe regarding those whom you consider sons of Zeus." The philosophical idea of an immortal soul was a belief first appearing with either Pherecydes or the Orphics, and most importantly advocated by Plato and his followers. This, however, never became the general norm in Hellenistic thought. As may be witnessed even into the Christian era, not least by the complaints of various philosophers over popular beliefs, many or perhaps most traditional Greeks maintained the conviction that certain individuals were resurrected from the dead and made physically immortal and that others could only look forward to an existence as disembodied and dead, though everlasting, souls.
Religious views:
Buddhism:
One of the three marks of existence in Buddhism is anattā, "non-self". This teaching states that the body does not have an eternal soul but is composed of five skandhas or aggregates. Additionally, another mark of existence is impermanence, also called anicca, which runs directly counter to concepts of immortality or permanence. According to one Tibetan Buddhist teaching, Dzogchen, individuals can transform the physical body into an immortal body of light called the rainbow body.
Religious views:
Christianity:
Christian theology holds that Adam and Eve lost physical immortality for themselves and all their descendants through the Fall, although this initial "imperishability of the bodily frame of man" was "a preternatural condition". Christians who profess the Nicene Creed believe that every dead person (whether they believed in Christ or not) will be resurrected from the dead at the Second Coming; this belief is known as universal resurrection. Paul the Apostle, in keeping with his background as a Pharisee (a Jewish social movement that held to a future physical resurrection), proclaims an amalgamated view of resurrected believers in which both the physical and the spiritual are rebuilt in the likeness of the post-resurrection Christ, who "will transform our lowly body to be like his glorious body" (ESV). This thought mirrors Paul's depiction of believers having been "buried therefore with him [that is, Christ] by baptism into death" (ESV). N.T. Wright, a theologian and former Bishop of Durham, has said many people forget the physical aspect of what Jesus promised. He told Time: "Jesus' resurrection marks the beginning of a restoration that he will complete upon his return. Part of this will be the resurrection of all the dead, who will 'awake', be embodied and participate in the renewal." Wright says John Polkinghorne, a physicist and a priest, has put it this way: "God will download our software onto his hardware until the time he gives us new hardware to run the software again for ourselves." That gets to two things nicely: that the period after death (the Intermediate state) is a period when we are in God's presence but not active in our own bodies, and also that the more important transformation will be when we are again embodied and administering Christ's kingdom. This kingdom will consist of Heaven and Earth "joined together in a new creation", he said.
Religious views:
Christian apocrypha include immortal human figures such as Cartaphilus, who were cursed with physical immortality for various transgressions against Christ during the Passion. The medieval Waldensians believed in the immortality of the soul. Leaders of sects such as John Asgill and John Wroe taught followers that physical immortality was possible. Many Patristic writers have connected the immortal rational soul to the image of God found in Genesis 1:26. Among them are Athanasius of Alexandria and Clement of Alexandria, who say that the immortal rational soul itself is the image of God. Even early Christian liturgies exhibit this connection between the immortal rational soul and the creation of humanity in the image of God.
Religious views:
Hinduism Hindus believe in an immortal soul which is reincarnated after death. According to Hinduism, people repeat a process of life, death, and rebirth in a cycle called samsara. If they live their life well, their karma improves and their station in the next life will be higher, and conversely lower if they live their life poorly. After many lifetimes of perfecting its karma, the soul is freed from the cycle and lives in perpetual bliss. There is no place of eternal torment in Hinduism, although if a soul consistently lives very evil lives, it could work its way down to the very bottom of the cycle. There are explicit renderings in the Upanishads alluding to a physically immortal state brought about by purification and sublimation of the five elements that make up the body. For example, in the Shvetashvatara Upanishad (Chapter 2, Verse 12), it is stated: "When earth, water, fire, air and sky arise, that is to say, when the five attributes of the elements, mentioned in the books on yoga, become manifest, then the yogi's body becomes purified by the fire of yoga and he is free from illness, old age and death." Another view of immortality is traced to the Vedic tradition by the interpretation of Maharishi Mahesh Yogi: "That man indeed whom these (contacts) do not disturb, who is even-minded in pleasure and pain, steadfast, he is fit for immortality, O best of men."
Religious views:
To Maharishi Mahesh Yogi, the verse means, "Once a man has become established in the understanding of the permanent reality of life, his mind rises above the influence of pleasure and pain. Such an unshakable man passes beyond the influence of death and in the permanent phase of life: he attains eternal life ... A man established in the understanding of the unlimited abundance of absolute existence is naturally free from existence of the relative order. This is what gives him the status of immortal life." An Indian Tamil saint known as Vallalar claimed to have achieved immortality before disappearing forever from a locked room in 1874.
Religious views:
Judaism The traditional concept of an immaterial and immortal soul distinct from the body was not found in Judaism before the Babylonian exile, but developed as a result of interaction with Persian and Hellenistic philosophies. Accordingly, the Hebrew word nephesh, although translated as "soul" in some older English-language Bibles, actually has a meaning closer to "living being": it refers to a living, breathing conscious body rather than to an immortal soul. Nephesh was rendered in the Septuagint as ψυχή (psūchê), the Greek word for "soul".
Religious views:
In the New Testament, the Greek word traditionally translated "soul" (ψυχή) has substantially the same meaning as the Hebrew, without reference to an immortal soul. "Soul" may also refer to the whole person, the self, as in the "three thousand souls" converted in Acts 2:41 (see Acts 3:23).
The Hebrew Bible speaks about Sheol (שאול), originally a synonym of the grave – the repository of the dead or the cessation of existence, until the resurrection of the dead. This doctrine of resurrection is mentioned explicitly only in Daniel 12:1–4 although it may be implied in several other texts. New theories arose concerning Sheol during the intertestamental period.
Religious views:
The views about immortality in Judaism are perhaps best exemplified by the various references to it in the Second Temple period. The concept of resurrection of the physical body is found in 2 Maccabees, according to which it will happen through recreation of the flesh. Resurrection of the dead is specified in detail in the extra-canonical books of Enoch and in the Apocalypse of Baruch. According to the British scholar of ancient Judaism P.R. Davies, there is "little or no clear reference ... either to immortality or to resurrection from the dead" in the Dead Sea Scrolls texts.
Religious views:
Both Josephus and the New Testament record that the Sadducees did not believe in an afterlife, but the sources vary on the beliefs of the Pharisees. The New Testament claims that the Pharisees believed in the resurrection, but does not specify whether this included the flesh or not. According to Josephus, who himself was a Pharisee, the Pharisees held that only the soul was immortal and the souls of good people will be reincarnated and "pass into other bodies", while "the souls of the wicked will suffer eternal punishment". The Book of Jubilees seems to refer to the resurrection of the soul only, or to a more general idea of an immortal soul. Rabbinic Judaism claims that the righteous dead will be resurrected in the Messianic Age, with the coming of the messiah. They will then be granted immortality in a perfect world. The wicked dead, on the other hand, will not be resurrected at all. This is not the only Jewish belief about the afterlife. The Tanakh is not specific about the afterlife, so there are wide differences in views and explanations among believers.
Religious views:
Taoism It is repeatedly stated in the Lüshi Chunqiu that death is unavoidable. Henri Maspero noted that many scholarly works frame Taoism as a school of thought focused on the quest for immortality. Isabelle Robinet asserts that Taoism is better understood as a way of life than as a religion, and that its adherents do not approach or view Taoism the way non-Taoist historians have done. In the Tractate of Actions and their Retributions, a traditional teaching, spiritual immortality can be awarded to people who do a certain number of good deeds and live a simple, pure life. A list of good deeds and sins is tallied to determine whether or not a mortal is worthy. Spiritual immortality in this definition allows the soul to leave the earthly realms of afterlife and go to pure realms in the Taoist cosmology.
Religious views:
Zoroastrianism Zoroastrians believe that on the fourth day after death, the human soul leaves the body and the body remains as an empty shell. Souls would go to either heaven or hell; these concepts of the afterlife in Zoroastrianism may have influenced Abrahamic religions. The Persian word for "immortal" is associated with the month "Amurdad", meaning "deathless" in Persian, in the Iranian calendar (near the end of July). The month of Amurdad or Ameretat is celebrated in Persian culture as ancient Persians believed the "Angel of Immortality" won over the "Angel of Death" in this month.
Philosophical arguments for the immortality of the soul:
Alcmaeon of Croton Alcmaeon of Croton argued that the soul is continuously and ceaselessly in motion. The exact form of his argument is unclear, but it appears to have influenced Plato, Aristotle, and other later writers.
Philosophical arguments for the immortality of the soul:
Plato Plato's Phaedo advances four arguments for the soul's immortality: The Cyclical Argument, or Opposites Argument, explains that Forms are eternal and unchanging, and since the soul always brings life, it must not die and is necessarily "imperishable". As the body is mortal and subject to physical death, the soul must be its indestructible opposite. Plato then suggests the analogy of fire and cold: if the form of cold is imperishable and fire, its opposite, came within close proximity, the cold would have to withdraw intact, as the soul does during death. This could be likened to the idea of the opposite charges of magnets.
Philosophical arguments for the immortality of the soul:
The Theory of Recollection explains that we possess some non-empirical knowledge (e.g. The Form of Equality) at birth, implying the soul existed before birth to carry that knowledge. Another account of the theory is found in Plato's Meno, although in that case Socrates implies anamnesis (previous knowledge of everything) whereas he is not so bold in Phaedo.
The Affinity Argument explains that invisible, immortal, and incorporeal things are different from visible, mortal, and corporeal things. Our soul is of the former kind, while our body is of the latter, so when our bodies die and decay, our soul will continue to live.
Philosophical arguments for the immortality of the soul:
The Argument from Form of Life or The Final Argument explains that the Forms, incorporeal and static entities, are the cause of all things in the world, and all things participate in Forms. For example, beautiful things participate in the Form of Beauty; the number four participates in the Form of the Even, etc. The soul, by its very nature, participates in the Form of Life, which means the soul can never die.
Philosophical arguments for the immortality of the soul:
Plotinus Plotinus offers a version of the argument that Kant calls "The Achilles of Rationalist Psychology". Plotinus first argues that the soul is simple, then notes that a simple being cannot decompose. Many subsequent philosophers have argued both that the soul is simple and that it must be immortal. The tradition arguably culminates with Moses Mendelssohn's Phaedon.
Metochites Theodore Metochites argues that part of the soul's nature is to move itself, but that a given movement will cease only if what causes the movement is separated from the thing moved – an impossibility if they are one and the same.
Avicenna Avicenna argued for the distinctness of the soul and the body, and the incorruptibility of the former.
Aquinas The full argument for the immortality of the soul and Thomas Aquinas' elaboration of Aristotelian theory is found in Question 75 of the First Part of the Summa Theologica.
Descartes René Descartes endorses the claim that the soul is simple, and also that this entails that it cannot decompose. Descartes does not address the possibility that the soul might suddenly disappear.
Leibniz In early work, Gottfried Wilhelm Leibniz endorses a version of the argument from the simplicity of the soul to its immortality, but like his predecessors, he does not address the possibility that the soul might suddenly disappear. In his Monadology he advances a sophisticated novel argument for the immortality of monads.
Philosophical arguments for the immortality of the soul:
Moses Mendelssohn Moses Mendelssohn's Phaedon is a defense of the simplicity and immortality of the soul. It is a series of three dialogues, revisiting the Platonic dialogue Phaedo, in which Socrates argues for the immortality of the soul, in preparation for his own death. Many philosophers, including Plotinus, Descartes, and Leibniz, argue that the soul is simple, and that because simples cannot decompose they must be immortal. In the Phaedon, Mendelssohn addresses gaps in earlier versions of this argument (an argument that Kant calls the Achilles of Rationalist Psychology). The Phaedon contains an original argument for the simplicity of the soul, and also an original argument that simples cannot suddenly disappear. It contains further original arguments that the soul must retain its rational capacities as long as it exists.
Ethics:
The possibility of clinical immortality raises a host of medical, philosophical, and religious issues and ethical questions. These include persistent vegetative states, the nature of personality over time, technology to mimic or copy the mind or its processes, social and economic disparities created by longevity, and survival of the heat death of the universe.
Undesirability Physical immortality has also been imagined as a form of eternal torment, as in the myth of Tithonus, or in Mary Shelley's short story The Mortal Immortal, where the protagonist lives to witness everyone he cares about die around him. For additional examples in fiction, see Immortality in fiction.
Ethics:
Kagan (2012) argues that any form of human immortality would be undesirable. Kagan's argument takes the form of a dilemma. Either our characters remain essentially the same in an immortal afterlife, or they do not: If our characters remain basically the same – that is, if we retain more or less the desires, interests, and goals that we have now – then eventually, over an infinite stretch of time, we will get bored and find eternal life unbearably tedious.
Ethics:
If, on the other hand, our characters are radically changed – e.g., by God periodically erasing our memories or giving us rat-like brains that never tire of certain simple pleasures – then such a person would be too different from our current self for us to care much what happens to them. Either way, Kagan argues, immortality is unattractive. The best outcome, he argues, would be for humans to live as long as they desired and then to accept death gratefully as rescuing us from the unbearable tedium of immortality.
Sociology:
If human beings were to achieve immortality, there would most likely be a change in the world's social structures. Sociologists argue that human beings' awareness of their own mortality shapes their behavior. With advances in medical technology extending human life, serious consideration may need to be given to future social structures. The world is already experiencing a global demographic shift of increasingly ageing populations with lower replacement rates. The social changes made to accommodate this new population shift may be able to offer insight into the possibility of an immortal society.
Sociology:
Sociology has a growing body of literature on the sociology of immortality, which details the different attempts at reaching immortality (whether actual or symbolic) and their prominence in the 21st century. These attempts include renewed attention to the dead in the West, practices of online memorialization, and biomedical attempts to increase longevity. These attempts at reaching immortality and their effects on societal structures have led some to argue that we are becoming a "Postmortal Society". Foreseen changes to societies derived from the pursuit of immortality would encompass societal paradigms and worldviews, as well as the institutional landscape. Similarly, different forms of reaching immortality might entail a significant reconfiguration of societies, from becoming more technologically oriented to becoming more aligned with nature. Immortality would also increase population growth, bringing with it many consequences, for example the impact of population growth on the environment and planetary boundaries.
Politics:
Although some scientists state that radical life extension, delaying and stopping aging are achievable, there are no international or national programs focused on stopping aging or on radical life extension. In 2012 in Russia, and then in the United States, Israel and the Netherlands, pro-immortality political parties were launched. They aimed to provide political support to anti-aging and radical life extension research and technologies, to enable a transition to the next steps of radical life extension, life without aging, and finally immortality, and to make access to such technologies possible for most currently living people.
Symbols:
There are numerous symbols representing immortality. The ankh is an Egyptian symbol of life that holds connotations of immortality when depicted in the hands of the gods and pharaohs, who were seen as having control over the journey of life. The Möbius strip in the shape of a trefoil knot is another symbol of immortality. Symbolic representations of infinity or the life cycle are also often used to represent immortality, depending on the context in which they are placed. Other examples include the Ouroboros, the Chinese fungus of longevity, the ten kanji, the phoenix, the peacock in Christianity, and the colors amaranth (in Western culture) and peach (in Chinese culture). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Journal of Theological Interpretation**
Journal of Theological Interpretation:
The Journal of Theological Interpretation is a biannual peer-reviewed academic journal covering theology and biblical hermeneutics. It was established in 2007 and is published by Eisenbrauns. The editor-in-chief is Joel B. Green (Fuller Theological Seminary). The journal is abstracted and indexed in ATLA Religion Database. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ribosylnicotinamide kinase**
Ribosylnicotinamide kinase:
In enzymology, a ribosylnicotinamide kinase (EC 2.7.1.22) is an enzyme that catalyzes the chemical reaction ATP + N-ribosylnicotinamide ⇌ ADP + nicotinamide ribonucleotide. Thus, the two substrates of this enzyme are ATP and N-ribosylnicotinamide, whereas its two products are ADP and nicotinamide ribonucleotide.
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:N-ribosylnicotinamide 5'-phosphotransferase. This enzyme is also called ribosylnicotinamide kinase (phosphorylating). This enzyme participates in nicotinate and nicotinamide metabolism.
Health:
Studies show potential for obesity treatment and for a longer, healthier life. Ribosylnicotinamide kinase seems to activate genes similar to those activated by resveratrol.
Food:
The enzyme can be found in milk and beer. Since the molecules are difficult to detect, it is expected that many more food products contain ribosylnicotinamide kinase. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Yellow Card Scheme**
Yellow Card Scheme:
The Yellow Card Scheme is the United Kingdom's system for collecting information on suspected adverse drug reactions (ADRs) to medicines. The scheme allows the safety of the medicines and vaccines that are on the market to be monitored.
History:
The scheme was founded in 1964 after the thalidomide disaster, and was developed by Bill Inman. It is run by the Medicines and Healthcare products Regulatory Agency (MHRA) and the Commission on Human Medicines. It was extended to hospital pharmacists in 1997, and to community pharmacists in 1999. The Yellow Card Centre Scotland is a joint venture between MHRA and the Scottish Government.
Scope:
Suspected adverse reactions are collected on all licensed medicines and vaccines, whether issued on prescription or bought over the counter from a pharmacist or supermarket. The scheme also includes all herbal preparations and unlicensed medicines. Adverse reactions can be reported by anyone; this is usually done by healthcare professionals – including doctors, pharmacists and nurses – but patients and carers can also make reports.
Scope:
The types of adverse reactions that should be reported are:
- those that have caused death or a serious illness
- any adverse reaction, however minor, if associated with a new medicine or one that is under continued monitoring (highlighted in the British National Formulary with a ▼ black triangle)
- any adverse reaction, however minor, if associated with a child (under 18 years of age) or in pregnancy
Usage:
Reports can be entered through the MHRA's website, or via a smartphone app which is available for iOS and Android devices. The app can also provide news and alerts to users. Yellow Cards are available from pharmacies, and a few are presented near the back of the BNF as tear-off pages; copies may also be obtained by telephoning +44 (0) 808 100 3352. The scheme provides forms that allow both health professionals and members of the public to report suspected side effects. NHS Digital publishes an information standard DCB1582 for electronic submission of adverse reactions by IT systems (until 2014, this was ISB 1582 from the Information Standards Board). The specification is based on the ICH E2B (R2) international standard format. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vesicular texture**
Vesicular texture:
Vesicular texture is a volcanic rock texture characterized by a rock being pitted with many cavities (known as vesicles) at its surface and inside.
Vesicular texture:
This texture is common in aphanitic, or glassy, igneous rocks that have come to the surface of the earth, a process known as extrusion. As magma rises to the surface, the pressure on it decreases. When this happens, gases dissolved in the magma are able to come out of solution, forming gas bubbles (the cavities) inside it. When the magma finally reaches the surface as lava and cools, the rock solidifies around the gas bubbles and traps them inside, preserving them as gas-filled holes called vesicles.
Vesicular texture:
A related texture is amygdaloidal in which the volcanic rock, usually basalt or andesite, has cavities, or vesicles, that are filled with secondary minerals, such as zeolites, calcite, quartz, or chalcedony. Individual cavity fillings are termed amygdules (American usage) or amygdales (British usage). Sometimes these can be sources of semi-precious or precious stones such as diamonds.
Rock types that display a vesicular texture include pumice and scoria. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Artificial intelligence of things**
Artificial intelligence of things:
The Artificial Intelligence of Things (AIoT) is the combination of Artificial intelligence (AI) technologies with the Internet of things (IoT) infrastructure to achieve more efficient IoT operations, improve human-machine interactions and enhance data management and analytics.
Artificial intelligence of things:
In 2018, KPMG published a foresight study on the future of AI including scenarios until 2040. The analysts describe a scenario in detail where a community of things would see each device also contain its own AI that could link autonomously to other AIs to, together, perform tasks intelligently. Value creation would be controlled and executed in real-time using swarm intelligence. Many industries could be transformed with the application of swarm intelligence, including: automotive, cloud, medical, military, research, and technology.
Artificial intelligence of things:
In the AIoT, an important facet is AI being performed on some Thing. In its purest form, this involves performing the AI on the device itself, at the edge (edge computing), with no need for external connections: there is no need for an Internet in AIoT. It is an evolution of the concept of the IoT, and that is where the comparison ends.
Artificial intelligence of things:
The combined power of AI and IoT promises to unlock unrealized customer value in a broad swath of industry verticals such as edge analytics, autonomous vehicles, personalized fitness, remote healthcare, precision agriculture, smart retail, predictive maintenance, and industrial automation.
Artificial Intelligence Through Medical Devices:
As defined by the 21st Century Cures Act in 2016, a medical device is a device that performs a function in healthcare with the intention of using it "in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body of man or other animals". Under the Federal Food, Drug, and Cosmetic Act, all AI systems falling within this definition are regulated by the Food and Drug Administration (FDA). Medical devices are classified into three classes by the FDA based on their uses and risks; the higher the risk, the stricter the control. Class I includes devices with the smallest risk, while Class III carries the greatest risk. The number of approved medical devices that utilize artificial intelligence or machine learning (AI/ML) has been increasing steadily, and by 2020 the FDA had approved a large number of such devices. A year later, the FDA released a regulatory framework for devices that use AI/ML software; in the EU, the Medical Device Regulation, which replaced the earlier Medical Device Directive, applies. As technology continues to improve, it has rapidly changed how the medical field works and diagnoses. Various AI applications can improve productivity and reduce medical errors in areas such as diagnosis and treatment selection, risk prediction, and disease stratification. AI also helps patients by providing access to their data through electronic health records and mobile apps, and by providing devices and sensors to specific patients who need such technologies. The need to protect patients' data is therefore critical: keeping patient data confidential in electronic records becomes increasingly difficult as data becomes integrated into clinical care. Patients' data may be easy for the patient to access, but this accessibility also brings skepticism about data protection.
Artificial Intelligence Through Medical Devices:
Technology and AI have combined to provide opportunities for better management of healthcare information and technology integration in the medical industry. AI is implemented to recognize abnormalities and to flag suspicious access to sensitive data by third parties. On the other hand, it will be necessary to rethink confidentiality and other core medical ethics principles in order to implement deep learning systems, since we cannot rely solely on technology.
Artificial Intelligence in Cloud Engineering:
Integrating AI into cloud engineering can help multiple professional fields maximize data collection, and can improve performance and efficiency through digital management.
Artificial Intelligence in Cloud Engineering:
Cloud engineering follows engineering methods applied to cloud computing and focuses on technological cloud services. In conceiving, developing, operating, and maintaining cloud computing systems, it adopts a systematic approach to commercialization, standardization, and governance. Among its diverse aspects are contributions from development engineering, software engineering, web development, performance engineering, security engineering, platform engineering, risk engineering, and quality engineering. Implementing AI into an information technology framework can establish smooth workloads and automate repetitive processes. Using these tools, organizations can better manage data as they accumulate greater amounts of collective data, and can integrate data recognition, classification, and management processes over time.
Artificial Intelligence in Cloud Engineering:
AI can bring efficiency to organizations by enabling strategic methods and saving time on repetitive tasks. By automating analysis, organizations can save time and become more efficient. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Noradrenergic cell group A6sc**
Noradrenergic cell group A6sc:
Noradrenergic cell group A6sc is a group of cells fluorescent for norepinephrine that are scattered in the nucleus subceruleus of the macaque. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Enovis**
Enovis:
Enovis is a medical technology company with a focus in orthopedics. The company was founded by brothers Mitchell and Steven Rales as the Colfax Corporation in 1995. Enovis is headquartered in Wilmington, Delaware and is listed on the NYSE as ENOV. The company has over 5,000 employees operating at 12 sites around the world.
History:
The company was founded in 1995 as the Colfax Corporation in Richmond, Virginia, by brothers Steven and Mitchell Rales. In August 1997, Colfax acquired approximately 93% of IMO's common stock through a public tender offer. At the time of the acquisition, IMO was a diversified industrial manufacturer, with $469.0 million in annual revenue and five business units: Boston Gear, IMO Pump, Morse Controls, Gems Sensors, and Roltra Morse. Simultaneously with the closing of the tender offer, IMO divested its Gems Sensors and Roltra Morse units to narrow the strategic focus to power transmission and fluid handling.
History:
In January 2012, Colfax completed the acquisition of Charter International, the UK-listed parent company of ESAB (welding and cutting) and Howden (air and gas handling products), for $2.4 billion. More than 25 acquisitions were completed from 2012–2019 to build-out the three industrial businesses, fluid handling, fabrication technologies (ESAB), and air & gas handling (Howden).
History:
In December 2017, Colfax sold its original fluid handling platform to Circor International for cash and stock with a total estimated aggregate consideration of $860 million. In November 2018, Colfax expanded into the medical device space by acquiring DJO Global from The Blackstone Group for $3.15bn. In May 2019, KPS Capital Partners (KPS) announced that it had signed an agreement to acquire Howden Turbo from Colfax Corporation for $1.8bn. In March 2021, Colfax announced that it would spin off its ESAB industrial business in a tax-efficient manner and pursue a focused medical technologies strategy. This spin-off was completed in April 2022, following which ESAB became an independent public company and Colfax was renamed Enovis. The spin-off was the final action taken to transform the portfolio of businesses from purely industrial to medical technologies; earlier actions included divesting the fluid handling and air and gas handling businesses in 2017 and 2019, respectively, and acquiring DJO Global in 2019. Earlier still, additional acquisitions had been made, and Colfax went public with an initial public offering in May 2008 as a fluid handling company. The headquarters were subsequently moved to Annapolis Junction, Maryland. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Range gate pull-off**
Range gate pull-off:
Range gate pull-off (RGPO) is an electronic warfare technique used to break radar lock-on. The basic concept is to produce a pulse of radio signal similar to the one that the target radar would produce when it reflects off the aircraft. This second pulse is then increasingly delayed in time so that the radar's range gate begins to follow the false pulse instead of the real reflection, pulling it off the target.
Range gate pull-off:
Doppler radars may not use range gates and instead select a single target by narrowly filtering frequencies on either side of the target's initial return. Against these radars, the related velocity gate pull-off (VGPO) can be used. These send a return signal that slowly changes in frequency, rather than time, hoping the radar's velocity gate will be pulled off the target in the same general fashion.
Range gate pull-off:
Pull-off belongs to the wider family of "deceptive jamming" concepts that use details of the target radar to their advantage, rather than attempting to simply overpower the radar's signal. Alternate names for "pull-off" include "stealing" and "walk-off". A related technique is angle deception jamming.
Description:
Range gates and strobing Even the earliest radar systems included a system to highlight a single selected target for further analysis. For instance, 1939's Gun-Laying Mark I, the British Army's first operational radar, used an on-screen cursor known as the strobe to highlight a single target. This worked by filtering out, or gating, signals that were not within the strobe's short time period, typically a few microseconds, corresponding to a range of a few hundred meters. The signal within the strobe's window was then sent to secondary displays where two operators would determine the azimuth and elevation of that single target, by keeping its blip centered in their displays. Similar systems were used by many radars by the mid-war period.
Description:
By the end of the war, many experiments were being carried out on automatic target following, or radar lock-on. In these systems, the operator would select a target using the strobe, and then circuits in the radar would automatically track the target in azimuth and elevation. This eliminated the need for the additional operators. Since the target's range would continue to change as it moved, the circuitry also attempted to keep the strobe centered in range. Some systems automated even the strobing; the AI Mark V was designed for single-seat fighter aircraft where the pilot would be too busy to adjust the strobe, and instead had a second system to sweep the strobe through a wide range and then lock onto the first signal it saw.
Description:
In the post-war era the circuitry that produced the strobe and filtered out other returns became more widely known as a range gate.
Range Pull-off During testing of a late-war radar design, the AI Mk. IX, a serious problem was found with the auto-follow system.
Description:
While this system was being developed, Bomber Command was pressing the Air Ministry to use "window", better known today as chaff, as a radar countermeasure. Fighter Command pointed out that the Germans could easily copy the system and use it against England, potentially re-opening The Blitz. It was suggested that the AI Mk. IX would ignore window because it decelerated rapidly after it was dropped, and would thus quickly pass out of the range gates and not be tracked. But exactly the opposite occurred in testing; the radar unerringly locked onto the window and the target disappeared from the display.
Description:
Range gate pull-off is essentially an electronic version of window. Instead of producing the secondary return by dropping a packet of foil reflectors, the second return is created by a transponder in the target aircraft. The transponder initially responds as rapidly as possible to the radar's signal, producing a second blip that overlaps the original. Over a period of time, it increasingly delays the return so that it falls "behind" the radar signal in time. The goal is to delay the signal so it counters the aircraft's motion, leaving a signal at what appears to be a (nearly) fixed location in space. If the radar was locked on to the aircraft, it will hopefully remain locked to this second pulse as the aircraft moves away from the original location. Eventually, the aircraft will fall outside the range gate and disappear, while the radar continues tracking the false signal. Thus, the false signal is said to "pull the range gate off the target".
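The pull-off dynamic is easy to see in a toy simulation. The sketch below is a deliberately simplified model, not any real radar's logic: the pulse shapes, gate width, and walk-off rate are invented numbers. A gate that re-centers on the strongest return inside its window ends up following the stronger, increasingly delayed false pulse away from the true target.

```python
import numpy as np

N_BINS = 1000                 # toy range axis: one bin = one range cell
GATE_HALF_WIDTH = 15          # gate spans +/- 15 bins around its center

def pulse(center, width=3.0, amplitude=1.0):
    """Gaussian blip standing in for a radar return."""
    r = np.arange(N_BINS)
    return amplitude * np.exp(-0.5 * ((r - center) / width) ** 2)

true_range = 400              # aircraft's actual range bin (held fixed for clarity)
gate_center = 400             # radar starts locked on the target

for sweep in range(60):
    jammer_delay = 1.2 * sweep                      # false pulse walks off ~1.2 bins/sweep
    returns = pulse(true_range)                     # weak "skin" reflection
    returns += pulse(true_range + jammer_delay, amplitude=3.0)  # stronger transponder pulse

    # The radar only looks inside its gate and re-centers on the strongest return.
    lo = max(0, int(gate_center) - GATE_HALF_WIDTH)
    hi = min(N_BINS, int(gate_center) + GATE_HALF_WIDTH)
    gate_center = lo + int(np.argmax(returns[lo:hi]))

print(f"true target: bin {true_range}, gate ended at: bin {gate_center}")
# The gate has walked ~70 bins past the target; the skin return now falls outside it.
```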
Description:
One way to reject the signal from the RGPO jammer is to note that the transponder always takes some non-zero time to respond. This means the signal will always have some component that represents the original "skin reflection" before the transponder signal is superimposed. On a plan-position indicator, the false signal will appear as a second dot at increasing distances from the first, which the operator can then manually strobe to regain lock. Alternately, if the operator is aware there is a jammer operating, they can look for the closest signal, representing the "skin reflection", and mute any following signals. This is easily accomplished in simple electronics, and is often referred to as a "leading-edge tracker". Such systems can be defeated by tracking the original radar signal and extracting its pulse repetition frequency (PRF). With even a basic measure of the PRF, the jammer can broadcast noise across the time frame of the skin reflection in order to obscure it. This can be particularly effective against leading-edge trackers, which will no longer have a sharp signal to gate on. Since these systems generate two signals, one to blank the leading edge and another to perform pull-off, they are sometimes known as "dual-mode jammers". A more complex solution requires extremely accurate tracking of the PRF. If this can be achieved, the RGPO can then broadcast its deception signal on either side of the skin reflection and walk off in either direction. This technique easily defeats leading-edge tracking, and also makes it difficult for a manual operator to tell which of the returns is the "real" signal.
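A leading-edge tracker drops into the same toy simulation with a one-line change of tracking rule: gate on the first threshold crossing rather than on the strongest return. The helper below is illustrative only; the threshold value is invented.

```python
def leading_edge(returns, lo, hi, threshold=0.5):
    """First bin inside the gate window whose amplitude crosses the threshold."""
    crossings = np.flatnonzero(returns[lo:hi] >= threshold)
    return lo + int(crossings[0]) if crossings.size else None
```

Substituting `leading_edge(returns, lo, hi)` for the `argmax` re-centering in the earlier loop keeps the gate at the true range for the entire run, because the transponder can never respond before the real echo arrives; as described above, the jammer's counter is to blanket the leading edge with noise so there is no sharp first crossing to gate on.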
Description:
Velocity pull-off Doppler radars directly measure the target's velocity via the Doppler effect. In typical early implementations, the received signal was amplified and then sent into a bank of narrow-band filters, each one corresponding to a particular target velocity. A simpler system is used in some semi-active radar homing missiles, which are pre-programmed with a measured target velocity that is used to calculate the expected Doppler shift of the signal, and then filter out signals outside a narrow band around that frequency. If an RGPO jammer responds to such a signal by sending out the same frequency it received, this additional signal will be sent into the same filter, adding to the original signal and making it stronger. If the transponder instead responds at a fixed frequency, it will fall into a different filter and can be easily distinguished. In either case, the original target return remains locked-on.
Description:
Modifying a transponder to deal with Doppler radars is easy: it simply requires the ability to adjust its frequency. In this case, the system initially responds at the same frequency as the original signal, and then increasingly shifts the frequency over time in a manner similar to the RGPO case. This will cause a second signal to appear in adjacent filters, with no way to know which is the original. Since the frequency can be easily adjusted up or down, this method lacks the added complication seen in RGPOs that want to pull off in either direction. Pulse-Doppler radars use both pulse timing and Doppler shifting to track targets, so by varying both the frequency and the return timing (through amplitude modulation), these can be pulled off as well. Such a transponder will continue to work against non-Doppler radars too, as these generally have wide frequency response and continue to see the signal as long as its frequency shift does not become significant.
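The velocity-gate analogue can be sketched the same way; again every number is invented, and the filter bank is reduced to a single pass-band that re-centers on whichever signal captures it.

```python
GATE_HALF_WIDTH_HZ = 15.0   # Doppler band the velocity gate accepts, either side of center

target_doppler = 400.0      # true Doppler shift of the aircraft's skin return, in Hz
gate_center = 400.0         # velocity gate starts centered on the target

for sweep in range(60):
    jammer_doppler = target_doppler + 2.0 * sweep   # false return drifts 2 Hz per sweep
    # The stronger transponder signal captures the gate while it remains in-band.
    if abs(jammer_doppler - gate_center) <= GATE_HALF_WIDTH_HZ:
        gate_center = jammer_doppler

print(f"true Doppler: {target_doppler} Hz, gate ended at: {gate_center} Hz")
# Gate ends 118 Hz high: the radar now reports a phantom velocity for the target.
```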
Description:
Countermeasures The effectiveness of the pull-off can be reduced if the radar changes its pulse repetition frequency, thereby making it difficult for the transponder to continue smoothly delaying the fake signal. Frequency agility has the same effect, as the transponder cannot guess what frequency to send out the fake signals on until it hears the one from the radar.
Description:
Denying this capability means the transponder can only respond to signals after hearing them on its receiver. These signals will always represent returns from greater distances than the jammer aircraft. Pulse-to-pulse comparison techniques, like moving target indication, can be used to filter out these sorts of returns, as they appear on the radar to be slower-moving targets. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acetabular branch of medial circumflex femoral artery**
Acetabular branch of medial circumflex femoral artery:
The acetabular branch is an artery in the hip that arises from the medial circumflex femoral artery opposite the acetabular notch and enters the hip-joint beneath the transverse ligament in company with an articular branch from the obturator artery. It supplies the fat in the bottom of the acetabulum, and is continued along the ligament to the head of the femur. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SmartCode**
SmartCode:
SmartCode is a unified land development ordinance template for planning and urban design. Originally developed by Duany Plater-Zyberk & Company, this open source program is a model form-based unified land development ordinance designed to create walkable neighborhoods across the full spectrum of human settlement, from the most rural to the most urban, incorporating a transect of character and intensity within each. It folds zoning, subdivision regulations, urban design, and basic architectural standards into one compact document. Because the SmartCode enables community vision by coding specific outcomes that are desired in particular places, it is meant to be locally calibrated by professional planners, architects, and attorneys.
SmartCode:
The SmartCode is not a building code. Building codes address life/safety issues such as fire and storm protection. Examples of building codes include the International Building Code (IBC), International Residential Code (IRC), and International Code Council (ICC) documents.
SmartCode:
In the discussion of Smart Growth as an alternative to urban sprawl, one key aspect often overlooked is the currently prevailing system of community development codes and standards that by design, whether intentionally or not, have promoted subdivisions and strip malls. To change these community settlement patterns to allow for land conservation and to promote traditional patterns of hamlet, village, town and city, new codes are necessary. The most comprehensive example of a code designed for this purpose is the SmartCode as described below.
Technical description:
Model Code – The SmartCode is a model code, with metrics designed to create a generic medium-sized American city structured into walkable neighborhoods. The model code is freeware, a template meant to be locally customized by professional planners, architects, and attorneys.
Form-Based – The SmartCode is a form-based code. Conventional Euclidean zoning regulates land development with the most emphasis on controlling land use. Form-based zoning has been developed over the last twenty years to overcome the problems of sprawl created by use-based codes. Form-based zoning regulates land development with the most emphasis on controlling urban form and less emphasis on controlling land uses (although uses with negative impacts, such as heavy industry, adult businesses, etc. are still regulated). Urban form features regulated under the SmartCode include the width of lots, size of blocks, building setbacks, building heights, placement of buildings on the lot, location of parking, etc.
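To make "regulating urban form" concrete, a calibrated form-based standard can be thought of as a small table of metrics per zone. In the sketch below the zone names follow the SmartCode's T-zone naming convention, but every metric value is invented for illustration and is not taken from the SmartCode itself.

```python
# Hypothetical form-based standards for two transect zones (illustrative values only).
ZONE_STANDARDS = {
    "T3 Sub-Urban": {
        "lot_width_ft": (60, 120),          # (min, max)
        "front_setback_ft": (10, 30),
        "max_height_stories": 2,
        "parking_location": "rear or side of lot",
    },
    "T5 Urban Center": {
        "lot_width_ft": (18, 96),
        "front_setback_ft": (0, 12),
        "max_height_stories": 5,
        "parking_location": "rear, accessed by alley",
    },
}

def permitted(zone, lot_width_ft):
    """Check one form metric the way a plan reviewer would."""
    lo, hi = ZONE_STANDARDS[zone]["lot_width_ft"]
    return lo <= lot_width_ft <= hi

print(permitted("T5 Urban Center", 24))   # True: a narrow urban lot is allowed in T5
```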
Technical description:
Unified Land Development Regulation – The SmartCode is a unified land development code that can include zoning, subdivision regulations, urban design, signage, landscaping, and basic architectural standards.
Walkable Neighborhoods – One of the basic principles in the SmartCode is that towns and cities should be structured as a series of walkable neighborhoods. Walkable neighborhoods require a mix of land uses (residential, office, and retail), public spaces with a sense of enclosure to create “outdoor rooms”, and pedestrian-oriented transportation design.
Technical description:
Rural-Urban Transect – The zones within the SmartCode are designed to create complete human habitats ranging from the very rural to the very urban. Where conventional zoning categories are based on different land uses, SmartCode zoning categories are based on their rural-urban character. All categories within the SmartCode allow some mix of uses. SmartCode zoning categories ensure that a community offers a full diversity of building types, thoroughfare types, and civic space types, and that each has appropriate characteristics for its location.
Technical description:
Though version 9.0 is only 50 pages, the SmartCode may replace conventional zoning, subdivision, and design regulations, making walkable mixed-use development legal by right.
Technical description:
The first city to adopt a SmartCode as a mandatory overlay for its downtown was Petaluma, California, in June 2003. The City of Miami adopted an exclusive citywide SmartCode in October 2009 and implemented it in May 2010. It is known as Miami21. It was calibrated by Duany Plater-Zyberk & Company. Cities that have adopted SmartCodes as a parallel option to their conventional zoning include Gulfport, Mississippi, Pass Christian, Mississippi, and Montgomery, Alabama. In addition, scores of private traditional neighborhood developments (TND) have been permitted under transect-based codes that are essentially the same as Article 5 of the SmartCode.
Technical description:
The SmartCode also includes supplementary “modules” with specialized techniques. The SmartCode Sustainability Module was introduced by Jaime Correa from Jaime Correa and Associates, in the City of Miami; the SmartCode Light Imprint and Drainage Module was designed by Tom Low at the Charlotte office of Duany and Plater-Zyberk; the Environmental Standards Module has been produced by Doug Farr, in Chicago. Other modules include Natural Drainage Standards, architecture/lighting/sound and visibility standards, and Hazard Mitigation Standards. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Carryover effect**
Carryover effect:
The carryover effect is a term used in clinical chemistry to describe the transfer of unwanted material from one container or mixture to another. It describes the influence of one sample upon the following one. The carried-over material may come from a specimen, a reagent, or even the washing medium. The significance of carryover is that even a small amount can lead to erroneous results.
Carryover effect in clinical laboratory:
Carryover experiments are widely used for clinical chemistry and immunochemistry analyzers to evaluate and validate carryover effects. The pipetting and washing systems in an automated analyzer are designed to continuously cycle between the aspiration of patient specimens and cleaning. An obvious concern is a potential for carryover of analyte from one patient specimen into one or more following patient specimens, which can falsely increase or decrease the measured analyte concentration. Specimen carryover is typically addressed by judicious choice of probe material, probe design, and an efficient probe washing system to flush the probe of residual patient specimens or reagents retained in their bores or clinging to the probe exterior surface before they are introduced into the next patient sample, reagent container, or cuvette/reaction vessel.
Significance in carryover assessment:
The pathological range of a measurement can span several orders of magnitude beyond the reference interval (e.g., sex hormones, tumor markers, troponin). A small portion of carryover can therefore lead to erroneous results.
Carryover assessment:
IUPAC made a recommendation in 1991 for the description and measurement of carryover effects in clinical chemistry. The carryover ratio is the percentage of H3 carried into L1, constituting the carryover portion "h". In a design of three high samples (H1–H3) followed by three low samples (L1–L3), h can be calculated as h = (L1 - mean of L2 and L3) / (H3 - mean of L2 and L3). The carryover ratio's acceptance criteria depend on the measurement and the laboratory concerned. For example, 1% carryover of plasma albumin would generally have a clinically insignificant effect, while 1% carryover in a high-sensitivity cardiac troponin assay would be catastrophic. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
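A direct transcription of that calculation (the function and variable names are mine, and the example numbers are invented):

```python
def carryover_ratio(h3, l1, l2, l3):
    """Carryover portion h from a 3-high/3-low run, per the 1991 IUPAC scheme:
    h = (L1 - mean(L2, L3)) / (H3 - mean(L2, L3))
    """
    baseline = (l2 + l3) / 2            # the uncontaminated low-sample level
    return (l1 - baseline) / (h3 - baseline)

# Hypothetical high-sensitivity troponin run, ng/L: the first low sample is elevated.
h = carryover_ratio(h3=50000.0, l1=60.0, l2=10.0, l3=10.0)
print(f"carryover: {h:.4%}")            # ~0.1000% -- tiny, yet clinically disastrous here
```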
**Interspecies hydrogen transfer**
Interspecies hydrogen transfer:
Interspecies hydrogen transfer (IHT) is a form of interspecies electron transfer. It is a syntrophic process by which H2 is transferred from one organism to another, particularly in the rumen and other anaerobic environments. IHT was discovered between Methanobacterium bryantii strain M.o.H and an "S" organism in 1967 by Marvin Bryant, Eileen Wolin, Meyer Wolin, and Ralph Wolfe at the University of Illinois. The two form a co-culture that had been mistaken for a single species, Methanobacillus omelianskii. It was shown in 1973 that this process occurs between Ruminococcus albus and Wolinella succinogenes. A more recent publication describes how the gene expression profiles of these organisms change when they undergo interspecies hydrogen transfer; of note, a switch to an electron-confurcating hydrogenase occurs in R. albus 7. This process affects the carbon cycle: methanogens can participate in interspecies hydrogen transfer, combining H2 and CO2 to produce CH4. Besides methanogens, acetogens and sulfate-reducing bacteria can participate in IHT. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quantitative Discourse Analysis Package**
Quantitative Discourse Analysis Package:
Quantitative Discourse Analysis Package (qdap) is an R package for computer assisted qualitative data analysis, particularly quantitative discourse analysis, transcript analysis and natural language processing. Qdap is installable from, and runs within, the R system.
Qdap is a tool for quantitative analysis of qualitative transcripts and therefore provides a bridge between quantitative and qualitative research approaches. It is designed for transcript analysis, but its features overlap with natural language processing and text mining.
Its features include:
- tools for the preparation of transcript data
- frequency counts of sentence types, words, sentences, turns of talk, and syllables
- aggregation using grouping variables
- word extraction and visualization
- statistical analysis

For higher-level statistical analysis and visualization of text, qdap is integrated with R and offers integration with other R packages.
Alternatives:
KH Coder (Windows, Linux, macOS) for quantitative content analysis and text mining. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Yeast expression platform**
Yeast expression platform:
A yeast expression platform is a strain of yeast used to produce large amounts of proteins, sugars or other compounds for research or industrial uses. While yeast are often more resource-intensive to maintain than bacteria, certain products can only be produced by eukaryotic cells like yeast, necessitating use of a yeast expression platform. Yeasts differ in productivity and with respect to their capabilities to secrete, process and modify proteins. As such, different types of yeast (i.e. different expression platforms) are better suited for different research and industrial applications.
Products:
Since the onset of genetic engineering, a number of microorganisms have been developed for the production of biological products. These products are used in medicine and industry to create pharmaceuticals like hepatitis B vaccines or insulin. Common platforms for the development of medicine and other products include the bacterium E. coli, and several yeasts and mammalian cells (including, notably, Chinese hamster ovary cells). In general, a microorganism used as an expression platform has to meet several criteria: it should be able to grow rapidly in large containers, produce proteins efficiently (i.e. with minimal resource input), and be safe; in the case of pharmaceuticals, it should produce and modify the products to be as ready for human consumption as possible.
Strains used:
Yeasts are common hosts for the production of proteins from recombinant DNA. They offer relatively easy genetic manipulation and rapid growth to high cell densities on inexpensive media. As eukaryotes, they are able to perform protein modifications like glycosylation which are common in eukaryotic cells, but relatively rare in bacteria. Due to this, yeast can produce complex proteins that are identical or very similar to native products from plants or mammals. The first yeast expression platform was based on the baker’s yeast Saccharomyces cerevisiae. However, since then a variety of yeast expression platforms have been studied and are widely used for various applications based on their different characteristics and capabilities. For instance, some of them grow on a wide range of carbon sources and are not restricted to glucose, as is the case with baker’s yeast. Several of them are also applied to genetic engineering and to the production of foreign proteins.
Strains used:
Arxula adeninivorans Arxula adeninivorans (also called Blastobotrys adeninivorans) is a dimorphic yeast, meaning it grows as a budding yeast up to a temperature of 42 °C, but as a filamentous form at higher temperatures. A. adeninivorans has unusual biochemical characteristics. It can grow on a wide range of substrates and can assimilate nitrate. Strains of A. adeninivorans have been developed that can produce natural plastics, and have been involved in the development of a biosensor for estrogens in environmental samples.
Strains used:
Candida boidinii Candida boidinii is a yeast notable for its ability to grow on methanol (called methylotrophism). Like other methylotrophic species such as Hansenula polymorpha and Pichia pastoris, it is used as a platform for the production of foreign proteins. Yields in a multigram range of a secreted foreign protein have been reported.
A computational method, IPRO, recently predicted mutations that experimentally switched the cofactor specificity of Candida boidinii xylose reductase from NADPH to NADH.
Strains used:
Ogataea polymorpha Ogataea polymorpha (synonyms Hansenula polymorpha or Pichia angusta) is another methylotrophic yeast (see Candida boidinii). It can grow on a wide range of other substrates; it is thermo-tolerant and can assimilate nitrate (see also Kluyveromyces lactis). It has been applied to the production of hepatitis B vaccines, insulin and interferon alpha-2a for the treatment of hepatitis C, as well as to a range of technical enzymes.
Strains used:
Kluyveromyces lactis Kluyveromyces lactis is a yeast regularly used for the production of kefir. It can grow on several sugars, most importantly on lactose which is present in milk and whey. It has successfully been applied among others to the production of chymosin (an enzyme that is usually present in the stomach of calves) for the production of cheese. Production takes place in fermenters on a 40,000 L scale.
Strains used:
Pichia pastoris Pichia pastoris is a methylotrophic yeast (see Candida boidinii and Hansenula polymorpha). It provides an efficient platform for the production of foreign proteins. Platform elements are available as a kit, and it is used worldwide in academia for the production of proteins. Strains have been engineered that can produce complex human N-glycans (yeast glycans are similar, but not identical, to those found in humans).
Strains used:
Saccharomyces cerevisiae Saccharomyces cerevisiae is the traditional baker’s yeast used widely in brewing and baking. Often the collective term “yeast” is used for this single species. As an expression platform it has successfully been applied to the production of technical enzymes and of pharmaceuticals like insulin and hepatitis B vaccines.
Yarrowia lipolytica Yarrowia lipolytica is a dimorphic yeast (see Arxula adeninivorans) that can grow on a wide range of substrates. As such, it has a high potential for industrial applications but there are no recombinant products commercially available yet.
Use:
The various yeast expression platforms differ in several characteristics, including their productivity and their capabilities to secrete, process, and modify particular proteins. However, uses of all expression platforms share some basic similarities.
Use:
In order to produce a desired product, suitable yeast strains are transformed with a vector that contains all necessary genetic elements for production of a biological product of interest. Vectors must also contain a selection marker, which is required to distinguish yeast that have successfully taken up the vector from those that have not. Vectors also contain certain DNA elements allowing the yeast to incorporate the foreign DNA into its chromosome and to replicate it. Most importantly, vectors contain a segment responsible for the production of the desired compound, called an expression cassette. The cassette begins with a sequence of regulatory elements that control how much, and under which circumstances, a certain product is eventually made. This is followed by the gene for the biological product itself. The expression cassette ends with a terminator sequence that stops transcription of the expressed gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
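As a rough picture of that layout, the ordered parts of a vector might be modeled as below; every name is a placeholder for illustration rather than a real genetic element.

```python
# Toy model of a yeast expression vector (placeholder parts, illustration only).
expression_cassette = [
    ("regulatory elements", "control how much and under which circumstances the product is made"),
    ("gene of interest",    "codes for the desired biological product"),
    ("terminator",          "stops transcription of the expressed gene"),
]
vector = [
    ("selection marker",               "distinguishes transformed yeast from untransformed"),
    ("integration/replication elements", "incorporate and replicate the foreign DNA"),
] + expression_cassette

for part, role in vector:
    print(f"{part:35s} {role}")
```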
**Glass etching**
Glass etching:
Glass etching, or "French embossing", is a popular technique developed during the mid-1800s that is still widely used in both residential and commercial spaces today. Glass etching comprises the techniques of creating art on the surface of glass by applying acidic, caustic, or abrasive substances. Traditionally this is done after the glass is blown or cast, although mold-etching has replaced some forms of surface etching. The removal of minute amounts of glass causes the characteristic rough surface and translucent quality of frosted glass.
Techniques:
Various techniques are used to achieve an etched surface in glass, whether for artistic effect, or simply to create a translucent surface.
Acid etching is done using hexafluorosilicic acid (H2SiF6) which, when anhydrous, is colourless. The acid can be prepared by mixing quartz powder (silicon dioxide), calcium fluoride, and concentrated sulfuric acid; the acid forms after the resulting mixture is heated and the fumes (silicon tetrafluoride) have been absorbed by concentrated sulfuric acid.
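The preparation just described can be summarised by a reaction sequence. The article gives only the ingredients; the equations below are a reconstruction from standard fluoride chemistry and should be read as an outline rather than the exact industrial process:

$$\mathrm{CaF_2 + H_2SO_4 \longrightarrow CaSO_4 + 2\,HF}$$
$$\mathrm{SiO_2 + 4\,HF \longrightarrow SiF_4\!\uparrow + 2\,H_2O}$$
$$\mathrm{SiF_4 + 2\,HF \longrightarrow H_2SiF_6}$$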
Glass etching cream is used by hobbyists as it is generally easier to use than acid. Available from art supply stores, it consists of fluoride compounds, such as hydrogen fluoride and sodium fluoride. As the types of acids used in this process are extremely hazardous (see hydrofluoric acid for safety), abrasive methods have gained popularity.
Techniques:
Abrasive blasting ("sandblasting") is another common technique for creating patterns in glassware, creating a "frosted" look to the glass. It is often used commercially. High-pressure air mixed with an abrasive material cuts away at the glass surface to create the desired effect. The longer the stream of air and abrasive material are focused in one spot, the deeper the cut.
Techniques:
Leptat glass is glass that has been etched using a patented acid process. Leptat takes its name from the Czech word meaning "to etch": the technique was inspired by a Bohemian (Czech, former Czechoslovakian) glass exhibit viewed at a past World's Fair in Osaka, Japan, and was patented in the United States by Bernard E. Gruenke, Jr. of the Conrad Schmitt Studios. Abstract, figural, contemporary, and traditional designs have been executed in Leptat glass. A secondary design or pattern is sometimes etched more lightly into the negative areas for further interest. Gold leaf or colored enamels can also be inlaid to highlight the designs. The Leptat technique allows the glass to reflect light from many surfaces, like a jewel-cut gem.
Techniques:
Mold etching In the 1920s a mold-etch process was invented, in which art was etched directly into the mold, so that each cast piece emerged from the mold with the texture already on the surface of the glass. This reduced manufacturing costs and, combined with a wider use of colored glass, led to cheap glassware in the 1930s, which later became known as Depression glass.
Techniques:
Frost etching is the process in which vinyl window material is cut to produce a pattern and then applied to a window to give a frosted patterned effect.
Applications:
There are many interior and exterior applications for acid-etched glass. It is widely used for: enhancing any area where glass can be used and where a little privacy and natural light are desired; creating feature walls or partitions; enriching doors and windows; heightening the look of balustrades; and augmenting shower and bath enclosures. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Expect**
Expect:
Expect is an extension to the Tcl scripting language written by Don Libes. The program automates interactions with programs that expose a text terminal interface. Expect, originally written in 1990 for the Unix platform, has since become available for Microsoft Windows and other systems.
Basics:
Expect is used to automate control of interactive applications such as Telnet, FTP, passwd, fsck, rlogin, tip, SSH, and others. Expect uses pseudo terminals (Unix) or emulates a console (Windows), starts the target program, and then communicates with it, just as a human would, via the terminal or console interface. Tk, another Tcl extension, can be used to provide a GUI.
Usage:
Expect serves as a "glue" to link existing utilities together. The general idea is to figure out how to make Expect use the system's existing tools rather than figure out how to solve a problem inside of Expect.
Usage:
A key usage of Expect involves commercial software products. Many of these products provide some type of command-line interface, but these usually lack the power needed to write scripts. They were built to service the users administering the product, but the company often does not spend the resources to fully implement a robust scripting language. An Expect script can spawn a shell, look up environment variables, perform some Unix commands to retrieve more information, and then enter into the product's command-line interface armed with the necessary information to achieve the user's goal. After retrieving information by interacting with the product via its command-line interface, the script can make intelligent decisions about what action to take, if any.
Usage:
Every time an Expect operation is completed, the results are stored in a local variable called $expect_out. This allows the script to harvest information to feed back to the user, and it also allows conditional behavior of what to send next based on the circumstances.
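As a small hedged illustration (the spawned command and the pattern here are invented for the example), a script can capture part of the matched output through the $expect_out array:

```tcl
# Hypothetical fragment: capture a version number printed by a spawned program.
spawn some_tool --version         ;# placeholder command
expect -re {version ([0-9.]+)}    ;# parenthesised part becomes a capture group
set version $expect_out(1,string) ;# first regexp capture group
send_user "tool reports version $version\n"
```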
A common use of Expect is to set up a testing suite for programs, utilities or embedded systems. DejaGnu is a testing suite written using Expect for use in testing. It has been used for testing GCC and remote targets, such as in embedded development.
Expect scripts can be generated automatically using a tool called 'autoexpect'. This tool observes your actions and generates an Expect script using heuristics. Though the generated code may be large and somewhat cryptic, one can always tweak the generated script to obtain the exact behavior needed.
Usage:
Another example is a script that automates FTP, or SFTP (with a password). Using passwords as command-line arguments, as a naive version of such a script does, is a huge security hole, as any other user on the machine can read this password by running "ps". You can, however, add code that prompts you for your password rather than taking the password as an argument, which should be more secure. See the examples below.
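The original example scripts do not survive in this text, so the following are hedged reconstructions: minimal Expect sketches of the kind the paragraph describes, with host names, user names and file names as placeholders.

```tcl
#!/usr/bin/expect -f
# Sketch: automate an anonymous FTP download. Host and file names are
# placeholders, not values from the original example.
set timeout 30
spawn ftp ftp.example.com
expect "Name*:"    { send "anonymous\r" }
expect "Password:" { send "guest@example.com\r" }
expect "ftp>"      { send "get README\r" }
expect "ftp>"      { send "quit\r" }
expect eof
```

And an SFTP variant that prompts for the password instead of taking it as a command-line argument:

```tcl
#!/usr/bin/expect -f
# Sketch: SFTP login with the password read interactively (not from argv).
set user [lindex $argv 0]
set host [lindex $argv 1]
stty -echo
send_user "Password for $user@$host: "
expect_user -re "(.*)\n"
send_user "\n"
stty echo
set password $expect_out(1,string)
spawn sftp $user@$host
expect "password:" { send "$password\r" }
expect "sftp>"     { send "bye\r" }
expect eof
```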
Usage:
Another example of automated SSH login to a user machine:
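The SSH example is likewise not reproduced in this text; below is a hedged sketch of what such a script typically looks like. The prompt patterns and the use of an SSH_PASS environment variable are assumptions made for the example.

```tcl
#!/usr/bin/expect -f
# Sketch: automated SSH login; host, prompts and env var are placeholders.
set timeout 20
spawn ssh user@host.example.com
expect {
    "yes/no"    { send "yes\r"; exp_continue }  ;# accept new host key
    "password:" { send "$env(SSH_PASS)\r" }     ;# password from environment
}
expect "$ "        ;# assumed shell prompt
send "uptime\r"
expect "$ "
send "exit\r"
expect eof
```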
Alternatives:
Various projects implement Expect-like functionality in other languages, such as C#, Java, Scala, Groovy, Perl, Python, Ruby, Shell and Go. These are generally not exact clones of the original Expect, but the concepts tend to be very similar.
C#: Expect.NET — Expect functionality for C# (.NET); DotNetExpect — an Expect-inspired console automation library for .NET
Erlang: lux — test automation framework with Expect-style execution commands
Go: GoExpect — Expect-like package for the Go language; go-expect — an Expect-like Go library to automate control of terminal- or console-based programs
Groovy: expect4groovy — a Groovy DSL implementation of the Expect tool
Java: ExpectIt — a pure Java 1.6+ implementation of the Expect tool, designed to be simple, easy to use and extensible; expect4j — an attempt at a Java clone of the original Expect; ExpectJ — a Java implementation of the Unix expect utility; Expect-for-Java — a pure Java implementation of the Expect tool; expect4java — a Java implementation of the Expect tool that supports nested closures, with a wrapper for a Groovy-language DSL
Perl: Expect.pm — Perl module (newest version at metacpan.org)
Python: Pexpect — Python module for controlling interactive programs in a pseudo-terminal; winpexpect — port of pexpect to the Windows platform; paramiko-expect — a Python expect-like extension for the Paramiko SSH library which also supports tailing logs
Ruby: RExpect — a drop-in replacement for the expect.rb module in the standard library; Expect4r — interact with Cisco IOS, IOS-XR, and Juniper JUNOS CLI
Rust: rexpect — pexpect-like package for the Rust language
Scala: scala-expect — a Scala implementation of a very small subset of the Expect tool
Shell: Empty — expect-like utility to run interactive commands in Unix shell scripts; sexpect — Expect for shells, implemented in a client/server model which also supports attach/detach (like GNU screen) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Online banking**
Online banking:
Online banking, also known as internet banking, virtual banking, web banking or home banking, is a system that enables customers of a bank or other financial institution to conduct a range of financial transactions through the financial institution's website or mobile app. Since the early 2000s this has become the most common way that customers access their bank accounts.
Online banking:
The online banking system will typically connect to or be part of the core banking system operated by a bank to provide customers access to banking services in addition to or in place of historic branch banking. Online banking significantly reduces the banks' operating cost by reducing reliance on a branch network and offers convenience to some customers by lessening the need to visit a branch bank as well as being able to perform banking transactions even when branches are closed. Internet banking provides personal and corporate banking services offering features such as making electronic payments, viewing account balances, obtaining statements, checking recent transactions and transferring money between accounts.
Online banking:
Some banks operate as a "direct bank" or "neobank", operating entirely via the internet or via internet and telephone, without any physical branches, relying completely on their online banking facilities.
History:
Precursors The precursors to modern online banking services were the distance banking services offered electronically and by telephone since the early 1980s. The term 'online' became popular in the late 1980s and referred to the use of a terminal, keyboard, and TV or monitor to access the banking system over a phone line. 'Home banking' can also refer to the use of a numeric keypad to send tones down a phone line with instructions to the bank.
History:
Emergence of computer banking The first home banking service was offered to consumers in December 1980 by United American Bank, a community bank with headquarters in Knoxville, Tennessee. United American partnered with Radio Shack to produce a secure custom modem for its TRS-80 computer that allowed bank customers to access their account information securely. Services available in its first years included bill pay, account balance checks, and loan applications, as well as game access, budget and tax calculators and daily newspapers. Thousands of customers paid $25–30 per month for the service. Large banks, many working on parallel tracks to United American, followed in 1981 when four of New York's major banks (Citibank, Chase Manhattan, Chemical Bank, and Manufacturers Hanover) offered home banking services using the videotex system. Because of the commercial failure of videotex, these banking services never became popular except in France (where millions of videotex terminals (Minitel) were given out by the telecom provider) and the UK, where the Prestel system was used.
History:
The first videotex banking service in France was launched on December 20, 1983, by CCF Bank (now part of HSBC). Videotex online banking services eventually reached 19% market share by 1991. The developers of United American Bank's first-to-market computer banking system aimed to license it nationally, but they were overtaken by competitors when United American failed in 1983 as a result of loan fraud on the part of bank owner Jake Butcher, the 1978 Tennessee Democratic nominee for governor and promoter of the 1982 Knoxville World's Fair. First Tennessee Bank, which purchased the failed bank, did not attempt to develop or commercialize the computer banking platform.
History:
Internet and customer reluctance When the clicks-and-bricks euphoria hit in the late 1990s, many banks began to view web-based banking as a strategic imperative. In 1996 OP Financial Group, a cooperative bank, became the second online bank in the world and the first in Europe. The attraction of online banking is fairly obvious: diminished transaction costs, easier integration of services, interactive marketing capabilities, and other benefits that boost customer lists and profit margins. Additionally, online banking services allow institutions to bundle more services into single packages, thereby luring customers and minimizing overhead.
History:
In 1995, Wells Fargo was the first U.S. bank to add account services to its website, with other banks quickly following suit. That same year, Presidential became the first U.S. bank to open bank accounts over the internet. According to research by Online Banking Report, at the end of 1999 less than 0.4% of households in the U.S. were using online banking. At the beginning of 2004, some 33 million U.S. households (31%) were using some form of online banking. Five years later, 47% of Americans used online banking, according to a survey by Gartner Group. Meanwhile, in the UK online banking grew from 63% to 70% of internet users between 2011 and 2012. By 2018, the number of digital banking users in the U.S. reached approximately 61 percent. The penetration of online banking in Europe has increased as well. In 2019, 93 percent of the Norwegian population accessed online banking sites, the highest share in Europe, followed by Denmark and the Netherlands. Across Asia, more than 700 million consumers are estimated to use digital banking regularly, according to a 2015 survey by McKinsey and Company. By 2000, 80% of U.S. banks offered e-banking. Customer use grew slowly. At Bank of America, for example, it took 10 years to acquire 2 million e-banking customers. However, a significant cultural change took place after the Y2K scare ended.
History:
In 2001, Bank of America became the first bank to top 3 million online banking customers, more than 20% of its customer base. In comparison, larger national institutions, such as Citigroup, claimed 2.2 million online relationships globally, while J.P. Morgan Chase estimated it had more than 750,000 online banking customers. Wells Fargo had 2.5 million online banking customers, including small businesses. Online customers proved more loyal and profitable than regular customers. In October 2001, Bank of America customers executed a record 3.1 million electronic bill payments, totaling more than $1 billion. As of 2017, the bank has 34 million active digital accounts, both online and mobile. In 2009, a report by Gartner Group estimated that 47% of United States adults and 30% in the United Kingdom banked online. The early 2000s saw the rise of branch-less banks as internet-only institutions. These internet-based banks incur lower overhead costs than their brick-and-mortar counterparts. In the United States, deposits at some direct banks are FDIC-insured and offer the same level of insurance protection as traditional banks. Neobanks are branch-less banks in the United States which are not FDIC-insured.
History:
First online banking services by region The United Kingdom Online banking started in the United Kingdom with the launch of Nottingham Building Society (NBS)'s Homelink service in September 1982, initially on a restricted basis, before it was expanded nationally in 1983. Homelink was delivered through a partnership with the Bank of Scotland and British Telecom's Prestel service. The system used the Prestel viewlink system and a computer, such as the BBC Micro, or keyboard (Tandata Td1400), connected to the telephone system and television set. The system allowed users to "transfer money between accounts, pay bills and arrange loans... compare prices and order goods from a few major retailers, check local restaurant menus or real estate listings, arrange vacations... enter bids in Homelink's regular auctions and send electronic mail to other Homelink users." In order to make bank transfers and bill payments, a written instruction giving details of the intended recipient had to be sent to the NBS, who set the details up on the Homelink system. Typical recipients were gas, electricity and telephone companies and accounts with other banks. Details of payments to be made were input into the NBS system by the account holder via Prestel. A cheque was then sent by NBS to the payee and an advice giving details of the payment was sent to the account holder. BACS was later used to transfer the payment directly.
History:
The United States In the United States, in-home banking was "still in its infancy" in 1984, with banks "cautiously testing consumer interest", a year after online banking went national in the UK. At the time Chemical Bank in New York was "still working out the bugs from its service, which offers somewhat limited features". The service from Chemical, called Pronto, was launched in 1983 and was aimed at individuals and small businesses. It enabled them to maintain electronic checkbook registers, see account balances, and transfer funds between checking and savings accounts. The other three major banks — Citibank, Chase Bank and Manufacturers Hanover — started to offer home banking services soon after. Chemical's Pronto failed to attract enough customers to break even and was abandoned in 1989. Other banks had a similar experience.
History:
Since it first appeared in the United States, online banking has been federally governed by the Electronic Funds Transfer Act of 1978.
France After a test period with 2,500 users starting in 1984, online banking services were launched in 1988, using Minitel terminals that were distributed freely to the population by the government. By 1990, 6.5 million Minitels were installed in households. Online banking was one of the most popular services.
Online banking services later migrated to the Internet.
Japan In January 1997, the first online banking service was launched by Sumitomo Bank. By 2010, most major banks had implemented online banking services, although the types of services offered varied. According to a poll conducted by the Japanese Bankers Association (JBA) in 2012, 65.2% of respondents were users of personal internet banking.
China In January 2015, WeBank, the online bank created by Tencent, started a four-month trial operation of online banking.
Australia In December 1995, Advance Bank (later acquired by St.George Bank) started to provide customers with online banking with the rollout of its C++ internet banking program.
India In 1998, ICICI Bank introduced internet banking to its customers.
Brazil In 1996, Banco Original SA launched its online-only retail banking. In 2019 new banks began to emerge, such as Conta Simples, focused only on companies.
History:
Slovenia Virtual or online banking became a reality in Slovenia in 1997, when SKB bank launched this service under the name of SKB Net. Two years later, they were followed by the largest Slovenian bank, NLB bank, which started offering online banking services in 1999 under the name of NLB Klik. Nowadays, virtually every bank in Slovenia offers online banking services. The Slovenian central bank's data show a rise of 5.1% in 2017 from the previous year, and the number almost doubled from more than ten years ago. At the end of 2019, the number of users was almost 1 million. The number of payments is around 26 million per quarter, which means that there are more than 100 million payments made online in Slovenia every year, and another 3 million made to offshore accounts. Data from the Slovenian central bank also show that the total value of payments in 2017 reached more than €240 million. More than 900,000 people use online banking in Slovenia. Canada Virtual banking first became a possibility in 1996 with the Bank of Montreal's mbanx. mbanx was released at the very beginning of the internet banking revolution in Canada and was the first full-service online bank. Also in 1996, RBC started providing banking information online and had the first personal computer banking software released that year. In 1997, the bank ING Direct Canada (now known as Tangerine Bank) was founded, operating almost entirely online and using only small cafes for meetings and very few physical branches. This was completely different from how banks had operated in Canada previously. By the early 2000s, all of the major banks in Canada had rolled out some form of online banking.
History:
Ukraine Remote customer service of banks via the internet, or online banking (e-banking), was introduced in Ukraine more than two decades ago. Legal entities have been using remote control of bank accounts since the mid-1990s. PrivatBank, which launched the “Privat24” system in 2000, became a pioneer in retail online banking. Since 2000, most financial institutions have been actively implementing online offices and web banking.
History:
2007 - the number of Ukrainian banks that introduced Online Banking exceeded 20.
2018 - the ability to manage accounts and make transfers online is available in almost all financial institutions in Ukraine.
History:
Nowadays, the list of internet banking services, with rare exceptions, repeats the entire product line of banks. With the help of internet banking (IB), customers can not only monitor the movement of funds in their accounts, but also perform more complex operations: for example, order a payment card, open a deposit account, or repay a loan, and recently it became possible to buy and sell currency. The rapid development of internet banking in Ukraine has been propelled by the growth in internet users. The largest functionality, more than 40 options ranging from transfers and opening deposits to home accounting and purchasing tickets, is available in PrivatBank. There are 37 options in the internet banking system of the First Ukrainian International Bank, and 35 in Alfa-Bank. One of the most popular services among internet banking users is the ability to pay remotely for utilities.
History:
North Macedonia Compared to several years ago, when people living in Macedonia had to go directly to the banks to perform financial transactions, today there is a widely functional e-banking system. Macedonian banks today offer conventional e-banking services, electronic products including debit/credit cards and e-trading, and contemporary electronic services like internet banking and online investing. What matters most for e-banking is trust in banks, usability of the platforms and the overall marketing of e-banking by banks; it is also important to constantly update the e-banking services. One successful example regarding the above-mentioned characteristics in Macedonia is “Stopanska Banka” AD Skopje. In the country, several factors significantly influence the level of adoption and usage of e-banking services, such as age, level of education and complexity of the e-banking services offered by banks. Naturally, elderly clients use e-banking services less than younger people. In addition, the level of education has a significant influence on the level of usage, meaning that the higher the education level, the more likely a citizen is to use e-banking services. As for satisfaction, citizens are generally more satisfied with the e-banking services offered by various banks when the banks have a diverse portfolio of services and offer fast and simple completion of transactions.
History:
Cook Islands The Bank of the Cook Islands introduced online banking in 2015, under the leadership of Vaine Nooana-Arioka.
Operation:
To access a bank and online banking facility, a customer with internet access will need to register with the bank for the service, and set up a password and other credentials for customer verification. The customer visits the financial institution's secure website, and enters the online banking facility using the customer number and credentials previously set up.
Operation:
Each financial institution can determine the types of financial transactions which a customer may transact through online banking, but usually includes obtaining account balances, a list of recent transactions, electronic bill payments, financing loans and funds transfers between a customer's or another's accounts. Most banks set limits on the amounts that may be transacted, and other restrictions. Most banks also enable customers to download copies of bank statements, which can be printed at the customer's premises (some banks charge a fee for mailing hard copies of bank statements). Some banks also enable customers to download transactions directly into the customer's accounting software. The facility may also enable the customer to order a cheque book, statements, report loss of credit cards, stop payment on a cheque, advise change of address and other routine actions.
Operation:
Some financial institutions offer special internet banking services, for example, personal financial management support, such as importing data into personal accounting software. Some online banking platforms support account aggregation to allow the customers to monitor all of their accounts in one place, whether they are with their main bank or with other institutions.
Security:
The security of a customer's financial information is very important; without it, online banking could not operate. Similarly, the reputational risks to banks themselves are important. Financial institutions have set up various security processes to reduce the risk of unauthorized online access to a customer's records, but there is no consistency to the various approaches adopted.
The use of a secure website has been almost universally embraced.
Security:
Though single password authentication is still in use, it by itself is not considered secure enough for online banking in some countries. There are essentially two different security methods in use for online banking. The first is the PIN/TAN system, where the PIN represents a password used for the login, and TANs represent one-time passwords used to authenticate transactions. TANs can be distributed in different ways; the most popular one is to send a list of TANs to the online banking user by postal letter. Another way of using TANs is to generate them as needed using a security token. These token-generated TANs depend on the time and a unique secret stored in the security token (two-factor authentication or 2FA). More advanced TAN generators (chipTAN) also include the transaction data in the TAN generation process, after displaying it on their own screen, to allow the user to discover man-in-the-middle attacks carried out by Trojans trying to secretly manipulate the transaction data in the background of the PC. Another way to provide TANs to an online banking user is to send the TAN of the current bank transaction to the user's (GSM) mobile phone via SMS. The SMS text usually quotes the transaction amount and details; the TAN is only valid for a short period of time. Especially in Germany, Austria and the Netherlands, many banks have adopted this "SMS TAN" service. There is also a "PhotoTAN" service, where the bank generates and sends a QR code image to a smartphone device of the online banking user.
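As a hedged illustration of the chipTAN idea of binding a one-time code to the transaction data, the sketch below derives a six-digit TAN from the amount, the recipient account and a time window. Real TAN schemes are bank- and vendor-specific; the HMAC construction, truncation and field encoding here are assumptions made for the example.

```python
# Toy sketch of a transaction-bound TAN (chipTAN-like idea), not a real
# banking protocol: key handling, encoding and truncation are assumptions.
import hmac
import hashlib
import time

def transaction_tan(secret: bytes, amount: str, recipient_iban: str,
                    window_s: int = 60) -> str:
    # Bind the code to the transaction data and a time window, so that a
    # Trojan silently changing amount or recipient yields a different TAN.
    msg = f"{amount}|{recipient_iban}|{int(time.time() // window_s)}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 1_000_000).zfill(6)

print(transaction_tan(b"device-secret", "100.00", "DE89370400440532013000"))
```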
Security:
Usually online banking with PIN/TAN is done via a web browser using SSL-secured connections, so that no additional encryption is needed. The second method is signature-based online banking, where all transactions are signed and encrypted digitally. The keys for the signature generation and encryption can be stored on smartcards or any memory medium, depending on the concrete implementation (see, e.g., the Spanish ID card DNI electrónico).
Security:
Attacks Attacks on online banking used today are based on deceiving the user into revealing login data and valid TANs. Two well-known examples of such attacks are phishing and pharming. Cross-site scripting and keyloggers/Trojan horses can also be used to steal login information. A method to attack signature-based online banking is to manipulate the used software in such a way that correct transactions are shown on the screen while faked transactions are signed in the background. Another kind of attack is the so-called man-in-the-browser attack, a variation of the man-in-the-middle attack where a Trojan horse permits a remote attacker to secretly modify the destination account number and also the amount in the web browser. A 2008 U.S. Federal Deposit Insurance Corporation Technology Incident Report, compiled from suspicious activity reports banks file quarterly, lists 536 cases of computer intrusion, with an average loss per incident of $30,000. That adds up to a nearly $16-million loss in the second quarter of 2007. Computer intrusions increased by 150 percent between the first quarter of 2007 and the second. In 80 percent of the cases, the source of the intrusion is unknown but it occurred during online banking, the report states. In 2014 in the UK, losses from online banking fraud rose by 48% compared with 2013. According to a study by a group of Cambridge University cybersecurity researchers in 2017, online banking fraud had doubled since 2011. As of 2012 there were also combined attacks using malware and social engineering to persuade the user himself to transfer money to the fraudsters on the ground of false claims (such as the claim that the bank required a "test transfer" or the claim that a company had mistakenly transferred money to the user's account and he should "send it back").
Security:
Countermeasures There exist several countermeasures which try to avoid attacks.
Security:
Whatever operating system is used, it is advised that the operating system is still supported and properly patched. Digital certificates are used against phishing and pharming; in signature-based online banking variants (HBCI/FinTS) the use of "Secoder" card readers is a measure to uncover software-side manipulations of the transaction data. In 2001, the U.S. Federal Financial Institutions Examination Council issued guidance for multifactor authentication (MFA), and then required it to be in place by the end of 2006. In 2012, the European Union Agency for Network and Information Security advised all banks to consider the PC systems of their users infected by malware by default, and therefore to use security processes where the user can cross-check the transaction data against manipulations: for example (provided the security of the mobile phone holds up) SMS TAN, where the transaction data is sent along with the TAN number, or standalone smartcard readers with their own screen that include the transaction data in the TAN generation process while displaying it beforehand to the user (see chipTAN), to counter man-in-the-middle attacks.
Criticism and problems:
The increase in online banking, with a concomitant closure of local bank branch offices or reduced retail opening hours, discriminates against people who cannot use online banking because of physical or mental limitations, age, or illness.
Criticism and problems:
In 2022, a retired Spanish urologist with Parkinson's disease gathered more than 600,000 signatures in an online petition asking banks and other institutions to serve all citizens, and not discriminate against the oldest and most vulnerable members. In Spain, the number of bank branches had shrunk to about 20,000 in the 19 years since the bailout of 2012, and with the coronavirus pandemic another 3,000 branches closed in two years. "They are excluding those of us who have trouble using the internet." In February 2022, Spanish banks signed a protocol at the Ministry of Economy (Spain) pledging to offer better customer services to senior citizens, for example by "extending again their branch opening hours, giving priority to older people to access counters and simplifying the interface of their apps and web pages". With online banking, race discrimination is even less likely to be pinpointed, because of intransparent decision-making by algorithms. Online banking requires access to broadband services. However, not everyone has equal access to the internet, a disparity which has been called the digital divide. In March 2022, the U.S. Federal Communications Commission formed a task force to prevent digital discrimination. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lack-of-fit sum of squares**
Lack-of-fit sum of squares:
In statistics, a sum of squares due to lack of fit, or more tersely a lack-of-fit sum of squares, is one of the components of a partition of the sum of squares of residuals in an analysis of variance, used in the numerator in an F-test of the null hypothesis that a proposed model fits well. The other component is the pure-error sum of squares.
Lack-of-fit sum of squares:
The pure-error sum of squares is the sum of squared deviations of each value of the dependent variable from the average value over all observations sharing its independent variable value(s). These are errors that could never be avoided by any predictive equation that assigned a predicted value for the dependent variable as a function of the value(s) of the independent variable(s). The remainder of the residual sum of squares is attributed to lack of fit of the model since it would be mathematically possible to eliminate these errors entirely.
Principle:
In order for the lack-of-fit sum of squares to differ from the sum of squares of residuals, there must be more than one value of the response variable for at least one of the values of the set of predictor variables. For example, consider fitting a line $y = \alpha x + \beta$ by the method of least squares. One takes as estimates of $\alpha$ and $\beta$ the values that minimize the sum of squares of residuals, i.e., the sum of squares of the differences between the observed y-value and the fitted y-value. To have a lack-of-fit sum of squares that differs from the residual sum of squares, one must observe more than one y-value for each of one or more of the x-values. One then partitions the "sum of squares due to error", i.e., the sum of squares of residuals, into two components: sum of squares due to error = (sum of squares due to "pure" error) + (sum of squares due to lack of fit). The sum of squares due to "pure" error is the sum of squares of the differences between each observed y-value and the average of all y-values corresponding to the same x-value.
Principle:
The sum of squares due to lack of fit is the weighted sum of squares of differences between each average of y-values corresponding to the same x-value and the corresponding fitted y-value, the weight in each case being simply the number of observed y-values for that x-value. Because it is a property of least squares regression that the vector whose components are "pure errors" and the vector of lack-of-fit components are orthogonal to each other, the following equality holds:

$$\sum_{i=1}^{n}\sum_{j=1}^{n_i}\underbrace{\left(Y_{ij}-\hat{Y}_i\right)^2}_{\text{error}}=\sum_{i=1}^{n}\sum_{j=1}^{n_i}\underbrace{\left(Y_{ij}-\bar{Y}_{i\bullet}\right)^2}_{\text{pure error}}+\sum_{i=1}^{n}n_i\underbrace{\left(\bar{Y}_{i\bullet}-\hat{Y}_i\right)^2}_{\text{lack of fit}}$$

where $Y_{ij}$ is an observed value, $\hat{Y}_i$ the corresponding fitted value, $\bar{Y}_{i\bullet}$ the local average of the y-values sharing the i-th x-value, and $n_i$ the weight. Hence the residual sum of squares has been completely decomposed into two components.
Mathematical details:
Consider fitting a line with one predictor variable. Define $i$ as an index of each of the $n$ distinct $x$ values, $j$ as an index of the response variable observations for a given $x$ value, and $n_i$ as the number of $y$ values associated with the $i$-th $x$ value. The value of each response variable observation can be represented by

$$Y_{ij}=\alpha x_i+\beta+\varepsilon_{ij},\qquad i=1,\dots,n,\quad j=1,\dots,n_i.$$
Mathematical details:
Let $\hat{\alpha},\hat{\beta}$ be the least squares estimates of the unobservable parameters $\alpha$ and $\beta$ based on the observed values of $x_i$ and $Y_{ij}$.
Let $\hat{Y}_i=\hat{\alpha}x_i+\hat{\beta}$ be the fitted values of the response variable. Then $\hat{\varepsilon}_{ij}=Y_{ij}-\hat{Y}_i$ are the residuals, which are observable estimates of the unobservable values of the error term $\varepsilon_{ij}$. Because of the nature of the method of least squares, the whole vector of residuals, with $N=\sum_{i=1}^{n}n_i$ scalar components, necessarily satisfies the two constraints

$$\sum_{i=1}^{n}\sum_{j=1}^{n_i}\hat{\varepsilon}_{ij}=0\qquad\text{and}\qquad\sum_{i=1}^{n}\sum_{j=1}^{n_i}x_i\,\hat{\varepsilon}_{ij}=0.$$

It is thus constrained to lie in an $(N-2)$-dimensional subspace of $\mathbb{R}^N$, i.e. there are $N-2$ "degrees of freedom for error".
Now let $\bar{Y}_{i\bullet}=\frac{1}{n_i}\sum_{j=1}^{n_i}Y_{ij}$ be the average of all $Y$-values associated with the $i$-th $x$-value.
We partition the sum of squares due to error into two components:

$$\sum_{i=1}^{n}\sum_{j=1}^{n_i}\hat{\varepsilon}_{ij}^{\,2}=\underbrace{\sum_{i=1}^{n}\sum_{j=1}^{n_i}\left(Y_{ij}-\bar{Y}_{i\bullet}\right)^2}_{\text{sum of squares due to pure error}}+\underbrace{\sum_{i=1}^{n}n_i\left(\bar{Y}_{i\bullet}-\hat{Y}_i\right)^2}_{\text{sum of squares due to lack of fit}}.$$
Probability distributions:
Sums of squares Suppose the error terms $\varepsilon_{ij}$ are independent and normally distributed with expected value 0 and variance $\sigma^2$. We treat $x_i$ as constant rather than random. Then the response variables $Y_{ij}$ are random only because the errors $\varepsilon_{ij}$ are random.
It can be shown to follow that if the straight-line model is correct, then the sum of squares due to error divided by the error variance,

$$\frac{1}{\sigma^2}\sum_{i=1}^{n}\sum_{j=1}^{n_i}\hat{\varepsilon}_{ij}^{\,2},$$

has a chi-squared distribution with $N-2$ degrees of freedom.
Probability distributions:
Moreover, given the total number of observations $N$, the number of levels of the independent variable $n$, and the number of parameters in the model $p$: the sum of squares due to pure error, divided by the error variance $\sigma^2$, has a chi-squared distribution with $N-n$ degrees of freedom; the sum of squares due to lack of fit, divided by the error variance $\sigma^2$, has a chi-squared distribution with $n-p$ degrees of freedom (here $p=2$, as there are two parameters in the straight-line model); and the two sums of squares are probabilistically independent.
Probability distributions:
The test statistic It then follows that the statistic

$$F=\frac{\text{lack-of-fit sum of squares}/\text{degrees of freedom}}{\text{pure-error sum of squares}/\text{degrees of freedom}}=\frac{\sum_{i=1}^{n}n_i\left(\bar{Y}_{i\bullet}-\hat{Y}_i\right)^2/(n-p)}{\sum_{i=1}^{n}\sum_{j=1}^{n_i}\left(Y_{ij}-\bar{Y}_{i\bullet}\right)^2/(N-n)}$$

has an F-distribution with the corresponding number of degrees of freedom in the numerator and the denominator, provided that the model is correct. If the model is wrong, then the probability distribution of the denominator is still as stated above, and the numerator and denominator are still independent. But the numerator then has a noncentral chi-squared distribution, and consequently the quotient as a whole has a non-central F-distribution.
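As a numerical illustration, the following minimal sketch computes the pure-error and lack-of-fit sums of squares and this F statistic for a small made-up data set with replicated x-values (the data and variable names are invented for the example):

```python
# Minimal numeric sketch of the lack-of-fit F-test for a straight-line fit
# with replicated x-values; the data below are made up for illustration.
import numpy as np
from scipy import stats

x = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)
y = np.array([1.9, 2.1, 3.8, 4.4, 6.3, 5.9, 7.6, 8.4])

a, b = np.polyfit(x, y, 1)          # least-squares line y = a*x + b
levels = np.unique(x)               # the n distinct x-values
n, N, p = len(levels), len(y), 2    # p = 2 parameters in the line

# Pure error: deviations of each y from the local average at its x-value.
ss_pure = sum(((y[x == xi] - y[x == xi].mean()) ** 2).sum() for xi in levels)
# Lack of fit: weighted deviations of local averages from fitted values.
ss_lof = sum((x == xi).sum() * (y[x == xi].mean() - (a * xi + b)) ** 2
             for xi in levels)

F = (ss_lof / (n - p)) / (ss_pure / (N - n))
p_value = stats.f.sf(F, n - p, N - n)   # upper tail of the F distribution
print(F, p_value)
```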
Probability distributions:
One uses this F-statistic to test the null hypothesis that the linear model is correct. Since the non-central F-distribution is stochastically larger than the (central) F-distribution, one rejects the null hypothesis if the F-statistic is larger than the critical F value. The critical value corresponds to the cumulative distribution function of the F distribution with x equal to the desired confidence level, and degrees of freedom d1 = (n − p) and d2 = (N − n).
Probability distributions:
The assumptions of normal distribution of errors and independence can be shown to entail that this lack-of-fit test is the likelihood-ratio test of this null hypothesis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Four-Eyed Prince**
Four-Eyed Prince:
Four-Eyed Prince (Japanese: メガネ王子, Hepburn: Megane Ōji) is a four volume shōjo manga series by Wataru Mizukami (水上航, Mizukami Wataru).
Reception:
"Four-Eyed Prince has the depth of a reader letter to Cosmo, but if that sounds like icing on the bespectacled cake, then this is going to be a sweet enough dessert for the younger female crowd." — Joseph Luster, Otaku USA "For a first manga romance, perhaps for a young teen reader graduating up to love stories, I think this is a fine choice." — Johanna Draper Carlson, Manga Worth Reading.
Reception:
"If you have a glasses fetish, there's plenty of eye-candy here for you in the body of the manga and also in the Four Eyed Café Special Report in the extras." — Sakura Eries, Mania.
"It’s an ultra light and fluffy fun volume of Four-Eyed Prince." — Rachel Bentham, activeAnime. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fibre satellite distribution**
Fibre satellite distribution:
Fibre satellite distribution is a technology that enables satellite TV signals from an antenna to be distributed using an optical fibre cable infrastructure and then converted to electrical signals for use with conventional set-top box receivers.
Fibre satellite distribution:
Particularly applicable to satellite TV distribution systems in a multi-dwelling unit, such as a block of flats (but useful in smaller domestic distribution systems too), such a hybrid fibre/electrical system reduces the cabling required, reduces signal noise and interference, and provides for an easy upgrade to increase the number of tuners connected at each dwelling. Conventional systems that distribute the electrical satellite IF signal via a star network of coaxial cable require one relatively short cable run from the central distribution equipment to each tuner connected to the system, whereas in a fibre system, cables can be very long and split at successive locations in a tree structure, without detriment to the reception.
Advantages:
The primary benefit of using optical fibre for a satellite TV IF distribution system is that the fibre can carry the entire received spectrum on one cable, which can then be split to provide for multiple tuners, without requiring a separate feed from the antenna to each tuner. Additional outlets can be added to increase the number of receivers within one home without accessing the central antenna or main infrastructure.
Advantages:
Fibre cable is cheap in the long run, retailing at about twice the price of equivalent copper coaxial cable but replacing four runs of coaxial cable with a single fibre cable.
It is also much smaller than the coaxial signal cable used for electrical IF distribution, but robust and flexible. The losses in a fibre system are almost negligible so very long cable runs of hundreds of metres are possible without any signal reinforcement.
Because the signal is carried as a beam of light, it is impervious to the electrical interference that even the best coaxial satellite cable may suffer, and cables can be safely and conveniently run alongside mains power cables. Power consumption is also lower than an equivalent electrical system.
Development:
While optical fibre has been used for many years for telephone and Internet backbone data, and even for television and multimedia carriage over terrestrial cable, its use for satellite IF distribution has been held back by considerations of cost and installation convenience.
Development:
However, since about 2007, the UK company Global Invacom (which also markets domestic and communal satellite reception and distribution equipment, including SCR single cable distribution equipment) has developed a low-cost standardised system of optical fibre distribution suitable for domestic installations and small or medium commercial communal dish systems. The development was assisted by Astra satellite operator SES, both with advice and with financial support in the form of the prize for the Astra Innovation Contest run by Astra in 2007, which Global Invacom won for the proposal and initial development of optical fibre distribution systems for satellite TV.
How it works:
The complete spectrum of Ku-band satellite reception stretches from 10.70 GHz to 12.75 GHz across two signal polarisations, a bandwidth of about 4000 MHz. This cannot be carried on a single coaxial cable, and so in a conventional satellite reception system just one of four sub-bands (received in vertical and horizontal polarisation, and high and low frequency) is sent from the antenna to the indoor receiver as a 0.95 GHz–2.15 GHz IF. Which sub-band is required is signalled from the receiver to the antenna's LNB by 13/18 V and a 0/22 kHz tone on the LNB supply, sent up the same coaxial cable. In a single antenna distribution system, a special quattro LNB supplies all four sub-bands at once, from four outputs, and these are supplied as required to each of the multiple outlets connected to an IF multiswitch. In an optical fibre system, at the LNB the four sub-bands are "stacked" in frequency, one above the other, at 0.95 GHz–3.0 GHz (the whole frequency range received in vertical polarisation) and 3.4 GHz–5.45 GHz (horizontal polarisation), and transmitted together as a modulated optical signal down the fibre cable using a 1310 nm semiconductor laser.
How it works:
The losses in the cable are extremely small (in the region of 0.3 dB/km) and the Global Invacom optical LNB output can be split up to 32 ways, with a cable length of up to 10 km between the LNB and the receiver.
At, or near, the receiver, the optical signal is converted back to the traditional electrical signal with a virtual multiswitch, providing one or more outputs that “appear” to the receiver as a conventional LNB.
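The figures quoted above allow a quick back-of-the-envelope link budget. The sketch below is an illustration only: it combines the quoted 0.3 dB/km fibre attenuation with the unavoidable 10·log10(N) power division of an ideal N-way splitter, and ignores the excess loss of real splitters and connectors.

```python
# Rough optical link-budget sketch using the figures quoted in the text;
# ideal splitter assumed (real splitters add some excess loss).
import math

def link_loss_db(length_km: float, split_ways: int,
                 fibre_db_per_km: float = 0.3) -> float:
    cable = length_km * fibre_db_per_km    # fibre attenuation
    split = 10 * math.log10(split_ways)    # ideal N-way power division
    return cable + split

# Worst case quoted above: 10 km of fibre split 32 ways -> about 18 dB.
print(round(link_loss_db(10, 32), 1))
```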
Practical considerations:
Although the hybrid optical fibre/electrical system provides many advantages over electrical IF distribution to widespread or complex systems, it also requires a new approach by installers familiar only with electrical installations.
Practical considerations:
The single mode fibre cables use an 8μm fibre armoured with a steel wrap and Kevlar strands inside a plastic jacket. Fibre cable cannot be easily joined (an expensive fusion splicer is required for reliable joints) but pre-made cables with FC type screw-on connectors (mechanically similar in use, but smaller, to the F-connectors used for electrical satellite IF) are available in lengths from 1m to 100m. The same connectors are used on all the optical components of a system, including the optical LNB, splitters, cable joiners, virtual LNB units, etc.
Practical considerations:
The cables must be properly prepared (the end of the fibre itself cleaned) before connections are made and provision must be made to attenuate the LNB signal, to avoid overloading the receiver, if it is not split between receivers as there is so little attenuation inherent in the cable.
The optical LNB requires two cables to be connected: the fibre signal cable and a separate F-connector cable carrying the 12 V supply that powers the LNB electronics. If the installation is a conversion from an electrical system to fibre, an existing redundant coaxial signal cable can be used for the power supply. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chambranle**
Chambranle:
In architecture and joinery, the chambranle is the border, frame, or ornament, made of stone or wood, that runs round the three sides of chamber doors, large windows, and chimneys.
When a chambranle is plain and without mouldings, it is called a band, case, or frame. The chambranle consists of three parts: the two sides, called montants or ports, and the top, called the traverse or supercilium. The chambranle of an ordinary door is frequently called a doorcase; of a window, a window frame; and of a chimney, a manteltree.
History:
In ancient architecture, antepagmenta were garnishings in posts or doors, wrought in stone or timber, or lintels of a window. The word comes from Latin and has been borrowed in English to be used for the entire chambranle, i.e. the door case, or window frame. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Peripatric speciation**
Peripatric speciation:
Peripatric speciation is a mode of speciation in which a new species is formed from an isolated peripheral population.: 105 Since peripatric speciation resembles allopatric speciation, in that populations are isolated and prevented from exchanging genes, it can often be difficult to distinguish between them. Nevertheless, the primary characteristic of peripatric speciation is that one of the populations is much smaller than the other. The terms peripatric and peripatry are often used in biogeography, referring to organisms whose ranges are closely adjacent but do not overlap, being separated where these organisms do not occur—for example on an oceanic island compared to the mainland. Such organisms are usually closely related (e.g. sister species), their distribution being the result of peripatric speciation.
Peripatric speciation:
The concept of peripatric speciation was first outlined by the evolutionary biologist Ernst Mayr in 1954. Since then, alternative models have been developed, such as centrifugal speciation, which posits that a species' population experiences periods of geographic range expansion followed by periods of shrinking, leaving behind small isolated populations on the periphery of the main population. Other models have involved the effects of sexual selection on limited population sizes. Related models of peripherally isolated populations based on chromosomal rearrangements have also been developed, such as budding speciation and quantum speciation.
Peripatric speciation:
The existence of peripatric speciation is supported by observational evidence and laboratory experiments.: 106 Scientists observing the patterns of a species' biogeographic distribution and its phylogenetic relationships are able to reconstruct the historical process by which they diverged. Further, oceanic islands are often the subject of peripatric speciation research due to their isolated habitats, with the Hawaiian Islands widely represented in much of the scientific literature.
History:
Peripatric speciation was originally proposed by Ernst Mayr in 1954, and fully theoretically modeled in 1982. It is related to the founder effect, where small living populations may undergo selection bottlenecks. The founder effect is based on models that suggest peripatric speciation can occur by the interaction of selection and genetic drift,: 106 which may play a significant role. Mayr first conceived of the idea through his observations of kingfisher populations in New Guinea and its surrounding islands.: 389 Tanysiptera galatea was largely uniform in morphology on the mainland, but the populations on the surrounding islands differed significantly—Mayr referred to this pattern as "peripatric".: 389 This same pattern was observed by many of Mayr's contemporaries, for example by E. B. Ford in his studies of Maniola jurtina.: 522 Around the same time, the botanist Verne Grant developed a model of quantum speciation very similar to Mayr's model in the context of plants. In what has been called Mayr's genetic revolutions, he postulated that genetic drift played the primary role in producing this pattern.: 389 Seeing that a species' cohesion is maintained by conservative forces such as epistasis and the slow pace of the spread of favorable alleles in a large population (based heavily on J. B. S. Haldane's calculations), he reasoned that speciation could only take place when a population bottleneck occurred.: 389 A small, isolated founder population could be established on an island, for example. Containing less genetic variation than the main population, it may undergo shifts in allele frequencies from different selection pressures.: 390 This leads to further changes in the network of linked loci, driving a cascade of genetic change, or a "genetic revolution"—a large-scale reorganization of the entire genome of the peripheral population.: 391 Mayr did recognize that the chances of success were incredibly low and that extinction was likely, though he noted that some examples of successful founder populations existed at the time.: 522 Shortly after Mayr, William Louis Brown, Jr. proposed an alternative model of peripatric speciation in 1957 called centrifugal speciation. In 1976 and 1980, the Kaneshiro model of peripatric speciation was developed by Kenneth Y. Kaneshiro, which focused on sexual selection as a driver for speciation during population bottlenecks.
Models:
Peripatric Peripatric speciation models are identical to models of vicariance (allopatric speciation).: 105 Requiring both geographic separation and time, speciation can result as a predictable byproduct. Peripatry can be distinguished from allopatric speciation by three key features:: 105 (1) the size of the isolated population; (2) strong selection caused by the dispersal and colonization of novel environments; and (3) the effects of genetic drift on small populations. The size of a population is important because individuals colonizing a new habitat likely contain only a small sample of the genetic variation of the original population. This promotes divergence due to strong selective pressures, leading to the rapid fixation of an allele within the descendant population. This gives rise to the potential for genetic incompatibilities to evolve. These incompatibilities cause reproductive isolation, giving rise to—sometimes rapid—speciation events.: 105 Furthermore, two important predictions are invoked, namely that geological or climatic changes cause populations to become locally fragmented (or regionally, when considering allopatric speciation), and that an isolated population's reproductive traits evolve enough as to prevent interbreeding upon potential secondary contact. The peripatric model results in what have been called progenitor-derivative species pairs, whereby the derivative species (the peripherally isolated population)—geographically and genetically isolated from the progenitor species—diverges. A specific phylogenetic signature results from this mode of speciation: the geographically widespread progenitor species becomes paraphyletic (thereby becoming a paraspecies) with respect to the derivative species (the peripheral isolate).: 470 The concept of a paraspecies is therefore a logical consequence of the evolutionary species concept, by which one species gives rise to a daughter species. It is thought that the character traits of the peripherally isolated species become apomorphic, while those of the central population remain plesiomorphic. Modern cladistic methods have developed definitions that have incidentally removed derivative species, by defining clades in a way that assumes that when a speciation event occurs, the original species no longer exists while two new species arise; this is not the case in peripatric speciation. Mayr warned against this, as it causes a species to lose its classification status. Loren H. Rieseberg and Luc Brouillet recognized the same dilemma in plant classification.
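The role genetic drift plays in a small, newly founded population can be illustrated with a toy Wright-Fisher simulation. This is only a sketch of the general drift effect invoked by the model; the population sizes, starting allele frequency and generation count are arbitrary choices, not figures from the speciation literature.

```python
# Toy Wright-Fisher sketch: allele-frequency drift in a large population
# versus a small founder population. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def drift(p0: float, pop_size: int, generations: int) -> float:
    """Resample 2N gametes each generation; return the final frequency."""
    p = p0
    for _ in range(generations):
        p = rng.binomial(2 * pop_size, p) / (2 * pop_size)
    return p

# A large 'mainland' population barely moves from p = 0.5 ...
print([round(drift(0.5, 10_000, 100), 3) for _ in range(3)])
# ... while a 10-individual founder population often drifts to loss/fixation.
print([round(drift(0.5, 10, 100), 3) for _ in range(3)])
```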
Models:
Quantum and budding speciation The botanist Verne Grant proposed the term quantum speciation, which combined the ideas of J. T. Gulick (his observation of the variation of species in semi-isolation), Sewall Wright (his models of genetic drift), Mayr (both his peripatric and genetic revolution models), and George Gaylord Simpson (his development of the idea of quantum evolution).: 114 Quantum speciation is a rapid process with large genotypic or phenotypic effects, whereby a new, cross-fertilizing plant species buds off from a larger population as a semi-isolated peripheral population.: 114 Inbreeding and genetic drift take place due to the reduced population size, driving changes to the genome that would most likely result in extinction (due to low adaptive value).: 115 In rare instances, chromosomal traits with adaptive value may arise, resulting in the origin of a new, derivative species. Evidence for the occurrence of this type of speciation has been found in several plant species pairs: Layia discoidea and L. glandulosa, Clarkia lingulata and C. biloba, and Stephanomeria malheurensis and S. exigua ssp. coronaria. A closely related model of peripatric speciation is called budding speciation—largely applied in the context of plant speciation. The budding process, where a new species originates at the margins of an ancestral range, is thought to be common in plants—especially in progenitor-derivative species pairs.
Models:
Centrifugal speciation William Louis Brown, Jr. proposed an alternative model of peripatric speciation in 1957 called centrifugal speciation. This model contrasts with peripatric speciation by virtue of the origin of the genetic novelty that leads to reproductive isolation. A population of a species experiences periods of geographic range expansion followed by periods of contraction. During the contraction phase, fragments of the population become isolated as small refugial populations on the periphery of the central population. Because of the large size and potentially greater genetic variation within the central population, mutations arise more readily there. These mutations are left in the isolated peripheral populations, promoting reproductive isolation. Consequently, Brown suggested that during another expansion phase, the central population would overwhelm the peripheral populations, hindering speciation. However, if the species finds a specialized ecological niche, the two may coexist. The phylogenetic signature of this model is that the central population becomes derived, while the peripheral isolates stay plesiomorphic—the reverse of the general model. In contrast to centrifugal speciation, peripatric speciation has sometimes been referred to as centripetal speciation (see figures 1 and 2 for a contrast). Centrifugal speciation has been largely ignored in the scientific literature, often dominated by the traditional model of peripatric speciation. Despite this, Brown cited a wealth of evidence to support his model, which has not yet been refuted. Peromyscus polionotus and P. melanotis (the peripherally isolated species from the central population of P. maniculatus) arose via the centrifugal speciation model. Centrifugal speciation may have taken place in tree kangaroos, South American frogs (Ceratophrys), shrews (Crocidura), and primates (Presbytis melalophos). John C. Briggs associates centrifugal speciation with centers of origin, contending that the centrifugal model is better supported by the data, citing species patterns from the proposed 'center of origin' within the Indo-West Pacific. Kaneshiro model When a sexual species experiences a population bottleneck—that is, when the genetic variation is reduced due to small population size—mating discrimination among females may be altered by the decrease in courtship behaviors of males. Sexual selection pressures may become weakened by this in an isolated peripheral population, and as a by-product of the altered mating recognition system, secondary sexual traits may appear. Eventually, a growth in population size paired with novel female mate preferences will give rise to reproductive isolation from the main population, thereby completing the peripatric speciation process. Support for this model comes from experiments and observation of species that exhibit asymmetric mating patterns, such as the Hawaiian Drosophila species or the Hawaiian cricket Laupala. However, this model has not been entirely supported by experiments, and therefore it may not represent a plausible process of peripatric speciation that takes place in nature.
Evidence:
Observational evidence and laboratory experiments support the occurrence of peripatric speciation. Islands and archipelagos are often the subject of speciation studies because they represent isolated populations of organisms. Island species provide direct evidence of speciation occurring peripatrically in that "the presence of endemic species on oceanic islands whose closest relatives inhabit a nearby continent" must have originated by a colonization event. Comparative phylogeography of oceanic archipelagos shows consistent patterns of sequential colonization and speciation along island chains, most notably on the Azores, Canary Islands, Society Islands, Marquesas Islands, Galápagos Islands, Austral Islands, and the Hawaiian Islands, all of which express geological patterns of spatial isolation and, in some cases, linear arrangement. Peripatric speciation also occurs on continents, as isolation of small populations can occur through various geographic and dispersal events. Laboratory studies have been conducted where populations of Drosophila, for example, are separated from one another and evolve in reproductive isolation.
Evidence:
Hawaiian archipelago Drosophila species on the Hawaiian archipelago have helped researchers understand speciation processes in great detail. It is well established that Drosophila has undergone an adaptive radiation into hundreds of endemic species on the Hawaiian island chain, originating from a single common ancestor (supported by molecular analysis). Studies consistently find that colonization occurred from older to younger islands, with Drosophila speciating peripatrically at least fifty percent of the time. In conjunction with Drosophila, Hawaiian lobeliads (Cyanea) have also undergone an adaptive radiation, with upwards of twenty-seven percent of extant species arising after new island colonization, exemplifying peripatric speciation, once again occurring in the old-to-young island direction. Other endemic species in Hawaii also provide evidence of peripatric speciation, such as the endemic flightless crickets (Laupala). It has been estimated that "17 species out of 36 well-studied cases of [Laupala] speciation were peripatric". Plant species in genera such as Dubautia, Wilkesia, and Argyroxiphium have also radiated along the archipelago. Other animals besides insects show this same pattern, such as the Hawaiian amber snail (Succinea caduca) and ʻElepaio flycatchers. Tetragnatha spiders have also speciated peripatrically on the Hawaiian islands. Numerous arthropods have been documented existing in patterns consistent with the geologic evolution of the island chain, in that phylogenetic reconstructions find younger species inhabiting the geologically younger islands and older species inhabiting the older islands (or, in some cases, ancestors dating back to when islands currently below sea level were exposed). Spiders such as those from the genus Orsonwelles exhibit patterns compatible with the old-to-young geology. Other endemic genera such as Argyrodes have been shown to have speciated along the island chain. Pagiopalus, Pedinopistha, and part of the family Thomisidae have adaptively radiated along the island chain, as has the wolf spider family, Lycosidae. A host of other Hawaiian endemic arthropod species and genera have had their speciation and phylogeographical patterns studied: the Drosophila grimshawi species complex, damselflies (Megalagrion xanthomelas and Megalagrion pacificum), Doryonychus raptor, Littorophiloscia hawaiiensis, Anax strenuus, Nesogonia blackburni, Theridion grallator, Vanessa tameamea, Hyalopeplus pellucidus, Coleotichus blackburniae, Labula, Hawaiioscia, Banza (in the family Tettigoniidae), Caconemobius, Eupethicea, Ptycta, Megalagrion, Prognathogryllus, Nesosydne, Cephalops, Trupanea, and the tribe Platynini, all suggesting repeated radiations among the islands.
Evidence:
Other islands Phylogenetic studies of a species of crab spider (Misumenops rapaensis) in the family Thomisidae located on the Austral Islands have established the "sequential colonization of [the] lineage down the Austral archipelago toward younger islands". M. rapaensis has traditionally been thought of as a single species, whereas this particular study found distinct genetic differences corresponding to the sequential age of the islands. The figwort species Scrophularia lowei is thought to have arisen through a peripatric speciation event, with the more widespread mainland species Scrophularia arguta dispersing to the Macaronesian islands. Other members of the same genus have also arisen by single colonization events between the islands.
Evidence:
Species patterns on continents The occurrence of peripatry on continents is more difficult to detect due to the possibility of vicariant explanations being equally likely. However, studies concerning the Californian plant species Clarkia biloba and C. lingulata strongly suggest a peripatric origin. In addition, a great deal of research has been conducted on several species of land snails involving chirality that suggests peripatry (with some authors noting other possible interpretations). The chestnut-tailed antbird (Sciaphylax hemimelaena) is found within the Noel Kempff Mercado National Park (Serrania de Huanchaca) in Bolivia. Within this region exists a forest fragment estimated to have been isolated for 1000–3000 years. The population of S. hemimelaena antbirds residing in the isolated patch expresses significant song divergence, thought to be an "early step" in the process of peripatric speciation. Further, peripheral isolation "may partly explain the dramatic diversification of suboscines in Amazonia". The montane spiny-throated reed frog species complex (genus Hyperolius) originated through peripatric speciation events. Lucinda P. Lawson maintains that the species' geographic ranges within the Eastern Afromontane Biodiversity Hotspot support a peripatric model of speciation, suggesting that this mode of speciation may play a significant role in "highly fragmented ecosystems". In a study of the phylogeny and biogeography of the land snail genus Monacha, the species M. ciscaucasica is thought to have speciated peripatrically from a population of M. roseni. In addition, M. claussi consists of a small population located on the periphery of the much larger range of M. subcarthusiana, suggesting that it also arose by peripatric speciation.
Evidence:
Red spruce (Picea rubens) has arisen from an isolated population of black spruce (Picea mariana). During the Pleistocene, a population of black spruce became geographically isolated, likely due to glaciation. The geographic range of the black spruce is much larger than that of the red spruce. The red spruce has significantly lower genetic diversity in both its nuclear and its mitochondrial DNA than the black spruce. Furthermore, the red spruce has no unique mitochondrial haplotypes, only subsets of those found in the black spruce, suggesting that the red spruce speciated peripatrically from the black spruce population. It is thought that the entire genus Picea in North America has diversified by the process of peripatric speciation, as numerous pairs of closely related species in the genus have smaller southern population ranges, and those with overlapping ranges often exhibit weak reproductive isolation. Using a phylogeographic approach paired with ecological niche models (i.e., prediction and identification of expansion and contraction of species ranges into suitable habitats based on current ecological niches, correlated with fossil and molecular data), researchers found that the prairie dog species Cynomys mexicanus speciated peripatrically from Cynomys ludovicianus approximately 230,000 years ago. North American glacial cycles promoted range expansion and contraction of the prairie dogs, leading to the isolation of a relict population in a refugium located in present-day Coahuila, Mexico. This distribution and paleobiogeographic pattern correlates with other species expressing similar biogeographic range patterns, such as the Sorex cinereus complex.
Evidence:
Laboratory experiments Peripatric speciation has been researched in both laboratory studies and nature. Jerry Coyne and H. Allen Orr, in Speciation, suggest that most laboratory studies of allopatric speciation are also examples of peripatric speciation due to their small population sizes and the inevitable divergent selection that they undergo. Much of the laboratory research concerning peripatry is inextricably linked to founder effect research. Coyne and Orr conclude that selection's role in speciation is well established, whereas genetic drift's role is unsupported by experimental and field data, suggesting that founder-effect speciation does not occur. Nevertheless, a great deal of research has been conducted on the matter, and one study involving bottleneck populations of Drosophila pseudoobscura found evidence of isolation after a single bottleneck. The table is a non-exhaustive list of laboratory experiments focused explicitly on peripatric speciation. Most of the studies also conducted experiments on vicariant speciation. The "replicates" column signifies the number of lines used in the experiment, that is, how many independent populations were used (not the population size or the number of generations performed).
**Cyclotriol**
Cyclotriol:
Cyclotriol (developmental code name ZK-136295; also known as 14α,17α-ethanoestriol) is a synthetic estrogen which was studied in the 1990s and was never marketed. It is a derivative of estriol with a bridge between the C14α and C17α positions. The drug has 40% of the relative binding affinity of estradiol for the human ERα. It showed an absolute bioavailability of 40% with high interindividual variability and an elimination half-life of 12.3 hours in pharmacokinetic studies in women.
**Ladle (spoon)**
Ladle (spoon):
A ladle is a type of cooking implement used for soup, stew, or other foods. Although designs vary, a typical ladle has a long handle terminating in a deep bowl, frequently with the bowl oriented at an angle to the handle to facilitate lifting liquid out of a pot or other vessel and conveying it to a bowl. Some ladles have a point (a pinched pouring lip) on the side of the bowl to allow a finer stream when pouring the liquid; however, this can create difficulty for left-handed users, as it is easier to pour towards oneself. Thus, many of these ladles feature such pinches on both sides.
Ladle (spoon):
In modern times ladles are usually made of the same stainless steel alloys as other kitchen utensils; however, they can also be made of aluminium, silver, plastics, melamine resin, wood, bamboo, or other materials. Ladles are made in a variety of sizes depending upon use; for example, the smaller sizes of less than 5 inches (130 mm) in length are used for sauces or condiments, while extra-large sizes of more than 15 inches (380 mm) in length are used for soup or punch. Ladles are also a part of religious rituals in many cultures. In a Japanese temple, a wooden ladle known as a hishaku is used in performing chozu, a ritual of self-purification required before entering the temple.
**Wrightiadione**
Wrightiadione:
Wrightiadione is an isoflavone that occurs in the plant Wrightia tomentosa and can also be synthesized. It is a novel template for TrkA kinase inhibitors.
**Clostridium enterotoxin**
Clostridium enterotoxin:
Clostridium enterotoxins are toxins produced by Clostridium species. Clostridial species are among the major causes of food poisoning and gastrointestinal illness. They are anaerobic, gram-positive, spore-forming rods that occur naturally in the soil. Among the family are: Clostridium botulinum, which produces one of the most potent toxins in existence; Clostridium tetani, causative agent of tetanus; and Clostridium perfringens, commonly found in wound infections and diarrhea cases. The major virulence factor of C. perfringens is the CPE enterotoxin, which is secreted upon invasion of the host gut and contributes to food poisoning and other gastrointestinal illnesses. It has a molecular weight of 35.3 kDa and is responsible for the disintegration of tight junctions between epithelial cells in the gut. This mechanism is mediated by host claudin-3 and claudin-4 receptors, situated at the tight junctions. Clostridium enterotoxin is a nine-stranded beta-sheet sandwich in shape, and has been determined to be very similar to toxins of other spore-forming bacteria. The binding site is between beta sheets eight and nine; it allows human claudins 3, 4, 6, 7, 8, and 14 to bind, but not claudins 1, 2, 5, and 10. The protein works by destroying the cell membrane structure of animal cells through binding to claudin-family proteins, which are components of the tight junctions of the epithelial cell membrane.
**Exitron**
Exitron:
Exitrons (exonic introns) are produced through alternative splicing and have characteristics of both introns and exons, but are described as retained introns. Even though they are considered introns, which are typically cut out of pre-mRNA sequences, significant problems arise when exitrons are spliced out of these strands, the most obvious result being altered protein structures and functions. They were first discovered in plants, but have recently been found in metazoan species as well.
Alternative splicing:
Exitrons are a result of alternative splicing (AS), in which introns are typically cut out of a pre-mRNA sequence while exons remain in the sequence and are translated into proteins. The same sequence within a pre-mRNA strand can be considered an intron or an exon depending on the desired protein to be produced. As a result, different final mRNA sequences are generated and a large variety of proteins can be made from one single gene. Mutations in these sequences can also alter the way in which a sequence is spliced and, as a result, change the protein produced. Splicing mutations have been found to account for 15–60% of human genetic diseases, which suggests exitrons may play a crucial role in organ homeostasis.
Discovery:
A previous study had looked at alternative splicing in Rockcress (Arabidopsis) plants and pinpointed characteristics of retained introns in sequences. They identified a subset of what they called "cryptic introns" that did not contain stop codons and are now deemed exitrons. The same researchers conducted further studies on their newly discovered exitrons and found 1,002 exitrons in 892 genes of Rockcress, a flowering plant that has been used to model exitrons. Although they were discovered in plants, exitrons have also been found in other metazoan species, including humans. A recent comprehensive analysis of exitron splicing in 33 cancer types highlighted the abundance and impact of exitrons in human cancers. This study revealed that exitron splicing disrupts functional protein domains, causing cancer driver effects and introducing a new potential source of neoantigens.
Distinguishing these regions from typical introns:
Transcripts with exitrons in their sequences can be distinguished from those with retained introns in several ways: (1) transcripts containing exitrons are transported out of the nucleus to be translated, whereas those containing introns are identified as incompletely processed and are kept in the nucleus, where they cannot be translated; (2) only transcripts with exitrons of lengths not divisible by three have the potential to incorporate premature termination sequences, while sequences with retained introns normally result in premature termination, so frameshifting exitron events are more likely to evade nonsense-mediated decay (NMD) than intron retentions; (3) exitron transcripts are usually the major isoforms, whereas those with introns are only present in small amounts; (4) exitrons have distinct cis-acting features, such as weak 5′ and 3′ splice sites, high GC content, and short length compared to retained introns.
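Point (2) above turns on simple modular arithmetic: splicing out a segment whose length is a multiple of three preserves the downstream reading frame. A minimal illustrative sketch in Python (the function name and example lengths are ours, not from any published tool):

```python
def classify_exitron_event(exitron_length_nt: int) -> str:
    """Classify an exitron splicing event by its effect on the reading frame.

    Removing a segment whose length is a multiple of 3 preserves the
    downstream frame (an internal protein deletion); any other length
    shifts the frame and may introduce a premature termination codon.
    """
    if exitron_length_nt % 3 == 0:
        return "frame-preserving (internal protein deletion)"
    return "frameshifting (possible premature termination)"

# Hypothetical exitron lengths in nucleotides
for length in (93, 94):
    print(length, "nt ->", classify_exitron_event(length))
```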
Characteristics:
Exitrons are considered introns but have characteristics of both introns and exons. They originated from ancestral coding exons, but have weaker splice-site signals than other introns. Exitrons have been found to be longer and to have a higher GC content than retained introns and constitutive introns. However, they are of similar size to constitutive exons, and their GC content is lower compared to other exons. Exitrons lack stop codons within their sequences, show synonymous substitutions, and most commonly have lengths in multiples of three nucleotides. Exitron sequences contain sites for numerous post-translational modifications, including sumoylation, ubiquitylation, S-nitrosylation, and lysine acetylation. The ability of exitron splicing (EIS) to alter protein states demonstrates the effect it can have on proteome diversity.
Characteristics:
In Arabidopsis Exitron splicing affects 3.3% of Arabidopsis protein-coding genes. 11% of intron regions were composed of exitrons, and 3.7% of AS events detected in a sample were exitron splicings. The regulation of EIS in tissues is controlled by certain stresses, suggesting a regulatory role in plant adaptation and development.
Characteristics:
In human cancers An analysis showed that exitron splicing affected 63% of human coding genes and that 95% of those events were tumor-specific. Exitron splicing occurred more frequently in cancer tissues (63%) than in normal human tissues (17%), with the highest rates occurring in ovarian, esophageal, and stomach tumors and in acute myeloid leukemia. Using a generalized additive model, researchers determined that exitron splicing dysregulation in cancers could largely be explained by differential expression of splicing factors.
Effects:
Exitron splicing has been found to be a conserved strategy for increasing proteome plasticity in both plants and animals, since it affects plant and human protein features in a similar manner. When exitrons are spliced out of a sequence, the result can be internally deleted proteins and affected protein domains, disordered regions, and various post-translational modification sites that impact protein function. A spliced exitron can result in premature termination of a protein, while a non-spliced exitron results in a full-length protein. The processing of exitrons has been found to be sensitive to cell type and environmental conditions, and their splicing is linked to cancer. The impairment of EIS can potentially contribute to the initiation of cancer formation through its effect on several cancer-related genes, including oncogenes and genes involved in cell adhesion, migration, and metastasis. EIS has also facilitated the discovery of novel cancer driver genes. One of the significantly exitron-spliced genes (SEGs), NEFH, which rarely experiences mutations, was identified as a novel tumor suppressor in prostate cancer. Exitron splicing can also introduce highly immunogenic neoantigens, which can be targeted with immunotherapy, providing a promising avenue for cancer treatment.
**Rhenium**
Rhenium:
Rhenium is a chemical element with the symbol Re and atomic number 75. It is a silvery-gray, heavy, third-row transition metal in group 7 of the periodic table. With an estimated average concentration of 1 part per billion (ppb), rhenium is one of the rarest elements in the Earth's crust. Rhenium has the third-highest melting point of any element and, at 5869 K, the highest boiling point of any stable element. Rhenium resembles manganese and technetium chemically and is mainly obtained as a by-product of the extraction and refinement of molybdenum and copper ores. In its compounds, rhenium shows a wide variety of oxidation states ranging from −1 to +7.
Rhenium:
Discovered by Walter Noddack, Ida Tacke, and Otto Berg in 1925, rhenium was the last stable element to be discovered. It was named after the river Rhine in Europe, from which region the earliest samples had been obtained and worked commercially. Nickel-based superalloys containing rhenium are used in combustion chambers, turbine blades, and exhaust nozzles of jet engines. These alloys contain up to 6% rhenium, making jet engine construction the largest single use for the element. The second-most important use is as a catalyst: rhenium is an excellent catalyst for hydrogenation and isomerization, and is used, for example, in catalytic reforming of naphtha for use in gasoline (the rheniforming process). Because of low availability relative to demand, rhenium is expensive, with the price reaching an all-time high in 2008/2009 of US$10,600 per kilogram (US$4,800 per pound). Due to increases in rhenium recycling and a drop in demand for rhenium in catalysts, the price had dropped to US$2,844 per kilogram (US$1,290 per pound) as of July 2018.
History:
Rhenium (Latin: Rhenus, meaning "Rhine") was the last-discovered of the elements that have a stable isotope (other new elements discovered in nature since then, such as francium, are radioactive). The existence of a yet-undiscovered element at this position in the periodic table had first been predicted by Dmitri Mendeleev. Other calculated information was obtained by Henry Moseley in 1914. In 1908, Japanese chemist Masataka Ogawa announced that he had discovered the 43rd element and named it nipponium (Np) after Japan (Nippon in Japanese). However, recent analysis indicated the presence of rhenium (element 75), not element 43, although this reinterpretation has been questioned by Eric Scerri. The symbol Np was later used for the element neptunium, and the name "nihonium", also named after Japan, along with the symbol Nh, was later used for element 113. Element 113 was also discovered by a team of Japanese scientists and was named in respectful homage to Ogawa's work. Rhenium is generally considered to have been discovered by Walter Noddack, Ida Noddack, and Otto Berg in Germany. In 1925 they reported that they had detected the element in platinum ore and in the mineral columbite. They also found rhenium in gadolinite and molybdenite. In 1928 they were able to extract 1 g of the element by processing 660 kg of molybdenite. It was estimated in 1968 that 75% of the rhenium metal in the United States was used for research and the development of refractory metal alloys. It took several years from that point before the superalloys became widely used.
Characteristics:
Rhenium is a silvery-white metal with one of the highest melting points of all elements, exceeded by only tungsten and carbon. It also has one of the highest boiling points of all elements, and the highest among stable elements. It is also one of the densest, exceeded only by platinum, iridium, and osmium. Rhenium has a hexagonal close-packed crystal structure, with lattice parameters a = 276.1 pm and c = 445.6 pm. Its usual commercial form is a powder, but this element can be consolidated by pressing and sintering in a vacuum or hydrogen atmosphere. This procedure yields a compact solid having a density above 90% of that of the bulk metal. When annealed, this metal is very ductile and can be bent, coiled, or rolled. Rhenium-molybdenum alloys are superconductive at 10 K; tungsten-rhenium alloys are also superconductive around 4–8 K, depending on the alloy. Rhenium metal superconducts at 1.697±0.006 K. In bulk form and at room temperature and atmospheric pressure, the element resists alkalis, sulfuric acid, hydrochloric acid, nitric acid, and aqua regia. It will, however, react with nitric acid upon heating.
Characteristics:
Isotopes Rhenium has one stable isotope, rhenium-185, which nevertheless occurs in minority abundance, a situation found in only two other elements (indium and tellurium). Naturally occurring rhenium is only 37.4% 185Re and 62.6% 187Re, which is unstable but has a very long half-life (≈10^10 years). A kilogram of natural rhenium emits 1.07 MBq of radiation due to the presence of this isotope. This lifetime can be greatly affected by the charge state of the rhenium atom. The beta decay of 187Re is used for rhenium–osmium dating of ores. The available energy for this beta decay (2.6 keV) is one of the lowest known among all radionuclides. The isotope rhenium-186m is notable as one of the longest-lived metastable isotopes, with a half-life of around 200,000 years. There are 33 other unstable isotopes that have been recognized, ranging from 160Re to 194Re, the longest-lived of which is 183Re with a half-life of 70 days.
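The quoted activity of 1.07 MBq per kilogram can be sanity-checked from the 187Re abundance using A = λN, where λ = ln(2)/t½. A rough sketch, assuming a 187Re half-life of about 4.12×10^10 years (a value not stated above):

```python
import math

AVOGADRO = 6.022e23                 # atoms per mole
MOLAR_MASS_RE = 186.2               # g/mol, natural rhenium
ABUNDANCE_RE187 = 0.626             # fraction of natural Re that is 187Re
HALF_LIFE_S = 4.12e10 * 3.156e7     # assumed ~4.12e10 yr half-life, in seconds

# Number of 187Re atoms in 1 kg of natural rhenium
n_atoms = (1000 / MOLAR_MASS_RE) * ABUNDANCE_RE187 * AVOGADRO

# Activity A = lambda * N, with lambda = ln(2) / half-life
activity_bq = math.log(2) / HALF_LIFE_S * n_atoms
print(f"{activity_bq / 1e6:.2f} MBq per kg")   # ~1.1 MBq, consistent with the text
```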
Characteristics:
Compounds Rhenium compounds are known for all oxidation states between −3 and +7 except −2. The oxidation states +7, +6, +4, and +2 are the most common. Rhenium is most available commercially as salts of perrhenate, including sodium and ammonium perrhenates; these are white, water-soluble compounds. The tetrathioperrhenate anion [ReS4]− is also possible.
Characteristics:
Halides and oxyhalides The most common rhenium chlorides are ReCl6, ReCl5, ReCl4, and ReCl3. The structures of these compounds often feature extensive Re-Re bonding, which is characteristic of this metal in oxidation states lower than VII. Salts of [Re2Cl8]2− feature a quadruple metal-metal bond. Although the highest rhenium chloride features Re(VI), fluorine gives the d0 Re(VII) derivative rhenium heptafluoride. Bromides and iodides of rhenium are also well known.
Characteristics:
Like tungsten and molybdenum, with which it shares chemical similarities, rhenium forms a variety of oxyhalides. The oxychlorides are most common, and include ReOCl4, ReOCl3.
Oxides and sulfides The most common oxide is the volatile yellow Re2O7. The red rhenium trioxide ReO3 adopts a perovskite-like structure. Other oxides include Re2O5, ReO2, and Re2O3. The sulfides are ReS2 and Re2S7. Perrhenate salts can be converted to tetrathioperrhenate by the action of ammonium hydrosulfide.
Other compounds Rhenium diboride (ReB2) is a hard compound having a hardness similar to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride.
Characteristics:
Organorhenium compounds Dirhenium decacarbonyl is the most common entry point to organorhenium chemistry. Its reduction with sodium amalgam gives Na[Re(CO)5], with rhenium in the formal oxidation state −1. Dirhenium decacarbonyl can be oxidised with bromine to bromopentacarbonylrhenium(I):
Re2(CO)10 + Br2 → 2 Re(CO)5Br
Reduction of this pentacarbonyl with zinc and acetic acid gives pentacarbonylhydridorhenium:
Re(CO)5Br + Zn + HOAc → Re(CO)5H + ZnBr(OAc)
Methylrhenium trioxide ("MTO"), CH3ReO3, is a volatile, colourless solid that has been used as a catalyst in some laboratory experiments. It can be prepared by many routes; a typical method is the reaction of Re2O7 and tetramethyltin:
Re2O7 + (CH3)4Sn → CH3ReO3 + (CH3)3SnOReO3
Analogous alkyl and aryl derivatives are known. MTO catalyses oxidations with hydrogen peroxide: terminal alkynes yield the corresponding acid or ester, internal alkynes yield diketones, and alkenes give epoxides. MTO also catalyses the conversion of aldehydes and diazoalkanes into an alkene.
Characteristics:
Nonahydridorhenate A distinctive derivative of rhenium is nonahydridorhenate, originally thought to be the rhenide anion, Re−, but actually containing the [ReH9]2− anion, in which the oxidation state of rhenium is +7.
Characteristics:
Occurrence Rhenium is one of the rarest elements in Earth's crust, with an average concentration of 1 ppb; other sources quote a figure of 0.5 ppb, making it the 77th most abundant element in Earth's crust. Rhenium is probably not found free in nature (its possible natural occurrence is uncertain), but occurs in amounts up to 0.2% in the mineral molybdenite (which is primarily molybdenum disulfide), the major commercial source, although single molybdenite samples with up to 1.88% have been found. Chile has the world's largest rhenium reserves, part of its copper ore deposits, and was the leading producer as of 2005. The first rhenium mineral was only found and described in 1994: a rhenium sulfide (ReS2) condensing from a fumarole on Kudriavy volcano, Iturup island, in the Kuril Islands. Kudriavy discharges 20–60 kg of rhenium per year, mostly in the form of rhenium disulfide. Named rheniite, this rare mineral commands high prices among collectors.
Production:
Approximately 80% of rhenium is extracted from porphyry molybdenum deposits. Some ores contain 0.001% to 0.2% rhenium. Roasting the ore volatilizes rhenium oxides. Rhenium(VII) oxide and perrhenic acid readily dissolve in water; they are leached from flue dusts and gases, extracted by precipitation with potassium or ammonium chloride as the perrhenate salts, and purified by recrystallization. Total world production is between 40 and 50 tons/year; the main producers are in Chile, the United States, Peru, and Poland. Recycling of used Pt-Re catalysts and special alloys allows the recovery of another 10 tons per year. Prices for the metal rose rapidly in early 2008, from $1,000–$2,000 per kg in 2003–2006 to over $10,000 in February 2008. The metal form is prepared by reducing ammonium perrhenate with hydrogen at high temperatures:
2 NH4ReO4 + 7 H2 → 2 Re + 8 H2O + 2 NH3
There are also technologies for the associated recovery of rhenium from solutions produced during the underground (in-situ) leaching of uranium ores.
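As a worked example of the reduction stoichiometry above, the following sketch estimates the ammonium perrhenate and hydrogen consumed per kilogram of rhenium metal (molar masses are standard values; the script is illustrative only):

```python
# 2 NH4ReO4 + 7 H2 -> 2 Re + 8 H2O + 2 NH3
M_NH4REO4 = 268.24   # g/mol, ammonium perrhenate
M_RE = 186.21        # g/mol, rhenium
M_H2 = 2.016         # g/mol, hydrogen

mol_re = 1000 / M_RE                 # moles of Re in 1 kg of metal
mass_salt = mol_re * M_NH4REO4       # 1 mol NH4ReO4 per mol Re
mass_h2 = mol_re * (7 / 2) * M_H2    # 7 mol H2 per 2 mol Re

print(f"NH4ReO4 required: {mass_salt / 1000:.2f} kg")   # ~1.44 kg
print(f"H2 required:      {mass_h2:.1f} g")             # ~37.9 g
```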
Applications:
Rhenium is added to high-temperature superalloys that are used to make jet engine parts, using 70% of the worldwide rhenium production. Another major application is in platinum–rhenium catalysts, which are primarily used in making lead-free, high-octane gasoline.
Applications:
Alloys The nickel-based superalloys have improved creep strength with the addition of rhenium. The alloys normally contain 3% or 6% rhenium. Second-generation alloys contain 3%; these alloys were used in the engines for the F-15 and F-16, whereas the newer single-crystal third-generation alloys contain 6% rhenium and are used in the F-22 and F-35 engines. Rhenium is also used in superalloys such as CMSX-4 (2nd generation) and CMSX-10 (3rd generation) that are used in industrial gas turbine engines like the GE 7FA. Rhenium can cause superalloys to become microstructurally unstable, forming undesirable topologically close-packed (TCP) phases. In 4th- and 5th-generation superalloys, ruthenium is used to avoid this effect. Among the newer superalloys are EPM-102 (with 3% Ru) and TMS-162 (with 6% Ru), as well as TMS-138 and TMS-174.
Applications:
For 2006, consumption is given as 28% for General Electric, 28% for Rolls-Royce plc, and 12% for Pratt & Whitney, all for superalloys, whereas use for catalysts accounts for only 14% and the remaining applications use 18%. In 2006, 77% of rhenium consumption in the United States was in alloys. The rising demand for military jet engines and the constant supply made it necessary to develop superalloys with a lower rhenium content. For example, the newer CFM International CFM56 high-pressure turbine (HPT) blades will use Rene N515 with a rhenium content of 1.5% instead of Rene N5 with 3%. Rhenium improves the properties of tungsten. Tungsten-rhenium alloys are more ductile at low temperature, allowing them to be more easily machined. The high-temperature stability is also improved. The effect increases with the rhenium concentration, and therefore tungsten alloys are produced with up to 27% Re, which is the solubility limit. Tungsten-rhenium wire was originally created in efforts to develop a wire that was more ductile after recrystallization. This allows the wire to meet specific performance objectives, including superior vibration resistance, improved ductility, and higher resistivity. One application for the tungsten-rhenium alloys is X-ray sources. The high melting point of both elements, together with their high atomic mass, makes them stable against prolonged electron impact. Rhenium-tungsten alloys are also applied as thermocouples to measure temperatures up to 2200 °C. The high-temperature stability, low vapor pressure, good wear resistance, and ability to withstand arc corrosion of rhenium are useful in self-cleaning electrical contacts. In particular, the discharge that occurs during electrical switching oxidizes the contacts. However, rhenium oxide Re2O7 is volatile (sublimes at ~360 °C) and therefore is removed during the discharge. Rhenium has a high melting point and a low vapor pressure similar to tantalum and tungsten. Therefore, rhenium filaments exhibit higher stability if the filament is operated not in vacuum but in an oxygen-containing atmosphere. Those filaments are widely used in mass spectrometers, ion gauges, and photoflash lamps in photography.
Applications:
Catalysts Rhenium in the form of rhenium-platinum alloy is used as a catalyst for catalytic reforming, a chemical process to convert petroleum refinery naphthas with low octane ratings into high-octane liquid products. Worldwide, 30% of catalysts used for this process contain rhenium. Olefin metathesis is another reaction for which rhenium is used as a catalyst; normally Re2O7 on alumina is used for this process. Rhenium catalysts are very resistant to chemical poisoning from nitrogen, sulfur, and phosphorus, and so are used in certain kinds of hydrogenation reactions.
Applications:
Other uses The isotopes 186Re and 188Re are radioactive and are used for treatment of liver cancer. They both have similar penetration depths in tissue (5 mm for 186Re and 11 mm for 188Re), but 186Re has the advantage of a longer half-life (90 hours vs. 17 hours). 188Re is also being used experimentally in a novel treatment of pancreatic cancer, where it is delivered by means of the bacterium Listeria monocytogenes. The 188Re isotope is also used for rhenium-SCT (skin cancer therapy). The treatment uses the isotope's properties as a beta emitter for brachytherapy in the treatment of basal cell carcinoma and squamous cell carcinoma of the skin. Related by periodic trends, rhenium has a chemistry similar to that of technetium; work done to label rhenium onto target compounds can often be translated to technetium. This is useful for radiopharmacy, where it is difficult to work with technetium (especially the technetium-99m isotope used in medicine) due to its expense and short half-life.
Precautions:
Very little is known about the toxicity of rhenium and its compounds because they are used in very small amounts. Soluble salts, such as the rhenium halides or perrhenates, could be hazardous due to elements other than rhenium or due to rhenium itself. Only a few compounds of rhenium have been tested for their acute toxicity; two examples are potassium perrhenate and rhenium trichloride, which were injected as a solution into rats. The perrhenate had an LD50 value of 2800 mg/kg after seven days (this is very low toxicity, similar to that of table salt) and the rhenium trichloride showed an LD50 of 280 mg/kg.
**Rombo syndrome**
Rombo syndrome:
Rombo syndrome is a very rare genetic disorder characterized mainly by atrophoderma vermiculatum of the face, multiple milia, telangiectases, acral erythema, peripheral vasodilation with cyanosis, and a propensity to develop basal cell carcinomas. The lesions become visible in late childhood, beginning at ages 7 to 10 years, and are most pronounced on the face. At that age a pronounced, somewhat cyanotic redness of the lips and hands is evident, as well as moderate follicular atrophy of the skin on the cheeks. In adulthood, whitish-yellow, milia-like papules and telangiectatic vessels develop. The papules are present particularly on the cheeks and forehead, gradually becoming very conspicuous and dominating the clinical picture. Trichoepitheliomas were found in one case. In adults, the eyelashes and eyebrows were either missing or irregularly distributed, with defective and maldirected growth. Basal cell carcinomas were a frequent complication and may develop around the age of 35. The skin atrophy was referred to as vermiculate atrophoderma. Histological observations during the early stage include irregularly distributed and atrophic hair follicles, milia, dilated dermal vessels, and lack of elastin or elastin in clumps. After light irradiation, a tendency to increased repair activity was observed both in the epidermis and in the dermal fibroblasts.
Rombo syndrome:
Histologic sections showed the dermis to be almost devoid of elastin in most areas with clumping of elastic material in other areas. The disorder had been transmitted through at least 4 generations with instances of male-to-male transmission.
**Acanthocyte**
Acanthocyte:
Acanthocyte (from the Greek word ἄκανθα acantha, meaning 'thorn'), in biology and medicine, refers to an abnormal form of red blood cell that has a spiked cell membrane, due to thorny projections. A similar term is spur cells. Often they may be confused with echinocytes or schistocytes.
Acanthocytes have coarse, irregularly spaced, variably sized crenations, resembling many-pointed stars. They are seen on blood films in abetalipoproteinemia, liver disease, chorea acanthocytosis, McLeod syndrome, and several inherited neurological and other disorders such as neuroacanthocytosis, anorexia nervosa, infantile pyknocytosis, hypothyroidism, idiopathic neonatal hepatitis, alcoholism, congestive splenomegaly, Zieve syndrome, and chronic granulomatous disease.
Usage:
Spur cells may refer synonymously to acanthocytes, or may refer in some sources to a specific subset of 'extreme acanthocytes' that have undergone splenic modification, whereby additional cell-membrane loss has blunted the spicules and the cells have become spherocytic ('spheroacanthocytes'), as seen in some patients with severe liver disease. Acanthocytosis can refer generally to the presence of this type of crenated red blood cell, such as may be found in severe cirrhosis or pancreatitis, but can refer specifically to abetalipoproteinemia, a clinical condition with acanthocytic red blood cells, neurologic problems, and steatorrhea. This particular cause of acanthocytosis (also known as abetalipoproteinemia, apolipoprotein B deficiency, and Bassen-Kornzweig syndrome) is a rare, genetically inherited, autosomal recessive condition resulting from the inability to fully digest dietary fats in the intestines due to various mutations of the microsomal triglyceride transfer protein (MTTP) gene.
Pathophysiology:
Acanthocytes arise from either alterations in membrane lipids or structural proteins. Alterations in membrane lipids are seen in abetalipoproteinemia and liver dysfunction. Alteration in membrane structural proteins are seen in neuroacanthocytosis and McLeod syndrome.
In liver dysfunction, apolipoprotein A-II-deficient lipoprotein accumulates in plasma, increasing the cholesterol content of red blood cells. This causes abnormalities of the RBC membrane, leading to remodeling in the spleen and the formation of acanthocytes.
In abetalipoproteinemia, a deficiency of lipids and vitamin E causes abnormal RBC morphology.
Differential diagnoses:
The diagnosis of acanthocytosis should be differentiated from: acute or chronic anemia, hepatitis A, B, and C, hepatorenal syndrome, hypopituitarism, malabsorption syndromes, and malnutrition. Acanthocytosis secondary to malnourishment, such as in anorexia nervosa and cystic fibrosis, remits with resolution of the nutritional deficiency. Acanthocyte-like cells may be found in hypothyroidism, after splenectomy, and in myelodysplasia. Acanthocytes should be distinguished from echinocytes (also called 'burr cells'), which, although crenated, are dissimilar in that they have multiple small projecting spiculations at regular intervals on the cell membrane. Burr cells usually imply uremia but are seen in many conditions, including mild hemolysis in hypomagnesemia and hypophosphatemia, hemolytic anemia in long-distance runners, and pyruvate kinase deficiency. Burr cells can also arise in vitro due to elevated pH, blood storage, ATP depletion, calcium accumulation, and contact with glass. Acanthocytes should also be distinguished from keratocytes, also called 'horn cells', which have a few very large protuberances.
**Hardware certification**
Hardware certification:
Hardware certification is the process through which computer hardware is tested to ensure it is compatible with specific software packages and operates as intended in critical situations. With ever-dropping prices of hardware devices, the market for networking devices and systems is undergoing a kind of change that can loosely be termed "generalization". Big established enterprises like Cisco, Novell, and Sun Microsystems no longer manufacture all the hardware required in the market; instead they "license" or "certify" small hardware players operating in countries such as Taiwan or China.
Certification process:
Vendor certification To obtain certification, the hardware or software has to conform to a set of protocols and quality standards put in place by the original creator of the technology. Usually the certification process is carried out by a "certification partner". Certification partners are selected by the original creators of the technology and are given the authority to do the testing and certification. After a product is found to be compatible, it is labelled as "xxx certified", where xxx is the name of the original creator of the technology. Vendors use such a label on their products to advertise compatibility with the said technology.
Certification process:
The process of certification ensures that products made by different manufacturers are standardized and compatible with each other, as indicated by the hardware or software platform.
Third-party certification Third-party certification is undertaken by an independent body. To obtain third-party certification, the hardware or software has to conform to a set of quality standards determined by the third party.
**International Journal of Sensor Networks**
International Journal of Sensor Networks:
The International Journal of Sensor Networks (IJSNet) is a monthly peer-reviewed scientific journal covering research on distributed, wired, and wireless sensor networks. It is published by Inderscience Publishers. The journal was established in 2006.
Abstracting and indexing:
The journal is abstracted and indexed in Science Citation Index Expanded, Current Contents/Engineering, Computing & Technology, Scopus, Academic OneFile, and the ACM Digital Library.
**Proteans**
Proteans:
Proteans (or the Proteus effect) are unpredictable, subtle, and often subconscious flirting signals, such as a woman's touching of her hair when first meeting a man.
Proteans:
The term was coined by Humphries and Driver in 1970 for unpredictable behaviour exhibited by prey animals. It was used in the context of human courtship behaviour by Grammer et al. in 2000. The researchers named the ritual for the shape-shifting Greek god because of the ambiguity of the signals. The name also suggests a first impression, or something that precedes actual flirting. Because of the unconscious nature of proteans, they are not overt invitations to proceed, but more akin to "tells" in a poker game.
Proteans:
One study found that women tend to exhibit interest in the first few minutes of their interactions with strangers regardless of their level of attraction, and only indicate their true level of interest after this time. These signals often indicate that the sender is trying to decide whether they are interested in the "receiver". However, some individuals, instead of playing along, will overestimate the sender's interest and do something more obvious, like asking for a phone number. This can be clumsy and confusing to both parties, and understanding the concept of protean signals is useful for avoiding such missteps. Misinterpreting these cues and responding to them overeagerly is commonly said to happen to men more than women, although both can suffer when this happens.
**2,2′-Bipyridine**
2,2′-Bipyridine:
2,2′-Bipyridine (bipy or bpy) is an organic compound with the formula C10H8N2. This colorless solid is an important isomer of the bipyridine family. It is a bidentate chelating ligand, forming complexes with many transition metals. Ruthenium and platinum complexes of bipy exhibit intense luminescence, which may have practical applications.
Preparation, structure, and general properties:
2,2'-Bipyridine was first prepared by decarboxylation of divalent metal derivatives of pyridine-2-carboxylate:
M(O2CC5H4N)2 → (C5H4N)2 + 2 CO2 + ...
It is prepared by the dehydrogenation of pyridine using Raney nickel:
2 C5H5N → (C5H4N)2 + H2
Although bipyridine is often drawn with its nitrogen atoms in the cis conformation, the lowest-energy conformation, both in the solid state and in solution, is in fact coplanar with the nitrogen atoms in the trans position. Monoprotonated bipyridine adopts a cis conformation.
Reactions:
A large number of complexes of 2,2'-bipyridine have been described. It binds metals as a chelating ligand, forming a 5-membered chelate ring.
**Mash ingredients**
Mash ingredients:
Mash ingredients, mash bill, mashbill, or grain bill are the materials that brewers use to produce the wort that they then ferment into alcohol. Mashing is the act of creating and extracting fermentable and non-fermentable sugars and flavor components from grain by steeping it in hot water, and then letting it rest at specific temperature ranges to activate naturally occurring enzymes in the grain that convert starches to sugars. The sugars separate from the mash ingredients, and then yeast in the brewing process converts them to alcohol and other fermentation products.
Mash ingredients:
A typical primary mash ingredient is grain that has been malted. Modern-day malt recipes generally consist of a large percentage of a light malt and, optionally, smaller percentages of more flavorful or highly colored types of malt. The former is called "base malt"; the latter are known as "specialty malts".
Mash ingredients:
The grain bill of a beer or whisky may vary widely in the number and proportion of ingredients. For example, in beer-making, a simple pale ale might contain a single malted grain, while a complex porter may contain a dozen or more ingredients. In whisky production, Bourbon uses a mash made primarily from maize (often mixed with rye or wheat and a small amount of malted barley), and single malt Scotch exclusively uses malted barley.
Variables:
Each particular ingredient has its own flavor that contributes to the final character of the beverage. In addition, different ingredients carry other characteristics, not directly relating to the flavor, which may dictate some of the choices made in brewing: nitrogen content, diastatic power, color, modification, and conversion.
Variables:
Nitrogen content The nitrogen content of a grain relates to the mass fraction of the grain that is made up of protein, and is usually expressed as a percentage; this fraction is further refined by distinguishing what portion of the protein is water-soluble, also usually expressed as a percentage; 40% is typical for most beermaking grains. Generally, brewers favor lower-nitrogen grains, while distillers favor high-nitrogen grains. In most beermaking, an average nitrogen content in the grains of at most 10% is sought; higher protein content, especially the presence of high-mass proteins, causes "chill haze", a cloudy visual quality in the beer. However, this is mostly a cosmetic desire dating from the mass production of glassware for presenting beverages; traditional styles such as sahti, saison, and bière de garde, as well as several Belgian styles, make no special effort to create a clear product. The quantity of high-mass proteins can be reduced during the mash by making use of a protease rest. In Britain, preferred brewers' grains are often obtained from winter harvests and grown in low-nitrogen soil; in central Europe, no special changes are made for the grain-growing conditions and multi-step decoction mashing is favored instead.
Variables:
Distillers, by contrast, are not as constrained by the amount of protein in their mash as the non-volatile nature of proteins means that none is included in the final distilled product. Therefore, distillers seek out higher-nitrogen grains to ensure a more efficiently-made product. Higher-protein grains generally have more diastatic power.
Variables:
Diastatic power Diastatic power (DP), also called the "diastatic activity" or "enzymatic power", is a property of malts (grains that have begun to germinate) that refers to the malt's ability to break down starches into simpler fermentable sugars during the mashing process. Germination produces a number of enzymes, such as amylase, that can convert the starch naturally present in barley and other grains into sugar. The mashing process activates these enzymes by soaking the grain in water at a controlled temperature. In general, the hotter a grain is kilned, the less its diastatic activity. As a consequence, only lightly colored grains can be used as base malts, with Munich malt being the darkest base malt generally available.
Variables:
Diastatic activity can also be provided by diastatic malt extract or by inclusion of separately-prepared brewing enzymes.
Variables:
Diastatic power for a grain is measured in degrees Lintner (°Lintner or °L, although the latter can conflict with the symbol °L for Lovibond color), or in Europe in Windisch-Kolbach units (°WK). The two measures are related by °Lintner = (°WK + 16) / 3.5, or equivalently °WK = (3.5 × °Lintner) − 16. A malt with enough power to self-convert has a diastatic power near 35 °Lintner (94 °WK). Until recently, the most active, so-called "hottest", malts available were American six-row pale barley malts, which have a diastatic power of up to 160 °Lintner (544 °WK). Wheat malts have begun to appear on the market with diastatic power of up to 200 °Lintner. Although huskless wheat is somewhat difficult to work with, it is usually used in conjunction with barley, or as an addition to add high diastatic power to a mash.
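A minimal sketch of the °Lintner ↔ °WK conversion given above (the function names are ours):

```python
def lintner_to_wk(lintner: float) -> float:
    """Convert diastatic power from degrees Lintner to Windisch-Kolbach units."""
    return 3.5 * lintner - 16

def wk_to_lintner(wk: float) -> float:
    """Convert diastatic power from Windisch-Kolbach units to degrees Lintner."""
    return (wk + 16) / 3.5

print(lintner_to_wk(160))   # 544.0, matching the six-row barley figure above
print(wk_to_lintner(544))   # 160.0
```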
Variables:
Color In brewing, the color of a grain or product is evaluated by the Standard Reference Method (SRM), Lovibond (°L), American Society of Brewing Chemists (ASBC), or European Brewery Convention (EBC) standards. While SRM and ASBC originate in North America and EBC in Europe, all of these systems can be found in use throughout the world; degrees Lovibond has fallen out of industry use but remains in use in homebrewing circles as the easiest to implement without a spectrophotometer. The darkness of grains ranges from as light as less than 2 SRM/4 EBC for Pilsener malt to as dark as 700 SRM/1600 EBC for black malt and roasted barley.
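For reference, the color systems mentioned are linked by widely used linear approximations that the text does not spell out: EBC ≈ 1.97 × SRM, and SRM ≈ 1.3546 × °L − 0.76. A short sketch (the approximations grow rough at the very dark end of the scale):

```python
def srm_to_ebc(srm: float) -> float:
    """Approximate EBC color from SRM (EBC ~ 1.97 * SRM)."""
    return 1.97 * srm

def lovibond_to_srm(lovibond: float) -> float:
    """Approximate SRM from degrees Lovibond (common linear fit)."""
    return 1.3546 * lovibond - 0.76

# Endpoints quoted above: Pilsener malt (~2 SRM) and black malt (~700 SRM).
# Note the divergence from the quoted 1600 EBC at the dark extreme.
for srm in (2, 700):
    print(f"{srm} SRM -> {srm_to_ebc(srm):.0f} EBC")
```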
Variables:
Modification The quality of starches in a grain is variable with the strain of grain used and its growing conditions. "Modification" refers specifically to the extent to which starch molecules in the grain consist of simple chains of starch molecules versus branched chains; a fully modified grain contains only simple-chain starch molecules. A grain that is not fully modified requires mashing in multiple steps rather than at simply one temperature, as the starches must be de-branched before amylase can work on them. One indicator of the degree of modification of a grain is that grain's nitrogen ratio, that is, the amount of soluble nitrogen (or protein) in a grain versus the total amount of nitrogen (or protein). This number is also referred to as the "Kolbach index", and a malt with a Kolbach index between 36% and 42% is considered highly modified and suitable for single-infusion mashing. Maltsters use the length of the acrospire versus the length of the grain to determine when the appropriate degree of modification has been reached before drying or kilning.
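A small illustrative helper for the Kolbach index described above (the names and sample assay values are hypothetical):

```python
def kolbach_index(soluble_nitrogen: float, total_nitrogen: float) -> float:
    """Kolbach index: soluble nitrogen as a percentage of total nitrogen."""
    return 100 * soluble_nitrogen / total_nitrogen

def suits_single_infusion(index_percent: float) -> bool:
    """Highly modified malts (roughly 36-42% Kolbach) suit single-infusion mashing."""
    return 36 <= index_percent <= 42

# Hypothetical malt analysis: 0.65% soluble N against 1.65% total N
idx = kolbach_index(soluble_nitrogen=0.65, total_nitrogen=1.65)
print(f"Kolbach index: {idx:.1f}% -> single infusion ok: {suits_single_infusion(idx)}")
```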
Variables:
Conversion Conversion is the extent to which starches in the grain have been enzymatically broken down into sugars. A caramel or crystal malt is fully converted before it goes into the mash; most malted grains have little conversion; unmalted grains, meanwhile, have little or no conversion. Unconverted starch becomes sugar during the last steps of mashing, through the action of alpha and beta amylases.
Malts:
The oldest and most predominant ingredient in brewing is barley, which has been used in beer-making for thousands of years. Modern brewing predominantly uses malted barley for its enzymatic power, but ancient Babylonian recipes indicate that, lacking the ability to malt grain in a controlled fashion, brewers simply soaked baked bread in water.
Malted barley dried at a sufficiently low temperature contains enzymes such as amylase, which convert starch into sugar. Therefore, sugars can be extracted from the barley's own starches simply by soaking the grain in water at a controlled temperature; this is mashing.
Malts:
Pilsner malt Pilsner malt, the basis of pale lager, is quite pale and strongly flavored. Invented in the 1840s, Pilsner malt is the lightest-colored generally available malt, and also carries a strong, sweet malt flavor. Usually a pale lager's grain bill consists entirely of this malt, which has enough enzymatic power to be used as a base malt. The commercial desirability of light-colored beers has also led to some British brewers adopting Pilsner malt (sometimes described simply as "lager malt" in Britain) in creating golden ales. In Germany, Pilsner malt is also used in some interpretations of the Kölsch style. ASBC 1-2/EBC 3–4, DP 60 °Lintner.
Malts:
Pale malt Pale malt is the basis of pale ale and bitter, and the precursor in production of most other British beer malts. Dried at temperatures sufficiently low to preserve all the brewing enzymes in the grain, it is light in color and, today, the cheapest barley malt available due to mass production. It can be used as a base malt—that is, as the malt constituting the majority of the grist—in many styles of beer. Typically, English pale malts are kilned at 95–105 °C. Color ASBC 2-3/EBC 5–7. Diastatic power (DP) 45 °Lintner.
Malts:
Mild malt Mild malt is often used as the base malt for mild ale, and is similar in color to pale malt. Mild malt is kilned at slightly higher temperatures than pale malt to provide a less neutral, rounder flavor generally described as "nutty". ASBC 3/EBC 6.
Malts:
Amber malt Amber malt is a more toasted form of pale malt, kilned at temperatures of 150–160 °C, and is used in brown porter; older formulations of brown porter use amber malt as a base malt (though this was diastatic and produced in different conditions from a modern amber malt). Amber malt has a bitter flavor that mellows on aging, and can be quite intensely flavored. In addition to its use in porter, it also appears in a diverse range of British beer recipes. ASBC 50-70/EBC 100–140; amber malt has no diastatic power.
Malts:
Stout malt Stout malt is sometimes seen as a base malt for stout beer; light in color, it is prepared so as to maximize diastatic power in order to better convert the large quantities of dark malts and unmalted grain used in stouts. In practice, however, most stout recipes make use of pale malt for its much greater availability. ASBC 2-3/EBC 4–6, DP 60–70 °Lintner.
Malts:
Brown malt Brown malt is a darker form of pale malt, and is used typically in brown ale as well as in porter and stout. Like amber malt, it can be prepared from pale malt at home by baking a thin layer of pale malt in an oven until the desired color is achieved. 50–70 °L, no enzymes.
Chocolate malt Chocolate malt is similar to pale and amber malts but kilned at even higher temperatures. Producing complex chocolate and cocoa flavours, it is used in porters and sweet stouts as well as dark mild ales. It contains no enzymes. ASBC 450-500/EBC 1100–1300.
Malts:
Black malt Black malt, also called patent malt or black patent malt, is barley malt that has been kilned to the point of carbonizing, around 200 °C. The term "patent malt" comes from its invention in England in 1817, late enough that the inventor of the process for its manufacture, Daniel Wheeler, was awarded a patent. Black malt provides the colour and some of the flavour in black porter, contributing an acrid, ashy undertone to the taste. In small quantities, black malt can also be used to darken beer to a desired color, sometimes as a substitute for caramel colour. Due to its high kilning temperature, it contains no enzymes. ASBC 500-600/EBC >1300.
Malts:
Crystal malt Crystal malts, or caramel malts, are prepared separately from pale malts. They are high-nitrogen malts that are wetted and roasted in a rotating drum before kilning. They produce strongly sweet toffee-like flavors and are sufficiently converted that they can be steeped without mashing to extract their flavor. Crystal malts are available in a range of colors, with darker-colored crystal malts kilned at higher temperatures producing stronger, more caramel-like overtones. Some of the sugars in crystal malts caramelize during kilning and become unfermentable. Hence, adding crystal malt increases the final sweetness of a beer. They contain no enzymes. ASBC 50-165/EBC 90–320; the typical British crystal malt used in pale ale and bitter is around ASBC 70–80.
Malts:
Distiller's malt Standard distiller's malt or pot still malt is quite light and very high in nitrogen compared to beer malts. These malts are used in the production of whiskey/whisky and generally originate from northern Scotland.
Malts:
Peated malt Peated malt is distiller's malt that has been smoked over burning peat, which imparts the aroma and flavor characteristics of Islay whisky and some Irish whiskey. Recently, some brewers have also included peated malt in interpretations of Scotch ales, although this is generally ahistorical. When peat is used in large amounts for beer making, the resulting beer tends to have a very strong earthy and smoky flavor that most mainstream beer drinkers would find unusual.
Malts:
Vienna malt Vienna malt or Helles malt is the characteristic grain of Vienna lager and Märzen; although it generally takes up only ten to fifteen percent of the grain bill in a beer, it can be used as a base malt. It has sufficient enzymatic power to self-convert, and it is somewhat darker and kilned at a higher temperature than Pilsner malt. ASBC 3-4/EBC 7–10, DP 50 °Lintner.
Malts:
Munich malt Munich malt is used as the base malt of the bock beer style, especially doppelbock, and appears in dunkel lager and Märzens in smaller quantities. While a darker grain than pale malt, it has sufficient diastatic power to self-convert, despite being kilned at temperatures around 115 °C. It imparts "malty," although not necessarily sweet characteristics, depending on mashing temperatures. ASBC 4-6/EBC 10–15, DP 40 °Lintner.
Malts:
Rauchmalz Rauchmalz is a German malt that is prepared by being dried over an open flame rather than via kiln. The grain has a smoky aroma and is an essential ingredient in Bamberg Rauchbier.
Acid malt Acid malt, also known as acidulated malt, whose grains contain lactic acid, can be used as a continental analog to Burtonization. Acid malt lowers the mash pH and provides a rounder, fuller character to the beer, enhancing the flavor of Pilseners and other light lagers. Lowering the pH also helps prevent beer spoilage through oxidation.
Other malts Honey malt is an intensely flavored, lightly colored malt. 18–20 °L.
Melanoidin malt, a malt like the Belgian Aromatic malt, adds roundness and malt flavor to a beer with a comparably small addition in the grain bill. It also stabilizes the flavor.
Unmalted barley Unmalted barley kernels are used in mashes for some Irish whiskey.
Roast barley are un-malted barley kernels toasted in an oven until almost black. Roast barley is, after base malt, usually the most-used grain in stout beers, contributing the majority of the flavor and the characteristic dark-brown color; undertones of chocolate and coffee are common. ASBC 500-600/EBC >1300 or more, no diastatic activity.
Black barley is like roast barley except even darker, and may be used in stouts. It has a strong, astringent flavor and contains no enzymes.
Flaked barley is unmalted, dried barley rolled into flat flakes. It imparts a rich, grainy flavor to beer and is used in many stouts, especially Guinness stout; it also improves head formation and retention.
Torrefied barley is barley kernels that have been heated until they pop like popcorn.
Other grains:
Wheat malt Beer brewed in the German Hefeweizen style relies heavily on malted wheat as a grain. Under the Reinheitsgebot, wheat was treated separately from barley, as it was the more expensive grain.
Torrefied wheat Torrefied wheat is used in British brewing to increase the size and retention of a head in beer. Generally it is used as an enhancer rather than for its flavor.
Raw wheat Belgian witbier and Lambic make heavy use of raw wheat in their grist. It provides the distinctive taste and clouded appearance in a witbier and the more complex carbohydrates needed for the wild yeast and bacteria that make a lambic.
Wheat flour Until the general availability of torrefied wheat, wheat flour was often used for similar purposes in brewing. Brewer's flour is only rarely available today, and is of a larger grist than baker's flour.
Oats Oats in the form of rolled or steel-cut oats are used as mash ingredients in Oatmeal Stout.
Other grains:
Rye The use of rye in a beer typifies the rye beer style, especially the German Roggenbier. Rye is also used in the Slavic kvass and Finnish sahti farmhouse styles, as readily available grains in eastern Europe. However, the use of rye in brewing is considered difficult as rye lacks a hull (like wheat) and contains large quantities of beta-glucans compared to other grains; these long-chain sugars can leach out during a mash, creating a sticky gelatinous gum in the mash tun, and as a result brewing with rye requires a long, thorough beta-glucanase rest. Rye is said to impart a spicy, dry flavor to beer.
Other grains:
Sorghum and millet Sorghum and millet are often used in African brewing. As gluten-free grains, they have gained popularity in the Northern Hemisphere as base materials for beers suitable for people with Celiac disease. Sorghum produces a dark, hazy beer. However, sorghum malt is difficult to prepare and rarely commercially available outside certain African countries. Millet is an ingredient in chhaang and pomba, and both grains together are used in oshikundu.
Other grains:
Rice and maize In the US, rice and maize (corn) are often used by commercial breweries as a means of adding fermentable sugars to a beer cheaply, due to the ready availability and low price of the grains. Maize is also the base grain in chicha and some cauim, as well as Bourbon whiskey and Tennessee Whiskey; while rice is the base grain of happoshu and various mostly Asian fermented beverages often referred to as "rice wines" such as sake and makgeolli; maize is also used as an ingredient in some Belgian beers such as Rodenbach to lighten the body.
Other grains:
Maize was originally introduced into the brewing of American lagers because of the high protein content of the six-row barley; adding maize, which is high in sugar but low in protein, helped thin out the body of the resulting beer. Increased amounts of maize use over time led to the development of the American pale lager style. Maize is generally not malted (although it is in some whiskey recipes) but instead introduced into the mash as flaked, dried kernels. Prior to a brew, rice and maize are cooked to allow the starch to gelatinize and thereby render it convertible.
Non-cereal grains:
Buckwheat and quinoa, while not cereal grasses (though they are whole grains), both contain high levels of available starch and protein while containing no gluten. Therefore, some breweries use these plants in the production of beer suitable for people with Celiac disease, either alone or in combination with sorghum.
Syrups and extracts:
Another way of adding sugar or flavoring to a malt beverage is the addition of natural or artificial sugar products such as honey, white sugar, dextrose, and/or malt extract. While these ingredients can be added during the mash, the enzymes in the mash do not act on them. Such ingredients can be added during the boil of the wort rather than the mash, and as such are also known as copper sugars.
Syrups and extracts:
One such product commonly used in the mash, however, is dry or dried malt extract, or DME. DME is prepared by mashing malt in the normal fashion, then concentrating and spray-drying the resulting wort. DME is used extensively in homebrewing as a substitute for base malt. It typically has no diastatic power because the enzymes are denatured in the production process.
Regional differences:
Britain British brewing makes use of a wide variety of malts, with considerable stylistic freedom for the brewer to blend them. Many British malts were developed only as recently as the Industrial Revolution, as improvements in temperature-controlled kilning allowed finer control over the drying and toasting of the malted grains.
Regional differences:
The typical British brewer's malt is a well-modified, low-nitrogen barley grown in the east of England or southeast of Scotland. In England, the best-known brewer's malt is made from the Maris Otter strain of barley; other common strains are Halcyon, Pipkin, Chariot, and Fanfare. Most malts in current use in Britain are derived from pale malt and were invented no earlier than the reign of Queen Anne. Brewing malt production in Britain is thoroughly industrialized, with barley grown on dedicated land and malts prepared in bulk in large, purpose-built maltings and distributed to brewers around the country to order.
Regional differences:
Continental Europe Before controlled-temperature kilning became available, malted grains were dried over wood fires; Rauchmalz (German: smoked malt) is malt dried using this traditional process. In Germany, beech is often used as the wood for the fire, imparting a strongly smoky flavor to the malt. This malt is then used as the primary component of rauchbier; alder-smoked malt is used in Alaskan smoked porters. Rauchmalz comes in several varieties, generally named for and corresponding to standard kilned varieties (e.g. Rauchpilsener to Pilsener); color and diastatic power are comparable to those for an equivalent kilned grain.
Regional differences:
Similarly to crystal malts in Britain, central Europe makes use of caramel malts, which are moistened and kilned at temperatures around 55–65 °C in a rotating drum before being heated to higher temperatures for browning. The lower-temperature moistened kilning causes conversion and mashing to take place in the oven, resulting in a grain's starches becoming mostly or entirely converted to sugar before darkening. Caramel malts are produced in color grades analogous to other lager malts: Carapils for Pilsner malt, Caravienne or Carahell for Vienna malt, and Caramunich for Munich malt. Color and final kilning temperature are comparable to non-caramel analog malts; there is no diastatic activity. Carapils malt is sometimes also called dextrin malt. 10–120 °L.
Regional differences:
United States American brewing combines British and Central European heritages, and as such uses all the above forms of beer malt; Belgian-style brewing is less common but its popularity is growing. In addition, America also makes use of some specialized malts: 6-row pale malt is a pale malt made from a different variety of barley. Quite high in nitrogen, 6-row malt is used as a "hot" base malt for rapid, thorough conversion in a mash, as well as for extra body and fullness; the flavor is more neutral than 2-row malt. 1.8 °L, 160 °Lintner.
Regional differences:
Victory malt is a specialized lightly roasted 2-row malt that provides biscuity, caramel flavors to a beer. Similar in color to amber and brown malt, it is often an addition to American brown ale. 25 °L, no diastatic power.
Other notable American barley malts include Special Roast and coffee malt. Special Roast is akin to a darker variety of victory malt.
Regional differences:
Belgium Belgian brewing makes use of the same grains as central European brewing. In general, though, Belgian malts are slightly darker and sweeter than their central European counterparts. In addition, Belgian brewing uses some local malts: Pale malt in Belgium is generally darker than British pale malt. Kilning takes place at temperatures five to ten °C lower than for British pale malt, but for longer periods; diastatic power is comparable to that of British pale malt. ASBC 4/EBC 7.
Regional differences:
Special B is a dark, intensely sweet crystal malt providing a strong malt flavor.
Biscuit malt is a lightly flavored roasted malt used to darken some Belgian beers. 45–50 EBC/25 °L.
Aromatic malt, by contrast, provides an intensely malty flavor. Kilned at 115 °C, it retains enough diastatic power to self-convert. 50–55 EBC/20 °L.
**Lipopolysaccharide glucosyltransferase II**
Lipopolysaccharide glucosyltransferase II:
In enzymology, a lipopolysaccharide glucosyltransferase II (EC 2.4.1.73) is an enzyme that catalyzes the chemical reaction UDP-glucose + lipopolysaccharide ⇌ UDP + alpha-D-glucosyl-lipopolysaccharide. Thus, the two substrates of this enzyme are UDP-glucose and lipopolysaccharide, whereas its two products are UDP and alpha-D-glucosyl-lipopolysaccharide.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:galactosyl-lipopolysaccharide alpha-D-glucosyltransferase. Another name in common use is uridine diphosphoglucose-galactosylpolysaccharide glucosyltransferase.
**Vital Information for a Virtual Age**
Vital Information for a Virtual Age:
The mission of Vital Information for a Virtual Age, also known as ¡VIVA!, is to empower high school students and assist them in serving their communities; to improve the awareness and use of quality health information resources in communities; and to create student-centered programs for community health outreach.
History:
In 2001, four juniors from Med High were selected to be part of the Peer Tutor project at Biblioteca Las Américas to teach MedlinePlus and PubMed to their peers, their families and the staff of Med High.
Goals included increasing the utilization of MedlinePlus and other health information resources and obtaining feedback about factors that are important to users of these information resources.
The project continues to grow with more peer tutors being included each year. Services expand each year as well, educating community groups about the valuable information resources offered by NLM.
Funding:
This project has been funded in whole or in part with Federal funds from the National Library of Medicine, National Institutes of Health, under Contract No. HHSN-276-2011-00007-C with the Houston Academy of Medicine-Texas Medical Center Library.
Awards:
2012 National School Library Media Program of the Year - District libraries category.
2006 National School Library Media Program of the Year - Individual Library category.
2004 U.S. National Commission on Libraries and Information Science Blue Ribbon Consumer Health Information Recognition Awards. The award for libraries was shared with the RAHC.
2004 The Institute of Museum and Library Services National Award for Museum and Library Service was awarded to the Regional Academic Health Center, in part, for their Train the Trainer approach in their partnership with Med High.
2003 Texas Library Association Project of the Year.
2003 Med High project using MedlinePlus won second place at the state Health Occupations Student Association (HOSA) meet.
Publications:
Helping Friends and Family Find Health Information: A Photovoice Evaluation of Teens Promoting MedlinePlus presented at MLA 2014 in a poster session.
NN/LM Evaluation Liaisons' Teleconference, September 28, 2010: "Adolescent Health Literacy: The Importance of Credible Sources", Dr. Suad Ghaddar, Associate Director at the South Texas Border Health Disparities Center at The University of Texas-Pan American.
Reibman, Sara, and Heike Piornak. "Can I Trust This Website? - Read, Think, Use." Englisch Sekundarstufe I Oct. 2007: 20 pp.
Peer Power PLUS an online symposium on peer tutors, online health resources, and community outreach presented at MLA 2007 in a poster session.
Teens Promote Health Awareness ¡VIVA! Peer Tutor Summer Institutes 2005-2007 presented at MLA 2007 in a poster session.
High school peer tutors teach MedlinePlus: a model for Hispanic outreach. Warner DG, Olney CA, Wood FB, Hansen L, Bowden VM.
**Georges Giralt PhD Award**
Georges Giralt PhD Award:
The Georges Giralt PhD Award is a European scientific prize for extraordinary contributions in robotics. It is awarded yearly at the European Robotics Forum by euRobotics AISBL, a non-profit organisation based in Brussels whose objective is to make robotics beneficial to Europe's economy and society. Georges Giralt received his PhD in 1958, from the Paul Sabatier University, in the domain of electrical machines, and soon afterwards became a pioneer in robotics, in Europe and worldwide. He was especially instrumental in bringing in scientific foundations and methodology when the domain was still young and a loose coupling of mechanical and electrical engineering, adopting the early results of automatic control.
Georges Giralt PhD Award:
The high reputation of the Georges Giralt PhD Award is based on the prominent role of the awarding institution euRobotics. With more than 250 member organisations, euRobotics represents the academic and industrial robotics community in Europe. Moreover, it provides the European robotics community with a legal entity to engage in a public/private partnership with the European Commission. All robotics-related dissertations that have been successfully defended at a European university are eligible for the Georges Giralt PhD Award.
Georges Giralt PhD Award:
The US-American counterpart is the Dick Volz Award.
Award winners:
2023: Antonio Andriella, Ribin Balachandran
2022: Antonio Loquercio, Michael Lutter
2021: Giuseppe Averta, Bernd Henze
2020: Cosimo Della Santina
2019: Grazioso Stanislao, Teodor Tomic
2018: Frank Bonnet, Daniel Leidner
2017: Johannes Englsberger
2016: Alexander Dietrich, Mark Müller
2015: Jörg Stückler
2014: Manuel Catalano, Fabien Expert, Rainer Jaekel
2013: Jens Kober
2012: Sami Haddadin
2011: Mario Pratts
2010: Ludovic Righetti
2009: Alejandro-Dizan Vasquez-Govea
2008: Cyrill Stachniss, Eduardo Rocon
2007: Pierre Lamon
2006: Martijn Wisse
2005: Juan Andrade Cetto
2004: Gilles Duchemin
2003: Ralf Koeppe
2002: Gianluca Antonelli, Jens-Steffen Gutmann
**Photoacoustic microscopy**
Photoacoustic microscopy:
Photoacoustic microscopy is an imaging method based on the photoacoustic effect and is a subset of photoacoustic tomography. Photoacoustic microscopy takes advantage of the local temperature rise that occurs as a result of light absorption in tissue. Using a nanosecond pulsed laser beam, tissues undergo thermoelastic expansion, resulting in the release of a wide-band acoustic wave that can be detected using a high-frequency ultrasound transducer. Since ultrasonic scattering in tissue is weaker than optical scattering, photoacoustic microscopy is capable of achieving high-resolution images at greater depths than conventional microscopy methods. Furthermore, photoacoustic microscopy is especially useful in the field of biomedical imaging due to its scalability. By adjusting the optical and acoustic foci, lateral resolution may be optimized for the desired imaging depth.
Photoacoustic signal:
The goal of photoacoustic microscopy is to find the local pressure rise p₀, which can be used to calculate the absorption coefficient μ_a according to the formula p₀ = Γ·η_th·μ_a·F, where η_th is the percentage of light converted to heat, F is the local optical fluence (J/cm²), and the dimensionless Grüneisen parameter Γ is defined as Γ = β/(κρC_V), where β is the thermal coefficient of volume expansion (K⁻¹), κ is the isothermal compressibility (Pa⁻¹), and ρ is the density (kg/m³). Following the initial pressure rise, a photoacoustic wave propagates at the speed of sound within the medium and can be detected with an ultrasound transducer.
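As a quick numerical illustration of the two relations above, a minimal sketch; every value plugged in is an illustrative assumption for water-like soft tissue, not a measurement from this article:

```python
# Sketch of p0 = Gamma * eta_th * mu_a * F and Gamma = beta / (kappa * rho * C_V).
# All numbers below are illustrative assumptions, not measured data.

def gruneisen(beta, kappa, rho, c_v):
    """Dimensionless Grüneisen parameter: Gamma = beta / (kappa * rho * C_V)."""
    return beta / (kappa * rho * c_v)

def initial_pressure(gamma, eta_th, mu_a, fluence):
    """Initial pressure rise p0 (Pa); mu_a in 1/m, fluence in J/m^2."""
    return gamma * eta_th * mu_a * fluence

gamma = gruneisen(beta=4e-4, kappa=5e-10, rho=1000.0, c_v=4000.0)   # ~0.2
p0 = initial_pressure(gamma, eta_th=1.0,    # assume fully nonradiative relaxation
                      mu_a=100.0,           # 1/m, i.e. 1/cm
                      fluence=100.0)        # J/m^2 = 10 mJ/cm^2
print(f"Gamma ≈ {gamma:.2f}, p0 ≈ {p0:.0f} Pa")  # ~0.20, ~2000 Pa
```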
Image reconstruction:
One of the major benefits of photoacoustic microscopy is the simplicity of image reconstruction. A laser pulse excites tissue in the axial direction and the resulting photoacoustic waves are detected by an ultrasound transducer. The transducer then converts the mechanical energy into a voltage signal that can be read by an analog-to-digital converter for post-processing. A one-dimensional image, known as an A-line, is formed as a result of each laser pulse. Hilbert transform of an A-line reveals depth-encoded information. A 3D photoacoustic image can then be formed by combining multiple A-lines produced by 2D raster scanning.
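A minimal sketch of this reconstruction step, assuming the raw transducer output has already been digitized into a 2D array of A-lines (time samples × scan positions); the depth-encoded envelope of each A-line is taken as the magnitude of its analytic signal:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_detect(a_lines):
    """Envelope-detect each A-line (axis 0 = time/depth samples).

    The magnitude of the analytic signal (via the Hilbert transform)
    of each raw A-line yields the depth-encoded amplitude profile.
    """
    return np.abs(hilbert(a_lines, axis=0))

# Toy data: 1024 time samples x 64 scan positions along one raster line.
rng = np.random.default_rng(0)
raw = rng.standard_normal((1024, 64))
b_scan = envelope_detect(raw)   # one cross-sectional (B-scan) image
# Stacking B-scans from a full 2D raster scan yields the 3D volume.
```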
Image reconstruction:
Synthetic Aperture Image Reconstruction Altering delays of the elements on an ultrasound transducer allows one to focus ultrasound waves similar to passing through an acoustic lens. This delay-and-sum method enables one to find the signal at each focal point. However, the lateral resolution is limited by the presence of side lobes, which appear at polar angles and are dependent on the width of each element.
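A sketch of the delay-and-sum idea under simplifying assumptions (a linear array of point-like elements, a homogeneous speed of sound, nearest-sample rounding of delays, and no apodization to suppress the side lobes mentioned above):

```python
import numpy as np

def delay_and_sum(rf, elem_x, focus, c, fs):
    """Delay-and-sum focusing of per-element RF data at one focal point.

    rf     : (n_samples, n_elements) recorded signals
    elem_x : (n_elements,) lateral element positions (m)
    focus  : (x, z) focal point coordinates (m)
    c      : speed of sound (m/s); fs: sampling rate (Hz)
    """
    fx, fz = focus
    total = 0.0
    for i, ex in enumerate(elem_x):
        dist = np.hypot(fx - ex, fz)       # element-to-focus distance
        idx = int(round(dist / c * fs))    # time of flight -> sample index
        if idx < rf.shape[0]:
            total += rf[idx, i]            # sum the delayed samples
    return total

# Toy usage: 128 elements at 0.1 mm pitch, 40 MHz sampling, 1540 m/s.
rng = np.random.default_rng(1)
rf = rng.standard_normal((4096, 128))
elem_x = np.arange(128) * 1e-4
value = delay_and_sum(rf, elem_x, focus=(6.4e-3, 5e-3), c=1540.0, fs=40e6)
```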
Contrast:
In photoacoustic imaging modalities, including photoacoustic microscopy, contrast is based on photon excitation and is thus determined by the optical properties of the tissue. When an electron absorbs a photon, it moves to a higher energy state. Upon returning to a lower energy level, the electron undergoes either radiative or nonradiative relaxation. During radiative relaxation, the electron releases energy in the form of a photon. On the other hand, an electron undergoing nonradiative relaxation releases energy as heat. The heat then induces a pressure rise that propagates as a photoacoustic wave. Due to the fact that almost all molecules are capable of nonradiative relaxation, photoacoustic microscopy has the potential to image a wide range of endogenous and exogenous agents. By contrast, fewer molecules are capable of radiative relaxation, thus limiting fluorescence microscopy techniques such as one-photon and two-photon microscopy. Current research in photoacoustic microscopy takes advantage of both endogenous and exogenous contrast agents to gain functional information about the body, from blood saturation levels to cancer proliferation rate.
Contrast:
Endogenous Contrast Agents Endogenous contrast agents, molecules naturally occurring within the body, are useful in photoacoustic microscopy due to the fact that they may be imaged non-invasively. Endogenous agents are also non-toxic and do not affect the properties of the tissue being studied. In particular, endogenous absorbers can be classified based on their absorbing wavelengths.
Contrast:
Ultraviolet Absorbers Within the ultraviolet light range (λ = 180 to 400 nm), the primary absorber in the body is DNA and RNA. By using ultraviolet photoacoustic microscopy, DNA and RNA can be imaged in the cell nuclei without the use of fluorescence labeling. Since cancer is associated with DNA replication failure, UV photoacoustic microscopy has the potential to be used for early cancer detection.
Contrast:
Visible Light Absorbers Visible light absorbers (λ = 400 to 700 nm) include oxyhemoglobin, deoxyhemoglobin, melanin, and cytochrome c. Visible light photoacoustic microscopy is particularly useful in determining hemoglobin concentration and oxygen saturation due to the difference in absorption profiles of oxyhemoglobin and deoxyhemoglobin. Real-time analysis can then be used to determine blood flow speed and oxygen metabolism rate. In addition, photoacoustic microscopy is capable of early melanoma detection due to the high concentration of melanin found in skin cancer cells.
Contrast:
Near-Infrared Absorbers Near-Infrared absorbers (λ = 700 to 1400 nm) include water, lipids, and glucose. Photoacoustic determination of blood glucose levels can be used for treating diabetes, while studying lipid concentrations within blood vessels is important for monitoring the progression of atherosclerosis. It is still feasible to quantify and compare deoxyhemoglobin and hemoglobin concentrations at this wavelength, trading deeper tissue penetration for lower absorption.
Contrast:
Exogenous Contrast Agents Although endogenous contrasts agents are noninvasive and simpler to use, they are limited by their inherent behavior and concentration, making it difficult to monitor certain processes if optical absorption is weak. On the other hand, exogenous agents can be engineered to specifically bind to certain molecules of interest. In addition, the concentration of exogenous agents can be optimized to produce a greater signal and provide more contrast. Through selective binding, exogenous contrast agents are capable of targeting specific molecules of interest while also enhancing resulting images.
Contrast:
Organic Dyes Organic dyes, such as ICG-PEG and Evans blue, are used to enhance vasculature as well as to improve tumor imaging. In addition, dyes are easily filtered out of the body due to their small size (≤ 3 nm).
Contrast:
Nanoparticles Nanoparticles are currently being researched due to their chemical inactivity and ability to target tumor cells. These properties allow for cancer propagation to be monitored and potentially enables intraoperative cancer removal. However, more studies on short-term toxicity effects are necessary to determine if nanoparticles are suitable for clinical research. Gold nanoparticles have shown promise as a contrast agent for image-guided medicine. AuNPs have been widely used as contrast agents due to their strong and tunable optical absorption.
Contrast:
Fluorescent Proteins Fluorescent proteins have been developed for fluorescence microscopy imaging and are unique in that they can be genetically encoded and therefore do not need to be delivered into the body. Using photoacoustic microscopy, fluorescent proteins can be visualized at depths beyond the limit of typical microscopy methods. Frequency-dependent acoustic attenuation in tissue and dampening of higher frequencies limits the bandwidth of light propagation through deeper regions in tissue. Fluorescent proteins act as a light source at the target region, bypassing the limitation of optical attenuation. However, the effectiveness of fluorescent proteins is limited by low fluence changes, as the light diffusion equation predicts an increase of less than 5%.
Resolution:
Photoacoustic microscopy achieves greater penetration than conventional microscopy due to ultrasonic detection. As a result, axial resolution is defined acoustically and is determined by the formula R_axial = 0.88·ν_A/Δf_A, where ν_A is the speed of sound in the medium and Δf_A is the photoacoustic signal bandwidth. The axial resolution of the system can be improved by using a wider bandwidth ultrasound transducer as long as the bandwidth matches that of the photoacoustic signal.
Resolution:
The lateral resolution of photoacoustic microscopy depends on the optical and acoustic foci of the system. Optical-resolution photoacoustic microscopy (OR-PAM) uses a tighter optical focus than acoustic focus, while acoustic-resolution photoacoustic microscopy (AR-PAM) uses a tighter acoustic focus than optical focus.
Resolution:
Optical-Resolution Photoacoustic microscopy Due to a tighter optical focus, OR-PAM is more useful for imaging in the quasi-ballistic range of depths up to 1 mm. The lateral resolution of OR-PAM is determined by the formula R_lateral = 0.51·λ_O/NA_O, where λ_O is the optical wavelength and NA_O is the numerical aperture of the optical objective lens. The lateral resolution of OR-PAM can be improved by using a shorter laser pulse and tighter focusing of the laser spot. OR-PAM systems can typically achieve a lateral resolution of 0.2 to 10 μm, allowing OR-PAM to be classified as a super-resolution imaging method.
Resolution:
Acoustic-Resolution Photoacoustic microscopy At depths greater than 1 mm and up to 3 mm, acoustic-resolution photoacoustic microscopy (AR-PAM) is more useful due to greater optical scattering. Acoustic scattering is much weaker beyond the optical diffusion limit, making AR-PAM more practical as it provides higher lateral resolution at these depths. The lateral resolution of AR-PAM is determined by the formula R_lateral = 0.71·λ_A/NA_A, where λ_A is the central wavelength of the photoacoustic wave and NA_A is the numerical aperture of the ultrasound transducer. Higher lateral resolution can therefore be achieved by increasing the center frequency of the ultrasound transducer and tighter acoustic focusing. AR-PAM systems can typically achieve a lateral resolution of 15 to 50 μm.
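The three resolution formulas above can be collected into a few lines of Python; the parameter values plugged in (1500 m/s speed of sound, a 50 MHz transducer, 532 nm light, and the numerical apertures) are illustrative assumptions chosen to land inside the typical ranges quoted in the text:

```python
# Resolution formulas from the text; all parameter values are assumptions.
V_SOUND = 1500.0   # m/s, speed of sound in soft tissue

def axial_res(bandwidth_hz, v=V_SOUND):
    return 0.88 * v / bandwidth_hz            # acoustically defined

def lateral_res_or_pam(wavelength_m, na_optical):
    return 0.51 * wavelength_m / na_optical   # optical focus dominates

def lateral_res_ar_pam(wavelength_m, na_acoustic):
    return 0.71 * wavelength_m / na_acoustic  # acoustic focus dominates

print(f"axial:  {axial_res(50e6) * 1e6:.1f} um")                   # ~26 um at 50 MHz
print(f"OR-PAM: {lateral_res_or_pam(532e-9, 0.6) * 1e6:.2f} um")   # ~0.45 um
# AR-PAM: acoustic wavelength = v / f_center, here a 50 MHz center frequency.
print(f"AR-PAM: {lateral_res_ar_pam(V_SOUND / 50e6, 0.5) * 1e6:.1f} um")  # ~43 um
```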
Dark-field Confocal Photoacoustic microscopy:
By ignoring ballistic light, dark-field confocal photoacoustic microscopy reduces surface signal. This method uses a dark-field pulsed laser and high-NA ultrasonic detection, with the fiber output end coaxially aligned with the focused ultrasound transducer. Filtration of ballistic light relies on the altered shape of the excitation laser beam instead of an opaque disk, as used in conventional dark-field microscopy. The general reconstruction technique is used to convert the photoacoustic signal into one A-line, and B-line images are produced by raster scanning.
Biomedical applications:
Photoacoustic microscopy has a wide range of applications in the biomedical field. Due to its ability to image a variety of molecules based on optical wavelength, photoacoustic microscopy can be used to gain functional information about the body noninvasively. Blood flow dynamics and oxygen metabolic rates can be measured and correlated to studies of atherosclerosis or tumor proliferation. Exogenous agents can be used to bind to cancerous tissue, enhancing image contrast and aiding in surgical removal. On the same note, photoacoustic microscopy is useful in early cancer diagnosis due to the difference in optical absorption properties compared to healthy tissue.
**Ribonucleotide reductase inhibitor**
Ribonucleotide reductase inhibitor:
Ribonucleotide reductase inhibitors are a family of anti-cancer drugs that interfere with the growth of tumor cells by blocking the formation of deoxyribonucleotides (building blocks of DNA).
Examples include: motexafin gadolinium, hydroxyurea, fludarabine, cladribine, gemcitabine, tezacitabine, triapine, gallium maltolate, and gallium nitrate.
Tezacitabine is a chemotherapy candidate nucleotide analogue that failed in clinical trials due to on-target toxicity (febrile neutropenia).
**E-social science**
E-social science:
E-social science is a more recent development in conjunction with the wider developments in e-science. It is social science using grid computing and other information technologies to collect, process, integrate, share, and disseminate social and behavioural data.
**Methylene blue**
Methylene blue:
Methylthioninium chloride, commonly called methylene blue, is a salt used as a dye and as a medication. As a medication, it is mainly used to treat methemoglobinemia by converting/chemically reducing the ferric iron in hemoglobin to ferrous iron. Specifically, it is used to treat methemoglobin levels that are greater than 30% or in which there are symptoms despite oxygen therapy. It has previously been used for treating cyanide poisoning and urinary tract infections, but this use is no longer recommended. Methylene blue is typically given by injection into a vein. Common side effects include headache, vomiting, confusion, shortness of breath, and high blood pressure. Other side effects include serotonin syndrome, red blood cell breakdown, and allergic reactions. Use often turns the urine, sweat, and stool blue to green in color. While use during pregnancy may harm the baby, not using it in methemoglobinemia is likely more dangerous. Methylene blue was first prepared in 1876 by Heinrich Caro. It is on the World Health Organization's List of Essential Medicines.
Uses:
Methemoglobinemia Methylene blue is employed as a medication for the treatment of methemoglobinemia, which can arise from ingestion of certain pharmaceuticals, toxins, or broad beans. Normally, through the NADH or NADPH dependent methemoglobin reductase enzymes, methemoglobin is reduced back to hemoglobin. When large amounts of methemoglobin occur secondary to toxins, methemoglobin reductases are overwhelmed. Methylene blue, when injected intravenously as an antidote, is itself first reduced to leucomethylene blue, which then reduces the heme group from methemoglobin to hemoglobin. Methylene blue can reduce the half life of methemoglobin from hours to minutes. At high doses, however, methylene blue actually induces methemoglobinemia, reversing this pathway.
Uses:
Cyanide poisoning Since its reduction potential is similar to that of oxygen and it can be reduced by components of the electron transport chain, large doses of methylene blue are sometimes used as an antidote to potassium cyanide poisoning, a method first successfully tested in 1933 by Dr. Matilda Moldenhauer Brooks in San Francisco, although first demonstrated by Bo Sahlin of Lund University, in 1926.
Uses:
Dye or stain Methylene blue is used in endoscopic polypectomy as an adjunct to saline or epinephrine, and is used for injection into the submucosa around the polyp to be removed. This allows the submucosal tissue plane to be identified after the polyp is removed, which is useful in determining if more tissue needs to be removed, or if there has been a high risk for perforation. Methylene blue is also used as a dye in chromoendoscopy, and is sprayed onto the mucosa of the gastrointestinal tract in order to identify dysplasia, or pre-cancerous lesions. Intravenously injected methylene blue is readily released into the urine and thus can be used to test the urinary tract for leaks or fistulas.
In surgeries such as sentinel lymph node dissections, methylene blue can be used to visually trace the lymphatic drainage of tested tissues. Similarly, methylene blue is added to bone cement in orthopedic operations to provide easy discrimination between native bone and cement. Additionally, methylene blue accelerates the hardening of bone cement, increasing the speed at which bone cement can be effectively applied. Methylene blue is used as an aid to visualisation/orientation in a number of medical devices, including a surgical sealant film, TissuePatch. In fistulas and pilonidal sinuses it is used to identify the tract for complete excision. It can also be used during gastrointestinal surgeries (such as bowel resection or gastric bypass) to test for leaks.
It is sometimes used in cytopathology, in mixtures including Wright-Giemsa and Diff-Quik. It confers a blue color to both nuclei and cytoplasm, and makes the nuclei more visible. When methylene blue is "polychromed" (oxidized in solution or "ripened" by fungal metabolism, as originally noted in the thesis of Dr. D. L. Romanowsky in the 1890s), it gets serially demethylated and forms all the tri-, di-, mono- and non-methyl intermediates, which are Azure B, Azure A, Azure C, and thionine, respectively. This is the basis of the basophilic part of the spectrum of the Romanowsky-Giemsa effect. If only synthetic Azure B and Eosin Y are used, it may serve as a standardized Giemsa stain; but, without methylene blue, the normal neutrophilic granules tend to overstain and look like toxic granules. On the other hand, if methylene blue is used it might help to give the normal look of neutrophil granules and may also enhance the staining of nucleoli and polychromatophilic RBCs (reticulocytes).
A traditional application of methylene blue is the intravital or supravital staining of nerve fibers, an effect first described by Paul Ehrlich in 1887. A dilute solution of the dye is either injected into tissue or applied to small freshly removed pieces. The selective blue coloration develops with exposure to air (oxygen) and can be fixed by immersion of the stained specimen in an aqueous solution of ammonium molybdate. Vital methylene blue was formerly much used for examining the innervation of muscle, skin and internal organs. The mechanism of selective dye uptake is incompletely understood; vital staining of nerve fibers in skin is prevented by ouabain, a drug that inhibits the Na/K-ATPase of cell membranes.
Uses:
Placebo Methylene blue has been used as a placebo; physicians would tell their patients to expect their urine to change color and view this as a sign that their condition had improved. This same side effect makes methylene blue difficult to use in traditional placebo-controlled clinical studies, including those testing for its efficacy as a treatment.
Isobutyl nitrite toxicity Isobutyl nitrite is one of the compounds used as poppers, an inhalant drug that induces a brief euphoria.
Isobutyl nitrite is known to cause methemoglobinemia. Severe methemoglobinemia may be treated with methylene blue.
Uses:
Ifosfamide toxicity Another use of methylene blue is to treat ifosfamide neurotoxicity. Methylene blue was first reported for treatment and prophylaxis of ifosfamide neuropsychiatric toxicity in 1994. A toxic metabolite of ifosfamide, chloroacetaldehyde (CAA), disrupts the mitochondrial respiratory chain, leading to an accumulation of nicotinamide adenine dinucleotide hydrogen (NADH). Methylene blue acts as an alternative electron acceptor, and reverses the NADH inhibition of hepatic gluconeogenesis while also inhibiting the transformation of chloroethylamine into chloroacetaldehyde, and inhibits multiple amine oxidase activities, preventing the formation of CAA. The dosing of methylene blue for treatment of ifosfamide neurotoxicity varies, depending upon its use simultaneously as an adjuvant in ifosfamide infusion, versus its use to reverse psychiatric symptoms that manifest after completion of an ifosfamide infusion. Reports suggest that up to six doses of methylene blue a day have resulted in improvement of symptoms within 10 minutes to several days. Alternatively, it has been suggested that intravenous methylene blue be given every six hours for prophylaxis during ifosfamide treatment in people with a history of ifosfamide neuropsychiatric toxicity. Prophylactic administration of methylene blue the day before initiation of ifosfamide, and three times daily during ifosfamide chemotherapy, has been recommended to lower the occurrence of ifosfamide neurotoxicity.
Uses:
Shock It has also been used in septic shock and anaphylaxis. Methylene blue consistently increases blood pressure in people with vasoplegic syndrome (redistributive shock), but has not been shown to improve delivery of oxygen to tissues or to decrease mortality. Methylene blue has been used in calcium channel blocker toxicity as a rescue therapy for distributive shock unresponsive to first line agents. Evidence for its use in this circumstance is very poor and limited to a handful of case reports.
Side effects:
Methylene blue is a monoamine oxidase inhibitor (MAOI), and if infused intravenously at doses exceeding 5 mg/kg, may precipitate serious serotonin toxicity, serotonin syndrome, if combined with any selective serotonin reuptake inhibitors (SSRIs) or other serotonin reuptake inhibitor (e.g., duloxetine, sibutramine, venlafaxine, clomipramine, imipramine). It causes hemolytic anemia in carriers of the G6PD (favism) enzymatic deficiency.
Chemistry:
Methylene blue is a formal derivative of phenothiazine. It is a dark green powder that yields a blue solution in water. The hydrated form has 3 molecules of water per unit of methylene blue.
Preparation This compound is prepared by oxidation of 4-aminodimethylaniline in the presence of sodium thiosulfate to give the quinonediiminothiosulfonic acid, reaction with dimethylaniline, oxidation to the indamine, and cyclization to give the thiazine. A green electrochemical procedure, using only dimethyl-4-phenylenediamine and sulfide ions, has recently been proposed.
Light absorption properties The maximum absorption of light is near 670 nm. The specifics of absorption depend on a number of factors, including protonation, adsorption to other materials, and metachromasy - the formation of dimers and higher-order aggregates depending on concentration and other interactions.
Other uses:
Redox indicator Methylene blue is widely used as a redox indicator in analytical chemistry. Solutions of this substance are blue when in an oxidizing environment, but will turn colorless if exposed to a reducing agent. The redox properties can be seen in a classical demonstration of chemical kinetics in general chemistry, the "blue bottle" experiment. Typically, a solution is made of glucose (dextrose), methylene blue, and sodium hydroxide. Upon shaking the bottle, oxygen oxidizes methylene blue, and the solution turns blue. The dextrose will gradually reduce the methylene blue to its colorless, reduced form. Hence, when the dissolved dextrose is entirely consumed, the solution will turn blue again. The redox midpoint potential E0' is +0.01 V.
Other uses:
Peroxide generator Methylene blue is also a photosensitizer used to create singlet oxygen when exposed to both oxygen and light. It is used in this regard to make organic peroxides by a Diels-Alder reaction which is spin forbidden with normal atmospheric triplet oxygen.
Other uses:
Sulfide analysis The formation of methylene blue after the reaction of hydrogen sulfide with dimethyl-p-phenylenediamine and iron(III) at pH 0.4–0.7 is used to determine the sulfide concentration photometrically in the range 0.020 to 1.50 mg/L (20 ppb to 1.5 ppm). The test is very sensitive and the blue coloration developing upon contact of the reagents with dissolved H2S is stable for 60 min. Ready-to-use kits such as the Spectroquant sulfide test facilitate routine analyses. The methylene blue sulfide test is a convenient method often used in soil microbiology to quickly detect in water the metabolic activity of sulfate-reducing bacteria (SRB). Note that in this colorimetric test, methylene blue is a product formed by the reaction and not a reagent added to the system. The addition of a strong reducing agent, such as ascorbic acid, to a sulfide-containing solution is sometimes used to prevent sulfide oxidation from atmospheric oxygen. Although it is certainly a sound precaution for the determination of sulfide with an ion-selective electrode, it might however hamper the development of the blue color if the freshly formed methylene blue is also reduced, as described above in the paragraph on the redox indicator.
Other uses:
Test for milk freshness Methylene blue is a dye behaving as a redox indicator that is commonly used in the food industry to test the freshness of milk and dairy products. A few drops of methylene blue solution added to a sample of milk should remain blue (the oxidized form, in the presence of enough dissolved O2). Discoloration, caused by the reduction of methylene blue into its colorless reduced form, indicates that the dissolved O2 concentration in the milk sample is low: the milk is not fresh (already abiotically oxidized by O2, whose concentration in solution decreases) or could be contaminated by bacteria also consuming the atmospheric O2 dissolved in the milk. In other words, aerobic conditions should prevail in fresh milk, and methylene blue is simply used as an indicator of the dissolved oxygen remaining in the milk.
Other uses:
Water testing The adsorption of methylene blue serves as an indicator defining the adsorptive capacity of granular activated carbon in water filters. Adsorption of methylene blue is very similar to the adsorption of pesticides from water, which makes methylene blue a good predictor of the filtration qualities of carbon. It is also a quick method of comparing different batches of activated carbon of the same quality.
Other uses:
A color reaction in an acidified, aqueous methylene blue solution containing chloroform can detect anionic surfactants in a water sample. Such a test is known as an MBAS assay (methylene blue active substances assay).
The MBAS assay cannot distinguish between specific surfactants, however. Some examples of anionic surfactants are carboxylates, phosphates, sulfates, and sulfonates.
Methylene blue value of fine aggregate The methylene blue value is defined as the number of milliliters of standard methylene blue solution decolorized by 0.1 g of activated carbon (dry basis).
Methylene blue value reflects the amount of clay minerals in aggregate samples. In materials science, methylene blue solution is successively added to fine aggregate which is being agitated in water. The presence of free dye solution can be checked with a stain test on filter paper.
Other uses:
Biological staining In biology, methylene blue is used as a dye for a number of different staining procedures, such as Wright's stain and Jenner's stain. Since it is a temporary staining technique, methylene blue can also be used to examine RNA or DNA under the microscope or in a gel: as an example, a solution of methylene blue can be used to stain RNA on hybridization membranes in northern blotting to verify the amount of nucleic acid present. While methylene blue is not as sensitive as ethidium bromide, it is less toxic and it does not intercalate in nucleic acid chains, thus avoiding interference with nucleic acid retention on hybridization membranes or with the hybridization process itself. It can also be used as an indicator to determine whether eukaryotic cells such as yeast are alive or dead. The methylene blue is reduced in viable cells, leaving them unstained. However, dead cells are unable to reduce the oxidized methylene blue and are stained blue. Methylene blue can interfere with the respiration of the yeast as it picks up hydrogen ions made during the process.
Other uses:
Aquaculture Methylene blue is used in aquaculture and by tropical fish hobbyists as a treatment for fungal infections. It can also be effective in treating fish infected with ich although a combination of malachite green and formaldehyde is far more effective against the parasitic protozoa Ichthyophthirius multifiliis. It is usually used to protect newly laid fish eggs from being infected by fungus or bacteria. This is useful when the hobbyist wants to artificially hatch the fish eggs.
Other uses:
Methylene blue is also very effective when used as part of a "medicated fish bath" for treatment of ammonia, nitrite, and cyanide poisoning as well as for topical and internal treatment of injured or sick fish as a "first response".
History:
Methylene blue has been described as "the first fully synthetic drug used in medicine." Methylene blue was first prepared in 1876 by German chemist Heinrich Caro. Its use in the treatment of malaria was pioneered by Paul Guttmann and Paul Ehrlich in 1891. During this period before the First World War, researchers like Ehrlich believed that drugs and dyes worked in the same way, by preferentially staining pathogens and possibly harming them. Changing the cell membrane of pathogens is in fact how various drugs work, so the theory was partially correct although far from complete. Methylene blue continued to be used in the Second World War, where it was not well liked by soldiers, who observed, "Even at the loo, we see, we pee, navy blue." Antimalarial use of the drug has recently been revived. It was discovered to be an antidote to carbon monoxide poisoning and cyanide poisoning in 1933 by Matilda Brooks. The blue urine was used to monitor psychiatric patients' compliance with medication regimes. This led to interest - from the 1890s to the present day - in the drug's antidepressant and other psychotropic effects. It became the lead compound in research leading to the discovery of chlorpromazine.
Names:
The International Nonproprietary Name (INN) of methylene blue is methylthioninium chloride.
Research:
Malaria Methylene blue was identified by Paul Ehrlich about 1891 as a possible treatment for malaria. It disappeared as an anti-malarial during the Pacific War in the tropics, since American and Allied soldiers disliked its two prominent, but reversible side effects: turning the urine blue or green, and the sclera (the whites of the eyes) blue. Interest in its use as an anti-malarial has recently been revived, especially due to its low price. Several clinical trials are in progress, trying to find a suitable drug combination. According to studies on children in Africa, it appears to have efficacy against malaria, but the attempts to combine methylene blue with chloroquine were disappointing.
Research:
Alzheimer's A Phase 3 clinical trial of LMTM (TauRx0237 or LMT-X), a derivative of methylene blue, failed to show any benefit against cognitive or functional decline in people with mild to moderate Alzheimer's disease. Disease progression for both the drug and the placebo were practically identical.
Bipolar disorder Methylene blue has been studied as an adjunctive medication in the treatment of bipolar disorder.
Infectious diseases It has been studied in AIDS-related Kaposi's sarcoma and West Nile virus, and for inactivating Staphylococcus aureus and HIV-1. Phenothiazine dyes and light have been known to have virucidal properties for over 70 years.
**High Way Race**
High Way Race:
High Way Race is an arcade auto racing game released by Taito in April 1983.
Gameplay:
The player is taking part in a cross country auto race. The player car will accelerate automatically, up to its maximum speed of 218 kilometres per hour (135 mph), but it can be slowed by a brake button. Other cars on the track will attempt to crash into the player car, but if they cannot be dodged, the player car has the ability to jump to avoid them. The player has unlimited lives, but each collision—either with opponent cars or other obstacles—will deplete the car's fuel supply, and the game will end if the car runs out of fuel. Around the race's halfway mark, a fuel truck appears with fueling arms to its left and right. The player car can refuel by contacting either of the arms, as long as it does not collide with the truck itself. As the player reaches the end of the race course, signs on the track will count down to the finish line. The player vehicle must jump prior to the finish, otherwise it will crash into the water below, ending the game. A successful jump does not mean the race is completed, as the player must, upon landing, use the brakes to safely bring the car to a stop. Points are awarded for remaining fuel at the end of the race, and the player will advance to the next race course.
Ports:
High Way Race was released on February 24, 2022 by Hamster Corporation as part of their Arcade Archives collection for the Nintendo Switch.
**Robbed-bit signaling**
Robbed-bit signaling:
In communications systems, robbed-bit signaling (RBS) is a scheme to provide maintenance and line signaling services on many T1 digital carrier circuits using channel-associated signaling (CAS). The T1 carrier circuit is a type of dedicated circuit currently employed in North America and Japan.
Context:
The T1 circuit is divided into 24 channels, each carrying 8,000 samples per second, each sample 8 bits long. A Super Frame (SF) consists of 12 frames of 24 channels; in the DS1 format, 24 frames form an Extended Super Frame (ESF). In either framing format, the channels are multiplexed together, each sampled 8,000 times per second. In the superframe, ten frames are utilized entirely for voice/data and two are utilized partially for voice. Hence, each of the two partial frames yields 7 × 8000 bit/s = 56 kbit/s for voice data per channel, compared to the 8 × 8000 bit/s = 64 kbit/s per channel in the other frames.
Context:
Intuitively, 5 out of 6 frames have 8-bit resolution equal to 64 kbit/s (8 bits × 8,000 samples per second = 64 kbit/s) and 1 out of every 6 frames has 7-bit resolution (7 bits × 8,000 samples per second = 56 kbit/s). The average rate for a channel is therefore 62.667 kbit/s. The distortion effect on voice signals is negligible, as it is for data signals when a modem is used for modulation. However, a 64 kbit/s digital data signal will suffer errors when transmitted over such a channel; in that case, robbed-bit signaling should be turned off.
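The per-channel arithmetic works out as follows; this is simply a restatement of the rates given in the text, with the final figure anticipating the 1333 Hz audio artifact discussed later in this article:

```python
# Per-channel throughput on a T1 with robbed-bit signaling.
SAMPLES_PER_SEC = 8000

full = 8 * SAMPLES_PER_SEC      # 64000 bit/s in 5 of every 6 frames
robbed = 7 * SAMPLES_PER_SEC    # 56000 bit/s in the robbed frame

avg = (5 * full + 1 * robbed) / 6
print(f"average channel rate: {avg / 1000:.3f} kbit/s")    # 62.667 kbit/s

# For data use, the whole channel is derated to 56 kbit/s:
print(f"data rate loss: {(full - robbed) / full:.1%}")      # 12.5%

# Rate at which the low-order bit is corrupted in each channel:
print(f"robbed-bit rate: {SAMPLES_PER_SEC / 6:.0f} Hz")     # ~1333 Hz
```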
Bit robbing:
The robbed-bit signal scheme is used in the Super Frame (SF) circuit. It takes the least significant bit of each channel in every sixth frame and utilizes it to convey on- or off-hook and busy signal status on telephone lines. Of each pair of robbed bits in a superframe, the first (in frame 6) is called the A bit and the second (in frame 12) the B bit. RBS was developed at the time that AT&T was moving from analog trunks onto digital equipment. This permitted AT&T to run 24 digital phone lines on the same number of wires that 2 analog phone lines would have taken, saving money and improving call quality, without the high cost of frequency-division multiplexing.
Bit robbing:
As in other carrier systems, the physical properties of an actual trunk wire are missing. With analog trunks, to signal the equipment at the far end that a trunk was going to be used, equipment would "loop" the line by connecting the wires together at one end or ground start one of the wires (depending on the type of trunk), and do the opposite to return the trunk to idle. With a digital trunk, another method was needed to signal between ends.
Bit robbing:
To do this, signaling equipment "steals" the eighth bit of each channel on every sixth frame (see Super Frame and Extended Super Frame) and replaces it with signaling information. This means that the low-order bit on every sixth sample in every DS0 carried on the T1, in either direction, is replaced by signaling information. Simple PCM-encoded voice is not very sensitive to losing these data in a few of its lower-order bits, so it doesn't cause much degradation of voice quality; however, when carrying data, the difference is significant, reducing the available usable data rate by 12.5%. With full 64 kbit/s, a voice channel has a signal-to-noise ratio of 37 decibels (dB). At 56 kbit/s, a voice channel has a signal to noise ratio of 31 dB. As only every sixth least-significant bit is robbed, the signal to noise ratio will be somewhere between 31 and 37 dB. However, since individual T1 links are not in general synchronized to one another, a DS0 passing along several concatenated, unsynchronized, T1 spans may have its lower bit stolen in more than one frame, frequently making real-world performance closer to the lower-bound than the upper bound of signal-to-noise performance.
Bit robbing:
Robbed-bit signaling can have an effect on audio quality under certain circumstances. When a voice call is connected to a quiet termination, as can happen when on hold in a PBX that does not have music on hold or comfort noise enabled, and certain types of framing circuits are used, the noise due to robbed-bit signaling can be heard in the handset as a faint 1333 Hz tone, this frequency being a result of low-bit corruption at a rate of 8000 Hz / 6 = 1333 Hz. This is normally not a very noticeable problem, but if the audio path contains in-line speech compression, such as G.729, the tone can be amplified and modulated by the compression algorithm to the point of annoyance to the user.
Bit robbing:
With Super Frame framing, the robbed bits are named A and B. With Extended Super Frame, the same stream is divided into four bits, named A, B, C, and D. The meanings of these bits depend on what type of signaling is provisioned on the channel. The most common types of signaling are loop start, ground start, and E&M.
Unlike T1 systems, most telephone systems in the world use E1 systems that transparently pass all 8 bits of every sample. Those systems use a separate out-of-band channel to carry the signaling information.
**Hybrid electric aircraft**
Hybrid electric aircraft:
A hybrid electric aircraft is an aircraft with a hybrid electric powertrain. As the energy density of lithium-ion batteries is much lower than aviation fuel, a hybrid electric powertrain may effectively increase flight range compared to pure electric aircraft.
By May 2018, there were over 30 hybrid electric aircraft projects, and short-haul hybrid-electric airliners were envisioned from 2032.
History:
The Boeing Truss-Braced Wing subsonic concept was planned with hybrid electric propulsion.
History:
The Diamond DA36 E-Star first flew on 8 June 2011, the first flight of a series hybrid powertrain, reducing fuel consumption and emissions by up to 25%, a technology scalable to a 100-seater airliner. A small Austro Engine 40 hp (30 kW) Wankel engine, 100 kg (220 lb) lighter than a conventional one, generates the electricity, supplemented by EADS batteries for silent take-off, feeding a Siemens 70 kW (94 hp) electric motor turning the propeller.
History:
The AgustaWestland Project Zero was intended to be hybrid-electric.
History:
2011 NASA sponsored a Centennial Challenge to encourage the development of the world's most fuel-efficient airplane. The contest was co-sponsored by Google and conducted in California skies. Among the entrants were the world's first parallel gas-electric hybrid aircraft, the EcoEagle, built by students of Embry-Riddle Aeronautical University, and two electric airplanes, the Pipistrel Taurus G4 and the eGenius.
History:
2014 Faradair Aerospace launched a triple box-wing hybrid electric aircraft concept called the BEHA (Bio Electric Hybrid Aircraft), one of the world's first regional aircraft specifically designed for hybrid electric regional flight. The UK start-up has continued development from the initial concept to the latest BEHA M1H variant, with future opportunity for unmanned and all-electric variants. The E-STOL BEHA has gained support from key members of the UK Government, and the airframe development has been conducted at Swansea University.
History:
2017 Zunum Aero, backed by Boeing and JetBlue, had been working since 2014 on a family of 10- to 50-seat hybrid electric regional aircraft. On 5 October 2017, Zunum launched the development of a six-to-12-seat aircraft. Aiming to fly in 2020 and be delivered in 2022, it was intended to lower operating costs by 40–80%, reaching the available seat mile (ASM) costs of a 78-seat Dash 8-Q400. On 28 November 2017, Airbus announced a partnership with Rolls-Royce plc and Siemens to develop the E-Fan X hybrid-electric airliner demonstrator, to fly in 2020.
History:
2018 The 1,300-shp GE Catalyst could be used in hybrid-electric propulsion: in late 2016, General Electric modified a GE F110 fighter turbofan to extract 250 kW from its HP turbine and 750 kW from its LP turbine, supported by the USAF Research Laboratory and NASA; developed and tested a 1-megawatt electric motor/generator with GE Global Research; and tested a liquid-cooled inverter converting 2,400-volt DC to three-phase AC with silicon carbide-based switches and 1.7-kW MOSFET power modules. Industry experts expect a 50+ seat hybrid-electric airliner to debut in commercial operation by 2032 for routes like London–Paris. By November 2018, Zunum Aero's offices had been closed and all 70 staff members laid off as the programme came to an end.
History:
The EU funded the Hypstair program with €6.55 million over three years until 2016, reaching a TRL of 4: a Pipistrel Panthera mockup received a serial hybrid-electric powertrain, ground testing a 200-kW motor driven by batteries only, by a 100-kW generator only, and by both combined.
History:
It is followed by the Mahepa project from 2017, EU-funded over four years with €9 million under the Horizon 2020 research program, which aims to reduce aviation carbon emissions by 70% by 2050, and to reach TRL 6 before entering product development. The Panthera drivetrain will be divided into modules: electric motor thrust generator and internal combustion power generator in the nose; human-machine interface and computing, fuel and batteries in the wing. Ground testing is planned for 2019 before flight tests in 2020.
History:
The dual-fuselage, four-seat, battery-powered Pipistrel Taurus G4 received a DLR hydrogen fuel cell powertrain to fly as the HY4 in September 2016, with hydrogen tanks and batteries in the fuselages, and fuel cells and motor in the central nacelle. Partners are German motor and inverter developer Compact Dynamics, Ulm University, TU Delft, Politecnico di Milano and the University of Maribor. Ground and flight tests should follow those of the Panthera a couple of months later. Alongside their ground handling, scaling to 19- and 70-seat airliners will be studied in two configurations: more modules of the same size for electric distributed propulsion, or larger modules extrapolating the flight-test results, powering twin propellers. Flights will test system behavior, measure performance and reliability, and evaluate failure modes. A failure rate of one per 10 million hours is targeted, as low as in airliners, achieved with very reliable components or with redundancy. Austrian company ScaleWings, developer of a P-51 Mustang scale replica, has developed a hybrid and redundant piston/electric engine based on independent modules: a 1.15 L (70 cu in) four-stroke V-twin producing 80 and 120 hp (60 and 89 kW) when turbocharged, and electric motors, producing 170 to 350 hp (130 to 260 kW) combined. VoltAero is a startup company formed in September 2017 by the CTO and test pilot of the 2014 Airbus E-Fan 1.0, located in Royan and established with the support of the French Nouvelle-Aquitaine region. The company is developing a hybrid testbed based on the Cessna 337 Skymaster, which it intends to fly in late February 2019.
History:
The clean-sheet, all-composite VoltAero Cassio prototype should follow in 2020, before deliveries in late 2021 or early 2022. It will be powered by two 60 kW (80 hp) electric motors driving tractor propellers on the wing, and a 170 kW (230 hp) piston engine and 150 kW (200 hp) motor driving a pusher propeller in the aft fuselage. The combination of fuel and batteries will give it a 1,200 km (650 nmi) range with nine people aboard. On 31 October 2018, Diamond Aircraft flew the HEMEP, funded by Germany’s economics ministry and the Austrian Research Promotion Agency, reaching 130 kn (240 km/h) and 3,000 ft (910 m) within 20 minutes. It is a modified DA40 with its single piston engine replaced by two Siemens 75 kW (101 hp) electric motors in the nose, powered by a 110 kW (150 hp) Austro Engine AE 300 diesel or two 12 kWh (43 MJ) batteries, for a 5 h endurance, or 30 min on batteries only.
History:
2019 By January 2019, U.S. startup Ampaire was replacing the aft piston engine of a Cessna 337 Skymaster (a push-pull aircraft) with an electric motor, to fly the prototype on Hawaiian Mokulele Airlines commuter routes operated with Cessna Caravans. Seven other airlines are interested in Caravan or Twin Otter conversions: Seattle’s Kenmore Air, Tropic Air of Belize, Puerto Rico-based Vieques Air Link, Southern Airways Express of Memphis, Tennessee, Guernsey’s Aurigny and Star Marianas Air, based in the Northern Mariana Islands, as well as Norway. Test flights will take place on a 28 mi (45 km) route over 15 minutes between Kahului Airport in Central Maui and Hana Airport on the east side. The hybrid prototype held its first public test flight on June 6, 2019, ahead of scheduled service planned for 2021. Personal Airline Exchange (PAX) became the launch customer for the Ampaire Electric EEL modified six-seat Skymaster, to be certified in 2021, with an order for 50 plus 50 options. By March 2019, UTC was converting a 39-seat Bombardier Dash 8 Q100 into a hybrid-electric aircraft for demonstration flights from 2022 within its Project 804.
History:
The 2 MW (2,700 hp) design is similar to the Airbus E-Fan X program, but aims for certification and production for a subsequent commercial offer.
One 2,150 hp (1,600 kW) PW121 turboprop will be replaced by a 1 MW (1,300 hp) gas turbine joined with an electric motor of the same rating, powered by off-the-shelf lithium-ion batteries for takeoff and climb.
The turbine is used alone in cruise and drives the motor-generator to recharge the batteries in descent.
The downsized engine operates at its optimum for 30% fuel savings over 200–250 nmi (370–460 km).
History:
Range is reduced from 1,000 to 600 nmi (1,900 to 1,100 km) due to the higher empty weight and 50% lower fuel capacity. Faradair Aerospace launched its 18-seat BEHA M1H at the Revolution.aero conference in London in March, with turboprop hybrid propulsion and 'quick change' passenger/cargo capability, targeting the CS23/Part 23 commuter category regulations. The E-STOL aircraft is capable of operating from runways of less than 300 m with a 5-tonne payload, thanks to its unique triple box-wing configuration and quiet ducted-fan pusher configuration. At the June 2019 Paris Air Show, Daher, Airbus and Safran teamed up to develop the TBM-based EcoPulse demonstrator, with half of the €22 million ($25 million) demonstration funded by the DGAC.
History:
The maiden flight is scheduled for the summer of 2022 before a hypothetical 2025-30 certification.
The aircraft’s existing engine will be supplemented by six 45 kW (60 hp) Safran electric motors on the wing, fed by a 100 kW (130 hp) APU or batteries.
Similar to the NASA X-57 Maxwell, the distributed propulsion reduces wingtip vortices and adds low-speed lift by blowing the wing, enabling a smaller, lower-drag wing. A mid-May 2019 survey for UBS showed that 38% of Americans and Germans said they would be likely to fly in a hybrid-electric airplane, rising to more than 50% for 18-44-year-olds.
UBS thinks hybrid aircraft for up to nine passengers over short routes below 250 nmi (460 km) could be available from 2022, and from 2028 for regional airliners on routes of up to one hour.
History:
UBS forecast a market of 16,000 hybrid-electric airplanes worth $178-192 billion over 2028-40, mostly in general aviation, light business jets and regional aircraft with 20% lower operating costs than present 50-70 seaters. Led by Cranfield Aerospace Solutions (CAeS), Project Fresson started on 1 October 2019, aiming to fly an electric Britten-Norman BN-2 Islander within 30 months, followed by an EASA STC within another 6-12 months.
History:
It targets a 60 min endurance plus 30 min reserves. With energy five times cheaper than Avgas and reduced maintenance, the conversion cost could be recovered in three years; it would have a range-extender combustion engine.
Half of the £18 million ($22 million) funding come from the partners and the other half from the UK government.
Of 800 Islanders in service, around 600 are used for short flights. Loganair should use it for short interisland hops off northern Scotland. Electric EEL developer Ampaire and aircraft modification specialist Ikhana Aircraft Services are studying a 19-seat, diesel-electric hybrid de Havilland Canada DHC-6 Twin Otter.
It could use Ikhana's STC for an increased MTOW from 5,443 to 6,350 kg (12,000 to 14,000 lb); the study is expected to be completed by the end of 2019.
2020 Supported by Bavarian funding, the German DLR is modifying one of its two Do 228 into a hybrid-electric demonstrator.
The first fully electric flight is planned for 2020 and the first hybrid-electric flight for 2021, apparently from Cochstedt Airport.
History:
Partners include MTU Aero Engines and Siemens, whose electric propulsion unit Rolls-Royce plc is acquiring. In April 2020, the E-Fan X programme supported by Rolls-Royce plc, Airbus and Siemens was cancelled. By July 2020, Faradair Aerospace announced relocation to Duxford Airfield, Cambridgeshire, UK, in partnership with the Imperial War Museum Duxford and Gonville and Caius College, Cambridge, to develop the BEHA M1H prototype hybrid electric regional aircraft from a new bespoke prototyping facility as part of the new Duxford Avtech aerospace research and development campus. First flight is targeted for late 2023/early 2024. In November 2020, Embraer had a new design for a regional airliner avoiding a hybrid-electric drivetrain, as operating costs would increase by 15% for 5% of the required power compared to a conventional turboprop.
History:
2021 The Berlin-Brandenburg Aerospace Alliance is a business cluster that includes Rolls-Royce Dahlewitz, MTU Aero Engines, aeronautical engineering specialist APUS and Stemme.
It plans the IBEFA-i6 project, a 19-seat distributed electric propulsion demonstrator to fly in 2021 with turbodiesel, gas turbine and fuel-cell generators. Separate from this i-6, the APUS i-5 will be a twin-boom testbed with tandem seating and a 4 t (8,800 lb) gross weight.
A Rolls-Royce 250 turboshaft will drive four electric propellers through a battery, generators, convertors, and power controls.
Supported by the German state of Brandenburg and the Brandenburg University of Technology, flights should begin after 2021.
2022 In 2019, the National Research Council Canada started to convert a Cessna Skymaster, a push-pull configuration, into the Hybrid Electric Aircraft Testbed. It first flew on 7 February 2022.
2023 On 19 January 2023, ZeroAvia flew its Dornier 228 testbed with one turboprop replaced by a prototype hydrogen-electric powertrain in the cabin, consisting of two fuel cells and a lithium-ion battery for peak power. The aim is to have a certifiable system by 2025 to power airframes carrying up to 19 passengers over 300 nmi (560 km). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Isovector**
Isovector:
In particle physics, isovector refers to the vector transformation of a particle under the SU(2) group of isospin. An isovector state is a triplet state with total isospin 1, with the third component of isospin either 1, 0, or -1, much like a triplet state in the two-particle addition of spin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
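As a standard worked example (textbook material, not from the text above): for two nucleons, writing p for the I₃ = +1/2 state and n for I₃ = −1/2, the isovector (isospin-1 triplet) combinations are

```latex
\begin{align}
  |1,+1\rangle &= |pp\rangle,\\
  |1,0\rangle  &= \tfrac{1}{\sqrt{2}}\bigl(|pn\rangle + |np\rangle\bigr),\\
  |1,-1\rangle &= |nn\rangle,
\end{align}
```

exactly mirroring the triplet obtained when adding two spin-1/2 particles.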
**KBTBD7**
KBTBD7:
Kelch repeat and BTB domain-containing protein 7 is a protein that in humans is encoded by the KBTBD7 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fetus**
Fetus:
A fetus or foetus (; PL: fetuses, feti, foetuses, or foeti) is the unborn offspring that develops from an animal embryo. Following embryonic development the fetal stage of development takes place. In human prenatal development, fetal development begins from the ninth week after fertilization (or eleventh week gestational age) and continues until birth. Prenatal development is a continuum, with no clear defining feature distinguishing an embryo from a fetus. However, a fetus is characterized by the presence of all the major body organs, though they will not yet be fully developed and functional and some not yet situated in their final anatomical location.
Etymology:
The word fetus (plural fetuses or feti) is related to the Latin fētus ("offspring", "bringing forth", "hatching of young") and the Greek "φυτώ" ("to plant"). The word "fetus" was used by Ovid in Metamorphoses, book 1, line 104. The predominant British, Irish, and Commonwealth spelling is foetus, which has been in use since at least 1594. The spelling with -oe- arose in Late Latin, in which the distinction between the vowel sounds -oe- and -e- had been lost. This spelling is the most common in most Commonwealth nations, except in the medical literature, where fetus is used. The more classical spelling fetus is used in Canada and the United States. In addition, fetus is now the standard English spelling throughout the world in medical journals. The spelling faetus was also used historically.
Development in humans:
Weeks 9 to 16 (2 to 3.6 months) In humans, the fetal stage starts nine weeks after fertilization. At the start of the fetal stage, the fetus is typically about 30 millimetres (1+1⁄4 in) in length from crown-rump, and weighs about 8 grams. The head makes up nearly half of the size of the fetus. Breathing-like movements of the fetus are necessary for the stimulation of lung development, rather than for obtaining oxygen. The heart, hands, feet, brain, and other organs are present, but are only at the beginning of development and have minimal operation. At this point in development, uncontrolled movements and twitches occur as muscles, the brain, and pathways begin to develop.
Development in humans:
Weeks 17 to 25 (3.6 to 6.6 months) A woman pregnant for the first time (nulliparous) typically feels fetal movements at about 21 weeks, whereas a woman who has given birth before will typically feel movements by 20 weeks. By the end of the fifth month, the fetus is about 20 cm (8 in) long.
Development in humans:
Weeks 26 to 38 (6.6 to 8.6 months) The amount of body fat rapidly increases. Lungs are not fully mature. Neural connections between the sensory cortex and thalamus develop as early as 24 weeks of gestational age, but the first evidence of their function does not occur until around 30 weeks, when minimal consciousness, dreaming, and the ability to feel pain emerges. Bones are fully developed but are still soft and pliable. Iron, calcium, and phosphorus become more abundant. Fingernails reach the end of the fingertips. The lanugo, or fine hair, begins to disappear until it is gone except on the upper arms and shoulders. Small breast buds are present in both sexes. Head hair becomes coarse and thicker. Birth is imminent and occurs around the 38th week after fertilization. The fetus is considered full-term between weeks 37 and 40 when it is sufficiently developed for life outside the uterus. It may be 48 to 53 cm (19 to 21 in) in length when born. Control of movement is limited at birth, and purposeful voluntary movements continue to develop until puberty.
Development in humans:
Variation in growth There is much variation in the growth of the human fetus. When the fetal size is less than expected, the condition is known as intrauterine growth restriction, also called fetal growth restriction; factors affecting fetal growth can be maternal, placental, or fetal. Maternal factors include maternal weight, body mass index, nutritional state, emotional stress, toxin exposure (including tobacco, alcohol, heroin, and other drugs which can also harm the fetus in other ways), and uterine blood flow.
Development in humans:
Placental factors include size, microstructure (densities and architecture), umbilical blood flow, transporters and binding proteins, nutrient utilization, and nutrient production.
Development in humans:
Fetal factors include the fetal genome, nutrient production, and hormone output. Also, female fetuses tend to weigh less than males, at full term. Fetal growth is often classified as follows: small for gestational age (SGA), appropriate for gestational age (AGA), and large for gestational age (LGA). SGA can result in low birth weight, although premature birth can also result in low birth weight. Low birth weight increases the risk for perinatal mortality (death shortly after birth), asphyxia, hypothermia, polycythemia, hypocalcemia, immune dysfunction, neurologic abnormalities, and other long-term health problems. SGA may be associated with growth delay, or it may instead be associated with absolute stunting of growth.
Viability:
Fetal viability refers to a point in fetal development at which the fetus may survive outside the womb. The lower limit of viability is approximately 5+3⁄4 months gestational age, and is usually later. There is no sharp limit of development, age, or weight at which a fetus automatically becomes viable. According to data from 2003 to 2005, survival rates are 20–35% for babies born at 23 weeks of gestation (5+3⁄4 months); 50–70% at 24–25 weeks (6 – 6+1⁄4 months); and >90% at 26–27 weeks (6+1⁄2 – 6+3⁄4 months) and over. It is rare for a baby weighing less than 500 g (1 lb 2 oz) to survive. When such premature babies are born, the main causes of mortality are that the respiratory system and the central nervous system are not completely differentiated. If given expert postnatal care, some preterm babies weighing less than 500 g (1 lb 2 oz) may survive, and are referred to as extremely low birth weight or immature infants. Preterm birth is the most common cause of infant mortality, causing almost 30 percent of neonatal deaths. At an occurrence rate of 5% to 18% of all deliveries, it is also more common than postmature birth, which occurs in 3% to 12% of pregnancies.
Circulatory system:
Before birth The heart and blood vessels of the circulatory system form relatively early during embryonic development, but continue to grow and develop in complexity in the growing fetus. A functional circulatory system is a biological necessity since mammalian tissues can not grow more than a few cell layers thick without an active blood supply. The prenatal circulation of blood is different from postnatal circulation, mainly because the lungs are not in use. The fetus obtains oxygen and nutrients from the mother through the placenta and the umbilical cord. Blood from the placenta is carried to the fetus by the umbilical vein. About half of this enters the fetal ductus venosus and is carried to the inferior vena cava, while the other half enters the liver proper from the inferior border of the liver. The branch of the umbilical vein that supplies the right lobe of the liver first joins with the portal vein. The blood then moves to the right atrium of the heart. In the fetus, there is an opening between the right and left atrium (the foramen ovale), and most of the blood flows from the right into the left atrium, thus bypassing pulmonary circulation. The majority of blood flow is into the left ventricle from where it is pumped through the aorta into the body. Some of the blood moves from the aorta through the internal iliac arteries to the umbilical arteries and re-enters the placenta, where carbon dioxide and other waste products from the fetus are taken up and enter the mother's circulation. Some of the blood from the right atrium does not enter the left atrium, but enters the right ventricle and is pumped into the pulmonary artery. In the fetus, there is a special connection between the pulmonary artery and the aorta, called the ductus arteriosus, which directs most of this blood away from the lungs (which are not being used for respiration at this point as the fetus is suspended in amniotic fluid).
Circulatory system:
Postnatal development With the first breath after birth, the system changes suddenly. Pulmonary resistance is reduced dramatically, prompting more blood to move into the pulmonary arteries from the right atrium and ventricle of the heart and less to flow through the foramen ovale into the left atrium. The blood from the lungs travels through the pulmonary veins to the left atrium, producing an increase in pressure that pushes the septum primum against the septum secundum, closing the foramen ovale and completing the separation of the newborn's circulatory system into the standard left and right sides. Thereafter, the foramen ovale is known as the fossa ovalis.
Circulatory system:
The ductus arteriosus normally closes within one or two days of birth, leaving the ligamentum arteriosum, while the umbilical vein and ductus venosus usually close within two to five days after birth, leaving, respectively, the liver's ligamentum teres and ligamentum venosus.
Immune system:
The placenta functions as a maternal-fetal barrier against the transmission of microbes. When this is insufficient, mother-to-child transmission of infectious diseases can occur.
Maternal IgG antibodies cross the placenta, giving the fetus passive immunity against those diseases for which the mother has antibodies. This transfer of antibodies in humans begins as early as the fifth month (gestational age) and certainly by the sixth month.
Developmental problems:
A developing fetus is highly susceptible to anomalies in its growth and metabolism, increasing the risk of birth defects. One area of concern is the lifestyle choices made during pregnancy. Diet is especially important in the early stages of development. Studies show that supplementation of the person's diet with folic acid reduces the risk of spina bifida and other neural tube defects. Another dietary concern is whether breakfast is eaten. Skipping breakfast could lead to extended periods of lower than normal nutrients in the maternal blood, leading to a higher risk of prematurity, or birth defects.
Developmental problems:
Alcohol consumption may increase the risk of the development of fetal alcohol syndrome, a condition leading to intellectual disability in some infants. Smoking during pregnancy may also lead to miscarriages and low birth weight (less than 2,500 grams (5 pounds 8 ounces)). Low birth weight is a concern for medical providers due to the tendency of these infants, described as "premature by weight", to have a higher risk of secondary medical problems.
Developmental problems:
X-rays are known to have possible adverse effects on the development of the fetus, and the risks need to be weighed against the benefits. Congenital disorders are acquired before birth. Infants with certain congenital heart defects can survive only as long as the ductus remains open: in such cases the closure of the ductus can be delayed by the administration of prostaglandins to permit sufficient time for the surgical correction of the anomalies. Conversely, in cases of patent ductus arteriosus, where the ductus does not properly close, drugs that inhibit prostaglandin synthesis can be used to encourage its closure, so that surgery can be avoided.
Developmental problems:
Other heart birth defects include ventricular septal defect, pulmonary atresia, and tetralogy of Fallot.
An abdominal pregnancy can result in the death of the fetus; in rare cases where this is not resolved, it can lead to its formation into a lithopedion.
Fetal pain:
The existence and implications of fetal pain are debated politically and academically. According to the conclusions of a review published in 2005, "Evidence regarding the capacity for fetal pain is limited but indicates that fetal perception of pain is unlikely before the third trimester." However, developmental neurobiologists argue that the establishment of thalamocortical connections (at about 6+1⁄2 months) is an essential event with regard to fetal perception of pain. Nevertheless, the perception of pain involves sensory, emotional and cognitive factors and it is "impossible to know" when pain is experienced, even if it is known when thalamocortical connections are established. Some authors argue that fetal pain is possible from the second half of pregnancy. Evidence suggests that the perception of pain in the fetus occurs well before late gestation. Whether a fetus has the ability to feel pain and suffering is part of the abortion debate. In the United States, for example, anti-abortion advocates have proposed legislation that would require providers of abortions to inform pregnant women that their fetuses may feel pain during the procedure and that would require each person to accept or decline anesthesia for the fetus.
Legal and social issues:
Abortion of a human pregnancy is legal and/or tolerated in most countries, although with gestational time limits that normally prohibit late-term abortions.
Other animals:
A fetus is a stage in the prenatal development of viviparous organisms. This stage lies between embryogenesis and birth. Many vertebrates have fetal stages, ranging from most mammals to many fish. In addition, some invertebrates bear live young, including some species of onychophora and many arthropods.
Other animals:
The fetuses of most mammals are situated similarly to the human fetus within their mothers. However, the anatomy of the area surrounding a fetus is different in litter-bearing animals compared to humans: each fetus of a litter-bearing animal is surrounded by placental tissue and is lodged along one of two long uteri instead of the single uterus found in a human female.
Other animals:
Development at birth varies considerably among animals, and even among mammals. Altricial species are relatively helpless at birth and require considerable parental care and protection. In contrast, precocial animals are born with open eyes, have hair or down, have large brains, and are immediately mobile and somewhat able to flee from, or defend themselves against, predators. Primates are precocial at birth, with the exception of humans. The duration of gestation in placental mammals varies from 18 days in jumping mice to 23 months in elephants. Generally speaking, fetuses of larger land mammals require longer gestation periods.
Other animals:
The benefit of a fetal stage is that young are more developed when they are born. Therefore, they may need less parental care and may be better able to fend for themselves. However, carrying fetuses exerts costs on the mother, who must take on extra food to fuel the growth of her offspring, and whose mobility and comfort may be affected (especially toward the end of the fetal stage).
Other animals:
In some instances, the presence of a fetal stage may allow organisms to time the birth of their offspring to a favorable season. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Circular reference**
Circular reference:
A circular reference is a series of references where the last object references the first, resulting in a closed loop.
Simple example:
A newcomer asks a local where the town library is. "Just in front of the post office," says the local. The newcomer nods, and follows up: "But where is the post office?" "Why, that's simple," replies the local. "It's just behind the library!"
In language:
A circular reference is not to be confused with the logical fallacy of a circular argument. Although a circular reference will often be unhelpful and reveal no information, such as two entries in a book index referring to each other, it is not necessarily useless. Dictionaries, for instance, must always ultimately be a circular reference, since all words in a dictionary are defined in terms of other words, but a dictionary nevertheless remains a useful reference. Sentences containing circular references can still be meaningful: "Her brother gave her a kitten; his sister thanked him for it" is circular, but not without meaning. Indeed, it can be argued that self-reference is a necessary consequence of Aristotle's Law of non-contradiction, a fundamental philosophical axiom. In this view, without self-reference, logic and mathematics become impossible, or at least, lack usefulness.
In computer programming:
Circular references can appear in computer programming when one piece of code requires the result from another, but that code needs the result from the first. For example, two functions, posn and plus1, can comprise a circular reference, as in the Python sketch below. Circular references like this may return valid results if they have a terminating condition. If there is no terminating condition, a circular reference leads to a condition known as a livelock or infinite loop, meaning it could theoretically run forever.
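The original listing is not preserved here; a minimal sketch consistent with the description, where the terminating condition (returning once k is non-negative) is an assumption, might be:

```python
def posn(k):
    # Delegates to plus1 until k is non-negative -- the terminating condition.
    if k < 0:
        return plus1(k)
    return k

def plus1(n):
    # Calls back into posn, closing the circular reference.
    return posn(n + 1)

print(posn(-3))  # 0 -- terminates because k is incremented up to zero
```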
In computer programming:
In ISO standard SQL, circular integrity constraints are implicitly supported within a single table. Between multiple tables, circular constraints (e.g. foreign keys) are permitted by defining the constraints as deferrable (see CREATE TABLE for PostgreSQL and DEFERRABLE Constraint Examples for Oracle). In that case the constraint is checked at the end of the transaction, not at the time the DML statement is executed. To update a circular reference, two statements can be issued in a single transaction that will satisfy both references once the transaction is committed.
In computer programming:
Circular references can also happen between instances of data of a mutable type, as in the Python script below, where a dictionary is made to refer to itself. Printing mydict will output {'this': 'that', 'these': 'those', 'myself': {...}}, where {...} indicates a circular reference, in this case to the mydict dictionary itself.
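The original script is likewise not preserved; a minimal reconstruction that produces exactly the output described is:

```python
mydict = {'this': 'that', 'these': 'those'}
mydict['myself'] = mydict   # the dictionary now holds a reference to itself

# Python detects the cycle and prints {...} instead of recursing forever.
print(mydict)   # {'this': 'that', 'these': 'those', 'myself': {...}}
```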
In spreadsheets:
Circular references also occur in spreadsheets when two cells require each other's result. For example, if the value in Cell A1 is to be obtained by adding 5 to the value in Cell B1, and the value in Cell B1 is to be obtained by adding 3 to the value in Cell A1, no values can be computed. (Even if the specifications are A1:=B1+5 and B1:=A1-5, there is still a circular reference. It does not help that, for instance, A1=3 and B1=-2 would satisfy both formulae, as there are infinitely many other possible values of A1 and B1 that can satisfy both instances.) Circular references in worksheets can be a very useful technique for solving implicit equations such as the Colebrook equation and many others, which might otherwise require tedious Newton-Raphson algorithms in VBA or the use of macros. A distinction should be made between processes containing a circular reference that are incomputable and those that are an iterative calculation with a final output. The latter may fail in spreadsheets not equipped to handle them but are nevertheless still logically valid. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
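The same iterative idea can be sketched outside a spreadsheet. The following Python fixed-point iteration solves the Colebrook equation for the Darcy friction factor; the equation's form and constants are standard, while the starting guess, tolerance, and example inputs are arbitrary illustrative choices:

```python
import math

def colebrook_f(re, rel_rough, tol=1e-10):
    """Solve 1/sqrt(f) = -2*log10(rel_rough/3.7 + 2.51/(re*sqrt(f)))
    by fixed-point iteration, as a spreadsheet with iterative
    (circular-reference) calculation enabled would."""
    f = 0.02                      # initial guess
    for _ in range(100):
        rhs = -2.0 * math.log10(rel_rough / 3.7 + 2.51 / (re * math.sqrt(f)))
        f_new = 1.0 / rhs ** 2
        if abs(f_new - f) < tol:  # converged: the "final output"
            return f_new
        f = f_new
    return f

print(colebrook_f(re=1e5, rel_rough=1e-4))  # roughly 0.0185
```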
**Stability criterion**
Stability criterion:
In control theory, and especially stability theory, a stability criterion establishes when a system is stable. A number of stability criteria are in common use: the circle criterion, the Jury stability criterion, the Liénard–Chipart criterion, the Nyquist stability criterion, the Routh–Hurwitz stability criterion, the Vakhitov–Kolokolov stability criterion, and the Barkhausen stability criterion. Stability may also be determined by means of root locus analysis.
Although the concept of stability is general, there are several narrower definitions through which it may be assessed: BIBO stability, linear stability, Lyapunov stability, and orbital stability. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
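As an illustrative sketch of one such assessment (not tied to any particular source): a continuous-time LTI system dx/dt = Ax is asymptotically stable exactly when every eigenvalue of A has a strictly negative real part, which is the condition the Routh–Hurwitz criterion verifies without computing roots. A direct numerical check in Python:

```python
import numpy as np

def is_hurwitz_stable(a):
    """True iff all eigenvalues of the state matrix have negative real parts."""
    return bool(np.all(np.linalg.eigvals(a).real < 0))

# Example: a damped harmonic oscillator, x'' + 0.5 x' + 2 x = 0,
# written in state-space form -- stable, since damping is positive.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
print(is_hurwitz_stable(A))  # True
```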
**MySQL Connector/ODBC**
MySQL Connector/ODBC:
MySQL Connector/ODBC, once known as MyODBC, is computer software from Oracle Corporation. It is an ODBC interface and allows programming languages that support the ODBC interface to communicate with a MySQL database. MySQL Connector/ODBC was originally created by MySQL AB.
History:
3.51 - ANSI version only.
5.1 - Unicode version only. Suitable for use with any MySQL server version since MySQL 4.1, including MySQL 5.0, 5.1, and 6.0.
5.2 - ANSI and Unicode versions available at install time.
5.3 - ANSI and Unicode versions available at install time. Conforms to the ODBC 3.8 specification. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
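As a usage sketch (assuming the third-party pyodbc package on the client side and placeholder host and credentials; the driver name matches the 5.3 Unicode variant listed above):

```python
import pyodbc  # third-party ODBC bridge for Python (pip install pyodbc)

# Connect through Connector/ODBC; server, database, user and password
# below are placeholders.
conn = pyodbc.connect(
    "DRIVER={MySQL ODBC 5.3 Unicode Driver};"
    "SERVER=localhost;DATABASE=test;UID=user;PWD=secret;"
)
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone()[0])  # the MySQL server version string
conn.close()
```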
**Noopolitik**
Noopolitik:
In political science, Noopolitik, formed by a combination of the Greek words νόος nóos ("knowledge") and πολιτικός politikós (πολίτης polítēs "citizen", from πόλις pólis "city"), is the network-based geopolitics of knowledge. The term was invented by defense experts John Arquilla and David Ronfeldt in a 1999 RAND Corporation study and often appears in connection with that of smart power.
Difference with Realpolitik:
Noopolitics is an informational strategy of manipulating international processes by forming, in the general public and by means of mass media, positive or negative attitudes toward the external or internal policies of a state or bloc of states, so as to create a positive or negative image of ideas and promulgated moral values.
Versus Foucault's Biopolitics:
Tiziana Terranova (2007) describes the use of the term 'noopolitics' by Maurizio Lazzarato (2004). "'Noopolitics' supplements the biopolitics of the species described by Foucault" (Terranova 2007, 139). "Against the militarization of communication accomplished by new techniques of power, it is possible to think about the constitution of such publics as counter-weapons, which work by expressing, inventing and creating possible worlds where the moment of resistance (the 'no' by which one refuses to watch, listen or believe) is the starting point for an affirmative activity" (Terranova 2007, 140). Noyer & Juanals (2008) have also discussed Noopolitik as a means of social control, especially in connection with RAND's Byting Back program, which was published as research into counterinsurgency.
In the knowledge economy and the BRICS:
While the term initially appeared in association with the concept of the US Revolution in Military Affairs, Noopolitik has also come to describe an interest in the knowledge economy and in particular innovation and R&D to leverage growth and political reach in international relations. Thus Noopolitik may be defined as the use of innovation and knowledge to leverage political intercourses by other means at the international level. Such "knowledge race" may be either a means of asserting political independence or of generating a sudden gap in the geopolitical balance of power. The attitude of the People's Republic of China and the ANZUS in the Pacific Ocean has been described as such by Idriss J. Aberkane (2011).
In the knowledge economy and the BRICS:
Therefore, comparable to the Heartland, the “Heartocean” inevitably disputed by powers of which none may prevail alone in the future without a decisive innovation will become the scene of a commensurate Great Game with the two same grand stakes. This will be the fate (or doom) of multilateralism on the one side and the global knowledge race on the other. (...). As stated by Seth Cropsey (2010) commenting on China’s noopolitik move of deploying new anti-ship missiles in the ocean “keeping the Pacific pacific”. The two issues are well summarized.
In the knowledge economy and the BRICS:
For the People's Republic of China, Professor Li Xiguang of Tsinghua University described the stakes of smart power for the People's Republic of China in a 2010 article on Noopolitik in the Global Times: "Soft power is the power of making people love you. Hard power is the ability to making people fear you. Over the last 500 years, all the world powers gained their hegemony through hard power, but the US has gained its hegemony through combining hard power and soft power, both striking at and assimilating its opponents. The US has built its soft power by making its values and political system, such as the US interpretation and definition of democracy, freedom and human rights, into supposedly universal values."
In the knowledge economy and the BRICS:
Idriss Aberkane analyzes Noopolitik as a defining stance of the People's Republic of China's economic policy, concluding: "Maintaining 'Leap and Bound' creativity could be an efficient way for China to neutralize popular frustration. What must be acknowledged is that the PRC has moved from a 'growth panacea' policy, to a policy of 'knowledge panacea.' This best sums up its Noopolitik." Emphasizing China’s deeply-rooted desire for technological independence, Segal refers to the PRC’s efforts as an “Innovation Wall,” which is the willingness to innovate as independently as possible from the rest of the world to simply and systematically leap ahead of any other country. Needless to say, the scope of the rising fog of war in world economic and R&D competition is particularly daunting for the Euro-Atlantic community. The Chinese phrase for national innovation, zizhu chuangxin, was notably coined in a 2006 state report titled “Guidelines on National Medium- and Long-Term Program for Science and Technology Development.” If China decides to foster innovation of its own and the resulting ideas are published in Mandarin-language journals, this would provide a barrier against other linguistic communities, and a thicker fog of war might rise in knowledge-based economic warfare.
Sources:
John Arquilla & David Ronfeldt. The Emergence of Noopolitik: Toward an American Information Strategy. RAND, 1999.
Terranova, Tiziana. “Futurepublic: On Information Warfare, Bio-racism and Hegemony as Noopolitics.” Theory, Culture & Society 24.3 (2007): 125–145.
Lazzarato, Maurizio (2004) La politica dell’evento. Cosenza: Rubbettino. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rare Diseases Clinical Research Network**
Rare Diseases Clinical Research Network:
The Rare Diseases Clinical Research Network (RDCRN) is an initiative of the US Office of Rare Diseases Research (ORDR). RDCRN is funded by the ORDR, the National Center for Advancing Translational Sciences and collaborating institute centers. The RDCRN is designed to advance medical research on rare diseases by providing support for clinical studies and facilitating collaboration, study enrollment and data sharing. Through the RDCRN consortia, physician scientists and their multidisciplinary teams work together with patient advocacy groups to study more than 200 rare diseases at sites across the nation.
Rare Diseases Clinical Research Network:
Established by Congress under the Rare Diseases Act in 2002, the RDCRN has included more than 350 sites in the United States and more than 50 in 22 other countries. To date, they have encompassed 237 research protocols and included more than 56,000 participants in studies ranging from immune system disorders and rare cancers to heart and lung disorders, brain development diseases and more.
History:
The following is a timeline of the Rare Diseases Clinical Research Network: As a result of the Rare Diseases Act of 2002, on February 27, 2003, the ORDR (in conjunction with the National Center for Research Resources (NCRR), the General Clinical Research Consortium (GCRC) Program, and other NIH Institutes) requested applications for a Rare Diseases Clinical Research Network.
History:
On November 3, 2003, the NIH established the Rare Diseases Clinical Research Network with a Data Technology Coordinating Center and the first Rare Disease Clinical Research Consortia (RDCRCs). The founding members of the RDCRN were:
Rare Disease Clinical Research Center for New Therapies and New Diagnostics, Principal Investigator: Dr. Arthur L. Beaudet (Baylor College of Medicine, Houston, TX)
Vasculitis Clinical Research Network, Principal Investigator: Dr. Peter A. Merkel (University of Pennsylvania, Philadelphia, PA)
Rare Lung Diseases Consortium, Principal Investigator: Dr. Bruce C. Trapnell (Children's Hospital Medical Center, Cincinnati, OH)
Rare Diseases Clinical Research Center for Urea Cycle Disorders, Principal Investigator: Dr. Mark L. Batshaw (Children's National Medical Center, Washington, DC)
Bone Marrow Failure Clinical Research Center, Principal Investigator: Dr. Jaroslaw P. Maciejewski (The Cleveland Clinic Foundation, Cleveland, OH)
Nervous System Channelopathies Pathogenesis and Treatment, Principal Investigator: Dr. Robert C. Griggs (University of Rochester, Rochester, NY)
The Natural History of Rare Genetic Steroid Disorders, Principal Investigator: Dr. Maria New (Weill Medical College of Cornell University, New York, NY)
The Data and Technology Coordinating Center, Principal Investigator: Dr. Jeffrey P. Krischer (H. Lee Moffitt Cancer Center and Research Institute, University of South Florida, Tampa, FL)
On February 8, 2009, the ORDR partnered with 10 other NIH Institutes to release two requests for resubmissions for the RDCRN.
History:
On October 5, 2009, the NIH announced funding for 19 rare disease clinical research consortia and a Data Management Coordinating Center through the ORDR, along with the National Institute of Neurological Disorders and Stroke (NINDS), the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the National Heart, Lung, and Blood Institute (NHLBI), the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), the National Institute of Allergy and Infectious Diseases (NIAID), the National Institute of Dental and Craniofacial Research (NIDCR), and the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS).
History:
On October 8, 2014, the NIH announced additional funding of $29 million.
History:
On October 3, 2019, the NIH announced funding of $38 million for 20 rare diseases clinical research consortia and a new Data Management and Coordinating Center through the National Center for Advancing Translational Science's Office of Rare Diseases Research, along with the National Institute of Allergy and Infectious Diseases, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institute of Neurological Disorders and Stroke, the National Heart, Lung, and Blood Institute, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Dental and Craniofacial Research, the National Institute of Mental Health and the Office of Dietary Supplements.
Rare Diseases Clinical Research Consortia:
In its fourth funding cycle, the Rare Diseases Clinical Research Network (RDCRN) consists of 20 Rare Diseases Clinical Research Consortia (RDCRCs) and a Data Management and Coordinating Center (DMCC).The RDCRCs, the DMCC, and their Principal Investigators are located at the following institutions: Brain Vascular Malformation Consortium (BVMC), Helen Kim, M.P.H., Ph.D., University of California San Francisco, San Francisco, CA.
Brittle Bone Disorders Consortium (BBD), Brendan Lee, M.D., Ph.D., Baylor College of Medicine, Houston, TX.
Clinical Research in Amyotrophic Lateral Sclerosis and Related Disorders for Therapeutic Development (CReATe), Michael Benatar, M.D., Ph.D., University of Miami Miller School of Medicine, Miami, FL.
Congenital and Perinatal Infections Consortium (CPIC), David Kimberlin, M.D., University of Alabama at Birmingham, Birmingham, AL.
Consortium of Eosinophilic Gastrointestinal Disease Researchers (CEGIR), Marc E. Rothenberg, M.D., Ph.D., Cincinnati Children's Hospital Medical Center, Cincinnati, OH.
Developmental Synaptopathies Consortium (DSC), Mustafa Sahin, M.D., Ph.D., Boston Children's Hospital, Boston, MA.
Dystonia Coalition (DC), Hyder A. Jinnah, M.D., Ph.D., Emory University, Atlanta, GA.
Frontiers in Congenital Disorders of Glycosylation Consortium (FCDGC), Eva Morava-Kozicz, M.D., Ph.D., Mayo Clinic, Rochester, MN.
Genetic Disorders of Mucociliary Clearance Consortium (GDMC), Stephanie Davis, M.D., The University of North Carolina at Chapel Hill and Thomas Ferkol, M.D., Washington University in St. Louis.
Global Leukodystrophy Initiative Clinical Trials Network (GLIA-CTN), Adeline L. Vanderver, M.D., Children's Hospital of Philadelphia; S. Ali Fatemi, M.D., M.B.A., Kennedy Krieger Institute; and Florian S. Eichler, M.D., Massachusetts General Hospital.
Inherited Neuropathies Consortium (INC), Michael E. Shy, M.D., University of Iowa, Iowa City.
Lysosomal Disease Network (LDN), Chester B. Whitley, M.D., Ph.D., University of Minnesota, Minneapolis, MN.
Myasthenia Gravis Rare Disease Network (MGNet), Henry J. Kaminski, M.D., George Washington University, Washington, DC.
Nephrotic Syndrome Network (NEPTUNE), Matthias Kretzler, M.D., University of Michigan, Ann Arbor, MI.
North American Mitochondrial Disease Consortium (NAMDC), Michio Hirano, M.D., Columbia University, New York, NY.
Phenylalanine Families and Researchers Exploring Evidence (PHEFREE), Cary Harding, M.D., Oregon Health & Science University, Portland, OR.
Porphyrias Consortium (PC), Robert J. Desnick, Ph.D., M.D., Icahn School of Medicine at Mount Sinai, New York, NY.
Primary Immune Deficiency Treatment Consortium (PIDTC), Jennifer M. Puck, M.D., University of California San Francisco and Donald B. Kohn, M.D., University of California Los Angeles.
Urea Cycle Disorders Consortium (UCDC), Andrea L. Gropman, M.D., FAAP, FACMG, Children's National Medical Center, Washington, DC.
Vasculitis Clinical Research Consortium (VCRC), Peter A. Merkel, M.D., M.P.H., University of Pennsylvania, Philadelphia, PA.
Rare Diseases Clinical Research Consortia:
Data Management and Coordinating Center (DMCC), Eileen King, Ph.D., Maurizio Macaluso, M.D., Ph.D., and Michael Wagner, Ph.D., Cincinnati Children's Hospital Medical Center, Cincinnati, OH. The RDCRN’s Data Management and Coordinating Center (DMCC) is hosted by Cincinnati Children's Hospital Medical Center in Cincinnati, OH. The DMCC manages shared resources and data from the RDCRN research studies. The DMCC emphasizes the standardization of data, increased data sharing and broad dissemination of research findings.
RDCRN Contact Registry:
The RDCRN Contact Registry is a patient contact registry sponsored by the National Institutes of Health (NIH). The RDCRN Contact Registry collects and stores the contact information of people who want to participate in RDCRN-sponsored research or learn more about RDCRN research. It connects patients with researchers in order to advance rare diseases research. Future research may produce helpful information for those with rare diseases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pourri**
Pourri:
Pourri (also written ~Pourri) is a company that devises and sells fragrant sprays for toilets. It is the maker of Poo-Pourri. The sprays are made of essential oils and other natural compounds, which coat the surface of the water and, the manufacturer claims, hold in bad odors. The name of the company is a pun on potpourri.
History:
In 2007, after nine months of trying sprays, Suzy Batiz founded the company and spent $25,000 of her own money to begin making Poo-Pourri. The company was advertised by word of mouth for the first six years. In 2013, the company's advertisement video, Girls Don't Poop, starring Bethany Woodruff, made its debut and was seen more than 31 million times. In April 2014, Poo-Pourri was available at 9,000 stores, including CVS, Bed Bath & Beyond, Ulta, ACE and True Value. By January 2016, the company had sold over 17 million bottles of Poo-Pourri, and that October, their new online video team, 'Number 2 Productions', sent out the video How to Poop at a Party. In 2016, Poo-Pourri was valued at $300 million. By May 2019, the company had sold over 60 million bottles of Poo-Pourri and its videos had over 350 million combined views. In 2022, the company rebranded from Poo~Pourri to ~Pourri and branched out into other categories of odor elimination.
Recognition:
USA Today called Girls Don't Poop one of 2013's worst advertisements. In May 2014, at the 18th Annual Webby Awards ceremony, Poo-Pourri won the People's Voice Award in the Consumer Goods category and an Edison Award in the Innovative Services category. That year, Giuliana Rancic picked Poo-Pourri as her 'top gift for coworkers' for E! News' Holiday Gift Guide. In 2015 and 2016, the Inc. 5000 list included Poo-Pourri, and in 2016, the spray was one of the magazine's 27 Coolest Products. In November 2016, Kathie Lee Gifford listed Poo-Pourri as one of her 'favorite things' on the Today Show. In 2019, Batiz was recognized on Forbes' Richest Self-Made Women list. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Super Shot**
Super Shot:
A mini-basketball game found in many arcades, Super Shot consists of a basket, which usually moves back and forth, and four to five basketballs to shoot. There are four different modes, which affect the rate at which the basket moves. Each shot is worth either two or three points, depending on the distance between the shooter and the basket. There is also a time limit for each game, usually between 40 and 60 seconds.
Super Shot:
Super Shot typically costs anywhere from 25 cents to $2.00 per game, depending on the arcade. It comes in a variety of different colors, such as purple-orange, black-red, blue-yellow, and CEC-customized. It also has customizable options such as BONUS TIME and BONUS TICKETS.
History:
Hoop Shot, a basketball skill-toss electro-mechanical game manufactured by Doyle & Associates, was released for arcades in 1985. It became a hit, inspiring numerous imitators within a year. Arcade basketball games of this type became popular in the late 1980s. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Diseases of despair**
Diseases of despair:
A disease of despair is one of three classes of behavior-related medical conditions that increase in groups of people who experience despair due to a sense that their long-term social and economic prospects are bleak. The three disease types are drug overdose (including alcohol overdose), suicide, and alcoholic liver disease.
Diseases of despair:
Diseases of despair, and the resulting deaths of despair, are high in the Appalachia region of the United States, especially in Pennsylvania, West Virginia, and Delaware. The prevalence increased markedly during the first decades of the 21st century, especially among middle-aged and older working class White Americans starting in 2010, followed by an increase in mortality for Hispanic Americans in 2011 and African Americans in 2014. It gained media attention because of its connection to the opioid epidemic. For 2018, some 158,000 U.S. citizens died from these causes, compared to 65,000 in 1995. Deaths of despair have increased sharply during the COVID-19 pandemic and the associated recession, with a 10% to 60% increase above pre-pandemic levels. Life expectancy in the United States declined further to 76.4 years in 2021, with the main drivers being the COVID-19 pandemic along with deaths from drug overdoses, suicides and liver disease.
Definitions:
The concept of despair in any form can not only affect an individual person, but can also arise in and spread through social communities. There are four basic types of despair. Cognitive despair denotes thoughts connected to defeat, guilt, hopelessness and pessimism. It may make a person perceive other people's actions as hostile and discount the value of long-term outcomes. Emotional despair refers to feelings of sadness, irritability, loneliness and apathy and may partly impede the process of creating and nourishing interpersonal relationships. The term behavioural despair describes risky, reckless and self-destructive acts reflecting little to no consideration of the future (such as self-harm, reckless driving, drug use, risky sexual behaviours and others). Lastly, biological despair relates to dysfunction or dysregulation of the body's stress reactive system and/or to hormonal instability. Being under the influence of despair for an extended amount of time may lead to the development of one or more of the diseases of despair, such as suicidal thoughts or drug and alcohol abuse. If an individual has a disease of despair, there is an increased risk of death of despair, usually classified as a suicide, drug or alcohol overdose, or liver failure.
Risk factors:
Unstable mental health, depression, suicidal thoughts and addiction to drugs and alcohol affect people of every age, ethnicity, and demographic group in every country in the world. However, data show that in recent years these problems are on the rise, especially among US White non-Hispanic men and women in midlife. Since the beginning of the millennium, this particular group is the only one in the world to have experienced a continual increase in mortality and morbidity, while US Black non-Hispanics and US Hispanics, as well as all subgroups of populations in other rich countries (such as countries of the EU, Japan, Australia and others), show the exact opposite trend. Moreover, men and women with no more than a high school education and those living in rural areas are more affected by this phenomenon than their peers who are college-educated and live in urban areas.
Recent trends in numbers:
Mortality and morbidity rates in the United States have been decreasing for decades. Between 1970 and 2013, mortality rates fell by 44%, and morbidity was on a decline even among the elderly. After 1998, mortality rates in other rich countries declined by 2% a year; midlife mortality fell by more than 200 per 100,000 for Black non-Hispanics and by more than 60 per 100,000 for Hispanics during the 1998–2013 period. The AIDS epidemic was brought under control – in 2018, only 37,968 people received an HIV diagnosis in the USA and its 6 dependent areas, an overall 7% decrease compared with 2014. Cardiovascular disease and cancer, the two biggest killers in middle age, are also on a decline, even though the growing problem of obesity is not yet under control. Despite all of these encouraging numbers, the White non-Hispanic population exhibits an increase in premature deaths, especially those caused by suicide, drug overdose and alcoholic liver disease.
Recent trends in numbers:
There are two main factors driving this trend. Firstly, the data show the US White non-Hispanic population significantly differs from populations in other countries. For example, in 2015, drug, alcohol and suicide mortality was more than two times higher among US White non-Hispanics in comparison to people from the United Kingdom, Sweden or Australia. In comparison to US Black non-Hispanics, the mortality and morbidity rates are still lower; nevertheless, the gap between these groups is narrowing quickly and, for example, for people aged 30–34 the difference between these two ethnicities has almost completely diminished. Also, White non-Hispanics aged 50–54 with no more than a high school diploma reached almost 1,000 premature deaths per 100,000 in the year 2015, whereas the average for all White non-Hispanics regardless of their education was only around 500 deaths per 100,000. Therefore, the factor of education probably negatively correlates with the probability of developing a disease of despair (that is, higher education correlates with a lower probability of developing a disease of despair). Secondly, the excess premature deaths are, as stated above, caused primarily by suicide, poisonings or drug overdoses and other causes connected especially to alcoholism such as chronic liver diseases. The proportion of these causes of death (in comparison to deaths caused by assaults, cancer, cardiovascular diseases, HIV and motor vehicle crashes) in the population of White non-Hispanic people aged 25–44 is increased by 210%. It is also worth noting that the highest rates are found among people living in rural areas. For example, during the years 1999–2015, the rate of deaths of despair increased twice as much as the rate of other causes of death in the population of White non-Hispanics aged 30–44 living in rural areas. In total, death rates in rural subpopulations for all ethnicities increased among those aged 25–64 years by 6%. As a result of these findings, it is possible to assume that living in rural areas is also connected to the diseases and deaths of despair. Suicides reached record levels in the United States in 2022, with 49,369 suicide deaths. Since 2011, roughly 540,000 people have died by suicide in the United States.
COVID-19 pandemic:
The COVID-19 pandemic is the most severe global pandemic since the 1918 Spanish flu outbreak, bringing lockdowns, social and economic disruption and a sharp rise in unemployment. Preliminary studies indicate an aggravation of depression, anxiety, drug overdoses, and suicidal ideation following the beginning of the pandemic. Though certain health effects such as stress can be concurrent with the crisis, other biopsychosocial risk factors such as job loss, housing precarity, and food insecurity can manifest over time. This range of social determinants, commonly experienced during an economic downturn, can induce and aggravate a sense of despair. Loneliness, which is associated with despair, was also aggravated by the social isolation practices put in place during the pandemic and may contribute to a rise in diseases of despair. A preliminary review of 70 published studies conducted in 17 countries concerning the potential impacts of COVID-19 on deaths of despair indicates that women, ethnic minorities and younger age groups may have suffered disproportionately more than other groups.
COVID-19 pandemic:
Drug overdoses Preliminary indications in Canada and the United States suggest that the trajectory of drug overdose-related deaths was exacerbated. In Canada, overdose-related deaths had stabilized prior to the onset of COVID-19 but increased afterwards; in the United States, they were increasing before the pandemic and accelerated after its onset. The opioid overdose crisis worsened from 2017 to 2020 in Wisconsin. Given the difficulty individuals faced in ensuring their health and safety amid a dangerous and widespread pandemic, and the environmental, social, economic, and mental challenges it produced, it is unsurprising that drug problems around the globe were aggravated. In Milwaukee County, Wisconsin, the pandemic was found to have markedly escalated the number of monthly overdose deaths due to opioids. The worst of these impacts occurred primarily in poor, urban neighborhoods, especially affecting Black and Hispanic communities, though even wealthy, predominantly White suburban communities also saw an increase in overdose deaths.
Causes:
The factors that exacerbate diseases of despair are not fully known, but they are generally recognized as including worsening economic inequality and a feeling of hopelessness about personal financial success. This can take many forms and appear in different situations. For example, people feel inadequate and disadvantaged when products marketed to them as important repeatedly prove unaffordable. The increase in rates of mental distress and diseases of despair has been attributed to flaws in contemporary capitalism and to policies associated with the ideology of neoliberalism, which seeks to release markets from all restrictions and to reduce or eliminate government assistance programs. The overall loss of employment in affected geographic regions, stagnant wages and deteriorating working conditions, along with the decline of labor unions and the welfare state, are widely hypothesized factors. As such, some scholars have characterized deaths of despair driven by austerity policies and privatization as "social murder".

The changes in the labor market also affect social connections that might otherwise provide protection: people at risk for this problem are less likely to get married, more likely to get divorced, and more likely to experience social isolation. However, some experts claim the correlation between income and mortality/morbidity is only coincidental and may not be associated with deaths for all groups. Anne Case and Angus Deaton argue that "after 1999, blacks with a college education experienced even more severe percentage declines in income than did whites in the same education group. Yet black mortality rates have fallen steadily, at rates between 2 and 3 percent per year for all age groups." Many examples from Europe likewise show that decreased incomes and/or increased unemployment do not, in general, correlate with increased mortality rates. Case and Deaton argue that the ultimate cause is the sense that life is meaningless, unsatisfying, or unfulfilling, rather than strictly the basic economic security that makes these higher-order feelings more likely. In a later work they assert that in the United States, much more so than in peer countries such as those of Western Europe, globalization and technological advancement dramatically shifted political power towards capital and away from labor by empowering corporations and weakening labor unions. As such, other rich countries, while facing challenges associated with globalization and technological change, did not experience a "long-term stagnation of wages, nor an epidemic of deaths of despair."

Recent data show that diseases of despair pose a complex threat to modern society and are not correlated only with an individual's economic standing. Social connections, level of education, place of residence, medical condition, mental health, working opportunities, and subjective perception of one's own future all play a role in determining whether an individual will develop a disease of despair. Additionally, younger generations are increasingly influenced by social media and other modern technologies, which may have unexpected and unfavourable effects on their lives; according to a study from 2016, the use of social media "was significantly associated with increased depression."
Contrasted with diseases of poverty:
Diseases of despair differ from diseases of poverty because poverty itself is not the central factor. Groups of impoverished people with a sense that their lives or their children's lives will improve are not affected as much by diseases of despair. Instead, this affects people who have little reason to believe that the future will be better. As a result, this problem is distributed unevenly, for example by affecting working-class people in the United States more than working-class people in Europe, even when the European economy was weaker. It also affects White people more than racially disadvantaged groups, possibly because working-class White people are more likely to believe that they are not doing better than their parents did, while non-White people in similar economic situations are more likely to believe that they are better off than their parents.
Effects:
Starting in 1998, a rise in deaths of despair resulted in an unexpected increase in the number of middle-aged White Americans dying (the age-specific mortality rate). By 2014, the increasing number of deaths of despair had resulted in a drop in overall life expectancy. Anne Case and Angus Deaton propose that the increase in mid-life mortality is the result of cumulative disadvantages that have accrued over decades, and that solving it will require patience and perseverance for many years rather than a quick fix. The number of deaths of despair in the United States was estimated at 150,000 per year in 2017.

Even though the main cause of diseases of despair may not be purely economic, the consequences of this phenomenon are expensive. According to a report from 2016, alcohol misuse, misuse of illegal drugs and non-prescribed medications, treatment of associated disorders, and lost productivity cost the U.S. more than $400 billion every year. About 40 percent of those costs were paid by government, implying a huge cost of alcohol and drug misuse to taxpayers. Another study puts the costs even higher, at around $1.5 trillion in economic loss, lost productivity, and societal harm.
Terminology:
The phrase diseases of despair has been criticized for medicalizing problems that are primarily social and economic, and for underplaying the role of specific drugs, such as OxyContin, in increasing deaths. While the disease model of addiction has a strong body of empirical support, there is weak evidence for biological markers of suicidal thoughts and behaviors and no evidence that suicide fits a disease model. The use of the phrase diseases of despair to describe suicide in medical literature is more reflective of the medical model than suicidal thoughts and behaviors.
**Pick-up (filmmaking)**
Pick-up (filmmaking):
In filmmaking, a pick-up is a small, relatively minor shot filmed or recorded after the fact to augment footage already shot. When entire scenes are redone, it is referred to as a re-shoot or additional photography.
On set:
During principal photography, the director may choose to ask for another take (meaning that every movable object and person in the scene returns to their starting positions and the entire shot is recorded all over again), or may ask for a pick-up shot of only the faulty portion of an otherwise satisfactory take. In the latter situation, the script supervisor is expected to record in their notes that a pick-up shot was called for (so the film editor can understand and correctly edit the resulting footage) and also to help prompt or "cue" the relevant actor by reading the last line before that actor's line. It is increasingly common for a director not to immediately call "cut" after a blooper, but instead to leave the camera rolling and call for a pick-up, which makes pick-up shots an exception to the normal rule that a script supervisor does not cue actors while the camera is rolling. When a pick-up shot is created in this manner to be edited into the middle of an existing shot, the script supervisor must ensure the director also creates "bridge shots" to bridge what would otherwise look like jarring jump cuts from the master shot to the pick-up shot and back. These can be close-ups, cutaways, or shots of the same scene from different angles.
Later editing:
Pick-up shots and re-shoots can also occur after principal photography is complete—after continuity, logic, or quality issues are identified during the film editing process. In other words, they can occur months after the sets have been struck, the costumes and props have been stored, and all the cast and most of the crew have moved on to other projects. In deciding whether to proceed, the director and producer must carefully balance the substantial expense of reuniting key cast and crew members on set against whether pick-ups or re-shoots are absolutely necessary to fix plot holes (or worse) in the final cut. Pick-ups and reshoots themselves can pose significant continuity issues. For example, if the original costumers and makeup artists are unavailable to participate (and if rented costumes and wigs were returned and original makeup supplies were entirely used up), then those crew members' replacements must study their predecessors' work and precisely match whatever was used during the original film shoot.
**Paula method**
Paula method:
The Paula Method is a proposed alternative to Kegel exercises. The idea is that by exercising the ring muscles of the face (the orbicularis oculi around the eye and the orbicularis oris around the mouth), the contractions would also strengthen the sphincter muscles of the pelvic floor. Evidence to support its use is lacking.
**Fowler's syndrome**
Fowler's syndrome:
Fowler's syndrome (urethral sphincter relaxation disorder) is a rare disorder in which the urethral sphincter fails to relax to allow urine to be passed normally; it occurs in younger women, in whom abnormal electromyographic activity is detected.
Presentation:
Urinary retention is a relatively uncommon presentation in young women. Fowler's syndrome primarily presents in women between menarche and menopause, with a peak age of onset of 26 years. It is seen in about one third of women who experience urinary retention. The predominant complaint is the inability to urinate for a day or more, with no urge to urinate despite a large bladder volume of more than 1 litre (normally a person feels the need to urinate at a bladder volume of 400–500 ml). There is usually progressively increasing lower abdominal pain. The condition can be associated with polycystic ovary syndrome and endometriosis. Alternatively, women with Fowler's syndrome can present with impaired voiding, with or without incomplete bladder emptying, and may have increased urinary frequency, but rarely become incontinent. Women with Fowler's syndrome often find catheterisation extremely painful. Fowler's syndrome can be a disabling condition: 50% of affected women suffer from unexplained chronic pain, including chronic abdominopelvic, back, leg, or widespread pain, and the condition can have lifelong, debilitating effects on quality of life.
Cause:
The exact cause of Fowler's syndrome is not yet known. It may occur spontaneously, or following an event such as a surgical procedure or childbirth; use of opiates can also trigger urinary retention. There is not usually any prior history of urological abnormalities in childhood. One hypothesis is that it is due to an abnormality in the muscle membrane, possibly a hormonally dependent channelopathy, causing excessive excitability of the external urethral sphincter which prevents the adequate relaxation of the muscle necessary for voiding. Another hypothesis is that Fowler's syndrome is due to an up-regulation of spinal cord enkephalins and that opiates may compound the functional abnormalities. It has also been hypothesised that there are both local pelvic floor and central neurological causes.
Diagnosis:
Diagnosis is based on urodynamic testing, including cystometry and urethral pressure profilometry.
Diagnosis:
Women with Fowler's syndrome are often found to have an abnormally elevated urethral pressure profile and increased urethral sphincter volume. The diagnosis is made by electromyography (EMG) of the external striated urethral sphincter: women with Fowler's syndrome characteristically show abnormal electromyography of the urethral sphincter. The usual findings are complex repetitive discharges, with or without deceleration (decelerating bursts), suggesting an impairment in sphincter muscle relaxation.
Treatment:
Sacral neuromodulation is the only treatment that has been found to restore voiding in women with Fowler's syndrome. It delivers an electric current to the neural reflexes associated with lower urinary tract function via stimulation of the S3 spinal nerve root. Although the success rate is about 70%, there can be complications, and the treatment has a relatively high re-intervention rate. Sacral neuromodulation is thought to work because the sensory parts of the brain (the periaqueductal grey) that receive sensory signals from the lower urinary tract become activated in women with Fowler's syndrome when the device is switched on, the neuromodulation overriding the negative feedback from the sacral nerves. Another treatment option is sphincter injection of botulinum toxin.
Treatment:
Catheterisation. Women with Fowler's syndrome may report difficulties in performing self-catheterisation; an indwelling catheter such as a suprapubic catheter may therefore be required.
History:
This disease was first described by Fowler et al. in 1985.
**Galton's problem**
Galton's problem:
Galton's problem, named after Sir Francis Galton, is the problem of drawing inferences from cross-cultural data, due to the statistical phenomenon now called autocorrelation. The problem is now recognized as a general one that applies to all nonexperimental studies and to experimental design as well. It is most simply described as the problem of external dependencies in making statistical estimates when the elements sampled are not statistically independent.
Galton's problem:
Asking two people in the same household whether they watch TV, for example, does not give you statistically independent answers. The sample size, n, for independent observations in this case is one, not two. Once proper adjustments are made that deal with external dependencies, then the axioms of probability theory concerning statistical independence will apply. These axioms are important for deriving measures of variance, for example, or tests of statistical significance.
Origin:
In 1888, Galton was present when Sir Edward Tylor presented a paper at the Royal Anthropological Institute. Tylor had compiled information on institutions of marriage and descent for 350 cultures and examined the associations between these institutions and measures of societal complexity. Tylor interpreted his results as indications of a general evolutionary sequence, in which institutions change focus from the maternal line to the paternal line as societies become increasingly complex. Galton disagreed, pointing out that similarity between cultures could be due to borrowing, to common descent, or to evolutionary development; he maintained that without controlling for borrowing and common descent one cannot make valid inferences regarding evolutionary development. Galton's critique has become the eponymous Galton's problem, as named by Raoul Naroll, who proposed the first statistical solutions.
Origin:
By the early 20th century unilineal evolutionism was abandoned and along with it the drawing of direct inferences from correlations to evolutionary sequences. Galton's criticisms proved equally valid, however, for inferring functional relations from correlations. The problem of autocorrelation remained.
Solutions:
Statistician William S. Gosset in 1914 developed methods of eliminating spurious correlation due to the way position in time or space affects similarities. Today's election polls have a similar problem: the closer the poll to the election, the fewer individuals make up their minds independently, and the greater the unreliability of the polling results, especially the margin of error or confidence limits. The effective n of independent cases in the sample drops as the election nears, and statistical significance falls with lower effective sample size.
Solutions:
The problem arises in sample surveys when sociologists want to reduce travel time for interviews, and so divide their population into local clusters, sample the clusters randomly, then sample again within the clusters. If they interview n people in clusters of size m, the effective sample size (efs) has a lower limit of 1 + (n − 1)/m if everyone in each cluster gives identical answers. When there are only partial similarities within clusters, the correction is milder: a standard formula divides n by the design effect 1 + d(m − 1), where d is the intraclass correlation for the statistic in question. In general, estimation of the appropriate efs depends on the statistic estimated, such as a mean, chi-square, correlation, or regression coefficient, and its variance.
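A minimal sketch of this adjustment, assuming the standard design-effect form of the formula (with m for cluster size, as above); the function name and the numbers in the example are illustrative:

```python
def effective_sample_size(n: int, m: float, d: float) -> float:
    """Effective sample size for n interviews taken in clusters of size m,
    given an intraclass correlation d (0 = fully independent answers,
    1 = everyone in a cluster gives the same answer)."""
    design_effect = 1 + d * (m - 1)
    return n / design_effect

# 1,000 respondents interviewed in clusters of 10:
print(effective_sample_size(1000, 10, 0.0))  # 1000.0 -- independent answers
print(effective_sample_size(1000, 10, 0.5))  # ~181.8 -- partial similarity
print(effective_sample_size(1000, 10, 1.0))  # 100.0  -- one answer per cluster
```

With d = 1 the result reproduces the lower limit described above (roughly one effective case per cluster); with d = 0 the full n is retained.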
Solutions:
For cross-cultural studies, Murdock and White estimated the size of patches of similarities in their sample of 186 societies. The four variables they tested – language, economy, political integration, and descent – had patches of similarities that varied from size three to size ten. A very crude rule of thumb is to divide n by the square root of the similarity-patch size, giving effective sample sizes of 107 for patches of size three and 58 for patches of size ten. Again, statistical significance falls with lower effective sample size.
Solutions:
In modern analysis, spatial lags have been modelled in order to estimate the degree of globalization's influence on modern societies. Spatial dependency or autocorrelation is a fundamental concept in geography. Methods developed by geographers that measure and control for spatial autocorrelation do far more than reduce the effective n for tests of significance of a correlation. One example is the complicated hypothesis that "the presence of gambling in a society is directly proportional to the presence of a commercial money and to the presence of considerable socioeconomic differences and is inversely related to whether or not the society is a nomadic herding society." Tests of this hypothesis in a sample of 60 societies failed to reject the null hypothesis. Autocorrelation analysis, however, showed a significant effect of socioeconomic differences. How prevalent is autocorrelation among the variables studied in cross-cultural research? A test by Anthon Eff on 1700 variables in the cumulative database for the Standard Cross-Cultural Sample, published in World Cultures, measured Moran's I for spatial autocorrelation (distance), linguistic autocorrelation (common descent), and autocorrelation in cultural complexity (mainline evolution). The results suggest that "it would be prudent to test for spatial and phylogenetic autocorrelation when conducting regression analyses with the Standard Cross-Cultural Sample." The use of autocorrelation tests in exploratory data analysis is illustrated by evaluating all variables in a given study for nonindependence of cases in terms of distance, language, and cultural complexity. The methods for estimating these autocorrelation effects are then explained and illustrated for ordinary least squares regression, again using the Moran's I significance measure of autocorrelation.
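Moran's I itself is straightforward to compute. The sketch below is illustrative only: the weight matrix and trait values are toy data, not drawn from the Standard Cross-Cultural Sample.

```python
import numpy as np

def morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """Moran's I for trait values x and a relationship matrix w, where
    w[i, j] weights how related cases i and j are (w[i, i] = 0)."""
    n = len(x)
    z = x - x.mean()          # deviations from the mean
    s0 = w.sum()              # total weight
    return (n / s0) * (z @ w @ z) / (z @ z)

# Toy example: four societies, the first two closely related,
# the last two closely related.
w = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([10.0, 9.5, 2.0, 2.5])  # a trait that tracks the two clusters
print(morans_i(x, w))                # ~0.99: strong positive autocorrelation
```

Values near +1 indicate that related cases (whether by distance, language, or cultural complexity) carry similar trait values: precisely the nonindependence Galton pointed to.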
Solutions:
When autocorrelation is present, it can often be removed, yielding unbiased estimates of regression coefficients and their variances, by constructing a respecified dependent variable that is "lagged" by weightings on the dependent variable at other locations, where the weights are the degree of relationship. This lagged dependent variable is endogenous, so estimation requires either two-stage least squares or maximum likelihood methods.
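A minimal sketch of that procedure, assuming a row-normalized weight matrix W and instrumenting the endogenous lag W @ y with the spatial lags of the predictors (W @ X), one common two-stage least squares approach; none of these names come from a particular library:

```python
import numpy as np

def spatial_lag_2sls(y: np.ndarray, X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Two-stage least squares for y = X @ beta + rho * (W @ y) + e.
    Returns the stacked estimates [beta..., rho]."""
    Wy = W @ y                                 # the lagged dependent variable
    Z = np.column_stack([X, Wy])               # regressors, lag included
    instruments = np.column_stack([X, W @ X])  # X plus spatial lags of X
    # First stage: project the regressors onto the instrument space.
    P = instruments @ np.linalg.pinv(instruments)
    Z_hat = P @ Z
    # Second stage: ordinary least squares on the projected regressors.
    coef, *_ = np.linalg.lstsq(Z_hat, y, rcond=None)
    return coef
```

Plain OLS on Z would be biased because W @ y is correlated with the error term; projecting onto the instruments removes that endogeneity under the usual identification assumptions.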
Resources:
A public server at http://SocSciCompute.ss.uci.edu (Archived 2016-02-20 at the Wayback Machine) offers ethnographic data, variables and tools for inference, with R scripts by Dow (2007) and Eff and Dow (2009), in an NSF-supported Galaxy framework (http://getgalaxy.org, https://www.xsede.org). It allows instructors, students and researchers to do "CoSSci Galaxy" cross-cultural research modeling with controls for Galton's problem, using Standard Cross-Cultural Sample variables documented at https://web.archive.org/web/20160402201432/https://dl.dropboxusercontent.com/u/9256203/SCCScodebook.txt.
Opportunities:
In anthropology, where Tylor's problem was first recognized by the statistician Galton in 1888, it is still not widely recognized that there are standard statistical adjustments for the problem of patches of similarity in observed cases, and opportunities for new discoveries using autocorrelation methods. Some cross-cultural researchers (see, e.g., Korotayev and de Munck 2003) have begun to argue that evidence of diffusion, historical origin, and other sources of similarity among related societies or individuals should be renamed Galton's Opportunity and Galton's Asset rather than Galton's Problem. Researchers now use longitudinal, cross-cultural, and regional variation analysis routinely to analyze all the competing hypotheses: functional relationships, diffusion, common historical origin, multilineal evolution, co-adaptation with environment, and complex social interaction dynamics.
Controversies:
Within anthropology, Galton's problem is often given as a cause to reject comparative studies altogether. Since the problem is a general one, common to the sciences and statistical inference generally, this particular criticism of cross-cultural or comparative studies – and there are many – is one that, logically speaking, amounts to a rejection of science and statistics altogether. Any data collected and analyzed by ethnographers, for example, is equally subject to Galton's problem, understood in its most general sense. A critique of the anticomparative critique is not limited to statistical comparison since it would apply as well to the analysis of text. That is, the analysis and use of text in argumentation is subject to critique as to the evidential basis of inference. Reliance purely on rhetoric is no protection against critique as to the validity of argument and its evidentiary basis.
Controversies:
There is little doubt, however, that the community of cross-cultural researchers has been remiss in ignoring Galton's problem. Expert investigation of this question shows results that "strongly suggest that the extensive reporting of naïve chi-square independence tests using cross-cultural data sets over the past several decades has led to incorrect rejection of null hypotheses at levels much higher than the expected 5% rate." The investigator concludes that "incorrect theories that have been 'saved' by naïve chi-square tests with comparative data may yet be more rigorously tested another day." Once again, the adjusted variance of a cluster sample is the simple-random-sampling variance multiplied by 1 + d(k − 1), where k is the average size of a cluster, and a more complicated correction is given for the variance of contingency table correlations with r rows and c columns. Since this critique was published in 1993, and others like it, more authors have begun to adopt corrections for Galton's problem, but the majority in the cross-cultural field have not. Consequently, a large proportion of published results that rely on naive significance tests and adopt a P < 0.05 rather than a P < 0.005 standard are likely to be in error, because they are more susceptible to type I error: rejecting the null hypothesis when it is true.
Controversies:
Some cross-cultural researchers reject the seriousness of Galton's problem because, they argue, estimates of correlations and means may be unbiased even if autocorrelation, weak or strong, is present. Without investigating autocorrelation, however, they may still mis-estimate statistics dealing with relationships among variables. First, in regression analysis, examining the patterns of autocorrelated residuals may give important clues to third factors that affect the relationships among variables but have not been included in the regression model. Second, if there are clusters of similar and related societies in the sample, measures of variance will be underestimated, leading to spurious statistical conclusions, for example exaggerating the statistical significance of correlations. Third, the underestimation of variance makes it difficult to test for replication of results from two different samples, as the results will more often be rejected as similar.
**Mountain chain**
Mountain chain:
A mountain chain is a row of high mountain summits, a linear sequence of interconnected or related mountains, or a contiguous ridge of mountains within a larger mountain range. The term is also used for elongated fold mountains with several parallel chains ("chain mountains").
While in mountain ranges the term mountain chain is common, in hill ranges a sequence of hills tends to be referred to as a ridge or hill chain.
Mountain chain:
Elongated mountain chains occur most frequently in the orogeny of fold mountains (folded by lateral pressure) and nappe belts (where a sheet-like body of rock has been pushed over another rock mass). Other types of range, such as horst ranges, fault-block mountains or truncated uplands, rarely form parallel mountain chains. However, if a truncated upland is eroded into a high tableland, the incision of valleys can lead to the formation of mountain or hill chains.
Formation of parallel mountain chains:
The chain-like arrangement of summits and the formation of long, jagged mountain crests – known in Spanish as sierras ("saws") – is a consequence of their collective formation by mountain-building forces. The often linear structure is linked to the direction of these thrust forces and the resulting folding, which in turn relates to the fault lines in the upper part of the earth's crust that run between the individual mountain chains. In these fault zones the rock, sometimes pulverised, is easily eroded, so that large river valleys are carved out. These so-called longitudinal valleys reinforce the trend, during the early mountain-building phase, towards the formation of parallel chains of mountains.
Formation of parallel mountain chains:
The tendency, especially of fold mountains (e.g. the Cordilleras), to produce roughly parallel chains is due to their rock structure and the propulsive forces of plate tectonics. The uplifted rock masses are either magmatic plutonic rocks, easily shaped because of their higher temperature, or sediments and metamorphic rocks with a less robust structure that are deposited in the synclines. As a result of orogenic movements, strata of folded rock are formed that are crumpled out of their original horizontal plane and thrust against one another. The longitudinal stretching of the folds takes place at right angles to the direction of the lateral thrusting. The overthrust folds of a nappe belt (e.g. the Central Alps) are formed in a similar way.
Formation of parallel mountain chains:
Although the fold mountains, chain mountains and nappe belts around the world were formed at different times in the earth's history, they are all morphologically similar during their initial mountain-building phases. Harder rock forms continuous arêtes or ridges that follow the strike of the beds and folds. The mountain chains or ridges therefore run approximately parallel to one another, interrupted only by short, usually narrow, transverse valleys, which often form water gaps. Over the course of earth history, erosion by water, ice and wind carried away the highest points of the mountain crests and carved out individual summits or summit chains. Between them, notches were formed that, depending on altitude and rock type, form knife-edged cols or gentler mountain passes and saddles.
Dominant rocks and mountain forms:
Nappe or fold mountains, with their roughly parallel mountain chains, generally have a common geological age, but may consist of various types of rock. For example, in the Central Alps, granitic rocks, gneisses and metamorphic slate are found, while to the north and south, are the Limestone Alps. The Northern Limestone Alps are, in turn, followed by soft flysch mountains and the molasse zone. The type of rock influences the appearance of the mountain ranges very markedly, because erosion leads to very different topography depending on the hardness of the rock and its petrological structure. In addition to height and climate, other factors are the layering of the rock, its gradient and aspect, the types of waterbody and the lines of dislocation. For hard rock massifs, rugged rock faces (e.g. in the Dolomites) and mighty scree slopes are typical. By contrast, flysch or slate forms gentler mountain shapes and kuppen or domed mountaintops, because the rock is not porous, but easily shaped.
**Virtual scientific community**
Virtual scientific community:
A virtual scientific community is a group of people, often researchers and students, who share resources related to a scientific field and whose main medium of communication is the internet. Examples of such communities include the Computational Intelligence and Machine Learning Portal and the Biomedical Informatics Research Network. There are numerous scientific repositories and websites in existence that, while useful, do not meet the definition of a virtual scientific community, such as data and scientific literature repositories and open access journals.
**Accident data recorder**
Accident data recorder:
The accident data recorder (ADR; in German commonly abbreviated UDS, also called an accident (data) writer) is an independent electronic device that records relevant data before, during, and after a traffic accident, and thus resembles a flight recorder.
Accident data recorder:
It can be installed in motor vehicles (cars, trucks, buses, motorcycles, trams, and special vehicles) on a voluntary basis in order to obtain more accurate information about the events in an accident; in some countries, installation is mandatory for certain vehicles. The accident data recorder constantly records various vehicle data (such as speed, direction of travel, longitudinal and transverse acceleration, and the status of the lights, turn signals and brakes) and retains them for some time before they are automatically overwritten.
Accident data recorder:
In the case of an accident – detected as a strong acceleration impulse acting on the vehicle – a certain period of time before and after the event (usually in the two-digit seconds range) remains permanently stored. This makes it much easier to reconstruct events after an accident, so that, if necessary, the question of fault can be clarified.
Accident data recorder:
Many vehicles operated by authorities (such as police or ambulance services) are equipped with one, since collisions during emergency runs with lights and sirens often lead to disputes about compliance with regulations. A side effect is that drivers of equipped vehicles behave more cautiously on the road: according to a survey by the EU Transport Commission, UDS users experienced a 20 to 30 percent decline in traffic accidents. The accident data recorder is also often used by experts or institutions as a measuring device in crash tests.
Accident data recorder:
Installation (including retrofitting) costs about 700 euros and can lead to a discount on some insurance policies. The accident data recorder can be read out by an expert via an interface cable. Older generations of the device have a switch with which the driver can delete the stored data immediately after an accident so as not to incriminate himself in the later question of fault; however, this feature may be disabled, for example for use in company vehicles.
Technology:
Accident data recorders measure accelerations – in two or three spatial directions, depending on the equipment – with micromechanical sensors. Often several sensor systems with different resolutions are used, in order to log both the driving-dynamics processes and the collision dynamics themselves. Higher-class systems also offer a way to measure rotational movements as well as the vehicle speed; the latter can, for example, be calculated from the signal of the vehicle's wheel speed sensor. Higher-class devices can also record any signals available on the vehicle's own CAN bus, as well as a GPS signal for position and speed determination. Depending on the manufacturer, about 20 to 30 s are recorded before and 10 to 15 s after an event.
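The pre/post-event storage described here amounts to a ring buffer with a trigger. A minimal sketch with illustrative window lengths, sample rate, and threshold (real devices use calibrated crash-detection logic rather than a single fixed threshold):

```python
from collections import deque

RATE_HZ = 100           # illustrative sample rate
PRE_S, POST_S = 30, 15  # pre- and post-event windows, as described above
TRIGGER_G = 4.0         # illustrative crash-detection threshold

ring = deque(maxlen=PRE_S * RATE_HZ)  # pre-event ring buffer
event, post_left = None, 0

def on_sample(accel_g: float, sample: dict) -> None:
    """Called once per sensor sample (acceleration, speed, lamp status...)."""
    global event, post_left
    if event is not None and post_left > 0:
        event.append(sample)          # still filling the post-event window
        post_left -= 1
        return
    ring.append(sample)               # normal operation: overwrite the oldest
    if abs(accel_g) >= TRIGGER_G:
        event = list(ring)            # freeze the pre-event history
        post_left = POST_S * RATE_HZ  # then record a little longer
```

Until a trigger arrives, the buffer continuously discards the oldest samples, which is exactly why data older than the pre-event window cannot be recovered after a crash.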
Technology:
As of 2018, essentially two accident data recorders suitable for retrofitting are known in the German-speaking countries. Blacktrack Ltd. offers a low-cost solution, which is mainly used by the insurance industry (e.g. AXA Winterthur in Switzerland). By contrast, the UDS-AT developed by the company consortium Peter Systemtechnik GmbH and Kast GmbH offers extended possibilities for recording and integration into a vehicle. Residual path recording devices (RAG) from Mobatime AG are external devices that rely on existing on-board signals (distance, speed, operating states of status inputs) and store them in a ring buffer covering at least the last 12 km. In contrast to an accident data recorder, they do not have their own measuring sensors.
Evaluation:
Reading out the data of an accident data recorder requires special software.
Evaluation:
The evaluation and interpretation of measurement data from a traffic accident require special knowledge in the areas of vehicle dynamics, accident reconstruction, metrology and, not least, the accident data recording technology itself. For UDS there is a separate appointment area for expert witnesses in Germany. A typical UDS recording of a real accident shows the (unprocessed) data curves: the measured accelerations, the speed and various status channels are plotted over time. It can be seen, for example, that the vehicle's special signals were switched on prior to the collision and that the driver was still applying the brake beforehand.
History:
The accident data recorder as known today was invented by Mannesmann Kienzle GmbH, which applied for a patent in 1992. Development began in Germany in the early 1980s with the two companies MBB and Kienzle, who pursued different concepts. The general model was the black box flight recorder, developed in the early 1950s in Australia by David Warren. As early as 1973, General Motors applied for a patent for a "vehicle crash recorder". Mannesmann Kienzle delivered the first accident data recorder in early 1993.
Motorsport:
In motorsport, accident data recorders (ADR) must be used in various series as specified by the FIA. Starting with the 2015 season, the use of an ADR in the Formula 4 championship became compulsory; in the higher series it had already been mandatory for some time. In addition to the data from the acceleration sensors mounted on the vehicle, the loads on the driver are also measured with an in-ear accelerometer. Because of the higher speeds in formula racing compared to road traffic, the sensors have a measuring range of ±150 g with a resolution of 0.1 g. Two seconds are recorded before an event, and with an event duration of 30 s, 10 events can be stored.
Demarcation:
Both in a flight recorder and in a UDS, the constantly recorded data runs through a ring buffer. However, a flight recorder usually records longer periods of 17 to 25 hours, whereas a UDS permanently saves only a few seconds before and after a triggering event (e.g. a collision).
Demarcation:
The term drive data recorder is generally understood to mean a continuous and permanently available recording of data and signals during the operation of a vehicle, independently of an accident. Such systems are often used in locomotives or trams. Often, however, an electronic logbook is referred to as drive data recorder. Dashcams are sometimes referred to as drive data recorder or video event data recorder (VEDR).
Demarcation:
A so-called event data recorder (EDR) is not an accident data recorder in the sense of an autonomous, more or less vehicle-independent device, since an EDR is usually an additional electronic module in an existing control unit (e.g. the airbag controller) in a car. EDRs rely exclusively on on-board signals, while UDS have their own inertial sensors. Vehicles with airbag systems store accident-relevant data (impact accelerations, belt buckle states, seat positions, trip times) in the internal memory of the triggering electronics; however, the amount of data varies by manufacturer and covers only a few seconds or fractions of a second. NHTSA regulations call for uniform data sets in all systems manufactured from 2010 onwards.
**Bubbler cylinder**
Bubbler cylinder:
A bubbler cylinder is a component of a unit for metal organic chemical vapor deposition (MOCVD). It is a device used to convey an electronic-grade metalorganic compound from a liquid or solid precursor into a usable vapor.
Apparatus:
The container of a bubbler is similar in construction to a gas washing bottle and is used for the protected storage of metalorganic compounds, excluding air (oxygen, moisture). The bubbler has a supply pipe and a sampling tube. The inlet tube ends just above the bottom of the cylinder. Through this tube an inert gas is introduced, which bubbles through a liquid chemical; a solid chemical will sublime. The mixture of the metered inert gas and the vaporized chemical leaves the cylinder for a downstream reaction vessel. The temperature is controlled by a thermostat, so that a defined, constant vapor pressure can be achieved. The supply of the often expensive and sensitive chemical is controlled by the regulated flow of inert gas and the temperature of the bubbler, which sets the vapor pressure of the chemical.
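At saturation, the precursor's partial pressure in the outgoing stream equals its vapor pressure at the bubbler temperature, so the delivered precursor flow follows from the carrier flow and the pressures. A sketch using an Antoine-type vapor pressure expression with constants commonly quoted for trimethylgallium; treat the constants as illustrative and consult vendor data for real process design:

```python
def tmga_vapor_pressure_torr(t_kelvin: float) -> float:
    # Antoine-type fit, log10(P/Torr) = A - B/T; commonly quoted TMGa values.
    return 10 ** (8.07 - 1703.0 / t_kelvin)

def precursor_flow_sccm(carrier_sccm: float, t_kelvin: float,
                        total_torr: float) -> float:
    """Precursor vapor flow assuming the carrier gas leaves fully saturated."""
    p_vap = tmga_vapor_pressure_torr(t_kelvin)
    return carrier_sccm * p_vap / (total_torr - p_vap)

# 100 sccm of carrier gas through a TMGa bubbler held at 0 degC and 760 Torr:
print(precursor_flow_sccm(100.0, 273.15, 760.0))  # roughly 10 sccm of vapor
```

This is why both the thermostat and the carrier-gas flow regulation matter: the temperature fixes the vapor pressure, and the carrier flow then scales the amount of precursor delivered.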
Apparatus:
The tube between the bubbler and the reactor must be kept at a higher temperature than the bubbler, otherwise the precursor would condense in the tube and uncontrolled droplets would be carried into the reaction vessel. If this happens with a solid precursor, it can plug the line.
Application:
For example, during the production of high-brightness light-emitting diodes, gallium or other group III–V elements are epitaxially deposited onto a single-crystal silicon substrate. The gallium is introduced into the MOVPE reactor chamber as a vapor, generated by bubbling an inert carrier gas (such as nitrogen or argon) through a cylinder with a dip tube containing a metalorganic precursor such as trimethylgallium. The inert carrier gas and the metalorganic vapor are then introduced into the MOVPE (or MOCVD) reactor chamber.
**Jejunal arteries**
Jejunal arteries:
The jejunal arteries are four to five branches of the superior mesenteric artery which supply blood to the jejunum. They arise from the left side of the superior mesenteric artery.
**Extensor pollicis brevis muscle**
Extensor pollicis brevis muscle:
In human anatomy, the extensor pollicis brevis is a skeletal muscle on the dorsal side of the forearm. It lies on the medial side of, and is closely connected with, the abductor pollicis longus. The extensor pollicis brevis (EPB) belongs to the deep group of the posterior fascial compartment of the forearm. It forms part of the lateral border of the anatomical snuffbox.
Structure:
The extensor pollicis brevis arises from the ulna distal to the abductor pollicis longus, from the interosseous membrane, and from the dorsal surface of the radius. Its direction is similar to that of the abductor pollicis longus, its tendon passing through the same groove on the lateral side of the lower end of the radius, to be inserted into the base of the first phalanx of the thumb.
Structure:
Variation: absence of the muscle; fusion of its tendon with that of the extensor pollicis longus or abductor pollicis longus muscle.
Function:
In a close relationship to the abductor pollicis longus, the extensor pollicis brevis both extends and abducts the thumb at the carpometacarpal and metacarpophalangeal joints.
**Spelling in Gwoyeu Romatzyh**
Spelling in Gwoyeu Romatzyh:
The spelling of Gwoyeu Romatzyh (GR) can be divided into its treatment of initials, finals and tones. GR uses contrasting unvoiced/voiced pairs of consonants to represent aspirated and unaspirated initials in Chinese: for example b and p represent IPA [p] and [pʰ]. The letters j, ch and sh represent two different series of initials: the alveolo-palatal and the retroflex sounds. Although these spellings create no ambiguity in practice, readers more familiar with Pinyin should pay particular attention to them: GR ju, for example, corresponds to Pinyin zhu, not ju (which is spelled jiu in GR).
Spelling in Gwoyeu Romatzyh:
Many of the finals in GR are similar to those used in other romanizations. Distinctive features of GR include the use of iu for the close front rounded vowel spelled ü or simply u in Pinyin. Final -y represents certain allophones of i: GR shy and sy correspond to Pinyin shi and si respectively.
Spelling in Gwoyeu Romatzyh:
The most striking feature of GR is its treatment of tones. The first tone is represented by the basic form of each syllable, the spelling being modified according to precise but complex rules for the other three tones. For example the syllable spelled ai (first tone) becomes air, ae and ay in the other tones. A neutral (unstressed) tone can optionally be indicated by preceding it with a dot or full stop: for example perng.yeou "friend".
Spelling in Gwoyeu Romatzyh:
Rhotacization, a common feature of Mandarin (especially Beijing) Chinese, is marked in GR by the suffix -l. Owing to the rather complex orthographical details, a given rhotacized form may correspond to more than one non-rhotacized syllable: for example, jiel can mean either "today" (from jin) or "chick" (from ji).
A number of frequently-occurring morphemes have abbreviated spellings in GR. The most common of these, followed by their Pinyin equivalents, are: -g (-ge), -j (-zhe), -m (-me), sh (shi) and -tz (-zi).
Basic forms:
GR, like Pinyin, uses contrasting unvoiced/voiced pairs of consonants to represent aspirated and unaspirated sounds in Chinese. For example b and p represent IPA [p] and [pʰ] (p and p' in Wade-Giles). Another feature of GR surviving in Pinyin is the representation of words (usually of two syllables) as units: e.g. Gwoyeu rather than the Wade-Giles Kuo2-yü3.
The basic features of GR spelling are shown in the following tables of initials and finals, the latter referring to the basic T1 forms. Many of the spelling features are the same as in Pinyin; differences are highlighted in the tables and discussed in detail after the second table. The rules of tonal spelling follow in a separate section.
In the tables Pinyin spellings are given only where they differ from GR, in which case they appear in (parentheses). The tables also give the pronunciation in [brackets].
[Tables of initials and finals omitted here. Key: in the initials table, the alveolo-palatal consonants differ from Pinyin while the retroflex consonants coincide with it; in the finals table, spellings that differ from Pinyin are highlighted. The tables compare GR basic (T1) spellings with the spelling conventions of Pinyin; a separate table after the tonal rules compares spellings using all four tones.]
Basic forms:
Alveolar and retroflex series The letter j and the digraphs ch and sh represent two different series of sounds. When followed by i they correspond to the alveolo-palatal sounds (Pinyin j, q, and x); otherwise they correspond to the retroflex sounds (Pinyin zh, ch, and sh). In practice this feature creates no ambiguity, because the two series of consonants are in complementary distribution. Nevertheless it does make the correspondence between GR and Pinyin spellings difficult to follow. In some cases they agree (chu is the same syllable in both systems); but in other cases they differ—sometimes confusingly so (for example, GR ju, jiu and jiou correspond to Pinyin zhu, ju and jiu respectively).
Basic forms:
This potential for confusion can be seen graphically in the table of initials, where the bold letters j, ch and sh cut across the highlighted division between alveolo-palatal and retroflex.
Basic forms:
Other differences from Pinyin GR also differs from Pinyin in its transcription of vowels and semivowels: GR uses iu for the close front rounded vowel (IPA y) spelled ü or in many cases simply u in Pinyin. (The contracted Pinyin iu is written iou in GR.) Final -y represents the [ɨ] allophone of i: GR shy and sy correspond to Pinyin shi and si respectively.
Basic forms:
No basic forms in GR begin with w- or y-: Pinyin ying and wu are written ing and u in GR (but only in T1). Other important GR spellings which differ from Pinyin include:
GR writes au for Pinyin ao (but see the rule for T3).
el corresponds to Pinyin er (-r being reserved to indicate T2). The most important use of -(e)l is as a rhotacization suffix.
GR uses ts for Pinyin c and tz for Pinyin z.
-uen and -uei correspond to the contracted Pinyin forms -un and -ui.
GR also has three letters for dialectal sounds: v (万 in extended Zhuyin), ng (兀), and gn (广). As in Pinyin, an apostrophe is used to clarify syllable divisions. Pin'in, the GR spelling of the word "Pinyin", is itself a good example: the apostrophe shows that the compound is made up of pin + in rather than pi + nin.
Pinyin comparison: basic forms [A list summarizing the differences between GR and Pinyin spelling, in GR alphabetical order, is omitted here.]
Tonal rules:
Note: in this section the word "tone" is abbreviated as "T": thus T1 stands for Tone 1, or first tone, etc. Wherever possible GR indicates tones 2, 3 and 4 by respelling the basic T1 form of the syllable, replacing a vowel with another having a similar sound (i with y, for example, or u with w). But this concise procedure cannot be applied in every case, since the syllable may not contain a suitable vowel for modification. In such cases a letter (r or h) is added or inserted instead. The precise rule to be followed in any specific case is determined by the rules given below. A colour-coded rule of thumb is given below for each tone, and the same colour-coding is used in the table of provinces further below; each rule of thumb is then amplified by a comprehensive set of rules for that tone. These codes are used in the rules: V = a vowel; NV = a non-vowel (either a consonant or zero in the case of an initial vowel); ⇏ = "but avoid forming [the specified combination]". Pinyin equivalents are given in brackets after each set of examples. To illustrate the GR tonal rules in practice, a table comparing Pinyin and GR spellings of some Chinese provinces follows the detailed rules.
Tonal rules:
Tone 1: basic form
Initial sonorants (l-/m-/n-/r-): insert -h- as second letter. rheng, mha (rēng, mā)
Otherwise use the basic form.

Tone 2: i/u → y/w; or add -r
Initial sonorants: use the basic form. reng, ma (réng, má)
NVi → NVy (+ -i if final). chyng, chyan, yng, yan, pyi (qíng, qián, yíng, yán, pí)
NVu → NVw (+ -u if final). chwan, wang, hwo, chwu (chuán, wáng, huó, chú)

Tone 3: i/u → e/o; or double vowel
Vi or iV → Ve or eV (⇏ee). chean, bae, sheau (qiǎn, bǎi, xiǎo), but not gee
When both i and u are present, only the first one changes, i.e. jeau, goai, sheu (jiǎo, guǎi, xǔ), not jeao, goae, sheo
For basic forms starting with i-/u-, change the starting i-/u- to e-/o- and add initial y-/w-. yean, woo, yeu (yǎn, wǒ, yǔ)
Otherwise double the (main) vowel. chiing, daa, geei, huoo, goou (qǐng, dǎ, gěi, huǒ, gǒu)

Tone 4: change/double final letter; or add -h
Vi → Vy. day, suey (dài, suì)
Vu → Vw (⇏iw). daw, gow (dào, gòu), but not chiw
-n → -nn. duann (duàn)
-l → -ll. ell (èr)
-ng → -nq. binq (bìng)
Otherwise add h. dah, chiuh, dih (dà, qù, dì)
For basic forms starting with i-/u-, replace initial i-/u- with y-/w-, in addition to the necessary tonal change. yaw, wuh (yào, wù)

Neutral tone (轻声 Chingsheng / qīngshēng)
A dot (usually written as a period or full stop) may be placed before neutral-tone (unstressed) syllables, which appear in their original tonal spelling: perng.yeou, dih.fang (péngyou, dìfang). Y.R. Chao used this device in the first eight chapters of the Mandarin Primer, restricting it thereafter to new words on their first appearance. In A Grammar of Spoken Chinese he introduced a subscript circle (o) to indicate an optional neutral tone, as in bujyodaw, "don't know" (Pinyin pronunciation bùzhīdào or bùzhīdao).
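A simplified sketch of the Tone 2 and Tone 4 respellings above. It covers only the common cases quoted in the rules (sonorant initials, medial i/u, final -i/-u/-n/-ng, plain vowels) and ignores the listed exceptions such as ⇏iw, so it is a demonstration rather than a complete GR implementation:

```python
SONORANTS = ("l", "m", "n", "r")
VOWELS = set("aeiou")

def tone2(syl: str) -> str:
    if syl[0] in SONORANTS:                  # sonorant initials keep basic form
        return syl
    for i, c in enumerate(syl):
        prev = syl[i - 1] if i else ""
        if c == "i" and prev not in VOWELS:  # NVi -> NVy (+ -i if final)
            rep = "yi" if i == len(syl) - 1 else "y"
            return syl[:i] + rep + syl[i + 1:]
        if c == "u" and prev not in VOWELS:  # NVu -> NVw (+ -u if final)
            rep = "wu" if i == len(syl) - 1 else "w"
            return syl[:i] + rep + syl[i + 1:]
    return syl + "r"                         # otherwise add -r

def tone4(syl: str) -> str:
    if syl.endswith("ng"):                   # -ng -> -nq
        return syl[:-1] + "q"
    if syl.endswith("n"):                    # -n -> -nn
        return syl + "n"
    if syl.endswith("l"):                    # -l -> -ll
        return syl + "l"
    if syl.endswith("i") and syl[-2] in VOWELS:  # Vi -> Vy
        return syl[:-1] + "y"
    if syl.endswith("u") and syl[-2] in VOWELS:  # Vu -> Vw
        return syl[:-1] + "w"
    return syl + "h"                         # otherwise add -h

print(tone2("chuan"), tone2("pi"), tone2("ai"))   # chwan pyi air
print(tone4("dai"), tone4("dau"), tone4("bing"))  # day daw binq
```

Run on the examples from the rules, it reproduces chwan, pyi and air for Tone 2, and day, daw and binq for Tone 4.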
Tonal rules:
Any GR syllables beginning u- or i- must be T1: in T2, T3 and T4 these syllables all begin with w- or y- respectively. An example in all four tones is the following: ing, yng, yiing, yinq (Pinyin ying).
Rime table The term rime, as used by linguists, is similar to rhyme. See Rime table.
Tonal rules:
Pinyin comparison: all tones The GR tonal rules are illustrated by a table listing some Chinese provinces in both GR and Pinyin, with the tonal spelling markers or "clues" highlighted. Note that T1 is the default tone: hence Shinjiang (Xīnjiāng), for example, is spelled using the basic form of both syllables.
Rhotacization:
Erhua (兒化), or the rhotacized or retroflex ending, is indicated in GR by -l rather than -r, which is already used as a T2 marker. The appropriate tonal modification is then applied to the basic rhotacized form: for example shell (Pinyin shìr) from the basic form shel, and deal (diǎnr) from the basic form dial. In the fourth tone, certain syllables don't double the l but are instead spelled by first writing the non-rhotacized syllable in the fourth tone and then adding l: (-i/y)awl, (-i/y)owl, (-i/y/-u/w/)anql, (-i/y/w)enql, (-i/y)onql, ehl (from e’l, the basic rhotacized form of e; compare ell from el, which is both the basic rhotacized form of en, ei, and y and a basic Mandarin syllable).
Rhotacization:
Most other romanization systems preserve the underlying form, but GR transcribes the surface form as pronounced. These are the principles followed to create the basic form of a rhotacized syllable in GR:
-l is added to the final's basic non-rhotacized form
-y becomes -e-
i becomes ie-, and iu becomes iue-
in becomes ie-, and iun becomes iue-; in all other cases, -n disappears without trace
ing becomes ieng-
final asyllabic -i (found in (i/u)ai and (u)ei) disappears
with the final e, an apostrophe is added before the -l, i.e. e’l, er’l, ee’l (to separate them from el, erl, eel), except in the fourth tone, where the spelling is ehl (as this is sufficient to separate it from ell)
with the finals ie and iue, an apostrophe is added in the first and second tones only, i.e. ie’l, ye’l, -ieel/yeel, -iell/yell and iue’l, yue’l, -yeuel/-euel, -iuell/yuell

Thus, the basic rhotacized final:
el corresponds to the basic non-rhotacized finals en, ei, and -y and is also a basic Mandarin syllable
uel corresponds to uen and uei
iel corresponds to i and in; in the third and fourth tones, it also corresponds to ie
iuel corresponds to iu and iun; in the third and fourth tones, it also corresponds to iue
al corresponds to a, an, and ai
ial corresponds to ia, ian, and iai
ual corresponds to ua, uan, and uai

As a consequence, the one-to-one correspondence between GR and Pinyin is broken, since one GR rhotacized form may correspond to several Pinyin forms. For example, jiel corresponds to both jīr and jīnr (both pronounced [t͡ɕjɚ˥]), and jial corresponds to both jiār and jiānr (both pronounced [t͡ɕjaɚ̯˥]).
Tone sandhi:
The most important manifestation of tone sandhi in Mandarin is the change of a T3 syllable to T2 when followed by another T3 syllable (T3 + T3 → T2 + T3). GR does not reflect this change in the spelling: the word for "fruit" is written shoeiguoo, even though the pronunciation is shweiguoo. Four common words with more complicated tone sandhi (also ignored in the spelling) are mentioned below under Exceptions.
Abbreviations:
A number of frequently-occurring morphemes have abbreviated spellings in GR. The commonest of these, followed by their Pinyin equivalents, are:
-g (-ge)
-j (-zhe)
-m (-me); occurs in sherm (shénme), jemm/tzemm (zhème) and tzeem (zěnme)
sh (shi); also in compounds such as jiowsh (jiùshi), dannsh (dànshi), etc.
-tz (-zi)
Reduplication:
In its original form GR used the two "spare" letters of the alphabet, v and x, to indicate reduplication. This mimicked the method by which the Japanese writing system indicates repeated Kanji characters with an iteration mark (々). In GR the letter x indicates that the preceding syllable is repeated (shieh.x = shieh.shieh, "thank you"), vx being used when the preceding two syllables are repeated (haoshuo vx! = haoshuo haoshuo! "you're too kind!"). This concise but completely unphonetic, and hence unintuitive, device appears in Chao's Mandarin Primer and all W. Simon's texts (including his Chinese-English Dictionary). Eventually, however, it was silently discarded even by its inventor: in Chao's Grammar as well as his Sayable Chinese all reduplicated syllables are written out in full in their GR transcription.
Exceptions:
The following words and characters do not follow the rules of GR: The name Romatzyh (which strictly speaking should be "Luomaatzyh") follows international usage (Roma).
The characters 一 ("one"), 七 ("seven"), 八 ("eight"), and 不 ("no/not") are always written i, chi, ba, and bu, respectively, regardless of the tone in which they are pronounced. In other words, changes due to tone sandhi are not reflected in GR. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Double-muscled cattle**
Double-muscled cattle:
Double-muscled cattle are breeds of cattle that carry one of seven known mutations that reduce the activity of the myostatin protein. Normally, myostatin limits the number of muscle fibers present at birth, and interfering with the activity of this protein causes animals to be born with higher numbers of muscle fibers, consequently augmenting muscle growth. These mutations also reduce superficial and internal fat deposits, making the meat less marbled and lower in fat content. Animals homozygous for a myostatin mutation (inheriting a mutant copy of myostatin from both sire and dam) also have improved meat tenderness in some cuts of meat. The enlarged muscles of dam and calf at birth lead to increased difficulty of calving, and in some breeds frequently necessitate birth by cesarean section.
Double-muscled cattle:
Double-muscling historically has also been known as myofibre hyperplasia, doppellender, muscular hypertrophy, a groppa doppia, and culard.
History:
Some breeds of cattle lack a functional copy of the myostatin gene, which helps regulate muscle growth. This causes them to have more muscle mass and yields more meat for cattle farmers. Two of the breeds that carry the double-muscling trait are the Piedmontese and the Parthenais: the trait was discovered in the Piedmontese in Italy in 1897, and in the Parthenais in France in 1893. The Belgian Blue is another breed that can lack functional myostatin and show double muscling. The Belgian Blue originates from central and upper Belgium, and the breed was established in the early 20th century. It was once divided into two strains, one for beef and the other for milk, but is now primarily a beef breed. The Belgian Blue is relatively new to the U.S. but has gained acceptance from breeders.

Myostatin was discovered by Se-Jin Lee and Alexander McPherron in 1997. They found that mice lacking myostatin had two to three times the muscle mass of mice that did not lack it. Later that year, McPherron and Lee also observed that Piedmontese and Belgian Blue cattle are hypermuscular; the cattle have naturally occurring disruptions of the myostatin locus. Lee went on to study myostatin extensively. During this research he noted the loss of white fat that accompanies myostatin-driven hypermuscularity, and he also showed that myostatin is sufficient to cause a phenotype reminiscent of cachexia. "Dr. Lee has shown that other molecules in the TGF-B pathways, notably the activins and follistatin, also regulate muscle mass." Lee's work also demonstrated the therapeutic potential of myostatin blockade; although a clinical setting in which blockade is clearly useful has not yet been found, it may be beneficial in some areas. People are now trying to use myostatin as a medicine: "The research has produced several muscle-building drugs now being tested in people with medical problems, including muscular dystrophy, cancer and kidney disease."

Double-muscled breeding is done to obtain more meat and less fat. Backfat is generally found to be less in double-muscled cattle than in cattle with normal muscling, and the meat from double-muscled cattle is more tender. Double-muscled animals have a higher carcass yield, but this comes with new problems for the cattle. "There is a persisting trend to improve carcass quality in specialized beef breeds. A higher meat yield and more lean meat are desirable for the meat industry."
Controversy:
The enlarged muscles of dam and calf at birth lead to increased difficulty of calving, and in some breeds frequently necessitate birth by cesarean section. Affected breeds include the Belgian Blue, Piedmontese, Parthenais, Maine-Anjou, and Limousin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cervicocranial syndrome**
Cervicocranial syndrome:
Cervicocranial syndrome, or craniocervical junction (CCJ) syndrome, is a neurological illness. It is a combination of symptoms caused by an abnormality in the neck. The bones of the neck that are affected are the cervical vertebrae (C1–C7). The syndrome can be identified by confirming cervical bone shifts, collapsed cervical bones, or misalignment of the cervical bones leading to improper functioning of the cervical spinal nerves. Cervicocranial syndrome is either congenital or acquired (as a result of injury or disease). Some examples of diseases that could result in cervicocranial syndrome are Chiari malformation, Klippel-Feil malformation, osteoarthritis, and trauma. Treatment options include neck braces, pain medication and surgery. The quality of life for individuals suffering from CCJ syndrome can improve through surgery.
Signs and Symptoms:
Cervicocranial syndrome has a wide range of symptoms. These often include:

- Vertigo
- Chronic headache (cephalea)
- Tinnitus
- Facial pain
- Ear pain
- Dysphagia
- Carotidynia
- Neck pain (e.g., during movements such as extension and flexion)
- Syncope
- Sinus congestion
- Neck crepitus
- Loss of vision
- Involuntary eye movement
- Severe fatigue
- Chest pain
- Brain fog
Cause:
Cervicocranial syndrome is caused either by a defect (a genetic mutation or a disease developing later in life) or by an injury to the neck (the cervical area) that damages the spinal nerves traveling through the cervical region, resulting in ventral subluxation. Examples of events and conditions that can result in cervicocranial syndrome include car accidents, trauma, osteoarthritis, tumors, degenerative pathology and numerous other causes of vertebral instability. No single predominant cause of cervicocranial syndrome has been identified.
Cause:
Genetic The genes GDF6, GDF3 and MEOX1, used here as examples, encode proteins that help with development. For example, the GDF6 gene plays an important role in bone development and joint formation. Mutations in these genes can result in Klippel-Feil syndrome. In congenital Klippel-Feil syndrome, the individual's spinal anatomy presents abnormal fusion of two of the seven cervical bones in the neck, which is considered an anomaly of the cervical bones. It affects the functioning of the cervical spinal nerves (C1–C8) because of compression on the spinal cord. Spinal stenosis adds further damage to the spinal cord, resulting in symptoms of cervicocranial syndrome.
Cause:
Trauma Traumatic injuries are caused when external forces damage the cervical spine, giving rise to various symptoms. In a motor vehicle accident, the impact jerks the neck forward and backward, resulting in cervical spine damage; this is called whiplash. Neck trauma can thus produce the clinically isolated or combined symptoms of cervicocranial syndrome.
Pathophysiology:
The body is innervated by spinal nerves that branch off from the spinal cord. This innervation enables the brain to receive sensory inputs and send motor outputs. There are 8 cervical spinal nerves of the peripheral nervous system. Cervical spinal nerves C1, C2 and C3 help control the movements of the head and neck. Cervical spinal nerve C4 helps control upward shoulder movements. Cervical spinal nerves C3, C4 and C5 help power the diaphragm and aid in breathing. Cervical spinal nerve C6 helps in wrist extension and some functioning of the biceps. Cervical spinal nerve C7 controls the triceps and wrist extension. Cervical spinal nerve C8 helps control the hand. Cervicocranial syndrome occurs when symptoms arise due to cervical vertebrae damage (misalignment, collapse, shift or disease, such as a tumor) resulting in the improper functioning of the cervical spinal nerves.
Pathophysiology:
Examples of Cervicocranial Syndrome Pathophysiology Chordoma The craniocervical junction region comprises C1 (atlas), C2 (axis) and the lower part of the skull, the occipital bone. A tumor such as a chordoma in the craniocervical junction region puts pressure on the cervical spinal nerves, impairing their function and producing the symptoms of cervicocranial syndrome. To decompress the nerves, the tumor is removed and the foramina through which the spinal nerve roots travel are enlarged, so that the symptoms of cervicocranial syndrome are reduced and the nerves resume sending signals.
Pathophysiology:
Atlanto-Occipital Assimilation When the occipital bone and the atlas (C1) are fused together, in a condition called atlanto-occipital assimilation, the cervical spinal nerves function improperly due to vascular compression. A surgical procedure can decompress the nerves and reduce symptoms.
Pathophysiology:
Trauma Traumatic injuries are caused when external forces damage the cervical spine, giving rise to various symptoms. In a car accident, the impact jerks the neck forward and backward, damaging the cervical spine and causing whiplash. As a result, the cervical spine becomes misaligned and produces direct spinal cord irritation, creating tighter muscles on one side of the body. Neck braces can help temporarily, and surgery is performed when needed. Non-surgical realignment of the spine may be carried out by a chiropractor.
Diagnosis:
Once symptoms appear, patients are screened with cervical-spine imaging techniques (X-ray, CT, MRI), which reveal any defects or misalignments of the cervical vertebrae. When cervicocranial syndrome is caused by a genetic disease, family history and genetic testing aid in making an accurate diagnosis.
Prevention/Treatment:
The treatment options vary, since there are numerous causes of cervicocranial syndrome. General treatments include:

- Pressure release via realignment of the vertebrae
- Pain medication: acetaminophen, aspirin, or ibuprofen
- Manipulation of the neck by a chiropractor (for example, vertigo symptoms can be relieved)
- Neck braces to restrict movement of the neck and provide stability
- Physical therapy
- Injection of a combination drug (anesthetic and cortisone) to help alleviate the pain
- Surgery to restore the function and form of the spine
- Cervical spinal cord stimulation (cSCS)

When cervicocranial syndrome is caused by a mutation in genes and runs in the family alongside other co-morbidities, genetic counseling helps patients understand the risks, the options for prevention, and what to expect in caring for a newborn and passing on the genes.
Prognosis:
The prognosis of an individual living with cervicocranial syndrome varies because of the multiple possible causes, such as co-morbidities and varied trauma. Instability of the cervical spine can endanger patients and their neurological integrity. Corrective and decompressive cervical spinal surgeries significantly increase quality of life and reduce symptoms: post-surgery, 93 to 100 percent of patients report reduced cervicocranial syndrome symptoms such as neck pain.
Epidemiology:
Cervicocranial syndrome significantly affects the aging world population and is associated with significant morbidity. It affects men and women equally when it occurs due to atlanto-occipital assimilation. Incidence is higher among low-socioeconomic groups and groups without access to healthcare, which consequently show higher rates of morbidity and mortality.
Research Directions:
Cervicocranial syndrome can occur with, or as a result of, numerous neurological problems, so no single disease can be pinpointed as its cause. Further research can explore the common neurological problems causing cervicocranial syndrome and examine various treatments, including therapeutic ones.
Research Directions:
For example, one study, "The influence of cranio-cervical rehabilitation in patients with myofascial temporomandibular pain disorders," explored the therapeutic option of physical therapy and found that 88% of a total of 98 patients (79 female and 19 male) felt reduced pain. By contrast, another study, "The efficacy of manual therapy and therapeutic exercise in patients with chronic neck pain: A narrative review," conducted in 2018, concluded that there is a lack of evidence supporting therapeutic exercise and manipulation for reducing neck pain. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sulindac**
Sulindac:
Sulindac is a nonsteroidal anti-inflammatory drug (NSAID) of the arylalkanoic acid class that is marketed as Clinoril. Imbaral (not to be confused with mebaral) is another name for this drug. Its name is derived from sul(finyl) + ind(ene) + ac(etic acid). It was patented in 1969 and approved for medical use in 1976.
Medical uses:
Like other NSAIDs, it is useful in the treatment of acute or chronic inflammatory conditions. Sulindac is a prodrug, derived from sulfinylindene, that is converted in the body to the active NSAID. More specifically, the agent is converted by liver enzymes to a sulfide that is excreted in the bile and then reabsorbed from the intestine. This is thought to help maintain constant blood levels with reduced gastrointestinal side effects. Some studies have shown sulindac to be relatively less irritating to the stomach than other NSAIDs except for drugs of the COX-2 inhibitor class. The exact mechanism of its NSAID properties is unknown, but it is thought to act on enzymes COX-1 and COX-2, inhibiting prostaglandin synthesis.
Medical uses:
Its usual dosage is 150-200 milligrams twice per day, with food. It should not be used by persons with a history of major allergic reactions (urticaria or anaphylaxis) to aspirin or other NSAIDs, and should be used with caution by persons having pre-existing peptic ulcer disease. Sulindac is much more likely than other NSAIDs to cause damage to the liver or pancreas, though it is less likely to cause kidney damage than other NSAIDs.
Medical uses:
Sulindac seems to have a property, independent of COX-inhibition, of reducing the growth of polyps and precancerous lesions in the colon, especially in association with familial adenomatous polyposis, and may have other anti-cancer properties.
Adverse effects:
In October 2020, the U.S. Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. They recommend avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy.
Society and culture:
Litigation In September 2010 a federal jury in New Hampshire awarded $21 million to Karen Bartlett, a woman who developed Stevens–Johnson syndrome/Toxic epidermal necrolysis as a result of taking a generic brand of sulindac manufactured by Mutual Pharmaceuticals for her shoulder pain. Ms. Bartlett sustained severe injuries including the loss of over 60% of her surface skin and permanent near-blindness. The case had been appealed to the United States Supreme Court, where the main issue was whether federal law preempts Ms. Bartlett's claim. On June 24, 2013, the Supreme Court ruled 5–4 in favor of Mutual Pharmaceuticals, throwing out the earlier $21 million jury verdict.
Society and culture:
Synthesis Reaction of p-fluorobenzyl chloride (1) with the anion of diethyl methylmalonate (2) gives the intermediate diester (3); saponification and subsequent decarboxylation lead to the corresponding acid. (Alternatively, the acid can be formed by a Perkin reaction between p-fluorobenzaldehyde and propionic anhydride in the presence of NaOAc, followed by catalytic hydrogenation of the olefinic bond using a palladium-on-carbon catalyst.) Polyphosphoric acid (PPA) cyclization then leads to 5-fluoro-2-methyl-3-indanone (4). A Reformatsky reaction with zinc amalgam and bromoacetic ester gives the carbinol (5), which is then dehydrated with tosic acid to the indene (6). (Alternatively, this step can be performed as a Knoevenagel condensation with cyanoacetic acid, followed by decarboxylation.) The active methylene group is condensed with p-methylthiobenzaldehyde, using sodium methoxide as catalyst, and then saponified to give the Z-isomer (7), which is in turn oxidized with sodium metaperiodate to the sulfoxide (8), the anti-inflammatory agent sulindac. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Isophthalonitrile**
Isophthalonitrile:
Isophthalonitrile is an organic compound with the formula C6H4(CN)2. Two other isomers exist, phthalonitrile and terephthalonitrile. All three isomers are produced commercially by ammoxidation of the corresponding xylene isomers. Isophthalonitrile is a colorless or white solid with low solubility in water. Hydrogenation of isophthalonitrile affords m-xylylenediamine, a curing agent in epoxy resins and a component of some urethanes.
Safety:
LD50 (rat, oral) is 288 mg/kg. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Photon diffusion equation**
Photon diffusion equation:
The photon diffusion equation is a second-order partial differential equation describing the time behavior of the photon fluence rate distribution in a low-absorption, high-scattering medium.
Its mathematical form is as follows:

$$\frac{1}{v}\,\frac{\partial \Phi(\vec{r},t)}{\partial t} \;-\; \nabla \cdot \left( D\, \nabla \Phi(\vec{r},t) \right) \;+\; \mu_a\, \Phi(\vec{r},t) \;=\; S(\vec{r},t)$$

where Φ is the photon fluence rate (W/cm2), ∇ is the del operator, μa is the absorption coefficient (cm−1), D = 1/[3(μa + μs′)] is the diffusion constant, with μs′ the reduced scattering coefficient (cm−1), v is the speed of light in the medium (cm/s), and S is an isotropic source term (W/cm3).
Its main difference from the ordinary diffusion equation of physics is that the photon diffusion equation contains an absorption term.
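As a rough illustration of how the scattering and absorption terms interact, here is a minimal one-dimensional explicit finite-difference sketch of the equation above. It is not a validated solver; the optical parameters, grid, and source are illustrative placeholders, not tissue-calibrated values.

```python
import numpy as np

# Minimal 1-D sketch of (1/v) dPhi/dt = d/dx(D dPhi/dx) - mu_a*Phi + S.
# All parameter values are illustrative placeholders.
mu_a = 0.1                                # absorption coefficient (1/cm)
mu_s_prime = 10.0                         # reduced scattering coefficient (1/cm)
D = 1.0 / (3.0 * (mu_a + mu_s_prime))     # diffusion constant (cm)
v = 2.2e10                                # speed of light in the medium (cm/s)

nx, dx = 200, 0.05                        # 10 cm domain
dt = 0.2 * dx**2 / (2.0 * v * D)          # well below the explicit stability limit
phi = np.zeros(nx)                        # fluence rate (W/cm^2)
S = np.zeros(nx)
S[nx // 2] = 1.0                          # isotropic point-like source (W/cm^3)

for _ in range(5000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    # multiply the equation through by v to get dPhi/dt explicitly
    phi += dt * v * (D * lap - mu_a * phi + S)
    phi[0] = phi[-1] = 0.0                # crude absorbing boundaries

print(f"peak fluence rate: {phi.max():.3e} W/cm^2")
```

The absorption term −μaΦ is what distinguishes this update from an ordinary diffusion step: without it the injected power would only spread, whereas with it the fluence also decays as photons are absorbed.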
Application:
Medical Imaging The properties of photon diffusion described by the equation are used in diffuse optical tomography. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Trimethyldiphenylpropylamine**
Trimethyldiphenylpropylamine:
Trimethyldiphenylpropylamine (N,N,1-Trimethyl-3,3-diphenylpropylamine) is a drug used for functional gastrointestinal disorders. Its tradename is Recipavrin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**3-Hydroxypropionate bicycle**
3-Hydroxypropionate bicycle:
The 3-hydroxypropionate bicycle, also known as the 3-hydroxypropionate pathway, is a process that allows some bacteria to generate 3-hydroxypropionate using carbon dioxide. In this pathway CO2 is fixed (i.e. incorporated) by the action of two enzymes, acetyl-CoA carboxylase and propionyl-CoA carboxylase. These enzymes generate malonyl-CoA and (S)-methylmalonyl-CoA, respectively. Malonyl-CoA, in a series of reactions, is further split into acetyl-CoA and glyoxylate. Glyoxylate is incorporated into beta-methylmalyl-CoA, which is then split, again through a series of reactions, to release pyruvate as well as acetate, which is used to replenish the cycle. This pathway has been demonstrated in Chloroflexus, a nonsulfur photosynthetic bacterium; other studies suggest that the 3-hydroxypropionate bicycle is also used by several chemotrophic archaea. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Grand Tour (data visualisation)**
Grand Tour (data visualisation):
The Grand Tour is a technique originally developed by Daniel Asimov in 1980–85, which is used to explore multivariate statistical data by means of an animation. The animation, or "movie", consists of a series of distinct views of the data as seen from different directions, displayed on a computer screen, that appear to change continuously and that get closer and closer to all possible views. This allows a human- or computer-based evaluation of these views, with the goal of detecting patterns that will convey useful information about the data.
Grand Tour (data visualisation):
This technique is like what many museum visitors do when they encounter a complicated abstract sculpture: They walk around it to view it from all directions, in order to understand it better. The human visual system perceives visual information as a pattern on the retina, which is 2-dimensional. Thus walking around the sculpture to understand it better creates a temporal sequence of 2-dimensional images in the brain.
Grand Tour (data visualisation):
The multivariate data that is the original input for any grand tour visualization is a (finite) set of points in some high-dimensional Euclidean space. This kind of set arises naturally when data is collected. Suppose that for some population of 1000 people, each person is asked to provide their age, height, weight, and number of nose hairs. Thus to each member of the population there is associated an ordered quadruple of numbers. Since n-dimensional Euclidean space is defined as all ordered n-tuples of numbers, the data on 1000 people may be thought of as 1000 points in 4-dimensional Euclidean space.
Grand Tour (data visualisation):
The grand tour converts the spatial complexity of the multivariate data set into temporal complexity by using the relatively simple 2-dimensional views of the projected data as the individual frames of the movie. (These are sometimes called "data views".) The projections will ordinarily be chosen so as not to change too fast, which means that the movie of the data will appear continuous to a human observer.
Grand Tour (data visualisation):
A grand tour "method" is an algorithm for assigning a sequence of projections onto (usually) 2-dimensional planes to any given dimension of Euclidean space. This allows any particular multivariate data set to be projected onto that sequence of 2-dimensional planes and thereby displayed on a computer screen one after the other, so that the effect is to create a movie of the data.
Grand Tour (data visualisation):
(Note that, once the data has been projected onto a given 2-plane, then in order to display it on a computer screen, it is necessary to choose the directions in that 2-plane that will correspond to the horizontal and vertical directions on the computer screen. This is typically a minor detail. But the choice of horizontal and vertical directions should ideally be done so as to minimize any unnecessary apparent "spinning" of the 2-dimensional data view.)
Technical description:
Each "view" (i.e., frame) of the animation is an orthogonal projection of the data set onto a 2-dimensional subspace (of the Euclidean space Rp where the data resides). The subspaces are selected by taking small steps along a continuous curve, parametrized by time, in the space of all 2-dimensional subspaces of Rp (known as the Grassmannian G(2,p)). To display these views on a computer screen, it is necessary to pick one particular rotated position of each view (in the plane of the computer screen) for display. This causes the positions of the data points on the computer screen to appear to vary continuously. Asimov showed that these subspaces can be selected so as to make the set of them (up to time t) increasingly close to all points in G(2,p), so that if the grand tour movie were allowed to run indefinitely, the set of displayed subspaces would correspond to a dense subset of G(2,p).
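A minimal sketch of this selection step is given below, assuming the simplest basis-generation scheme: successive target planes drawn uniformly at random, connected by geodesic interpolation on G(2,p) via the standard principal-angle construction. It is illustrative only, not the algorithm of any particular package, and the plotting step is omitted.

```python
import numpy as np

def random_frame(p, rng):
    """Random p x 2 orthonormal frame (a basis for a random 2-plane)."""
    q, _ = np.linalg.qr(rng.standard_normal((p, 2)))
    return q

def geodesic(A, B, t):
    """Frame at fraction t along the Grassmannian geodesic from span(A) to span(B)."""
    U, cos_theta, Vt = np.linalg.svd(A.T @ B)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # principal angles
    A_star, B_star = A @ U, B @ Vt.T                   # aligned frames
    W = B_star - A_star * np.cos(theta)                # directions orthogonal to A_star
    big = theta > 1e-12
    W[:, big] /= np.sin(theta[big])                    # normalize nonzero columns
    return A_star * np.cos(t * theta) + W * np.sin(t * theta)

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))             # e.g. 1000 people x 4 measurements
frame = random_frame(4, rng)
for _ in range(10):                            # ten geodesic segments
    target = random_frame(4, rng)
    for t in np.linspace(0.0, 1.0, 30):        # 30 animation frames per segment
        view = X @ geodesic(frame, target, t)  # n x 2 "data view" to display
    frame = target
```

Each `view` is one frame of the movie; displaying the views in sequence produces the apparently continuous rotation described above.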
Software:
The tourr R package implements geodesic interpolation and basis generation functions that allow new tour methods to be created in R.
The datatour Python package provides similar animated tours of multivariate data in Python. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Poppies (Mary Oliver poem)**
Poppies (Mary Oliver poem):
"Poppies" is an inner dialogue poem written by Mary Oliver. The poem is focused on elements of nature, a common thread within Oliver's poetry, and calls readers to focus on the instruction that nature might supply.
Synopsis and structure:
The poem is heterometric; its lines switch between iambic and trochaic trimeter, tetrameter, and dimeter. It is divided into nine distinct stanzas, each a quatrain of four lines, for a total of thirty-six lines. There are five distinct sections to the poem, each turn marked by a period at the end of the section.
Publication history:
"Poppies" has been published in two poetry compilations. The first, New and Selected Poems: Volume One, was released in 1992 through Beacon Press. A second, Devotions: The Selected Poems of Mary Oliver, was published in 2017 through Penguin Press. Reviews for both collections were positive and the books received praise from Stephen Dobyns of The New York Times Book Review, Rita Dove, of The Washington Post, and Elizabeth Lund, also of The Washington Post, among others. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ofoto (scanner software)**
Ofoto (scanner software):
Ofoto was an application program that automated the task of scanning images and cleaning up the resulting digital image. Created by Light Source Digital Images, it was first released in 1991 bundled with the Apple OneScanner. The program garnered rave reviews and was followed by a color-capable version 2.0, released for both Mac and Windows. Version 2.0 was widely bundled with scanners from a number of companies, notably Canon. Development and sales were discontinued on 1 August 1996.
Ofoto (scanner software):
The assets of Light Source were purchased by Xrite, and the trademark on Ofoto later expired. The name and some of the artwork were later used for the online photography site, Ofoto, later known as Kodak Gallery. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sleeveface**
Sleeveface:
Sleeveface is an internet phenomenon wherein one or more persons obscure or augment body parts with record sleeve(s), causing an illusion. Sleeveface has become popular on social networking sites. The precise origin of the concept is unknown. A collection of photographs was posted online at Waxidermy.com in early 2006, though earlier examples of 'sleevefacing' include a Mad Magazine cover and a sketch on The Adam and Joe Show with Gary Numan holding a record sleeve to his face. Other cases include John Hiatt's 1979 Slug Line album, on which he is holding a sleeve (showing his face) in front of his face, and the back of the 1982 album Picture This by Huey Lewis and the News, where Huey is holding the front side of the album (showing his face) in front of his face. The artwork for J Rocc's 12" single 'Play This (One)' features men holding various LP sleeves over their faces. The term 'Sleeveface' was coined in April 2007 by Cardiff resident Carl Morris after pictures were taken of him and his friends holding record sleeves to their faces while DJing in a Cardiff bar. His friend John Rostron posted them on the internet and created a group on the nascent Facebook social networking site. From this point, the craze started to become more widely known.
Sleeveface:
Sleeveface contributors regularly hold Sleeveface parties across the world, and have helped organise Sleeveface workshops for children. One such workshop took place at the National Museum Cardiff in November 2008 as part of the city's annual Sŵn Festival. There is also a Sleevefacer iPhone app that allows a user to access album artwork from a music library and sleeveface on an iPhone, iPad or iPod Touch. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Standalone program**
Standalone program:
A standalone program, also known as a freestanding program, is a computer program that does not load any external module, library function or program and that is designed to boot with the bootstrap procedure of the target processor – it runs on bare metal. In early computers like the ENIAC, which had no concept of an operating system, standalone programs were the only way to run a computer. Standalone programs are usually written in assembly language for specific hardware.
Standalone program:
Later, standalone programs were typically provided for utility functions such as disk formatting. Computers with very limited memory also ran standalone programs; this included most computers until the mid-1950s, and, later still, embedded processors.
Standalone program:
Standalone programs are now mainly limited to SoCs or microcontrollers (where battery life, price, and data space are at a premium) and to critical systems, where an operating system would add unacceptable complexity and uncertainty. Such systems may require, in extreme cases, that every possible set of inputs and errors be tested and thus every potential output known; fully independent (separate physical suppliers and programming teams) yet fully parallel system-state monitoring; or a minimized attack surface. Examples include industrial operator safety interrupts, commercial airliners, medical devices, ballistic missile launch controls, and lithium-battery charge controllers in consumer devices (a fire hazard, with a chip cost of approximately 10 cents). Resource-limited microcontrollers can also be made more tolerant of varied environmental conditions than the more powerful hardware needed for an operating system: the much lower clock frequency, wider pin spacing, absence of large data buses (e.g. DDR4 RAM modules) and limited transistor count allow for wider design margins, and thus potentially more robust electrical and physical properties in both circuit layout and material choices. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multiple endocrine neoplasia type 1**
Multiple endocrine neoplasia type 1:
Multiple endocrine neoplasia type 1 (MEN-1) is one of a group of disorders, the multiple endocrine neoplasias, that affect the endocrine system through the development of neoplastic lesions in the pituitary gland, parathyroid glands and pancreas. Individuals suffering from this disorder are prone to developing multiple endocrine and nonendocrine tumors. It was first described by Paul Wermer in 1954.
Signs and symptoms:
Parathyroid Hyperparathyroidism is present in ≥ 90% of patients. Asymptomatic hypercalcemia is the most common manifestation: about 25% of patients have evidence of nephrolithiasis or nephrocalcinosis. In contrast to sporadic cases of hyperparathyroidism, diffuse hyperplasia or multiple adenomas are more common than solitary adenomas.
Signs and symptoms:
Pancreas Pancreatic islet cell tumors are today the major cause of death in persons with MEN-1. Tumors occur in 60-80% of persons with MEN-1 and they are usually multicentric. Multiple adenomas or diffuse islet cell hyperplasia commonly occurs. About 30% of tumors are malignant and have local or distant metastases. About 10-15% of islet cell tumors originate from a β-cell, secrete insulin (insulinoma), and can cause fasting hypoglycemia. β-cell tumors are more common in patients < 40 years of age.
Signs and symptoms:
Most islet cell tumors secrete pancreatic polypeptide, the clinical significance of which is unknown. Gastrin is secreted by many non–β-cell tumors (increased gastrin secretion in MEN 1 also often originates from the duodenum). Increased gastrin secretion increases gastric acid, which may inactivate pancreatic lipase, leading to diarrhea and steatorrhea. Increased gastrin secretion also leads to peptic ulcers in > 50% of MEN 1 patients. Usually the ulcers are multiple or atypical in location, and often bleed, perforate, or become obstructed. Peptic ulcer disease may be intractable and complicated. Among patients presenting with Zollinger-Ellison syndrome, 20 to 60% have MEN 1.
Signs and symptoms:
A severe secretory diarrhea can develop and cause fluid and electrolyte depletion with non–β-cell tumors. This complex, referred to as the watery diarrhea, hypokalemia and achlorhydria syndrome (VIPoma) has been ascribed to vasoactive intestinal polypeptide, although other intestinal hormones or secretagogues (including prostaglandins) may contribute. Hypersecretion of glucagon, somatostatin, chromogranin, or calcitonin, ectopic secretion of ACTH resulting in Cushing's syndrome, and hypersecretion of somatotropin–releasing hormone (causing acromegaly) sometimes occur in non–β-cell tumors. All of these are rare in MEN 1. Nonfunctioning pancreatic tumors also occur in patients with MEN 1 and may be the most common type of pancreatoduodenal tumor in MEN 1. The size of the nonfunctioning tumor correlates with risk of metastasis and death.
Signs and symptoms:
Pituitary Pituitary tumors occur in 15 to 42% of MEN 1 patients. From 25 to 90% are prolactinomas. About 25% of pituitary tumors secrete growth hormone or growth hormone and prolactin. Excess prolactin may cause galactorrhea, and excess growth hormone causes acromegaly clinically indistinguishable from sporadically occurring acromegaly. About 3% of tumors secrete ACTH, producing Cushing's disease. Most of the remainder are nonfunctional. Local tumor expansion may cause visual disturbance, headache, and hypopituitarism. Pituitary tumors in MEN 1 patients appear to be larger and behave more aggressively than sporadic pituitary tumors.
Signs and symptoms:
Other manifestations Adenomas of the adrenal glands occur occasionally in MEN 1 patients. Hormone secretion is rarely altered as a result, and the significance of these abnormalities is uncertain. Carcinoid tumors, particularly those derived from the embryologic foregut (lungs, thymus), occur in isolated cases. Multiple subcutaneous and visceral lipomas, angiofibromas, and collagenomas may also occur.
Genetic:
People with multiple endocrine neoplasia type 1 are born with one mutated copy of the MEN1 gene in each cell. Then, during their lifetime, the other copy of the gene is mutated in a small number of cells. These genetic changes result in no functional copies of the MEN1 gene in selected cells, allowing the cells to divide with little control and form tumors. This is known as Knudson's two-hit hypothesis and is a common feature seen with inherited defects in tumor suppressor genes. Oncogenes can become neoplastic with only one activating mutation, but tumor suppressors inherited from both mother and father must be damaged before they lose their effectiveness. The exception to the "two-hit hypothesis" occurs when suppressor genes exhibit dose-response, such as ATR. The exact function of MEN1 and the protein, menin, produced by this gene is not known, but following the inheritance rules of the "two-hit hypothesis" indicates that it acts as a tumor suppressor.
Diagnosis:
In a diagnostic workup, individuals with a combination of endocrine neoplasias suggestive of the MEN1 syndrome are recommended to have a mutational analysis of the MEN1 gene if additional diagnostic criteria are sufficiently met, mainly including: age < 40 years; a positive family history, including a first-degree relative proven to have the MEN1 gene;
Diagnosis:
multifocal or recurrent neoplasia; and two or more organ systems affected.

Types Multiple endocrine neoplasia, or MEN, is part of a group of disorders that affect the body's network of hormone-producing glands (the endocrine system). Hormones are chemical messengers that travel through the bloodstream and regulate the function of cells and tissues throughout the body. Multiple endocrine neoplasia involves tumors in at least two endocrine glands; tumors can also develop in other organs and tissues. These growths can be noncancerous (benign) or cancerous (malignant). If the tumors become cancerous, some cases can be life-threatening.
Diagnosis:
The two major forms of multiple endocrine neoplasia are called type 1 and type 2. These two types are often confused because of their similar names. However, type 1 and type 2 are distinguished by the genes involved, the types of hormones made, and the characteristic signs and symptoms.
Diagnosis:
These disorders greatly increase the risk of developing multiple cancerous and noncancerous tumors in glands such as the parathyroid, pituitary, and pancreas. Multiple endocrine neoplasia occurs when tumors are found in at least two of the three main endocrine glands (parathyroid, pituitary, and pancreatico-duodenum). Tumors can also develop in organs and tissues other than endocrine glands. If the tumors become cancerous, some cases can be life-threatening. The disorder affects 1 in 30,000 people.
Diagnosis:
Although many different types of hormone-producing tumors are associated with multiple endocrine neoplasia, tumors of the parathyroid gland, pituitary gland, and pancreas are most frequent in multiple endocrine neoplasia type 1. MEN1-associated overactivity of these three endocrine organs is briefly described here: Overactivity of the parathyroid gland (hyperparathyroidism) is the most common sign of this disorder. Hyperparathyroidism disrupts the normal balance of calcium in the blood, which can lead to kidney stones, thinning of the bones (osteoporosis), high blood pressure (hypertension), loss of appetite, nausea, weakness, fatigue, and depression.
Diagnosis:
Neoplasia in the pituitary gland can manifest as prolactinomas, whereby too much prolactin is secreted, suppressing the release of gonadotropins and causing a decrease in sex hormones such as testosterone. Pituitary tumors in MEN1 can be large and cause signs by compressing adjacent tissues.
Diagnosis:
Pancreatic tumors associated with MEN-1 usually form in the beta cells of the islets of Langerhans, causing over-secretion of insulin, resulting in low blood glucose levels (hypoglycemia). However, many other tumors of the pancreatic Islets of Langerhans can occur in MEN-1. One of these, involving the alpha cells, causes over-secretion of glucagon, resulting in a classic triad of high blood glucose levels (hyperglycemia), a rash called necrolytic migratory erythema, and weight loss. Gastrinoma causes the over-secretion of the hormone gastrin, resulting in the over-production of acid by the acid-producing cells of the stomach (parietal cells) and a constellation of sequelae known as Zollinger-Ellison syndrome. Zollinger-Ellison syndrome may include severe gastric ulcers, abdominal pain, loss of appetite, chronic diarrhea, malnutrition, and subsequent weight loss. Other non-beta islet cell tumors associated with MEN1 are discussed below.
Treatment:
The treatment of choice for parathyroid tumors is open bilateral exploration with subtotal (3/4) or total parathyroidectomy. Autoimplantation may be considered in the case of a total parathyroidectomy. The optimal timing for this operation has not yet been established, but it should be performed by an experienced endocrine surgeon. Endocrine pancreatic tumors are treated with surgery, and with cytotoxic drugs in the case of malignant disease.
Treatment:
Pituitary tumors are treated with surgery (for acromegaly and Cushing's disease) or medication (for prolactinomas).
Culture and society:
In the video game Trauma Team, Gabriel Cunningham's son, Joshua Cunningham, is diagnosed with Wermer's syndrome.
It is also mentioned in the South Korean drama "Medical Top Team", as Dr. Choi Ah Jin (Oh Yeon-seo) is diagnosed with MEN-1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |