**Dublin Core**
Dublin Core:
The Dublin Core, also known as the Dublin Core Metadata Element Set (DCMES), is a set of fifteen main metadata items for describing digital or physical resources. The Dublin Core Metadata Initiative (DCMI) is responsible for formulating the Dublin Core; DCMI is a project of the Association for Information Science and Technology (ASIS&T), a non-profit organization. Dublin Core has been formally standardized internationally as ISO 15836 by the International Organization for Standardization (ISO) and as IETF RFC 5013 by the Internet Engineering Task Force (IETF), as well as in the U.S. as ANSI/NISO Z39.85 by the National Information Standards Organization (NISO). The core properties are part of a larger set of DCMI Metadata Terms. "Dublin Core" is also used as an adjective for Dublin Core metadata, a style of metadata that draws on multiple Resource Description Framework (RDF) vocabularies, packaged and constrained in Dublin Core application profiles. The resources described using the Dublin Core may be digital resources (video, images, web pages, etc.) as well as physical resources such as books or works of art.
Dublin Core metadata may be used for multiple purposes, from simple resource description to combining metadata vocabularies of different metadata standards, to providing interoperability for metadata vocabularies in the linked data cloud and Semantic Web implementations.
Background:
"Dublin" refers to Dublin, Ohio, USA where the schema originated during the 1995 invitational OCLC/NCSA Metadata Workshop, hosted by the OCLC (known at that time as Online Computer Library Center), a library consortium based in Dublin, and the National Center for Supercomputing Applications (NCSA). "Core" refers to the metadata terms as "broad and generic being usable for describing a wide range of resources". The semantics of Dublin Core were established and are maintained by an international, cross-disciplinary group of professionals from librarianship, computer science, text encoding, museums, and other related fields of scholarship and practice.In 1999, the first Dublin Core encoding standard was in HTML. Starting in 2000, the Dublin Core community focused on "application profiles" – the idea that metadata records would use Dublin Core together with other specialized vocabularies to meet particular implementation requirements. During that time, the World Wide Web Consortium's work on a generic data model for metadata, the Resource Description Framework (RDF), was maturing. As part of an extended set of DCMI metadata terms, Dublin Core became one of the most popular vocabularies for use with RDF, more recently in the context of the linked data movement.The Dublin Core Metadata Initiative (DCMI) provides an open forum for the development of interoperable online metadata standards for a broad range of purposes and of business models. DCMI's activities include consensus-driven working groups, global conferences and workshops, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices. In 2008, DCMI separated from OCLC and incorporated as an independent entity.Currently, any and all changes that are made to the Dublin Core standard, are reviewed by a DCMI Usage Board within the context of a DCMI Namespace Policy (DCMI-NAMESPACE). This policy describes how terms are assigned and also sets limits on the amount of editorial changes allowed to the labels, definitions, and usage comments.
Levels of the standard:
The Dublin Core standard originally included two levels: Simple and Qualified. Simple Dublin Core comprised 15 elements; Qualified Dublin Core included three additional elements (Audience, Provenance and RightsHolder), as well as a group of element refinements (also called qualifiers) that could refine the semantics of the elements in ways that may be useful in resource discovery.
Since 2012, the two have been incorporated into the DCMI Metadata Terms as a single set of terms using the RDF data model. The full set of elements is found under the namespace http://purl.org/dc/terms/. Because the definition of the terms often contains domains and ranges, which may not be compatible with the pre-RDF definitions used for the original 15 Dublin Core elements, there is a separate namespace for the original 15 elements as previously defined: http://purl.org/dc/elements/1.1/.
Dublin Core Metadata Element Set
The original DCMES Version 1.1 consists of 15 metadata elements, defined this way in the original specification:
Contributor – "An entity responsible for making contributions to the resource".
Coverage – "The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant".
Creator – "An entity primarily responsible for making the resource".
Date – "A point or period of time associated with an event in the lifecycle of the resource".
Description – "An account of the resource".
Format – "The file format, physical medium, or dimensions of the resource".
Identifier – "An unambiguous reference to the resource within a given context".
Language – "A language of the resource".
Publisher – "An entity responsible for making the resource available".
Relation – "A related resource".
Rights – "Information about rights held in and over the resource".
Source – "A related resource from which the described resource is derived".
Subject – "The topic of the resource".
Title – "A name given to the resource".
Type – "The nature or genre of the resource".
Each Dublin Core element is optional and may be repeated. The DCMI has established standard ways to refine elements and encourages the use of encoding and vocabulary schemes. There is no prescribed order in Dublin Core for presenting or using the elements. The Dublin Core became a NISO standard, Z39.85, and IETF RFC 5013 in 2007, and an ISO 15836 standard in 2009, and is used as a base-level data element set for the description of learning resources in ISO/IEC 19788-2 Metadata for learning resources (MLR) – Part 2: Dublin Core elements, prepared by ISO/IEC JTC 1/SC 36.
Full information on element definitions and term relationships can be found in the Dublin Core Metadata Registry.
Encoding examples
<meta name="DC.Format" content="video/mpeg; 10 minutes" />
<meta name="DC.Language" content="en" />
<meta name="DC.Publisher" content="publisher-name" />
<meta name="DC.Title" content="HYP" />
Example of use [and mention] by WebCite
On the "archive form" web page for WebCite it says, in part: "Metadata (optional): These are Dublin Core elements. [...]".
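The HTML meta encoding above can be produced mechanically. The following is a minimal, hedged Python sketch that renders a small record as DC.* meta tags; the helper name and the record values are illustrative and not part of the standard, and every element remains optional and repeatable as described earlier.

```python
from html import escape

def dc_meta_tags(record: dict) -> str:
    """Render a dict of Dublin Core elements as HTML <meta> tags (illustrative helper)."""
    lines = []
    for element, values in record.items():
        # Every Dublin Core element is optional and repeatable, so each value
        # becomes its own <meta> tag.
        for value in values:
            lines.append(
                f'<meta name="DC.{escape(element)}" content="{escape(value, quote=True)}" />'
            )
    return "\n".join(lines)

# Example record mirroring the encoding example above.
record = {
    "Title": ["HYP"],
    "Publisher": ["publisher-name"],
    "Language": ["en"],
    "Format": ["video/mpeg; 10 minutes"],
}
print(dc_meta_tags(record))
```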
Qualified Dublin Core
(Superseded in 2008 by the DCMI Metadata Terms.) Subsequent to the specification of the original 15 elements, an ongoing process to develop exemplary terms extending or refining the DCMES was begun. The additional terms were identified, generally in working groups of the DCMI, and judged by the DCMI Usage Board to be in conformance with principles of good practice for the qualification of Dublin Core metadata elements.
Element refinements make the meaning of an element narrower or more specific. A refined element shares the meaning of the unqualified element, but with a more restricted scope. The guiding principle for the qualification of Dublin Core elements, colloquially known as the Dumb-Down Principle, states that an application that does not understand a specific element refinement term should be able to ignore the qualifier and treat the metadata value as if it were an unqualified (broader) element. While this may result in some loss of specificity, the remaining element value (without the qualifier) should continue to be generally correct and useful for discovery.
In addition to element refinements, Qualified Dublin Core includes a set of recommended encoding schemes, designed to aid in the interpretation of an element value. These schemes include controlled vocabularies and formal notations or parsing rules. A value expressed using an encoding scheme may thus be a token selected from a controlled vocabulary (for example, a term from a classification system or set of subject headings) or a string formatted in accordance with a formal notation, for example, "2000-12-31" as the ISO standard expression of a date. If an encoding scheme is not understood by an application, the value may still be useful to a human reader.
Audience, Provenance and RightsHolder are elements, but not part of the Simple Dublin Core 15 elements. Use Audience, Provenance and RightsHolder only when using Qualified Dublin Core. DCMI also maintains a small, general vocabulary recommended for use within the element Type. This vocabulary currently consists of 12 terms.
DCMI Metadata Terms
The DCMI Metadata Terms lists the current set of the Dublin Core vocabulary. This set includes the fifteen terms of the DCMES (in italic), as well as the qualified terms. Each term has a unique URI in the namespace http://purl.org/dc/terms, and all are defined as RDF properties.
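Since every term is identified by a URI in one of the two namespaces mentioned above, the distinction between the legacy elements and the current terms can be made concrete with a small, hedged Python sketch; the namespace strings and element names come from this article, while the function name is purely illustrative.

```python
# Illustrative only: building full term URIs for the legacy DCMES elements
# (the /elements/1.1/ namespace) and for the current DCMI Metadata Terms
# (the /terms/ namespace), as described above.
DCMES_11 = "http://purl.org/dc/elements/1.1/"
DCTERMS = "http://purl.org/dc/terms/"

ELEMENTS = [
    "contributor", "coverage", "creator", "date", "description",
    "format", "identifier", "language", "publisher", "relation",
    "rights", "source", "subject", "title", "type",
]

def term_uri(name: str, legacy: bool = False) -> str:
    """Return the URI of a Dublin Core term in the chosen namespace."""
    return (DCMES_11 if legacy else DCTERMS) + name

print(term_uri("title"))               # http://purl.org/dc/terms/title
print(term_uri("title", legacy=True))  # http://purl.org/dc/elements/1.1/title
```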
Syntax:
Syntax choices for metadata expressed with the Dublin Core elements depend on context. Dublin Core concepts and semantics are designed to be syntax independent and apply to a variety of contexts, as long as the metadata is in a form suitable for interpretation by both machines and people.
The Dublin Core Abstract Model provides a reference model against which particular Dublin Core encoding guidelines can be compared, independent of any particular encoding syntax. Such a reference model helps implementers get a better understanding of the kinds of descriptions they are trying to encode and facilitates the development of better mappings and translations between different syntaxes.
Notable applications:
One Document Type Definition based on Dublin Core is the Open Source Metadata Framework (OMF) specification. OMF is in turn used by Rarian (superseding ScrollKeeper), which is used by the GNOME desktop and KDE help browsers and the ScrollServer documentation server.
PBCore is also based on Dublin Core. The Zope CMF's Metadata products, used by the Plone, ERP5, and Nuxeo CPS content management systems, as well as SimpleDL and Fedora Commons, also implement Dublin Core. The EPUB e-book format uses Dublin Core metadata in the OPF file. The Australian Government Locator Service (AGLS) metadata standard is an application profile of Dublin Core.
**Interlock protocol**
Interlock protocol:
In cryptography, the interlock protocol, as described by Ron Rivest and Adi Shamir, is a protocol designed to frustrate eavesdropper attacks against two parties that use an anonymous key exchange protocol to secure their conversation. A further paper proposed using it as an authentication protocol, which was subsequently broken.
Brief history:
Most cryptographic protocols rely on the prior establishment of secret or public keys or passwords. However, the Diffie–Hellman key exchange protocol introduced the concept of two parties establishing a secure channel (that is, with at least some desirable security properties) without any such prior agreement. Unauthenticated Diffie–Hellman, as an anonymous key agreement protocol, has long been known to be subject to man in the middle attack. However, the dream of a "zipless" mutually authenticated secure channel remained.
The Interlock Protocol was described as a method to expose a middle-man who might try to compromise two parties that use anonymous key agreement to secure their conversation.
How it works:
The Interlock protocol works roughly as follows: Alice encrypts her message with Bob's key, then sends half her encrypted message to Bob.
Bob encrypts his message with Alice's key and sends half of his encrypted message to Alice.
Alice then sends the other half of her message to Bob, who sends the other half of his. The strength of the protocol lies in the fact that half of an encrypted message cannot be decrypted. Thus, if Mallory begins her attack and intercepts Bob's and Alice's keys, Mallory will be unable to decrypt Alice's half-message (encrypted using her key) and re-encrypt it using Bob's key. She must wait until both halves of the message have been received to read it, and can only succeed in duping one of the parties if she composes a completely new message.
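The message flow can be sketched in Python. The sketch below is hedged: it assumes the third-party `cryptography` package purely as a stand-in block cipher, uses a single shared key (such as the Diffie–Hellman key discussed below) in place of separate "Bob's key"/"Alice's key", and splits each cipher block in half, which is one way of ensuring that neither half can be decrypted on its own; key exchange, authentication, and framing are omitted.

```python
# Minimal sketch of the interlock exchange, not a hardened implementation.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def encrypt(key: bytes, message: bytes) -> bytes:
    iv = os.urandom(BLOCK)
    padder = padding.PKCS7(BLOCK * 8).padder()
    padded = padder.update(message) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def halves(ciphertext: bytes) -> tuple:
    # Send the first half of every cipher block first: with only these bytes,
    # no block can be decrypted, even by someone who holds the key.
    blocks = [ciphertext[i:i + BLOCK] for i in range(0, len(ciphertext), BLOCK)]
    first = b"".join(b[:BLOCK // 2] for b in blocks)
    second = b"".join(b[BLOCK // 2:] for b in blocks)
    return first, second

def join(first: bytes, second: bytes) -> bytes:
    half = BLOCK // 2
    out = bytearray()
    for i in range(0, len(first), half):
        out += first[i:i + half] + second[i:i + half]
    return bytes(out)

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    iv, body = ciphertext[:BLOCK], ciphertext[BLOCK:]
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(body) + dec.finalize()
    unpadder = padding.PKCS7(BLOCK * 8).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

# The four messages of the interlock exchange:
key_ab = os.urandom(32)                                 # from the anonymous key agreement
a1, a2 = halves(encrypt(key_ab, b"Alice's message"))    # 1. Alice -> Bob: first half
b1, b2 = halves(encrypt(key_ab, b"Bob's message"))      # 2. Bob -> Alice: first half
# 3. Alice -> Bob: second half; 4. Bob -> Alice: second half
print(decrypt(key_ab, join(a1, a2)))   # Bob can read only after step 3
print(decrypt(key_ab, join(b1, b2)))   # Alice can read only after step 4
```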
The Bellovin/Merritt Attack:
Davies and Price proposed the use of the Interlock Protocol for authentication in a book titled Security for Computer Networks. But an attack on this was described by Steven M. Bellovin & Michael Merritt. A subsequent refinement was proposed by Ellison.
The Bellovin/Merritt attack entails composing a fake message to send to the first party. Passwords may be sent using the Interlock Protocol between A and B as follows:
A                          B
Ea,b(Pa)<1> ------->
            <------- Ea,b(Pb)<1>
Ea,b(Pa)<2> ------->
            <------- Ea,b(Pb)<2>
where Ea,b(M) is message M encrypted with the key derived from the Diffie–Hellman exchange between A and B, <1>/<2> denote the first and second halves, and Pa/Pb are the passwords of A and B.
An attacker, Z, could send half of a bogus message, P?, to elicit Pa from A:
A                Z                B
Ea,z(Pa)<1> ---->
            <---- Ea,z(P?)<1>
Ea,z(Pa)<2> ---->
                 Ez,b(Pa)<1> ---->
                             <---- Ez,b(Pb)<1>
                 Ez,b(Pa)<2> ---->
                             <---- Ez,b(Pb)<2>
At this point, Z has compromised both Pa and Pb. The attack can be defeated by verifying the passwords in parts, so that when Ea,z(P?)<1> is sent, it is known to be invalid and Ea,z(Pa)<2> is never sent (suggested by Davies). However, this does not work when the passwords are hashed, since half of a hash is useless, according to Bellovin. There are also several other proposed methods, including using a shared secret in addition to the password. The forced-latency enhancement can also prevent certain attacks.
Forced-Latency Interlock Protocol:
A modified Interlock Protocol can require B (the server) to delay all responses for a known duration:
A                B
Ka ------------->
   <------------- Kb
Ea,b(Ma)<1> ---->
   <---- Ea,b(Mb)<1>   (B delays its response a fixed time, T)
Ea,b(Ma)<2> ---->
   <---- Ea,b(Mb)<2>   (delay again)
   <---------- data
where "data" is the encrypted data that immediately follows the Interlock Protocol exchange (it could be anything), encoded using an all-or-nothing transform to prevent in-transit modification of the message. Ma<1> could contain an encrypted request and a copy of Ka. Ma<2> could contain the decryption key for Ma<1>. Mb<1> could contain an encrypted copy of Kb, and Mb<2> could contain the decryption key for Mb<1> and the response, such as OK or NOT FOUND, and the hash digest of the data.
A man-in-the-middle (MITM) attack can be attempted using the attack described in the Bellovin paper (Z being the man-in-the-middle):
A                Z                B
Ka ------------->Kz ------------->
   <------------- Kz <----------- Kb
Ea,z(Ma)<1> ---->
   <---- Ea,z(Mz)<1>   (delayed response)
Ea,z(Ma)<2> ---->
                 Ez,b(Ma)<1> ---->
                             <---- Ez,b(Mb)<1>   (delayed response)
   <---- Ea,z(Mz)<2>
                 Ez,b(Ma)<2> ---->
                             <---- Ez,b(Mb)<2>   (delayed response)
                             <---------- data
   <---------- data
In this case, A receives the data after approximately 3*T, since Z has to perform the interlocking exchange with B. Hence, the attempted MITM attack can be detected and the session aborted.
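The timing check that lets A notice the extra relay delay can be sketched as follows; this is an illustrative fragment, with the delay value, slack bound, and function names chosen for the example rather than taken from the protocol description.

```python
import time

SERVER_DELAY_T = 1.0   # the fixed delay B applies to each response (seconds), illustrative
NETWORK_SLACK = 0.2    # assumed upper bound on extra network latency, illustrative

def step_arrived_on_time(send_and_wait) -> bool:
    """Perform one delayed request/response step and check its round-trip time.

    A man-in-the-middle who must first run his own interlocked exchange with B
    adds roughly another T per step, so a reply later than T + slack is treated
    as suspicious and the session can be aborted.
    """
    start = time.monotonic()
    send_and_wait()
    elapsed = time.monotonic() - start
    return elapsed <= SERVER_DELAY_T + NETWORK_SLACK

# A direct exchange passes; a relayed exchange (~2T for this step) fails.
print(step_arrived_on_time(lambda: time.sleep(SERVER_DELAY_T)))       # True
print(step_arrived_on_time(lambda: time.sleep(2 * SERVER_DELAY_T)))   # False
```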
Of course, Z could choose not to perform the Interlock Protocol with B (opting instead to send his own Mb), but then the session would be between A and Z, not A, Z, and B: Z wouldn't be in the middle. For this reason, the interlock protocol cannot be effectively used to provide authentication, although it can ensure that no third party can modify the messages in transit without detection.
**Dual systems model**
Dual systems model:
The dual systems model, also known as the maturational imbalance model, is a theory arising from developmental cognitive neuroscience which posits that increased risk-taking during adolescence is a result of a combination of heightened reward sensitivity and immature impulse control. In other words, the appreciation for the benefits arising from the success of an endeavor is heightened, but the appreciation of the risks of failure lags behind.
The dual systems model hypothesizes that early maturation of the socioemotional system (including brain regions like the striatum) increases adolescents' attraction to exciting, pleasurable, and novel activities during a time when cognitive control systems (including brain regions like the prefrontal cortex) are not fully developed and thus cannot regulate these appetitive, and potentially hazardous, impulses. The temporal gap in the development of the socioemotional and cognitive control systems creates a period of heightened vulnerability to risk-taking during mid-adolescence. In the dual systems model, "reward sensitivity" and "cognitive control" refer to neurobiological constructs that are measured in studies of brain structure and function. Other models similar to the dual systems model are the maturational imbalance model, the driven dual systems model, and the triadic model.
The dual systems model is not free from controversy, however. It is highly contested within developmental psychology and neuroscience when the prefrontal cortex can be said to be fully or efficiently developed. Most longitudinal evidence suggests that myelination of gray matter in the frontal lobe is a very long process that may continue well into middle age or later, while major facets of the brain, including the parts responsible for response inhibition and impulse control, are recorded to reach mature levels in one's mid-teens, suggesting that many later age markers may ultimately be arbitrary.
Historical perspective:
The dual systems model arose out of evidence from developmental cognitive neuroscience providing insight into how patterns of brain development could explain aspects of adolescent decision-making. In 2008, Laurence Steinberg's laboratory at Temple University and BJ Casey's laboratory at Cornell separately proposed similar dual systems theories of adolescent risky decision-making. Casey et al. termed their model the maturational imbalance model.
The majority of evidence for the dual systems model comes from fMRI. However, in 2020 the model gained support from a study looking at brain tissue structural measures. Volumetric analysis and mechanical property measures from magnetic resonance elastography showed that individual differences in tissue microstructural development correlated with adolescent risk taking, such that individuals whose risk-taking centers were more structurally developed relative to their cognitive control centers were more likely to take risks.
Models:
Maturational imbalance model
Both the dual systems model and the maturational imbalance model conceive of a slower developing cognitive control system that matures through late adolescence. The dual systems model proposes an inverted-U shape development of the socioemotional system, such that reward responsivity increases in early adolescence and declines thereafter. The maturational imbalance model portrays a socioemotional system that reaches its peak around mid-adolescence and then plateaus into adulthood. Further, the dual systems model proposes that the development of the cognitive control and socioemotional systems is independent, whereas the maturational imbalance model proposes that maturation of the cognitive control system leads to dampening of socioemotional responsivity.
Driven dual systems model
Recently, another variation of the dual systems model was proposed, called the "driven dual systems model". This model proposes an inverted-U shaped trajectory of socioemotional system responsivity, similar to the dual systems model, but hypothesizes a cognitive control trajectory that plateaus in mid-adolescence. This cognitive control trajectory differs from that proposed by the dual systems model and the maturational imbalance model, which continues to increase into the early 20s. Similar to the driven dual systems model, a model has been proposed that includes a hyperactive socioemotional system that undermines the regulatory ability of the cognitive control system. These later models hypothesize that cognitive control development is completed by mid-adolescence and attribute increased risk-taking during adolescence to hyperarousal of the socioemotional system. The dual systems model and maturational imbalance model propose that cognitive control development continues into early adulthood and that increased risk-taking in adolescence is attributable to a developmental imbalance in which the socioemotional system is at its peak of development while the cognitive control system's developmental trajectory lags behind.
Triadic model
The "triadic model" includes a third brain system responsible for emotion processing, primarily implicating the amygdala. The triadic model proposes that this emotion system increases impulsivity during adolescence by increasing the perceived cost of delaying decision-making. This model posits that impulsivity and risk seeking in adolescence are due to a combination of hyperactive reward systems causing adolescents to approach appetitive stimuli, emotion processing systems causing adolescents to enhance perceived costs of delaying behaviors and reduce avoidance of potentially negative stimuli, and an underdeveloped cognitive control system that is unable to regulate reward-seeking behaviors.
Adolescent risk-taking:
Risk-taking in certain, but not all, domains peaks during adolescence. Most notably, mortality and morbidity rates increase significantly from childhood to adolescence despite the fact that physical and mental capabilities increase during this period. The primary cause of this increase in mortality/morbidity among adolescents is preventable injury. According to the Centers for Disease Control and Prevention, in 2014 about 40% of all adolescent deaths (ages 15–19 years) were caused by unintentional accidents. From 1999 to 2006, almost one-half of all adolescent deaths (ages 12–19 years) were due to accidental injury. Of these unintentional injuries, about two-thirds were due to motor vehicle accidents, followed by unintentional poisoning, unintentional drowning, other land transportation accidents, and unintentional discharge of firearms.
The dual systems model proposes that mid-adolescence is the time of highest biological propensity for risk-taking, but that older adolescents may exhibit higher levels of real-world risk-taking (e.g., binge drinking is most common during the early 20s) not due to greater propensity for risk-taking but due to greater opportunity. For example, individuals in their early 20s, compared to mid-adolescents, have less adult supervision, greater financial resources, and greater legal privileges. The dual systems model looks to experimental paradigms in developmental neuroscience for evidence of this greater biological propensity for risk-taking.
There is also a consistent relation between age and crime, with adolescents and young adults being more likely to engage in violent and non-violent crime. These findings are linked to increases during adolescence in sensation-seeking, the tendency to seek out novel, exciting, and rewarding stimuli, and to the continued development of impulse control, the ability to regulate one's behavior. The dual systems model points to brain development as a mechanism for this association.
Reward seeking
Across many species including humans, rodents, and nonhuman primates, adolescents demonstrate peaks in reward-seeking behaviors. For example, adolescent rats are more sensitive than adult rats to rewarding stimuli and show enhanced behavioral responses to novelty and peers. Adolescent humans show peaks in self-reported sensation-seeking, increased neural activation to monetary and social rewards, greater temporal discounting of delayed rewards, and heightened preferences for primary rewards (e.g., sweet substances).
Sensation-seeking is a type of reward seeking involving the tendency to seek out novel, exciting, and rewarding stimuli. Sensation-seeking has been found to increase in preadolescence, peak in mid-adolescence, and decline in early adulthood.
Impulsivity
Impulsivity has been found to exhibit a different developmental trajectory than reward or sensation seeking. Impulsivity gradually declines with age in a linear fashion. Mid-adolescence, when sensation-seeking is at its peak and impulsivity remains relatively high, is the theoretical peak age for risk-taking according to the dual systems model.
Social influence
Adolescent risk-taking is more likely to occur in the presence of peers than adult risk-taking is. Animal studies have found that adolescent mice, but not adult mice, consume more alcohol in the presence of peers than when alone. In humans, the presence of peers has been found to result in increased activation in the striatum and orbitofrontal cortex during risk-taking, and activation in these regions predicted subsequent risk-taking among adolescents but not adults. Age differences in activation of the striatum and frontal cortex have been interpreted to suggest that heightened risk-taking in the presence of peers is due to the influence of peers on reward processing rather than on cognitive control.
Socioemotional system:
The term socioemotional brain network or system (also known as the ventral affective system) refers to the striatum as well as the medial and orbital prefrontal cortices.
Dopamine
Evidence from rodent studies indicates that the dopaminergic system, the pathway connecting the ventral tegmental area to the nucleus accumbens and olfactory tubercle, plays a critical role in the brain's reward circuitry, and the dopamine-rich striatum has been implicated as a key contributor to reward sensitivity in the brain.
During puberty, the dopaminergic system undergoes significant reorganization. Increased dopamine projections from mesolimbic areas (e.g., the striatum) to the prefrontal cortex have been observed during mid- and late-adolescence. These projections are pruned and decline in early adulthood. Adolescent-specific peaks in dopamine receptors in the striatum have been observed in humans and rodents. Additionally, dopamine concentrations projecting to the prefrontal cortex increase into adolescence, as do the dopamine projections from the prefrontal cortex to the striatum (namely the nucleus accumbens).
Hyper- versus hypo-sensitivity to reward
The striatum has been linked to reward processing, learning, and motivation.
Hyperactivity
Neuroimaging studies using functional magnetic resonance imaging (fMRI) have shown that the ventral striatum is more active among adolescents compared to children and adults when receiving monetary rewards, primary rewards, and social rewards. Peaks in striatal activity are associated with increased self-reported risk-taking.
Hypoactivity
Some studies have found that striatal activity in adolescents is blunted compared to that of children and adults when anticipating rewards, which has been linked to greater risk-taking behaviors. The theory linking this hypoactivation to greater risk-taking is that adolescents derive less gratification from anticipating rewards and are therefore motivated to seek out more reward-inducing experiences to achieve the same level of reward sensation as other age groups.
Current consensus
Although evidence exists for both adolescent hyper-responsiveness and hypo-responsiveness to rewards, the field of developmental neuroscience has generally converged on the view of hyper-responsiveness: that is, adolescents are motivated, in part, to engage in greater reward-seeking behaviors because of developmental changes in the striatum that contribute to hypersensitivity to reward.
Cognitive control system:
The cognitive control system refers to the lateral prefrontal, lateral parietal, and anterior cingulate cortices. The most commonly investigated region is the prefrontal cortex, which undergoes substantial development throughout adolescence. The development of the prefrontal cortex has been implicated in the ability to regulate behavior and engage in inhibitory control. As a result of synaptic pruning and myelination of the prefrontal cortex, improvements in executive functions have been observed during adolescence.
Synaptic pruning
During development, the brain undergoes overproduction of neurons and their synaptic connections and then prunes those that are unnecessary for optimal functioning. This developmental process results in grey matter reduction over development. During adolescence, this pruning process is specialized, with some areas losing approximately half of their synaptic connections but others showing little change. Total grey matter volume undergoes substantial pruning starting around puberty. The process of grey matter loss (i.e., maturation) occurs differentially in different brain regions, with the frontal and occipital poles losing grey matter early but the prefrontal cortex losing grey matter only at the end of adolescence.
Myelination
In addition to synaptic pruning, the brain undergoes myelination, which influences the speed of information flow across brain regions. Myelination involves neuronal axons connecting certain brain areas becoming insulated with a white, fatty substance called myelin that increases the speed and efficiency of transmission along axons. Myelination increases dramatically during adolescence and contributes to the developmental thinning or reduction in grey matter in the prefrontal cortex during adolescence.
Links to inhibitory control
The dual systems model's claim of delayed maturation of the cognitive control system is supported by evidence of structural changes, such as cortical thinning, as well as less diffuse activation of frontal regions during inhibitory control tasks from adolescence to adulthood. Regardless of age, increased activation of the prefrontal cortex is related to better performance on response inhibition tasks.
Experimental paradigms:
Reward tasks
Three primary experimental paradigms are used to study reward behavior in adolescents: (1) passive receipt of reward, (2) reward conditional on task performance, and (3) decision-making among different types of reward options.
Passive exposure tasks
Passive exposure tasks generally involve exposing the participant to pleasant stimuli (e.g., monetary reward, attractive faces). These paradigms also involve exposure to negative stimuli for the purposes of comparison (e.g., monetary loss, angry faces). Although these tasks are more commonly used to investigate emotion processing rather than reward, some studies have used a slot-machine passive task to target reward circuitry in the brain. Faces have also been used as rewards in motivational paradigms. Passive exposure tasks have been found to activate the striatum and orbitofrontal cortex, with striatal activation greater in adolescents in response to rewarding stimuli but orbitofrontal activation greater in adults in response to negative stimuli.
Performance contingent tasks
Reward tied to task performance typically involves participants being asked to complete a task in order to obtain a reward (and sometimes to avoid losing a reward). Task performance is not necessarily directly related to the reward. Examples of this type of task are the Pirate's paradigm, the monetary incentive delay (MID) task, the Iowa Gambling Task, the Balloon Analogue Risk Task (BART), and the Columbia Card Task, among others. Differences in activation during anticipation of reward versus preparation to try to achieve reward have been reported on performance-related reward tasks.
Decision-making tasks
Reward decision-making tasks involve participants being asked to choose among different reward options. Sometimes the rewards differ in probability, magnitude, or type (e.g., social versus monetary). These tasks are typically designed not to have a correct or incorrect response, but rather to elicit decision-making based on the participants' preferences. Examples of decision-making tasks include delay discounting tasks and the Driving Game. During feedback on decision-making tasks, greater striatal activation to rewarding outcomes has been observed in adolescents compared to adults.
Response inhibition tasks
Common response inhibition tasks are the Go/No-Go, Flanker, Stroop, Stop Signal, and anti-saccade tasks. Individuals who perform well on these tasks generally activate the prefrontal cortex to a greater extent than individuals who perform poorly. Performance on these tasks improves with age.
Go/No-Go task
The Go/No-Go task requires participants to respond, usually by pressing a button or a key on a computer keyboard, to a designated cue, or to withhold a response, by not pressing the button/key, to a different designated cue. Variants of this task use alphabet letters, shapes, or faces.
Flanker task
The Flanker task typically involves presentation of a target flanked by non-target stimuli that are either in the same direction as the target (congruent), in the opposite direction (incongruent), or in neither direction (neutral). Participants have to respond to the direction of the target while ignoring the non-target stimuli.
Stroop tasks
Stroop tasks require participants to respond to one facet of the presented stimuli (e.g., read the word) while ignoring another competing facet (e.g., a contradictory ink color).
Stop signal task
The Stop Signal task is similar to the Go/No-Go task in that participants see a cue indicating a go trial. On stop trials, participants see the go cue but are then presented with a stop signal (typically a sound) indicating that they should not respond. Presenting the stop signal after the go cue makes this task more difficult than traditional Go/No-Go tasks.
Anti-saccade task
Anti-saccade tasks typically require participants to fixate on a motionless target. A stimulus is then presented on one side of the target, and the participant is asked to make a saccade (either move their eyes or respond with a button press) in the direction away from the stimulus.
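As a rough illustration of the trial structure shared by these response inhibition tasks, here is a toy, hedged Python sketch of a Go/No-Go-style block; the cue letters, go/no-go ratio, and scoring rule are illustrative choices, not taken from any particular study.

```python
import random

GO_CUE, NOGO_CUE = "X", "K"   # illustrative cue letters
N_TRIALS = 20

def run_gonogo_block(respond) -> float:
    """Run a toy Go/No-Go block; `respond(cue)` returns True for a button press."""
    correct = 0
    for _ in range(N_TRIALS):
        # Go trials are usually more frequent, making the press response prepotent.
        cue = random.choice([GO_CUE, GO_CUE, GO_CUE, NOGO_CUE])
        pressed = respond(cue)
        # Correct = pressing on a go cue, withholding the press on a no-go cue.
        correct += pressed == (cue == GO_CUE)
    return correct / N_TRIALS

# A "participant" who always presses fails to inhibit on no-go trials.
print(run_gonogo_block(lambda cue: True))
# A participant who withholds on the no-go cue scores perfectly.
print(run_gonogo_block(lambda cue: cue == GO_CUE))
```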
Legal relevance:
Adolescent developmental immaturity and culpability were central to three US Supreme Court cases: Roper v. Simmons, Graham v. Florida, and Miller v. Alabama. Prior to Roper in 2005, the Supreme Court had relied on common sense standards to determine adolescent culpability. For example, in Thompson v. Oklahoma, the Court prohibited capital punishment for individuals under the age of 16 stating that "Contemporary standards of decency confirm our judgment that such a young person is not capable of acting with the degree of culpability that can justify the ultimate penalty." In Roper, however, the Court looked to developmental science as rationale for abolishing capital punishment for juveniles. In 2010, the Court ruled life without parole was unconstitutional for juveniles in Graham and in 2012 the Court ruled that States could not mandate life without parole for juveniles even in the case of homicide in Miller. In Miller, the Court stated "It is increasingly clear that adolescent brains are not yet fully mature in regions and systems related to higher-order executive functions such as impulse control, planning ahead, and risk avoidance."
Criticism:
Lack of empirical evidence
Most criticism of the dual systems model centers on one recurring issue: the lack of actual evidence proving a causal relation between youth misbehavior and a dysfunctional brain. Despite countless studies of maturation of the adolescent brain, there has never been a notable study that confirms that cognitive control is immature. In fact, according to most available research, cognitive control likely plateaus in the mid-teens. A 1995 study by Linda S. Siegel of the Ontario Institute for Studies in Education found that "working memory peaks at fifteen or sixteen". This finding was reinforced in a 2015 study on peaks in cognitive functioning of the brain. Additionally, a 2004 study indicated that "response inhibition" and "processing speed" reached adult levels by the ages of fourteen and fifteen, respectively. Inhibitory control is defined as the capacity to voluntarily inhibit or regulate prepotent attentional or behavioral responses; it involves the ability to focus on relevant stimuli in the presence of irrelevant stimuli and to override strong but inappropriate behavioral tendencies. Knowing when this faculty reaches maturity could inform discussion on the matter.
Prefrontal cortex pruning is also recorded to level off by age 15, though it has been seen to continue as late as the sixth decade of life. White matter is recorded to increase up until around the age of 45, after which it is lost through progressive aging. If myelination continues into one's forties and fifties, this could cast serious doubt on the commonly cited claim that myelination is only complete in the twenties.
There is also a lack of evidence indicating that the limbic system is mature (and sensation seeking at its peak) while the executive functioning of the brain remains immature. In one longitudinal study, individual differences in working memory predicted subsequent levels of sensation seeking even after controlling for age, suggesting that sensation-based risk taking rises in concert with executive function.
Misinterpretation of data
No-Go Task
Researchers have also been accused of misrepresenting the data gathered from their studies. In one example, a commonly cited study referenced for the immaturity of the brain in adolescence is a 2004 study involving a no-go task comparing teens and adults. Adolescents aged 12 to 17 were measured along with adults aged 22 to 27 with an MRI device while performing a task involving earning money. They were then told to press a button after a short period. Some symbols indicated that pressing the button would result in more money, while failing to respond would result in less. Areas of the brain were monitored during the session, and both groups seemed to perform well on the task. However, brain activity differed in one area specifically during the high-payment trials: the average activity of neurons in the right nucleus accumbens, but not in the other areas that were monitored. The researchers drew a modest conclusion from this study, indicating that there were qualitative similarities in the processing abilities of adolescents and adults. However, it was reported instead that the study found a "biological reason for teen laziness", despite the study seeming to neither confirm nor deny that statement. This has led to some criticism of how these studies and their results were interpreted, ranging from charges of baseless speculation to accusations of malicious intent levied at journalists and researchers.
Laurence Steinberg, BJ Casey, and others have asserted that 18-21 year olds are more comparable to young teens in terms of risk assessment and perform worse than adults 22 years of age and older. However, 22-25 year olds sampled in studies perform worse than 26-30 year olds in terms of cognitive function under emotional pressure. Older groups were not surveyed.
The Teenage Brain
In Frances Jensen's book "The Teenage Brain", Jensen claims that myelination of the brain's frontal lobes is not finished until well into one's twenties and provides a study in support of her claim. However, the study did not necessarily come to that conclusion. The study included a group of adolescents with a mean age of 13.8 and compared the average size of certain brain regions' gray matter in that group to the average size of the same regions' gray matter in an adult group with a mean age of 25.6. However, it did not show brain development in individuals, and each group contained only about 10 people. Brain size can also vary massively between different people of the same age. Furthermore, "gray matter" was measured via the overall size of some macrostructures, and a reduction in gray matter was claimed to mean an increase in white matter; the study may not have shown any activity relating to white matter at all. The study has been criticized for seemingly measuring brain sizes instead of development of the brain at continuous ages. The study also used 23-year-olds in the adult group, who have been considered by researchers to have somewhat immature brains. Given this conduct of the research, such claims may seem unreliable.
Psychosocial Maturity
Another major study, headed by Laurence Steinberg, comprised tests that focused on cognitive maturity and psychosocial maturity. These studies found that cold cognitive maturity reached adult levels at 16, whereas psychosocial (or "hot") cognitive maturity was reached around 25. Cold cognition relates more to the raw function of the brain and the ability to process information and operate competently. Hot cognition relates more to social or emotional maturity, or impulse control. However, some participants in the study had not reached adult levels of hot cognitive maturity by age 30 or later, whereas some were able to achieve hot cognitive maturity by age 14 or 15. This, coupled with the fact that the study never directly measured the brain, suggests that the social immaturity of adolescents and young adults could be influenced by culture or environment rather than by biological means.
Accusations of forwarding ideology
Some proponents of dual systems theory have been accused of forwarding certain agendas involving the expansion of universal education, including the idea that youth are biologically predisposed to immaturity, which is then corrected by their pursuing education. This accusation goes back to the early 20th century, when G. Stanley Hall theorized that adolescence was an inevitable and necessary stage of life. He advocated that the undergraduate be exempt from adult responsibilities and that students have the freedom to be lazy. His definition of adolescence had girls going through it from age twelve to twenty-one and males from age fourteen to twenty-five. This may be where the commonly cited myth of the male and female brain maturing at these ages originates. However, Hall's claims were not supported by any evidence. He proposed a new stage of life that would delay entry into the world of work and held that any attempt to restrict the time spent in school or college was "an attempt to return to more savage conditions". This lends some credence to the claim that the troubled teen industry is driven by a motivation to expand the education system.
Newer Theories:
Center for the Developing Adolescent
As of July 2022, adolescent brain researchers have taken a new direction in their research and have seemingly abandoned the 'imbalance/immaturity' theory in favor of the view that adolescent brains have specific advantages, such as being highly adaptable while also possessing adult-level cognitive maturity at a young age. This makes adolescence a special period of development, and although the window is ultimately placed between ages 10 and 25, it has been noted that brain development is not 'incomplete' before the end of this window, nor is it completely finished at its end. As seen above, and as seen in more up-to-date research, maturation of the brain goes on for much of adult life and does not necessarily have a 'completion' date.
**Anytime algorithm**
Anytime algorithm:
In computer science, an anytime algorithm is an algorithm that can return a valid solution to a problem even if it is interrupted before it ends. The algorithm is expected to find better and better solutions the longer it keeps running.
Most algorithms run to completion: they provide a single answer after performing some fixed amount of computation. In some cases, however, the user may wish to terminate the algorithm prior to completion. The amount of computation required may be substantial, for example, and computational resources might need to be reallocated. Most algorithms either run to completion or they provide no useful solution information. Anytime algorithms, however, are able to return a partial answer, whose quality depends on the amount of computation they were able to perform. The answer generated by anytime algorithms is an approximation of the correct answer.
Names:
An anytime algorithm may be also called an "interruptible algorithm". They are different from contract algorithms, which must declare a time in advance; in an anytime algorithm, a process can just announce that it is terminating.
Goals:
The goal of anytime algorithms is to give intelligent systems the ability to trade result quality against turnaround time. They are also supposed to be flexible in time and resources. They are important because artificial intelligence (AI) algorithms can take a long time to produce results, and an anytime algorithm is designed to deliver a usable result in a shorter amount of time. These algorithms are also intended to reflect that the system depends on, and is restricted by, its agents and how they work cooperatively. An example is the Newton–Raphson iteration applied to finding the square root of a number. Another example is a trajectory problem where you are aiming for a target: the object is moving through space while the algorithm runs, and even an approximate answer given early can significantly improve accuracy.
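To make the Newton–Raphson example concrete, the following is a minimal, hedged Python sketch of an interruptible square-root routine; the deadline-based stopping rule and the function name are illustrative, not a standard API.

```python
import time

def anytime_sqrt(x: float, deadline: float = float("inf"), max_iters: int = 50):
    """Newton–Raphson square root that can be stopped at any time.

    The generator yields its best estimate after every iteration, so a caller
    can stop consuming whenever its time budget runs out and still keep the
    latest (progressively better) approximation.
    """
    estimate = x if x > 1 else 1.0          # crude but safe initial guess
    for _ in range(max_iters):
        if time.monotonic() >= deadline:    # interrupted: stop refining
            break
        estimate = 0.5 * (estimate + x / estimate)   # one Newton step
        yield estimate                                # best answer so far

# Give the algorithm a one-millisecond budget and keep the last value produced.
budget = time.monotonic() + 0.001
best = None
for best in anytime_sqrt(2.0, deadline=budget):
    pass
print(best)   # close to 1.41421356..., even if interrupted early
```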
What makes anytime algorithms unique is their ability to return many possible outcomes for any given input. An anytime algorithm uses many well defined quality measures to monitor progress in problem solving and distributed computing resources. It keeps searching for the best possible answer with the amount of time that it is given. It may not run until completion and may improve the answer if it is allowed to run longer.
Anytime algorithms are often used for large decision-set problems, where an algorithm that must run to completion would generally not provide useful information until it finishes. While this may sound similar to dynamic programming, the difference is that an anytime algorithm is fine-tuned through random adjustments rather than sequential ones.
Anytime algorithms are designed so that they can be told to stop at any time and will return the best result found so far. This is why they are called interruptible algorithms. Certain anytime algorithms also maintain the last result, so that if they are given more time, they can continue from where they left off to obtain an even better result.
Decision trees:
When the decider has to act, there must be some ambiguity, as well as some idea of how to resolve that ambiguity. This idea must be translatable into a state-to-action diagram.
Performance profile:
The performance profile estimates the quality of the results based on the input and the amount of time that is allotted to the algorithm. The better the estimate, the sooner the result will be found. Some systems have a larger database that gives the probability that the output is the expected output. It is important to note that one algorithm can have several performance profiles. Most of the time, performance profiles are constructed using mathematical statistics on representative cases. For example, in the traveling salesman problem, the performance profile was generated using a user-defined special program to generate the necessary statistics. In this example, the performance profile is the mapping of time to the expected results. This quality can be measured in several ways:
certainty: where probability of correctness determines quality
accuracy: where error bound determines quality
specificity: where the amount of particulars determines quality
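As a hedged illustration of how such a profile might be tabulated empirically, the sketch below maps time budgets to an accuracy-style quality measure (one minus relative error) for an anytime square-root solver; the time grid, the quality measure, and all names are illustrative choices.

```python
import time

def performance_profile(run, true_value, budgets):
    """Map each time budget (seconds) to the quality achieved within it.

    `run(deadline)` returns the best estimate the anytime algorithm produced
    before the deadline; averaging over many representative inputs would give
    the statistical profile described above.
    """
    profile = {}
    for budget in budgets:
        estimate = run(time.monotonic() + budget)
        profile[budget] = 1.0 - abs(estimate - true_value) / true_value
    return profile

def run_sqrt2(deadline):
    best = 1.0
    while time.monotonic() < deadline:
        best = 0.5 * (best + 2.0 / best)   # Newton step toward sqrt(2)
    return best

print(performance_profile(run_sqrt2, 2.0 ** 0.5, [1e-5, 1e-4, 1e-3]))
```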
Algorithm prerequisites:
Initial behavior: While some algorithms start with immediate guesses, others take a more calculated approach and have a start-up period before making any guesses.
Growth direction: How the quality of the program's "output" (result) varies as a function of the amount of time ("run time").
Growth rate: The amount of increase with each step. Does it change constantly, as in a bubble sort, or does it change unpredictably?
End condition: The amount of runtime needed.
**Phospholysine phosphohistidine inorganic pyrophosphate phosphatase**
Phospholysine phosphohistidine inorganic pyrophosphate phosphatase:
Phospholysine phosphohistidine inorganic pyrophosphate phosphatase is a protein that in humans is encoded by the LHPP gene.
**Special-use domain name**
Special-use domain name:
A special-use domain name is a domain name that is defined and reserved in the hierarchy of the Domain Name System of the Internet for special purposes. The designation of a reserved special-use domain is authorized by the Internet Engineering Task Force (IETF) and executed, maintained, and published by the Internet Assigned Numbers Authority (IANA).
Reserved domain names:
The following list comprises the domain names listed by IANA in the category of special-use domain names.
**Ideal sheaf**
Ideal sheaf:
In algebraic geometry and other areas of mathematics, an ideal sheaf (or sheaf of ideals) is the global analogue of an ideal in a ring. The ideal sheaves on a geometric object are closely connected to its subspaces.
Definition:
Let X be a topological space and A a sheaf of rings on X. (In other words, (X, A) is a ringed space.) An ideal sheaf J in A is a subobject of A in the category of sheaves of A-modules, i.e., a subsheaf of A viewed as a sheaf of abelian groups such that
Γ(U, A) · Γ(U, J) ⊆ Γ(U, J)
for all open subsets U of X. In other words, J is a sheaf of A-submodules of A.
General properties:
If f: A → B is a homomorphism between two sheaves of rings on the same space X, the kernel of f is an ideal sheaf in A.
Conversely, for any ideal sheaf J in a sheaf of rings A, there is a natural structure of a sheaf of rings on the quotient sheaf A/J. Note that the canonical map
Γ(U, A)/Γ(U, J) → Γ(U, A/J)
for open subsets U is injective, but not surjective in general. (See sheaf cohomology.)
Algebraic geometry:
In the context of schemes, the importance of ideal sheaves lies mainly in the correspondence between closed subschemes and quasi-coherent ideal sheaves. Consider a scheme X and a quasi-coherent ideal sheaf J in OX. Then the support Z of OX/J is a closed subspace of X, and (Z, OX/J) is a scheme (both assertions can be checked locally). It is called the closed subscheme of X defined by J. Conversely, let i: Z → X be a closed immersion, i.e., a morphism which is a homeomorphism onto a closed subspace such that the associated map
i#: OX → i⋆OZ
is surjective on the stalks. Then the kernel J of i# is a quasi-coherent ideal sheaf, and i induces an isomorphism from Z onto the closed subscheme defined by J.
A particular case of this correspondence is the unique reduced subscheme Xred of X having the same underlying space, which is defined by the nilradical of OX (defined stalk-wise, or on open affine charts).
For a morphism f: X → Y and a closed subscheme Y′ ⊆ Y defined by an ideal sheaf J, the preimage Y′ ×Y X is defined by the ideal sheaf f⋆(J)OX = im(f⋆J → OX).
The pull-back of an ideal sheaf J to the subscheme Z defined by J contains important information; it is called the conormal bundle of Z. For example, the sheaf of Kähler differentials may be defined as the pull-back of the ideal sheaf defining the diagonal X → X × X to X. (Assume for simplicity that X is separated so that the diagonal is a closed immersion.)
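As a standard illustration of this correspondence (a sketch of the usual affine example, not drawn verbatim from this article), take a commutative ring A and an ideal I ⊆ A:

```latex
% Affine illustration of the correspondence between quasi-coherent ideal
% sheaves and closed subschemes (standard example, stated as a sketch).
\[
  X = \operatorname{Spec} A, \qquad
  \mathcal{J} = \widetilde{I} \subseteq \mathcal{O}_X
  \quad\Longrightarrow\quad
  Z = \operatorname{Spec}(A/I) \hookrightarrow X .
\]
% The quotient sheaf is the pushforward of the structure sheaf of Z,
\[
  \mathcal{O}_X/\mathcal{J} \;\cong\; i_{*}\mathcal{O}_Z ,
\]
% and the pull-back of J to Z (the conormal sheaf) is the sheaf associated
% to the A/I-module I/I^2:
\[
  i^{*}\mathcal{J} \;\cong\; \widetilde{I/I^{2}} .
\]
```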
Analytic geometry:
In the theory of complex-analytic spaces, the Oka-Cartan theorem states that a closed subset A of a complex space is analytic if and only if the ideal sheaf of functions vanishing on A is coherent. This ideal sheaf also gives A the structure of a reduced closed complex subspace.
**Punch down tool**
Punch down tool:
A punch down tool, punchdown tool, IDC tool, or a Krone tool (named after the Krone LSA-PLUS connector), is a small hand tool used by telecommunication and network technicians. It is used for inserting wire into insulation-displacement connectors on punch down blocks, patch panels, keystone modules, and surface mount boxes (also known as biscuit jacks).
Description and use:
Most punch down tools are of the impact type, consisting of a handle, an internal spring mechanism, and a removable slotted blade. To use the punch down tool, a wire is pre-positioned into a slotted post on a punch block, and then the punch down tool is pressed down on top of the wire, over the post. Once the required pressure is reached, an internal spring is triggered, and the blade pushes the wire into the slot, simultaneously cutting the insulation and securing the wire. The tool blade does not cut through the wire insulation to make contact, but rather the sharp edges of the slot in the contact post itself slice through the insulation. However, the punch down tool blade also is usually used to cut off excess wire, in the same operation as making the connection; this is done with a sharp edge of the punch down tool blade trapping the wire to be cut against the plastic punch block. If this cutoff feature is heavily used, the tool blade must be resharpened or replaced from time to time. Tool blades without the sharp edge are also available; these are used for continuing a wire through a slotted post to make connections with another slotted post ("daisy-chained" wiring).
For light-duty use, there are also less-expensive punch down tools with fixed blades and no impact mechanism. These low-cost tools are more time-consuming for making reliable connections, and can cause muscle fatigue when used for large numbers of connections.
To accommodate different connector types, 66, 110, BIX and krone blocks require different blades. Removable blades for 66 or 110 are almost always double-ended. Some blades have one end that only inserts the wire for daisy-chain wiring from post to post, and another end that inserts wire and trims the excess length for termination at a post. Other blades have a cutting 66 blade on one end and a cutting 110 blade on the other. Krone blades require a separate scissor-like mechanism for trimming the wire.
**Cyprus lunar sample displays**
Cyprus lunar sample displays:
The Cyprus lunar sample displays are part of two commemorative plaques consisting of tiny fragments of Moon specimens brought back with the Apollo 11 and Apollo 17 lunar missions. These plaques were given to the people of the Republic of Cyprus by United States President Richard Nixon as goodwill gifts.
Description:
Apollo 11
At the request of Nixon, NASA had about 250 presentation plaques made following Apollo 11 in 1969. Each included about four rice-sized particles of Moon dust from the mission, totaling about 50 mg. The Apollo 11 lunar sample display has an acrylic plastic button containing the Moon dust mounted with the recipient's country or state flag that had been to the Moon and back. All 135 countries received the display, as did the 50 states of the United States, the U.S. territories, and the United Nations. The plaques were given as gifts by Nixon in 1970.
Apollo 17
The sample Moon rock collected during the Apollo 17 mission was later named lunar basalt 70017, and dubbed the Goodwill rock. Pieces of the rock weighing about 1.14 grams were placed inside a piece of acrylic lucite and mounted along with a flag, flown on Apollo 17, of the country the display would be distributed to.
In 1973, Nixon had the plaques sent to 135 countries, and to the United States with its territories, as a goodwill gesture.
History:
The international mystery of how the Cyprus goodwill Moon rock came to be offered for sale on the black market begins with the coup of 1974, during which the Presidential Palace burned. The Cyprus "Moon rock" plaque from Apollo 17 was considered lost at that time. Subsequent information revealed that the display had never actually been given to the Cyprus government; rather, it was kept at the US embassy in Nicosia during the 1974 coup d'état and Turkish invasion, which delayed the presentation of the plaque. American diplomatic personnel then left the island and the display went missing, showing up on the black market years later in the hands of the son of a former US diplomat. NASA reported in May 2010 that its Office of Inspector General had recovered the Apollo 17 plaque and was preparing to present it again. According to Robert Pearlman, the whereabouts of the Cyprus Apollo 11 goodwill lunar display are unknown. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Industrial revolutions**
Industrial revolutions:
Various technological revolutions have been defined as successors of the original Industrial Revolution. The sequence includes: the first Industrial Revolution; the Second Industrial Revolution, also known as the Technological Revolution; the Third Industrial Revolution, better known as the Digital Revolution; and the Fourth Industrial Revolution. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glycoside hydrolase family 101**
Glycoside hydrolase family 101:
In molecular biology, glycoside hydrolase family 101 is a family of glycoside hydrolases.
Glycoside hydrolase family 101:
Glycoside hydrolases EC 3.2.1. are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. Glycoside hydrolase family GH101 includes enzymes with endo-α-N-acetylgalactosaminidase EC 3.2.1.97 activity and can be split into several subfamilies. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reperfusion injury**
Reperfusion injury:
Reperfusion injury, sometimes called ischemia-reperfusion injury (IRI) or reoxygenation injury, is the tissue damage caused when blood supply returns to tissue (re- + perfusion) after a period of ischemia or lack of oxygen (anoxia or hypoxia). The absence of oxygen and nutrients from blood during the ischemic period creates a condition in which the restoration of circulation results in inflammation and oxidative damage through the induction of oxidative stress rather than (or along with) restoration of normal function.
Reperfusion injury:
Reperfusion injury is distinct from cerebral hyperperfusion syndrome (sometimes called "Reperfusion syndrome"), a state of abnormal cerebral vasodilation.
Mechanisms:
Reperfusion of ischemic tissues is often associated with microvascular injury, particularly due to increased permeability of capillaries and arterioles, which leads to an increase in diffusion and fluid filtration across the tissues. Activated endothelial cells produce more reactive oxygen species but less nitric oxide following reperfusion, and the imbalance results in a subsequent inflammatory response.
Mechanisms:
The inflammatory response is partially responsible for the damage of reperfusion injury. White blood cells, carried to the area by the newly returning blood, release a host of inflammatory factors such as interleukins as well as free radicals in response to tissue damage. The restored blood flow reintroduces oxygen within cells, which damages cellular proteins, DNA, and the plasma membrane. Damage to the cell's membrane may in turn cause the release of more free radicals. Such reactive species may also act indirectly in redox signaling to turn on apoptosis. White blood cells may also bind to the endothelium of small capillaries, obstructing them and leading to more ischemia.
Reperfusion injury plays a major part in the biochemistry of hypoxic brain injury in stroke. Similar failure processes are involved in brain failure following reversal of cardiac arrest; control of these processes is the subject of ongoing research. Repeated bouts of ischemia and reperfusion injury are also thought to be a factor leading to the formation, and failure to heal, of chronic wounds such as pressure sores and diabetic foot ulcers. Continuous pressure limits blood supply and causes ischemia, and the inflammation occurs during reperfusion. As this process is repeated, it eventually damages tissue enough to cause a wound.
The main reason for the acute phase of ischemia-reperfusion injury is oxygen deprivation and, therefore, arrest of ATP generation (the cellular energy currency) by mitochondrial oxidative phosphorylation. Tissue damage due to the general energy deficit during ischemia is followed by reperfusion (an increase in oxygen level), when the injury is enhanced. Mitochondrial complex I is thought to be the enzyme most vulnerable to tissue ischemia/reperfusion, but the mechanism of damage differs between tissues. For example, brain ischemia/reperfusion injury is mediated via redox-dependent inactivation of complex I. It was found that lack of oxygen leads to conditions in which mitochondrial complex I loses its natural cofactor, flavin mononucleotide (FMN), and becomes inactive. When oxygen is present the enzyme catalyzes a physiological reaction of NADH oxidation by ubiquinone, supplying electrons downstream to the respiratory chain (complexes III and IV). Ischemia leads to a dramatic increase in succinate levels. In the presence of succinate, mitochondria catalyze reverse electron transfer so that a fraction of electrons from succinate is directed upstream to the FMN of complex I. Reverse electron transfer results in reduction of the complex I FMN, increased generation of ROS, followed by loss of the reduced cofactor (FMNH2) and impairment of mitochondrial energy production. The FMN loss by complex I, and I/R injury, can be alleviated by administration of the FMN precursor riboflavin. Reperfusion can also cause hyperkalemia. Reperfusion injury is a primary concern in liver transplantation surgery.
Treatment:
Therapeutic hypothermia However, the therapeutic effect of hypothermia is not confined to metabolism and membrane stability. Another school of thought focuses on hypothermia's ability to prevent the injuries that occur after circulation returns to the brain, termed reperfusion injuries. In fact, an individual suffering from an ischemic insult continues to suffer injuries well after circulation is restored. In rats it has been shown that neurons often die a full 24 hours after blood flow returns. Some theorize that this delayed reaction derives from the various inflammatory immune responses that occur during reperfusion. These inflammatory responses raise intracranial pressure, which leads to cell injury and, in some situations, cell death. Hypothermia has been shown to help moderate intracranial pressure and therefore to minimize the harmful effect of a patient's inflammatory immune responses during reperfusion. Beyond this, reperfusion also increases free radical production, and hypothermia has likewise been shown to minimize a patient's production of harmful free radicals during reperfusion. Many now suspect it is because hypothermia reduces both intracranial pressure and free radical production that it improves patient outcome following a blockage of blood flow to the brain.
Treatment:
Hydrogen sulfide treatment There are some preliminary studies in mice that seem to indicate that treatment with hydrogen sulfide (H2S) can have a protective effect against reperfusion injury.
Treatment:
Cyclosporin In addition to its well-known immunosuppressive capabilities, one-time administration of cyclosporin at the time of percutaneous coronary intervention (PCI) was found to deliver a 40 percent reduction in infarct size in a small proof-of-concept study of human patients with reperfusion injury, published in The New England Journal of Medicine in 2008. Cyclosporin has been confirmed in studies to inhibit the actions of cyclophilin D, a protein which is induced by excessive intracellular calcium flow to interact with other pore components and help open the MPT pore. Inhibiting cyclophilin D has been shown to prevent the opening of the MPT pore and protect the mitochondria and cellular energy production from excessive calcium inflows. However, the CIRCUS and CYCLE studies (published in September 2015 and February 2016 respectively) looked at the use of cyclosporin as a one-time IV dose given right before reperfusion therapy (PCI). Both studies found no statistical difference in outcome with cyclosporin administration. Reperfusion leads to biochemical imbalances within the cell that lead to cell death and increased infarct size. More specifically, calcium overload and excessive production of reactive oxygen species in the first few minutes after reperfusion set off a cascade of biochemical changes that result in the opening of the so-called mitochondrial permeability transition pore (MPT pore) in the mitochondrial membrane of cardiac cells. The opening of the MPT pore leads to an inrush of water into the mitochondria, resulting in mitochondrial dysfunction and collapse. Upon collapse, calcium is released to overwhelm the next mitochondrion in a cascading series of events that cause mitochondrial energy production supporting the cell to be reduced or stopped completely. The cessation of energy production results in cellular death. Protecting mitochondria is a viable cardioprotective strategy. In 2008, an editorial in The New England Journal of Medicine called for more studies to determine if cyclosporin could become a treatment to ameliorate reperfusion injury by protecting mitochondria. To that end, in 2011 the researchers involved in the original 2008 NEJM study initiated a phase III clinical study of reperfusion injury in 1000 myocardial infarction patients in centers throughout Europe. Results of that study were announced in 2015 and indicated that "intravenous cyclosporine did not result in better clinical outcomes than those with placebo and did not prevent adverse left ventricular remodeling at 1 year." The same process of mitochondrial destruction through the opening of the MPT pore is implicated in making traumatic brain injuries much worse.
Treatment:
TRO40303 TRO40303 is a new cardioprotective compound that was shown to inhibit the MPT pore and reduce infarct size after ischemia-reperfusion. It was developed by the company Trophos and is currently in a Phase I clinical trial.
Stem cell therapy Recent investigations suggest a possible beneficial effect of mesenchymal stem cells on heart and kidney reperfusion injury.
Superoxide dismutase Superoxide dismutase is an effective antioxidant enzyme that converts superoxide anions to oxygen and hydrogen peroxide. Recent research has shown significant therapeutic effects in pre-clinical models of reperfusion injury after ischemic stroke.
Metformin A series of 2009 studies published in the Journal of Cardiovascular Pharmacology suggest that metformin may prevent cardiac reperfusion injury in rats by inhibiting mitochondrial complex I and the opening of the MPT pore.
Riboflavin In a neonatal in vivo model of brain ischemia/reperfusion, tissue injury can be alleviated by administration of the FMN precursor riboflavin, which prevents inactivation of mitochondrial complex I.
Treatment:
Cannabinoids A study published in 2012 showed that a synthetic analogue of the phytocannabinoid tetrahydrocannabivarin (THCV), Δ8-tetrahydrocannabivarin (Δ8-THCV), and its metabolite 11-OH-Δ8-THCV prevent hepatic ischaemia/reperfusion injury by decreasing oxidative stress and inflammatory responses through cannabinoid CB2 receptors, thereby decreasing tissue injury and inflammation, with a protective effect against liver damage. Pretreatment with a CB2 receptor antagonist attenuated the protective effects of Δ8-THCV, while a CB1 antagonist tended to enhance them. An earlier study published in 2011 found that cannabidiol (CBD) also protects against hepatic ischemia/reperfusion injury by attenuating inflammatory signaling and the response of oxidative and nitrative stress, and thereby cell death and tissue injury, but independently of classical CB1 and CB2 receptors.
Reperfusion protection in obligate hibernators:
Obligatory hibernators such as the ground squirrels show resistance to ischemia/reperfusion (I/R) injury in liver, heart, and small intestine during the hibernation season when there is a switch from carbohydrate metabolism to lipid metabolism for cellular energy supply. This metabolic switch limits anaerobic metabolism and the formation of lactate, a herald of poor prognosis and multi-organ failure (MOF) after I/R injury. In addition, the increase in lipid metabolism generates ketone bodies and activates peroxisome proliferating-activated receptors (PPARs), both of which have been shown to be protective against I/R injury. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SWIM Protocol**
SWIM Protocol:
The Scalable Weakly Consistent Infection-style Process Group Membership (SWIM) Protocol is a group membership protocol based on "outsourced heartbeats" used in distributed systems, first introduced by Indranil Gupta in 2001. It is a hybrid algorithm which combines failure detection with group membership dissemination.
Protocol:
The protocol has two components, the Failure Detector Component and the Dissemination Component.
The Failure Detector Component functions as follows: every T' time units, each node (N1) sends a ping to a random other node (N2) in its membership list.
If N1 receives a response from N2, N2 is deemed healthy and N1 updates its "last heard from" timestamp for N2 to the current time.
Protocol:
If N1 does not receive a response, N1 contacts k other nodes on its list ({N3,...,N3+k}) and requests that they ping N2. If no successful response is received after T' units of time, N1 marks N2 as failed. The Dissemination Component functions as follows: upon N1 detecting a failed node N2, N1 sends a multicast message to the rest of the nodes in its membership list, with information about the failed node.
Protocol:
Voluntary requests for a node to enter/leave the group are also sent via multicast.
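The following is a minimal sketch of one protocol period of the failure-detector component described above; it is an illustration, not code from the SWIM paper, and the network helpers (`ping`, `pingReq`) and the `onFailure` callback are assumed placeholders for a real transport and for the dissemination component.

```typescript
// Hypothetical network helpers: each resolves true if an ack arrives within the timeout.
type Ping = (target: string, timeoutMs: number) => Promise<boolean>;
type PingReq = (relay: string, target: string, timeoutMs: number) => Promise<boolean>;

// One protocol period T' of the SWIM failure detector, run by the local node.
async function protocolPeriod(
  members: string[],                  // membership list (excluding the local node)
  k: number,                          // number of indirect-probe relays
  timeoutMs: number,                  // protocol period T'
  ping: Ping,
  pingReq: PingReq,
  onFailure: (node: string) => void   // hands the failed node to the dissemination component
): Promise<void> {
  if (members.length === 0) return;

  // Pick a random probe target N2.
  const target = members[Math.floor(Math.random() * members.length)];

  // Direct probe.
  if (await ping(target, timeoutMs)) return; // target is healthy

  // Indirect probe: ask k other random members to ping the target on our behalf.
  const relays = members
    .filter((m) => m !== target)
    .sort(() => Math.random() - 0.5)  // crude shuffle, good enough for a sketch
    .slice(0, k);
  const results = await Promise.all(relays.map((r) => pingReq(r, target, timeoutMs)));

  // If neither the direct nor any indirect probe succeeded, declare the target failed.
  if (!results.some(Boolean)) onFailure(target);
}
```

In the basic protocol the `onFailure` callback would multicast the failure to the rest of the membership list; with the infection-style extension described later, the update would instead be piggybacked on subsequent ping messages.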
Properties:
The protocol provides the following guarantees: Strong Completeness: full completeness is guaranteed (i.e. the crash-failure of any node in the group is eventually detected by all live nodes).
Detection Time: The expected value of the detection time (from node failure to detection) is $T' \cdot \frac{1}{1 - e^{-q_f}}$, where $T'$ is the length of the protocol period and $q_f$ is the fraction of non-faulty nodes in the group.
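As a worked illustration of this bound (the numbers here are chosen for illustration, not taken from the article): with 95% of the group non-faulty, the expected detection time is only modestly longer than one protocol period.

```latex
E[T_{\text{detect}}] \;=\; T' \cdot \frac{1}{1 - e^{-q_f}}
\;=\; T' \cdot \frac{1}{1 - e^{-0.95}}
\;\approx\; 1.63\, T'
```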
Extensions:
The original SWIM paper lists the following extensions to make the protocol more robust: Suspicion: Nodes that are unresponsive to ping messages are not initially marked as failed. Instead, they are marked as "suspicious", and nodes that discover a "suspicious" node still announce this to all other nodes via multicast. If a "suspicious" node responds to a ping before some time-out threshold, an "alive" message is sent via multicast to remove the "suspicious" label from the node.
Extensions:
Infection-Style Dissemination: Instead of propagating node failure information via multicast, protocol messages are piggybacked on the ping messages used to determine node liveness. This is equivalent to gossip dissemination.
Round-Robin Probe Target Selection: Instead of randomly picking a node to probe during each protocol time step, the protocol is modified so that each node performs a round-robin selection of probe target. This bounds the worst-case detection time of the protocol, without degrading the average detection time. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Esthesiometer**
Esthesiometer:
An esthesiometer (British spelling aesthesiometer) is a device for measuring the tactile sensitivity of the skin (or mouth, or eye, etc.). The measure of the degree of tactile sensitivity is called aesthesiometry. The device was invented by Edward Henry Sieveking. There are different types of aesthesiometers depending on their particular function.
Two-point discrimination:
The simplest is a manual tool with adjustable points similar to a caliper. It can determine how short a distance between two impressions on the skin can be distinguished. To differentiate between two points and one point of equal area (the sum of the areas of the two points equals the area of the third point), Dr. Sidney Weinstein created the three-point esthesiometer. A scale on the instrument gives readings in millimeter gradients.
Monofilaments:
Another type of manual aesthesiometer is used to test lower thresholds of touch or pain. The tool uses nylon monofilaments with varying calibrated diameters. The force needed to cause the monofilament to buckle determines the tactile reading. The filaments are calibrated by force applied, rather than by gram/mm2 pressure ratings, because sensation follows force (when the stimulated area is small).
Monofilaments:
Von Frey hair A von Frey hair is a type of aesthesiometer designed in 1896 by Maximilian von Frey. Von Frey filaments rely on the principle that an elastic column, in compression, will buckle elastically at a specific force, dependent on the length, diameter and modulus of the material. Once buckled, the force imparted by the column is fairly constant, irrespective of the degree of buckling. The filaments may therefore be used to provide a range of forces to the skin of a test subject, in order to find the force at which the subject reacts because the sensation is painful. This type of test is called a mechanical nociceptive threshold test.
Monofilaments:
The buckling force is inversely proportional to the length of the column (so the shorter the column, the higher the force required to buckle it) and proportional to the cube of the diameter, so that increasing the diameter of a filament by a small amount increases the buckling force considerably. Sets of filaments are normally made of nylon hairs, all the same length, but of various diameters so as to provide a range of forces, typically from 0.008 grams force up to 300 grams force.
Monofilaments:
Von Frey filaments are a diagnostic, research, and screening tool, used both in human and animal medicine. They are readily used to study skin areas with normal responsiveness, as well as hyper- or hyposensitive areas.
Monofilaments:
The determination of a mechanical threshold using von Frey filaments requires a number of discrete tests using filaments with different buckling forces. There are two commonly used algorithms, the up-down method and the percent response method. The up-down method is most commonly used, usually requiring a minimum of four tests after the first response is detected. The first measurement should be made with a filament with a buckling force close to the expected mean of the population. Errors may result if testing is commenced a long way above or below the mean. These errors have been evaluated by a combination of experimentation and mathematical simulation.
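A highly simplified sketch of the up-down (staircase) idea described above is given below. The filament set, the subject model (`responds`), the stopping rule, and the final threshold estimate are illustrative assumptions, not a clinical protocol; real analyses use tabulated formulas.

```typescript
// Ordered set of filament buckling forces in grams (an illustrative subset).
const filaments = [0.008, 0.02, 0.04, 0.07, 0.16, 0.4, 0.6, 1.0, 1.4, 2.0, 4.0, 6.0];

// `responds(force)` stands in for the subject's reaction to a single application.
function upDownThreshold(
  responds: (forceGrams: number) => boolean,
  startIndex: number,           // start near the expected population mean
  testsAfterFirstResponse = 4   // the text notes a minimum of four further tests
): number {
  let i = startIndex;
  const tested: { force: number; felt: boolean }[] = [];
  let remaining = Infinity;     // becomes finite once the first response is detected

  while (remaining > 0 && i >= 0 && i < filaments.length) {
    const felt = responds(filaments[i]);
    tested.push({ force: filaments[i], felt });
    if (remaining === Infinity) {
      if (felt) remaining = testsAfterFirstResponse; // start counting the extra tests
    } else {
      remaining--;
    }
    i += felt ? -1 : 1;         // step down after a response, up after no response
  }

  // Crude estimate: midpoint between the lowest force felt and the highest force not felt.
  const felt = tested.filter((t) => t.felt).map((t) => t.force);
  const notFelt = tested.filter((t) => !t.felt).map((t) => t.force);
  const lowestFelt = felt.length ? Math.min(...felt) : filaments[filaments.length - 1];
  const highestNotFelt = notFelt.length ? Math.max(...notFelt) : filaments[0];
  return (lowestFelt + highestNotFelt) / 2;
}
```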
Monofilaments:
Semmes-Weinstein Aesthesiometer The Semmes-Weinstein Aesthesiometer, and its variant the Weinstein Enhanced Sensory Test (WEST, e.g., WEST-hand), present nylon monofilaments of approximately the same length (38 mm) and of varying diameters. The diameter and length are used to control the force applied. Whereas Dr. Weinstein used 3-digit numbers to reflect the force of the Semmes-Weinsein Aesthesiometer (3 digit number equals the common log of the force measured in tenths of a milligram), the WEST esthesiometers (also created by Weinstein and group) use grams (e.g., 0.70 g) to describe the force.
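Taking the description above literally (the marking is the common logarithm of the force in tenths of a milligram), the two labeling schemes can be converted as in this small sketch; the function names and example values are illustrative, not a published filament table.

```typescript
// Convert a Semmes-Weinstein style marking to force in grams,
// assuming: marking = log10(force in tenths of a milligram).
function markingToGrams(marking: number): number {
  const tenthsOfMilligram = Math.pow(10, marking);
  return tenthsOfMilligram / 10000; // 10,000 tenths of a milligram per gram
}

// Inverse: a WEST-style gram value back to the logarithmic marking.
function gramsToMarking(grams: number): number {
  return Math.log10(grams * 10000);
}

// Illustrative checks:
// markingToGrams(3.0)  -> 0.1 g
// gramsToMarking(0.70) -> about 3.85
```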
Monofilaments:
For small-area stimulating instruments like WEST, force, rather than area, is the appropriate measure. This is because an approximately equal area of skin is indented for the heavy and light forces (see Weinstein et al., Evaluation of sensory methods in neuropathy, in Tendon and Nerve Surgery in the Hand—a Third Decade, by Hunter et al.).
Monofilaments:
The area of stimulation of the Semmes-Weinstein Aesthesiometer is not correctly described by the area of the stimulating nylon (the nylon twists on the skin, pushing a sharp edge into the skin). Therefore, the unit gram/mm2 is descriptive of the geometry but not the function. The WEST esthesiometer has a bulb for a contacting tip, so when the tip bends it presents the same contacting face.
Air-based devices:
A non-intrusive device called a corneal aesthesiometer is used to test cornea nerve sensitivity by using a controlled pulse of air as stimulation. The device gives readouts in millibars. Also, a thermal aesthesiometer is used to determine sensitivity of thermal stimuli.
Weinstein and group created an air-based corneal esthesiometer using gram-force (tens of micrograms force). They also created an air-based oral esthesiometer. For example, smokers' upper throats are much less sensitive than those of nonsmokers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Magnetic seizure therapy**
Magnetic seizure therapy:
Magnetic seizure therapy (MST) is a proposed form of electrotherapy and electrical brain stimulation. It is currently being investigated for the treatment of major depressive disorder, treatment-resistant depression (TRD), bipolar depression, schizophrenia and obsessive-compulsive disorder. MST is stated to work by inducing seizures via magnetic fields, in contrast to ECT, which does so using alternating electric currents. Additionally, MST works in a more concentrated fashion than ECT, and is thus able to induce a seizure with less total electric charge. In contrast to (r)TMS, the stimulation rates are higher (e.g. 100 Hz at 2 T), resulting in more energy transfer. Currently it is thought that MST works in patients with major depressive disorder by activating the connection between the subgenual anterior cingulate cortex and the parietal cortex.
Medical uses:
Magnetic seizure therapy is a new treatment modality that is being studied for the treatment of multiple psychiatric conditions, including major depressive disorder, treatment-resistant depression (TRD), bipolar depression, schizophrenia and obsessive-compulsive disorder.
Major depressive disorder and treatment-resistant depression MST is currently being studied as a potential treatment option versus ECT, based on the need for a procedure with a different safety and side effect profile. Current limitations to a more widespread implementation of MST for these diseases are the variable dosages, number of treatments, and efficacy versus other treatment modalities.
Procedure:
MST is performed with the use of a modified rTMS device that delivers a higher output. Similar to ECT, because MST induces seizures, general anesthesia is used to relax the muscles. However, because there is not an electric current that may stimulate the jaw muscles, a bite block is not necessary. Coils are placed over the frontal cortex (usually bilaterally) and the treatment dosage is usually determined via titration with a preset dosing schedule. The treatment dosage is determined once the seizure threshold has been met and a sufficient seizure is produced. Various coil designs have been tested, such as the figure 8 coil, double cone coil, and cap coil. The latter two are the ones that have been most reliable in seizure induction.
Mechanism of action:
The mechanism of action of MST is not yet clearly understood. One hypothesis focuses on the neuroplasticity of the affected areas of the brain, mostly including the hippocampus and amygdala. Further recent imaging with fMRI has shown an effect on the connection between the subgenual anterior cingulate cortex and the parietal cortex.
Adverse effects:
Adverse effects include disorientation, emergence of mania, and superficial burns due to coil malfunctions. While one study did note a decline in autobiographical memory after MST, many studies have noted neither anterograde nor retrograde memory loss, both of which are more commonly seen side effects of ECT. Other adverse effects include generalized seizures as well as side effects typically seen with general anesthesia. Hearing loss is a possible adverse effect from the clicking noise of the magnetic coils if earplugs are not used. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gerhard Weikum**
Gerhard Weikum:
Gerhard Weikum is a Research Director at the Max Planck Institute for Informatics in Saarbrücken, Germany, where he is leading the databases and information systems department. His current research interests include transactional and distributed systems, self-tuning database systems, data and text integration, and the automatic construction of knowledge bases. He is one of the creators of the YAGO knowledge base. He is also the Dean of the International Max Planck Research School for Computer Science (IMPRS-CS).
Gerhard Weikum:
Earlier he held positions at Saarland University in Saarbrücken, Germany, at ETH Zurich, Switzerland, at MCC in Austin, Texas, and he was a visiting senior researcher at Microsoft Research in Redmond, Washington. He received his diploma and doctoral degrees from the TU Darmstadt, Germany.
He acted as the President of the VLDB endowment in 2005 and 2006. The endowment organizes the yearly International Conference on Very Large Databases, a scientific conference for researchers in the area of database research.
Gerhard Weikum:
In 2005 the Association for Computing Machinery appointed Gerhard Weikum a fellow, one of the highest honors of the ACM. Weikum has been honored for his research in the fields of databases and information systems, in particular for his contributions to improve the reliability and the performance of large-scale, distributed information systems. In 2010 he was elected as a fellow of the Gesellschaft für Informatik and received a Google Focused Research Award. He received the ACM SIGMOD Contributions Award in 2011, an ERC Synergy Grant in 2013, and the ACM SIGMOD Edgar F. Codd Innovations Award in 2016. In 2018 he became a member of the German Academy of Sciences Leopoldina. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quadrant count ratio**
Quadrant count ratio:
The quadrant count ratio (QCR) is a measure of the association between two quantitative variables. The QCR is not commonly used in the practice of statistics; rather, it is a useful tool in statistics education because it can be used as an intermediate step in the development of Pearson's correlation coefficient.
Definition and properties:
To calculate the QCR, the data are divided into quadrants based on the means of the X and Y variables. The formula for calculating the QCR is then: $\mathrm{QCR} = \dfrac{\big(n(\text{Quadrant I}) + n(\text{Quadrant III})\big) - \big(n(\text{Quadrant II}) + n(\text{Quadrant IV})\big)}{N}$, where n(Quadrant) is the number of observations in that quadrant and N is the total number of observations. The QCR is always between −1 and 1. Values near −1, 0, and 1 indicate strong negative association, no association, and strong positive association (as in Pearson's correlation coefficient). However, unlike Pearson's correlation coefficient the QCR may be −1 or 1 without the data exhibiting a perfect linear relationship.
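A small sketch of the calculation (the helper names are ours, not standard terminology): classify each (x, y) pair into a quadrant relative to the two means and apply the formula above.

```typescript
// Quadrant count ratio of paired data (x[i], y[i]).
function quadrantCountRatio(x: number[], y: number[]): number {
  const n = x.length;
  const mean = (v: number[]) => v.reduce((s, a) => s + a, 0) / v.length;
  const mx = mean(x);
  const my = mean(y);

  let agree = 0;    // Quadrants I and III: both values on the same side of their means
  let disagree = 0; // Quadrants II and IV: values on opposite sides
  for (let i = 0; i < n; i++) {
    const dx = x[i] - mx;
    const dy = y[i] - my;
    if (dx === 0 || dy === 0) continue; // points on a mean line fall in no quadrant (our convention)
    if (dx * dy > 0) agree++;
    else disagree++;
  }
  return (agree - disagree) / n;
}

// With the hurricane example in the following section (quadrant counts 6, 13, 5, 11 out of 35
// points), the same formula gives ((6 + 5) - (13 + 11)) / 35 = -13/35, about -0.37.
```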
Example:
The scatterplot shows the maximum wind speed (X) and minimum pressure (Y) for 35 Category 5 hurricanes. The mean wind speed is 170 mph (indicated by the blue line), and the mean pressure is 921.31 hPa (indicated by the green line). There are 6 observations in Quadrant I, 13 observations in Quadrant II, 5 observations in Quadrant III, and 11 observations in Quadrant IV. Thus, the QCR for these data is $\frac{(6 + 5) - (13 + 11)}{35} = -\frac{13}{35} \approx -0.37$, indicating a moderate negative relationship between wind speed and pressure for these hurricanes. The value of Pearson's correlation coefficient for these data is −0.63, also indicating a moderate negative relationship. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Linux on Apple devices**
Linux on Apple devices:
The Linux kernel can run on a variety of devices made by Apple, including devices where the unlocking of the bootloader is not possible with an official procedure, such as iPhones and iPads.
iPad:
In June 2022, software developers Konrad Dybcio and Markuss Broks managed to run Linux kernel 5.18 on an iPad Air 2. The project made use of postmarketOS, an Alpine Linux–based distribution primarily developed for Android devices. The developers suggested that they used the checkm8 exploit, which was published in 2019.
iPhone:
In 2008, Linux kernel 2.6 was ported to the iPhone 3G, iPhone (1st generation), and iPod Touch (1st generation) using OpeniBoot. Corellium's Project Sandcastle made it possible to run Android on an iPhone 7/7+ or an iPod Touch (7th generation) using the checkm8 exploit.
iPod:
iPodLinux is a Linux distribution created specifically to run on Apple's iPod.
Mac:
Motorola 68k Macs Linux can be dual-booted on Macs that use Motorola 680x0 processors (only 68020 and higher, and only non-"EC" processor variants since an MMU is required). The Linux/mac68k community project provides resources to do so, and an m68k community port of the Debian Linux distribution is also available.
Mac:
PowerPC Macs PowerPC Macs can run Linux through both emulation and dual-booting ("bare metal"). The most popular PowerPC emulation tools for Mac OS/Mac OS X are Microsoft's Virtual PC and the open-source QEMU. Linux dual-booting is achieved by partitioning the boot drive, installing the Yaboot bootloader onto the Linux partition, and selecting that Linux partition as the Startup Disk. This results in users being prompted to select whether they want to boot into Mac OS or Linux when the machine starts. By 2008, a number of major Linux distributions had official versions compatible with Mac PowerPC processors, including: Gentoo; Debian (until Debian 8); Ubuntu (until Ubuntu 16.10); Fedora (until Fedora 17 for G3 and G4 processors, and Fedora 28 for G5); and Yellow Dog Linux (discontinued in 2009). All of the above PowerPC ports have since been discontinued, except for Gentoo.
Mac:
Intel Macs Macs with Intel processors can run Linux through virtualization or through dual-booting. Common virtualization tools for Intel Macs include VMware Fusion, Parallels Desktop, and VirtualBox. In 2010, Whitson Gordon from Lifehacker noted that Apple has streamlined the process of dual booting Windows on Macs, but not for Linux. rEFIt made it possible to dual boot Linux.
Apple silicon Macs The Asahi Linux project is porting Linux to the M1 (and up) based SoCs. Asahi Linux is currently available as an incomplete preview. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Metam sodium**
Metam sodium:
Metam sodium is an organosulfur compound with the formula CH3NHCS2Na. The compound is a sodium salt of a dithiocarbamate. The compound exists as a colorless dihydrate, but most commonly it is encountered as an aqueous solution. It is used as a soil fumigant, pesticide, herbicide, and fungicide. It is one of the most widely used pesticides in the United States, with approximately 60 million pounds used in 2001.
Preparation and properties:
Metam sodium is prepared by combining methylamine, carbon disulfide, and sodium hydroxide: CH3NH2 + CS2 + NaOH → CH3NHCS2Na + H2O. It also arises from the reaction of methyl isothiocyanate with sodium hydrosulfide. Upon exposure to the environment, metam sodium decomposes to methyl isothiocyanate.
Safety and environmental considerations:
Metam sodium is nonpersistent in the environment since it is prone to hydrolysis. The degradation products, carbon disulfide and methylamine, are however toxic. In 1991 a tank car carrying 19,000 gallons of metam sodium spilled into the Sacramento River above Lake Shasta, killing all fish in a 41-mile stretch of the river. Within 20 years the rainbow trout population had recovered. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tasha Inniss**
Tasha Inniss:
Tasha Rose Inniss is an American mathematician and the director of education and industry outreach for the Institute for Operations Research and the Management Sciences (INFORMS).
Early life and education:
Inniss was born in New Orleans and grew up without a father. She became interested in mathematics in fourth grade, and decided she would study it as a freshman in high school. She studied mathematics at Xavier University of Louisiana, graduating summa cum laude. In 1992 she was listed in the Who's Who Among Colleges and Universities for her academic achievements. She earned a master's degree in applied mathematics from Georgia Institute of Technology. She moved to the University of Maryland for her PhD, funded by the David and Lucile Packard Foundation. In 2000, Inniss became the first African American woman to obtain a Ph.D. from the University of Maryland, together with Sherry Scott and Kimberly Weems. Her dissertation was Stochastic Models for the Estimation of Airport Arrival Capacity Distributions. She was part of the National Center of Excellence for Aviation Operators and advised by Michael Owen Ball. Her brother, Enos Inniss, also completed his PhD in 2000.
Research and career:
In 2001 she was appointed the Clare Boothe Luce Professor of Mathematics at Trinity Washington University. Her doctoral thesis described programming methods to calibrate models to estimate airport capacity. She remains a consultant for the Federal Aviation Administration. She joined the department of mathematics at Spelman College in 2005 as an assistant professor. Throughout her career she has worked to recruit, support and mentor underrepresented minority students. She led a National Science Foundation project that looked to increase the quality and quantity of underrepresented minorities matriculating and completing doctoral degrees. She has contributed to the EDGE Foundation (Enhancing Diversity in Graduate Education) program. In 2017 she joined the Institute for Operations Research and the Management Sciences as Director of Education. Inniss' work earned her recognition by Mathematically Gifted & Black as a Black History Month 2017 Honoree. In 2022, Inniss was added to the American Mathematical Society (AMS) Committee on Professional Ethics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Munia Ganguli**
Munia Ganguli:
Munia Ganguli is an Indian biochemist, biotechnologist and a scientist at the Institute of Genomics and Integrative Biology (IGIB). She is known for the development of non-invasive protocols of drug delivery; the team led by her was successful in developing a drug delivery system for skin disorders, using a nanometer-sized peptide complex carrying plasmid DNA, which has since shown effective penetration, apparently without harming the skin. She holds two patents for the processes she has developed. At IGIB, she has established her laboratory where she hosts several research scholars and scientists. Her studies have been documented by way of a number of articles, and ResearchGate, an online repository of scientific articles, lists 76 of them. Ganguli is a member of the contingent which represented IGIB in the Joint Research Initiative between CSIR and IGIB for interfacing chemistry with biology and has been a member of the editorial advisory committee of Nano Science and its Application, a national level seminar sponsored by the University Grants Commission. She has been the leader of the IGIB project, Nanomaterials and nanodevices for applications in health and disease, has delivered invited speeches which included the International Conference on Advances in Biological Systems and Materials Science in NanoWorld (ABSMSNW-2017), and guest edited the special volume of the Science and Culture journal on Emerging Trends in Genomics: Applications in Health and Disease, published in January 2011. The Department of Biotechnology of the Government of India awarded her the National Bioscience Award for Career Development, one of the highest Indian science awards, for her contributions to biosciences, in 2012.
Selected bibliography:
Sharma, Rajpal; Shivpuri, Shivangi; Anand, Amitesh; Kulshreshtha, Ankur; Ganguli, Munia (1 July 2013). "Insight into the Role of Physicochemical Parameters in a Novel Series of Amphipathic Peptides for Efficient DNA Delivery". Molecular Pharmaceutics. 10 (7): 2588–2600. doi:10.1021/mp400032q. ISSN 1543-8384. PMID 23725377.
Naik, Rangeetha J.; Chatterjee, Anindo; Ganguli, Munia (2013). "Different roles of cell surface and exogenous glycosaminoglycans in controlling gene delivery by arginine-rich peptides with varied distribution of arginines". Biochimica et Biophysica Acta (BBA) - Biomembranes. 1828 (6): 1484–1493. doi:10.1016/j.bbamem.2013.02.010. PMID 23454086.
Ganguli, Munia; Nisakar, Daniel; Sharma, Rajpal; Vij, Manika (2015). "494. Glycosaminoglycans in Gene Delivery: An Effective Strategy for Enhancement of Transfection Efficiency of Amphipathic Peptides for Localized and Systemic Applications?". Molecular Therapy. 23: S197. doi:10.1016/s1525-0016(16)34103-x. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**X-ray absorption near edge structure**
X-ray absorption near edge structure:
X-ray absorption near edge structure (XANES), also known as near edge X-ray absorption fine structure (NEXAFS), is a type of absorption spectroscopy that indicates the features in the X-ray absorption spectra (XAS) of condensed matter due to the photoabsorption cross section for electronic transitions from an atomic core level to final states in the energy region of 50–100 eV above the selected atomic core level ionization energy, where the wavelength of the photoelectron is larger than the interatomic distance between the absorbing atom and its first neighbour atoms.
Terminology:
Both XANES and NEXAFS are acceptable terms for the same technique. The name XANES was coined in 1980 by Antonio Bianconi to indicate strong absorption peaks in X-ray absorption spectra in condensed matter due to multiple scattering resonances above the ionization energy. The name NEXAFS was introduced in 1983 by Jo Stöhr and is synonymous with XANES, but is generally used when applied to surface and molecular science.
Theory:
The fundamental phenomenon underlying XANES is the absorption of an x-ray photon by condensed matter with the formation of many body excited states characterized by a core hole in a selected atomic core level (refer to the first Figure). In the single-particle theory approximation, the system is separated into one electron in the core levels of the selected atomic species of the system and N-1 passive electrons. In this approximation the final state is described by a core hole in the atomic core level and an excited photoelectron. The final state has a very short life time because of the short life-time of the core hole and the short mean free path of the excited photoelectron with kinetic energy in the range around 20-50 eV. The core hole is filled either via an Auger process or by capture of an electron from another shell followed by emission of a fluorescent photon. The difference between NEXAFS and traditional photoemission experiments is that in photoemission, the initial photoelectron itself is measured, while in NEXAFS the fluorescent photon or Auger electron or an inelastically scattered photoelectron may also be measured. The distinction sounds trivial but is actually significant: in photoemission the final state of the emitted electron captured in the detector must be an extended, free-electron state. By contrast, in NEXAFS the final state of the photoelectron may be a bound state such as an exciton since the photoelectron itself need not be detected. The effect of measuring fluorescent photons, Auger electrons, and directly emitted electrons is to sum over all possible final states of the photoelectrons, meaning that what NEXAFS measures is the total joint density of states of the initial core level with all final states, consistent with conservation rules. The distinction is critical because in spectroscopy final states are more susceptible to many-body effects than initial states, meaning that NEXAFS spectra are more easily calculable than photoemission spectra. Due to the summation over final states, various sum rules are helpful in the interpretation of NEXAFS spectra. When the x-ray photon energy resonantly connects a core level with a narrow final state in a solid, such as an exciton, readily identifiable characteristic peaks will appear in the spectrum. These narrow characteristic spectral peaks give the NEXAFS technique a lot of its analytical power as illustrated by the B 1s π* exciton shown in the second Figure.
Theory:
Synchrotron radiation has a natural polarization that can be utilized to great advantage in NEXAFS studies. The commonly studied molecular adsorbates have sigma and pi bonds that may have a particular orientation on a surface. The angle dependence of the x-ray absorption tracks the orientation of resonant bonds due to dipole selection rules.
Experimental considerations:
Soft x-ray absorption spectra are usually measured either through the fluorescent yield, in which emitted photons are monitored, or total electron yield, in which the sample is connected to ground through an ammeter and the neutralization current is monitored. Because NEXAFS measurements require an intense tunable source of soft x-rays, they are performed at synchrotrons. Because soft x-rays are absorbed by air, the synchrotron radiation travels from the ring in an evacuated beam-line to the end-station where the specimen to be studied is mounted. Specialized beam-lines intended for NEXAFS studies often have additional capabilities such as heating a sample or exposing it to a dose of reactive gas.
Energy range:
Edge energy range In the absorption edge region of metals, the photoelectron is excited to the first unoccupied level above the Fermi level. Therefore, its mean free path in a pure single crystal at zero temperature is essentially infinite, and it remains very large for final-state energies up to about 5 eV above the Fermi level. Beyond the role of the unoccupied density of states and matrix elements in single electron excitations, many-body effects appear as an "infrared singularity" at the absorption threshold in metals.
Energy range:
In the absorption edge region of insulators the photoelectron is excited to the first unoccupied level above the chemical potential but the unscreened core hole forms a localized bound state called core exciton.
EXAFS energy range The fine structure in the x-ray absorption spectra in the high energy range extending from about 150 eV beyond the ionization potential is a powerful tool to determine the atomic pair distribution (i.e. interatomic distances) with a time scale of about 10⁻¹⁵ s.
In fact the final state of the excited photoelectron in the high kinetic energy range (150-2000 eV ) is determined only by single backscattering events due to the low amplitude photoelectron scattering.
NEXAFS energy range In the NEXAFS region, starting about 5 eV beyond the absorption threshold, because of the low kinetic energy range (5-150 eV) the photoelectron backscattering amplitude by neighbor atoms is very large so that multiple scattering events become dominant in the NEXAFS spectra.
Energy range:
The different energy ranges of NEXAFS and EXAFS can also be explained in a very simple manner by comparing the photoelectron wavelength λ with the interatomic distance of the photoabsorber-backscatterer pair. The photoelectron kinetic energy is connected with the wavelength λ by the relation $E_{\text{kinetic}} = E_{\text{photon}} - E_{\text{binding}} = \frac{\hbar^2 k^2}{2m} = \frac{(2\pi)^2 \hbar^2}{2m\lambda^2}$, which means that for high energy the wavelength is shorter than interatomic distances and hence the EXAFS region corresponds to a single scattering regime, while for lower E, λ is larger than interatomic distances and the XANES region is associated with a multiple scattering regime.
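To make the comparison concrete, the following small sketch (our own illustration, using the non-relativistic relation just given) evaluates the photoelectron wavelength at kinetic energies typical of the two regimes; the chosen energies are examples, not values from the article.

```typescript
// Non-relativistic de Broglie wavelength of a photoelectron: lambda = h / sqrt(2 * m * E).
function photoelectronWavelengthNm(kineticEnergyEV: number): number {
  const h = 6.62607015e-34;      // Planck constant, J*s
  const me = 9.1093837015e-31;   // electron mass, kg
  const eV = 1.602176634e-19;    // J per eV
  const p = Math.sqrt(2 * me * kineticEnergyEV * eV); // momentum, kg*m/s
  return (h / p) * 1e9;          // wavelength in nm
}

// Roughly 0.39 nm at 10 eV (XANES/NEXAFS regime: comparable to interatomic distances)
// and roughly 0.055 nm at 500 eV (EXAFS regime: shorter than interatomic distances).
console.log(photoelectronWavelengthNm(10).toFixed(3));
console.log(photoelectronWavelengthNm(500).toFixed(3));
```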
Final states:
The absorption peaks of NEXAFS spectra are determined by multiple scattering resonances of the photoelectron excited at the atomic absorption site and scattered by neighbor atoms.
The local character of the final states is determined by the short photoelectron mean free path, that is strongly reduced (down to about 0.3 nm at 50 eV) in this energy range because of inelastic scattering of the photoelectron by electron-hole excitations (excitons) and collective electronic oscillations of the valence electrons called plasmons.
Applications:
The great power of NEXAFS derives from its elemental specificity. Because the various elements have different core level energies, NEXAFS permits extraction of the signal from a surface monolayer or even a single buried layer in the presence of a huge background signal. Buried layers are very important in engineering applications, such as magnetic recording media buried beneath a surface lubricant or dopants below an electrode in an integrated circuit. Because NEXAFS can also determine the chemical state of elements which are present in bulk in minute quantities, it has found widespread use in environmental chemistry and geochemistry. The ability of NEXAFS to study buried atoms is due to its integration over all final states including inelastically scattered electrons, as opposed to photoemission and Auger spectroscopy, which study atoms only within a layer or two of the surface.
Applications:
Much chemical information can be extracted from the NEXAFS region: formal valence (very difficult to experimentally determine in a nondestructive way); coordination environment (e.g., octahedral, tetrahedral coordination) and subtle geometrical distortions of it.
Transitions to bound vacant states just above the Fermi level can be seen. Thus NEXAFS spectra can be used as a probe of the unoccupied band structure of a material.
Applications:
The near-edge structure is characteristic of an environment and valence state; hence one of its more common uses is in fingerprinting: if you have a mixture of sites/compounds in a sample, you can fit the measured spectra with a linear combination of NEXAFS spectra of known species and determine the proportion of each site/compound in the sample. One example of such a use is the determination of the oxidation state of the plutonium in the soil at Rocky Flats.
History:
The acronym XANES was first used in 1980 during interpretation of multiple scattering resonances spectra measured at the Stanford Synchrotron Radiation Laboratory (SSRL) by A. Bianconi. In 1982 the first paper on the application of XANES for determination of local structural geometrical distortions using multiple scattering theory was published by A. Bianconi, P. J. Durham and J. B. Pendry. In 1983 the first NEXAFS paper examining molecules adsorbed on surfaces appeared. The first XAFS paper, describing the intermediate region between EXAFS and XANES, appeared in 1987.
Software for NEXAFS analysis:
ADF Calculation of NEXAFS using spin-orbit coupling TDDFT or the Slater-TS method.
FDMNES Calculation of NEXAFS using finite difference method and full multiple scattering theory.
FEFF8 Calculation of NEXAFS using full multiple scattering theory.
MXAN NEXAFS fitting using full multiple scattering theory.
FitIt NEXAFS fitting using multidimensional interpolation approximation.
PARATEC NEXAFS calculation using a plane-wave pseudopotential approach.
WIEN2k NEXAFS calculation on the basis of the full-potential (linearized) augmented plane-wave approach. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Silicon–germanium**
Silicon–germanium:
SiGe, or silicon–germanium, is an alloy with any molar ratio of silicon and germanium, i.e. with a molecular formula of the form Si1−xGex. It is commonly used as a semiconductor material in integrated circuits (ICs) for heterojunction bipolar transistors or as a strain-inducing layer for CMOS transistors. IBM introduced the technology into mainstream manufacturing in 1989. This relatively new technology offers opportunities in mixed-signal circuit and analog circuit IC design and manufacture. SiGe is also used as a thermoelectric material for high-temperature applications (>700 K).
Production:
The use of silicon–germanium as a semiconductor was championed by Bernie Meyerson. The challenge that had delayed its realization for decades was that germanium atoms are roughly 4% larger than silicon atoms. At the usual high temperatures at which silicon transistors were fabricated, the strain induced by adding these larger atoms into crystalline silicon produced vast numbers of defects, precluding the resulting material being of any use. Meyerson and co-workers discovered that the then-believed requirement for high temperature processing was flawed, allowing SiGe growth at sufficiently low temperatures that for all practical purposes no defects were formed. Once that basic roadblock was resolved, it was shown that the resultant SiGe materials could be manufactured into high performance electronics using conventional low cost silicon processing toolsets. More relevant, the performance of the resulting transistors far exceeded what was then thought to be the limit of traditionally manufactured silicon devices, enabling a new generation of low cost commercial wireless technologies such as WiFi. SiGe processes achieve costs similar to those of silicon CMOS manufacturing and are lower than those of other heterojunction technologies such as gallium arsenide. Recently, organogermanium precursors (e.g. isobutylgermane, alkylgermanium trichlorides, and dimethylaminogermanium trichloride) have been examined as less hazardous liquid alternatives to germane for MOVPE deposition of Ge-containing films such as high purity Ge, SiGe, and strained silicon. SiGe foundry services are offered by several semiconductor technology companies. AMD disclosed a joint development with IBM for a SiGe stressed-silicon technology, targeting the 65 nm process. TSMC also sells SiGe manufacturing capacity.
Production:
In July 2015, IBM announced that it had created working samples of transistors using a 7 nm silicon–germanium process, promising a quadrupling in the amount of transistors compared to a contemporary process.
SiGe transistors:
SiGe allows CMOS logic to be integrated with heterojunction bipolar transistors, making it suitable for mixed-signal integrated circuits. Heterojunction bipolar transistors have higher forward gain and lower reverse gain than traditional homojunction bipolar transistors. This translates into better low-current and high-frequency performance. Being a heterojunction technology with an adjustable band gap, the SiGe offers the opportunity for more flexible bandgap tuning than silicon-only technology.
SiGe transistors:
Silicon–germanium on insulator (SGOI) is a technology analogous to the silicon on insulator (SOI) technology currently employed in computer chips. SGOI increases the speed of the transistors inside microchips by straining the crystal lattice under the MOS transistor gate, resulting in improved electron mobility and higher drive currents. SiGe MOSFETs can also provide lower junction leakage due to the lower bandgap value of SiGe. However, a major issue with SGOI MOSFETs is the inability to form stable oxides with silicon–germanium using standard silicon oxidation processing.
Thermoelectric application:
A silicon–germanium thermoelectric device MHW-RTG3 was used in the Voyager 1 and 2 spacecraft.
Silicon–germanium thermoelectric devices were also used in other MHW-RTGs and GPHS-RTGs aboard Cassini, Galileo, Ulysses.
Light emission:
By controlling the composition of a hexagonal SiGe alloy, researchers from Eindhoven University of Technology developed a material that can emit light. In combination with its electronic properties, this opens up the possibility of producing a laser integrated into a single chip to enable data transfer using light instead of electric current, speeding up data transfer while reducing energy consumption and need for cooling systems. The international team, with lead authors Elham Fadaly, Alain Dijkstra and Erik Bakkers at Eindhoven University of Technology in the Netherlands and Jens Renè Suckert at Friedrich-Schiller-Universität Jena in Germany, were awarded the 2020 Breakthrough of the Year award by the magazine Physics World. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rubinsohl**
Rubinsohl:
Rubinsohl (also referred to as Rubensohl) is a bridge convention that can be used to counter an opponent's intervention over a 1NT opening bid. After opponent's two-level overcall, all bids starting from 2NT are transfer bids to the next strain.
Origins:
The concept was introduced by Bruce Neill of Australia in a Bridge World magazine article in May 1983. Because he had based his concept on earlier work by Jeff Rubens on Rubens Advances and on Lebensohl, Neill named the treatment Rubensohl. However, from the fifth edition in 1994 onwards, The Official Encyclopedia of Bridge notes that Ira Rubin of the United States had devised similar methods earlier to replace Lebensohl and "...so Rubinsohl seems the appropriate name".
Applications:
When playing Rubinsohl, the following applies after an opposing 2♦ (natural) overcall over partner's 1NT opening (1NT - (2♦) - ??):
dbl : penalty
2♥/♠ : to play
2NT : transfer to 3♣
3♣ : transfer to opponent's suit -> asks for a four-card major
3♦ : transfer to hearts (at least invitational)
3♥ : transfer to spades (at least invitational)
3♠ : transfer to 3NT -> game values but no major suit and no stopper in opponent's suit
3NT : to play
Similar schedules apply following a natural two-level overcall in any of the other suits.
Applications:
Unlike Lebensohl, the partner of the 1NT opener can indicate his long suit at the first bid; this can be advantageous in competitive auctions.
The same transfer schedule can also be used following a conventional overcall over 1NT as long as this overcall indicates an anchor suit. For instance: following an Asptro 2♣ overcall (showing ♥ and another suit) over partner's 1NT opening, the bids 2NT, 3♣ and 3♥ would be transfers to the next strain, whilst 3♦ would be an asking bid.
Applications:
Partnerships that have agreed to use Rubinsohl often extend its use to include responses to partner's takeout double over an opposing weak two opening. For instance, after (2♥) - dbl - (pass) - ??:
pass : penalty
2♠ : to play
2NT : transfer to clubs (weak or strong)
3♣ : transfer to diamonds (weak or strong)
3♦ : asking bid
3♥ : transfer to spades (at least invitational)
3♠ : transfer to 3NT -> game values but no major suit and no stopper in opponent's suit
3NT : to play | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**JsSIP**
JsSIP:
JsSIP is a library for the programming language JavaScript. It takes advantage of SIP and WebRTC to provide a fully featured SIP endpoint in any website. JsSIP allows any website to get real-time communication features using audio and video. It makes it possible to build SIP user agents that send and receive audio and video calls as well as text messages.
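A minimal usage sketch is shown below; the hostname, URIs, and password are placeholders, and while the calls follow JsSIP's documented `WebSocketInterface`/`UA` API, the exact options should be checked against the library's documentation for the version in use.

```typescript
import * as JsSIP from 'jssip';

// SIP over WebSocket transport to a WebSocket-capable SIP proxy/server (placeholder host).
const socket = new JsSIP.WebSocketInterface('wss://sip.example.com');

const ua = new JsSIP.UA({
  sockets: [socket],
  uri: 'sip:alice@example.com',
  password: 'secret',
});

ua.start(); // connects the WebSocket and registers the user agent

// Once registered, place an audio call; WebRTC handles the media in the browser.
ua.on('registered', () => {
  ua.call('sip:bob@example.com', {
    mediaConstraints: { audio: true, video: false },
  });
});
```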
General features:
SIP over WebSocket transport
Audio-video calls, instant messaging and presence
Pure JavaScript built from the ground up
Easy to use and powerful user API
Works with OverSIP, Kamailio, and Asterisk servers
SIP standards
Standards:
JsSIP implements the following SIP specifications:
RFC 3261 — SIP: Session Initiation Protocol
RFC 3311 — SIP Update Method
RFC 3326 — The Reason Header Field for SIP
RFC 3327 — SIP Extension Header Field for Registering Non-Adjacent Contacts (Path header)
RFC 3428 — SIP Extension for Instant Messaging (MESSAGE method)
RFC 4028 — Session Timers in SIP
RFC 5626 — Managing Client-Initiated Connections in SIP (Outbound mechanism)
RFC 5954 — Essential Correction for IPv6 ABNF and URI Comparison in RFC 3261
RFC 6026 — Correct Transaction Handling for 2xx Responses to SIP INVITE Requests
RFC 7118 — The WebSocket Protocol as a Transport for SIP
Interoperability:
SIP proxies, servers JsSIP uses the SIP over WebSocket transport for sending and receiving SIP requests and responses, and thus it requires a SIP proxy/server with WebSocket support. Currently the following SIP servers have been tested and are using JsSIP as the basis for their WebRTC gateway functionality:
FreeSWITCH
FRAFOS ABC WebRTC Gateway
OverSIP
Kamailio
Asterisk
reSIProcate and repro
WebRTC web browsers At the media plane (audio calls), JsSIP version 0.2.0 works with the Chrome browser from version 24.
Interoperability:
At the signaling plane (SIP protocol), JsSIP runs in any WebSocket capable browser.
License:
JsSIP is provided as open-source software under the MIT license.
**Cricket nets**
Cricket nets:
Cricket nets are used by batters and bowlers to practice their cricketing techniques. They consist of a cricket pitch (natural or artificial) enclosed by netting on either side, behind, and optionally above. The bowling end is left open.
Nets stop the cricket ball travelling across the field when the batter plays a shot. They save practice time and eliminate the need for fielders or a wicket-keeper. They allow more people to train at once, particularly when they have several lanes. They allow solitary batting practice when used with a bowling machine.
Use:
Nets are fundamental to cricket practice and are used at every level of the game. Professional cricket clubs are likely to have over 10 lanes of nets, and be able to practice both indoors and outdoors. Nets are also very prevalent in educational establishments, as they allow safe and efficient training with a high volume of pupils when there are significant time constraints. Keen cricketers may have nets in their gardens.
Use:
Nets help make practice safer. By containing most aerial cricket balls, they reduce the potential for injury to bystanders. However, the nets need an opening for the bowler, so it is still common for balls to leave the nets, and shouts of "heads up" are commonly heard.
Types:
Indoor and outdoor cricket nets differ significantly.
Indoor
Indoor nets are often suspended on a track (runner) fixed to the ceiling of the sports hall or gymnasium. The nets can drop 4–8 metres to the ground, and be over 20 metres long. Indoor nets are commonly multi-lane, with two- or four-lane nets being particularly common.
Types:
Indoor nets tend to be white. They have separate 3-metre-high canvas screens that enclose the area immediately surrounding the batsman, for two reasons. First, the netting near the batsman has by far the highest work rate, and canvas is significantly more durable than mesh netting, so screens improve the nets' lifespan. Second, the batsman is less likely to be distracted.
Types:
Indoor nets can be suspended on runners, providing a curtain system where they can be pulled in and out of use. This allows the sports facility to be flexible in its use.
Types:
Outdoor
Outdoor nets are the most common form of practice nets. They take many forms, with some being homemade whilst others are professionally manufactured and installed. The design and construction of outdoor nets tends to be based around two factors: the frequency and age of those who will use them, and the available space. In schools and cricket clubs where use will be high, construction will be tailored to that. The nets may also need safeguards against misuse or vandalism. Therefore, the frame is often constructed out of heavy-duty galvanised steel tube with an overall diameter ranging from 34 to 50 mm. The tube is then joined by key-clamp brackets. This system requires permanent concrete ground sockets, but the actual frame of the cage can still be dismantled and removed. Outdoor nets can be fitted with wheels to be completely mobile.
Types:
There are variations in the design of outdoor nets, such as the use of a pulley system where the netting is mounted on a cable that spans posts located at either end. Garden nets are frequently home-made, often to a professional design with locally sourced components. This saves money, and cricket nets have a simple design and purpose, so are not difficult to make. Nets should be no less than 9 ft wide, with 12 ft being optimum. If the nets are under 24 ft long, they should be at least 9 ft high; if under 36 ft long, at least 10 ft high; and 12 ft high if longer than that. This prevents balls ending up on the roof of the nets when bowled. The length is less critical, but the longer the safer.
Netting:
Netting is the most important component. The netting twine is usually made of a synthetic polymer such as polyethylene, which is hardwearing and relatively cheap. Before about 1995, nets were often made from nylon, but this became too expensive. Nets are often black, green or white. The mesh gap is usually 50 mm and the twine will commonly have a diameter of 1.8 to 3.0 mm. Netting may be knotless or knotted: knotted is considered superior. The breaking strength of knotted netting is higher for the same diameter twine. Good twine will be UV stabilized and rot proof. For home-made nets, netting is the only specialist supply.
Netting:
Netting is seamed at its edges to prevent fraying. The seam is usually a 6 mm cord sewn into the netting where it meets a cage or end. Canvas blinkers can be added to offer privacy and to reduce wear. Also, partial canvas skirts of 0.5 m can be added to the bottom to prevent damage from wild animals.
In other sports:
The baseball equivalent is the batting cage, though fundamentally different, as that provides complete ball containment, whereas cricket nets do not.
**Micro-schooling**
Micro-schooling:
Microschooling is the reinvention of the one-room school house, where class size is typically smaller than that in most schools (15 students or fewer in a classroom) and there are mixed-age level groupings. Generally, microschools do not meet all 5 days of the school week, and their schedules differ from those of a traditional public or private school. Classes can be taught using a flipped classroom approach, a form of blended learning, though not all microschools focus on technology in the same ways. Classes tend to be more impactful because they meet fewer times in the week. Classes may use a range of instructional methods, from traditional lecture-based approaches to hands-on and activity-based approaches. Microschooling is viewed as a replacement for various school paradigms that are standard worldwide.
Micro-schooling:
Microschooling is seen as being in between homeschooling and private schooling and is designed to offer a full year of education at around $10,000 or often less. Its growing popularity stems from a general dissatisfaction with how schools (public and private) often structure their content. Homeschool families are drawn to the idea because microschooling establishes a core set of learning experiences, similar to what might be found in conventional schools, that parents can then expand on and individualize for their children. Private and public school parents see microschooling as an affordable option that provides their children with a more worldly education that some might consider constructivist in approach.
History:
Microschooling began in the UK as small independent schools, privately funded by groups of like-minded parents, with no dedicated premises (home rotation) and led by full-time paid tutors (as opposed to homeschooling, where a parent tutors their own child or children). Cushla Barry first coined the term microschooling in February 2010. It has also been explored by education writers such as Anya Kamenetz, who has spoken about the return of the one-room school house.
United Kingdom
Microschooling is set to be a rising trend in the UK, where getting a child into a good local school is becoming increasingly difficult due to underfunding and overcrowding. The UK Conservative party alluded to the concept of microschooling in 2007 with its concept of Pioneer schools.
History:
New York City
In the early 2000s, as the cost of private school became increasingly exorbitant and the availability of places declined, a movement of preschool co-ops and homeschool co-ops, started by teachers and parents, emerged to directly meet the needs of children in the community. CottageClass is one company which supports the creation of microschools, with fifteen schools operating in Brooklyn alone.
History:
San Francisco Bay Area
Since 2010, microschooling has evolved in scope from the simple idea of privately funded groups of learners to a full concept of a specialized learning environment dedicated to ensuring individual understanding of the subjects being taught. Students attend a school for up to 2 days a week and cover a more diverse set of topics and subjects than would otherwise be accessible in small, private groups.
Pedagogy:
In order to provide the same amount of learning in fewer days, microschooling has of late become associated with a movement in education to provide students with hands-on and activity-based learning that often pairs experts with the students in the classroom. The overall approach might be considered constructivist. Lecture, worksheets, and book work are often eschewed in favor of carefully constructed activities that are designed to help the learner develop their own personal understanding. In this model, teachers will typically avoid making definitive statements of understanding to the students and will instead help the students work towards their own understanding. It is uncommon to see letter grading in microschools. Microschools will often pair suggested extracurricular activities, field trips, programs, books, and other multimedia with their classes so that students have a broader exposure to content than what is offered in the classroom alone.
Pedagogy:
Learning outcomes
Although microschools have existed for only a relatively short time, there is evidence that students often learn more concepts in the classroom in a shorter time frame. Because microschooling is so new, it is unclear whether those students retain those concepts over longer periods of time.
Pedagogy:
National standards
As a potential point of controversy, microschooling often does not consider national standards as relevant to the educational needs of students and instead seeks to develop content and curriculum that helps create a passion for learning in students, focusing on real-world knowledge and skills. Educators involved with microschooling often feel that standards hamper their flexibility to develop engaging and enriching curriculum. Microschools will often choose their own set of standards, and it can readily be shown that many of those standards, but not all, map to the Common Core. In many cases the content covered by microschools can exceed the standards provided by the state, particularly in regard to the four key content areas of mathematics, language arts, history, and especially science.
**Hyperscore**
Hyperscore:
Hyperscore is a computer-assisted music composition program intended to make the creation of music readily accessible to experienced musicians as well as those without any musical training. To accomplish this, the software maps complex musical concepts to intuitive visual representations. Color, shape, and texture are used to convey high-level musical features such as timbre, melodic contour, and harmonic tension.
Hyperscore has received international media attention and awards. It has been featured in numerous news and journal publications, including the New York Times, as well as television programs such as Scientific American Frontiers.
Composing:
Users of Hyperscore compose music by first creating simple melodies or sequences of notes. A library of predefined elements is also provided. These melodies are assigned unique colors. The user then creates a musical sketch composed of colored lines, where each line plays back the notes of the corresponding melody. The contour and position of the line alter the pitch at which notes are played back.
Composing:
The software can optionally use different classes of automated harmonization to organize the given notes, in order to easily generate more pleasing results. The effects of the harmony algorithms can be controlled by contours in a special line presented throughout the sketch. Modulations and sections of harmonic tension and resolution can be introduced in this manner, adding interest and variation to the music.
Composing:
Hyperscore also provides users with control over tempo and dynamics. MIDI synthesis is used for audible output from within the application and all General MIDI voices are available for use.
History:
Hyperscore was originally developed by Morwaread Farbood in Tod Machover's Opera of the Future group at the Massachusetts Institute of Technology Media Lab. Early versions of the software allowed users to generate novel compositions from predefined motives by sketching lines indicating patterns of musical tension. In 2021, Hyperscore was re-developed by Peter Torpay, who earned his PhD in Machover's group at the MIT Media Lab. In the new version, scheduled for release in 2022, the graphical user interface has been updated and the application is web-based so that it will be broadly accessible.
History:
The application evolved to play a prominent role in the Toy Symphony. During an international tour of this project, children were given the opportunity to compose orchestral pieces using Hyperscore, which were then performed in concert along with other works utilizing traditional and technologically enhanced instruments and approaches. Hyperscore was also used extensively in Machover's series of City Symphonies, in which children and adults in cities around the world composed original music that was incorporated by Machover into orchestral works performed by major symphony orchestras.
Current applications:
In 2004, Hyperscore became a commercial product under Harmony Line, Inc. The company created H-Lounge, an online music and ring-tone-oriented social networking website dedicated to music makers, who can upload MP3s or songs they have created with Hyperscore. The company closed in 2017. Subsequently, a nonprofit, New Harmony Line, was formed and acquired the license to Hyperscore. In 2022, New Harmony Line released Hyperscore as a web-based software application available to the public and is now developing standards-based curricula for music education in grades K-12. A version of Hyperscore was also released through Music First.
**Digital Library of Modern Greek Studies "Anemi"**
Digital Library of Modern Greek Studies "Anemi":
Anemi is a digital library that aims to provide access to a collection of digitized material related to Modern Greek Studies. Apart from finding bibliographic information, researchers can also browse the documents themselves in electronic form. They can find a great number of old and rare documents, as well as recent publications whose creators have allowed digitization and free distribution over the Internet.
Collections:
Neoellinistis, the digital library of bibliographies, dictionaries and handbooks for Modern Greek Studies
This collection provides free access to bibliographies, dictionaries, encyclopaedias, handbooks, chronologies and other tools related to Modern Greek Studies. It also provides users with the possibility of locating relevant alternative information where digitization is prohibited by Greek law. The material included in Neoellinistis is organised according to the work of Alexis Politis, The Handbook of Modern Greek Studies, Crete University Press, 2005.
Collections:
Greek Digital Bibliography, 15th - 20th century
By using digital technology, the Greek Digital Library regenerates the national bibliographic landscape of the period 1476-1900. Entries that concern it are catalogued electronically and, where feasible, are linked with the corresponding digital item. Since December 2006, 8,000 bibliographic records have been available in Anemi's database, as well as a vast amount of corresponding digitized pages.
Collections:
Anacharsis
Rare collections of travel literature from the Library of the University of Crete have been catalogued. The bibliographical records are linked with the corresponding digital items, which are hosted either in the local library or by other bibliographic agents elsewhere.
Markos Mousouros
This is a digital collection of books and archival materials about Crete. The main part of the items available in the collection comes from the Library of the University of Crete. Among them is the incunabulum Etymologikon Mega, which was printed by the Cretans Zacharias Kalliergis and Nikolaos Vlastos in Venice in 1499.
History:
Anemi was founded in 2006 by the University of Crete Library. It embodies the final result of the Programme "Digital Library of Modern Greek Studies" which was funded by the Operational Programme "Information Society" (3rd CSF 2000-2006).
**Haze (optics)**
Haze (optics):
There are two different types of haze that can occur in materials: Reflection haze occurs when light is reflected from a material.
Transmission haze occurs when light passes through a material. The measurement and control of both types during manufacture are essential to ensure optimum quality, acceptability and suitability for purpose of the product.
For instance, in automotive manufacturing, a high quality reflective appearance is desirable with low reflection haze and high contrast whilst in packaging clear, low haze, highly transmissive films are required so that the contents, foods etc., can be clearly observed.
Reflection Haze:
Reflection haze is an optical phenomenon usually associated with high gloss surfaces; it is a common surface problem that can affect appearance quality. The reflection from an ideal high gloss surface should be clear and radiant; however, due to scattering at imperfections in the surface caused by microscopic structures or textures (≈ 0.01 mm wavelength), the reflection can appear milky or hazy, reducing the quality of its overall visual appearance.
Reflection Haze:
Causes can include a number of factors:
Poor dispersion
Method of applying the coating
Variations in drying, curing or baking
Types of materials used in the formulation
Polishing or abrasion
A high gloss surface with haze exhibits a milky finish with low reflective contrast; reflected highlights and lowlights are less pronounced.
On surfaces with haze, halos are visible around the reflections of strong light sources.
Measurement
Measurement of reflection haze is primarily defined under three international test standards:
ASTM E430
ASTM E430 comprises three test methods:
Test method A specifies a 30° angle for specular gloss measurement, 28° or 32° for narrow-angle reflection haze measurement and 25° or 35° for wide-angle reflection haze measurement.
Test method B specifies a 20° angle for specular gloss measurement and 18.1° and 21.9° for narrow-angle reflection haze measurement.
Test method C specifies a 30° angle for specular gloss measurement, 28° or 32° for narrow-angle reflection haze measurement and 15° for wide-angle reflection haze measurement.
ASTM D4039
The test method specifies gloss measurements to be made at 20° and 60°; the haze index is then calculated as the difference between the 60° and 20° measurements.
ISO 13803
The test method specifies a 20° angle for specular gloss measurement and 18.1° and 21.9° for narrow-angle reflection haze measurement.
All test methods specify that measurements should be made with visible light according to CIE spectral luminous efficiency function V(λ) in the CIE 1931 standard observer and CIE standard illuminant C.
Reflection Haze:
As most commercially available glossmeters have gloss measurement angles of 20°, 60° and 85°, haze measurement is incorporated at either 20° (ISO 13803 / ASTM E430 method B) or at 20° and 60° (ASTM D4039). There are, however, some manufacturers that offer glossmeters with a 30° measurement angle and haze measurement in accordance with ASTM E430 methods A and C, but these are fewer in number; therefore, for the purposes of detailing haze measurement theory, only the first three methods will be included.
Reflection Haze:
ISO 13803 / ASTM E430 method B
Both test methods measure specular gloss and haze together at 20°; that is, light is transmitted and received at an equal but opposite angle of 20°.
Reflection Haze:
Specular gloss is measured over an angular range that is limited by aperture dimensions as defined in ASTM Test Method D523. The angular measurement range for this at 20° is ±0.9° (19.1° - 20.9°). For haze measurement additional sensors are used either side of this range at 18.1° and 21.9° to measure the intensity of the scattered light. Both solid colours and those containing metallics can be measured using this method provided haze compensation is used (as detailed later).
Reflection Haze:
ASTM D4039
This method can only be used on nonmetallic materials having a 60° specular gloss value greater than 70 in accordance with ASTM Test Method D523 / ISO 2813. The haze index is calculated from gloss measurements made at 20 and 60 degrees as the difference between the two measurements (HI = G60 - G20).
Reflection Haze:
As measurements of specular gloss depend largely on the refractive index of the material being measured, 20° gloss will change more noticeably than 60° gloss; therefore, as the haze index is calculated using these two measurements, it too will be affected by the refractive index of the material. Evaluations of reflection haze using this test method are therefore confined to samples of roughly the same refractive index.
Reflection Haze:
Haze compensation
It is important to note that the colour (luminous reflectance) of a material can greatly influence the measurement of reflection haze. As colour and haze are both components of scattered light (diffuse reflectance), they must be separated so that only the haze value is quantified; this is also true for metallics or coatings containing metallic pigments, where higher scattering exists.
Reflection Haze:
As test method ASTM D4039 is only suitable for nonmetallic materials of more or less the same refractive index, separation of the colour and haze components is not detailed. Haze index calculations and measurements using this test method will therefore produce higher haze results on brighter coloured materials than on darker ones with the same level of haze present. Both ISO 13803 and ASTM E430 method B require a separate measurement of luminous reflectance, Y, to calculate compensated haze. The tri-stimulus value Y gives a measure of the lightness of the material as defined in ISO 7724-2, requiring a 45°/0° geometry to be used with standard illuminant C and the 2° observer (although it is mentioned that slightly different conditions will not result in significant errors). Luminous reflectance measurements, Y, are required on both the sample material and a reference white; ISO 13803 details the use of a BaSO4 standard - barium sulphate, a white crystalline solid having a white opaque appearance and high density - as this material is a good substitute for a perfectly reflecting diffusor as defined under ISO 7724-2.
Reflection Haze:
Compensated haze can then be calculated as: H_comp = H_linear - (Y_sample / Y_BaSO4). Using the ISO / ASTM method to measure luminous reflectance therefore produces a reliable measurement of Y for non-metallic surfaces, as the diffuse component is Lambertian, i.e. it is equal in amplitude at all angles in relation to the sample surface.
Reflection Haze:
However, for metallic coatings and those containing speciality pigments, as the particles within the coating reflect the light directionally around the specular angle, little or no metallic reflection is present at the angle at which the luminosity is measured; therefore these types of coatings have an unexpectedly high haze reading. Using a measurement angle which is closer to the region adjacent to the haze angle has proven successful in providing compatible readings on solid colours and also in compensating for directional reflection from metallic coatings and speciality pigments.
Applications
Generally, measurement of reflection haze is confined to high gloss paints and coatings and highly polished metals. Although there has been some degree of success using this measurement method for films, it has proven unreliable due to variability caused by changes in the film thickness (internal refraction variations) and the background colour on which the film sample is placed. Generally, haze measurement of films is performed using a transmission-type hazemeter as described hereafter.
Transmission Haze:
Light and transparent materials
When light strikes the surface of a transparent material, the following interactions occur:
• Light is reflected from the front surface of the material
• Some light is refracted within the material (depending on thickness) and reflected from the second surface
• Light passes through the material at an angle which is determined by the refractive index of the material and the angle of illumination.
Transmission Haze:
The light that passes through the transparent material can be affected by irregularities within it; these can include poorly dispersed particles, contaminants (i.e. dust particles) and/or air spaces. This causes the light to scatter in different directions from the normal, the degree of which is related to the size and number of irregularities present. Small irregularities cause the light to scatter, or diffuse, in all directions, whilst large ones cause the light to be scattered forward in a narrow cone shape. These two types of scattering behaviour are known as wide-angle scattering, which causes haze due to the loss of transmissive contrast, and narrow-angle scattering, a measure of clarity or the "see-through quality" of the material based on a reduction of sharpness.
Transmission Haze:
These factors are therefore important for defining the transmitting properties of a transparent material:
Transmission – the amount of light that passes through the material without being scattered
Haze – the amount of light that is subject to wide-angle scattering (at an angle greater than 2.5° from normal (ASTM D1003))
Clarity – the amount of light that is subject to narrow-angle scattering (at an angle less than 2.5° from normal)
Measurement
Measurement of these factors is defined in two international test standards:
ASTM D1003
ASTM D1003 comprises two test methods:
Procedure A – using a hazemeter
Procedure B – using a spectrophotometer
BS EN ISO 13468 Parts 1 and 2
Part 1 – using a single-beam hazemeter
Part 2 – using a dual-beam hazemeter
The test methods specify the use of a hazemeter. A collimated beam of light from a light source (ASTM D1003 – Illuminant C; BS EN ISO 13468 Parts 1 and 2 – Illuminant D65) passes through a sample mounted on the entrance port of an integrating sphere.
Transmission Haze:
The light, which is uniformly distributed by a matte white highly reflective coating on the sphere walls, is measured by a photodetector positioned at 90° from the entrance port. A baffle mounted between the photodetector and the entrance port prevents direct exposure from the port.
The exit port immediately opposite the entrance port contains a light trap to absorb all light from the light source when no sample is present. A shutter in this exit port coated with the same coating as the sphere walls allows the port to be opened and closed as required.
Total transmittance is measured with the exit port closed.
Transmittance haze is measured with the exit port open.
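As a rough summary of how these two readings are combined (in the spirit of ASTM D1003, which in practice also corrects for light scattered by the instrument itself when no sample is present; the exact correction terms should be checked against the standard), transmission haze is reported as the percentage of the transmitted light that is scattered by more than 2.5°:

```latex
% Requires amsmath. Haze as a percentage of total transmitted light
% (the standard's correction for instrument stray light is omitted here):
\[
\mathrm{Haze}\,(\%) \;=\; \frac{T_{\mathrm{diffuse}}}{T_{\mathrm{total}}} \times 100
\]
```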
Commercially available Hazemeters of this type perform both measurements automatically, the only operator interaction being the placement of the sample material on the measurement (entrance) port of the device.
**Winston cone**
Winston cone:
A Winston cone is a non-imaging light collector in the shape of an off-axis parabola of revolution with a reflective inner surface. It concentrates the light passing through a relatively large entrance aperture through a smaller exit aperture. The collection of incoming rays is maximized by allowing off-axis rays to make multiple reflections before reaching the exit aperture. Winston cones are used to concentrate light from a large area onto a smaller photodetector or photomultiplier. They are widely used for measurements in the far infrared portion of the electromagnetic spectrum, in part because there are no suitable materials to form lenses in that range. Winston cones take their name from their inventor, the physicist Roland Winston. The design is commercialized by companies such as Winston Cone Optics.
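As background not stated in the article above, the Winston cone is an example of a compound parabolic concentrator, and non-imaging optics places an upper bound on how strongly any concentrator can concentrate light of a given angular spread; the Winston design approaches this limit. For a concentrator in air with acceptance half-angle θ_max, the bound is commonly written as:

```latex
% Requires amsmath. Maximum geometric concentration of an ideal 3D
% concentrator in air (refractive index n = 1) with acceptance half-angle theta_max:
\[
C_{\max} \;=\; \frac{A_{\mathrm{entrance}}}{A_{\mathrm{exit}}} \;\le\; \frac{1}{\sin^{2}\theta_{\max}}
\]
```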
**Ceramide synthase 2**
Ceramide synthase 2:
Ceramide synthase 2, also known as LAG1 longevity assurance homolog 2 or Tumor metastasis-suppressor gene 1 protein is an enzyme that in humans is encoded by the CERS2 gene.
Ceramide synthase 2 is a ceramide synthase that catalyses the synthesis of very long acyl chain ceramides, including C20 and C26 ceramides. It is the most ubiquitously expressed of all CerS and has the broadest distribution in the human body. CerS2 was first identified in 2001. It contains the conserved TLC domain and Hox-like domain common to almost all CerS.
Distribution:
CerS2 mRNA (TRH3) has been found in most tissues and it is strongly expressed in liver, intestine and brain. CerS2 is much more widely distributed than ceramide synthase 1 (CerS1) and is found in at least 12 tissues in the human body, with high expression in the kidney and liver, and moderate expression in the brain and other organs. In the mouse brain, CerS2 is mainly expressed in white matter tracts, specifically in oligodendrocytes and Schwann cells.
Function:
Expression of CerS2 is transiently increased during periods of active myelination, suggesting that it is important for the synthesis of myelin sphingolipids. The lack of CerS2, as shown in knockout mice, induces autophagy and activation of the unfolded protein response (UPR). These mice showed no decrease in overall ceramide level, but levels of sphinganine were elevated. They also developed severe liver disease, but there was no observable change in the kidneys. The CerS2 gene is compact in size and is located in a chromosomal region that is replicated early in the cell cycle. CerS2 activity is regulated by sphingosine-1-phosphate (S1P) via two sphingosine-1-phosphate receptor-like residues on CerS2 that operate independently.
Pathological significance:
CerS2 levels are significantly elevated in breast cancer tissue compared to normal tissue, along with increased levels of ceramide synthase 6 (CerS6). CerS2 has also been implicated in the control of body weight: the administration of leptin to rats induced a decrease in CerS2 in white adipose tissue.
**ARID5B**
ARID5B:
AT-rich interactive domain-containing protein 5B is a protein that in humans is encoded by the ARID5B gene. Alternative names for this gene include Modulator recognition factor 23.
Genomics:
The gene is located on the long arm of chromosome 10 (10q21.2) on the 'plus' strand. It spans 195,261 base pairs. It encodes a protein with a predicted length of 1188 amino acids and a predicted molecular weight of 132.375 kilodaltons.
Clinical importance:
Through genome-wide association studies (GWAS), some of the single nucleotide polymorphisms (SNPs) located in this gene have been found to be significantly associated with susceptibility to, as well as treatment outcomes of, childhood acute lymphoblastic leukaemia in ethnically diverse populations.
**Sequent calculus**
Sequent calculus:
In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen) instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than to David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.
Sequent calculus:
Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments.
Hilbert style. Every line is an unconditional tautology (or theorem).
Gentzen style. Every line is a conditional tautology (or theorem) with zero or more conditions on the left.
Natural deduction. Every (conditional) line has exactly one asserted proposition on the right.
Sequent calculus:
Sequent calculus. Every (conditional) line has zero or more asserted propositions on the right. In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules, relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules.
Sequent calculus:
Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus. In a typical argument, quantifiers are eliminated, then propositional calculus is applied to unquantified expressions (which typically contain free variables), and then the quantifiers are reintroduced. This very much parallels the way in which mathematical proofs are carried out in practice by mathematicians. Predicate calculus proofs are generally much easier to discover with this approach, and are often shorter. Natural deduction systems are more suited to practical theorem-proving. Sequent calculus systems are more suited to theoretical analysis.
Overview:
In proof theory and mathematical logic, sequent calculus is a family of formal systems sharing a certain style of inference and certain formal properties. The first sequent calculi systems, LK and LJ, were introduced in 1934/1935 by Gerhard Gentzen as a tool for studying natural deduction in first-order logic (in classical and intuitionistic versions, respectively). Gentzen's so-called "Main Theorem" (Hauptsatz) about LK and LJ was the cut-elimination theorem, a result with far-reaching meta-theoretic consequences, including consistency. Gentzen further demonstrated the power and flexibility of this technique a few years later, applying a cut-elimination argument to give a (transfinite) proof of the consistency of Peano arithmetic, in surprising response to Gödel's incompleteness theorems. Since this early work, sequent calculi, also called Gentzen systems, and the general concepts relating to them, have been widely applied in the fields of proof theory, mathematical logic, and automated deduction.
Overview:
Hilbert-style deduction systems One way to classify different styles of deduction systems is to look at the form of judgments in the system, i.e., which things may appear as the conclusion of a (sub)proof. The simplest judgment form is used in Hilbert-style deduction systems, where a judgment has the form B where B is any formula of first-order logic (or whatever logic the deduction system applies to, e.g., propositional calculus or a higher-order logic or a modal logic). The theorems are those formulae that appear as the concluding judgment in a valid proof. A Hilbert-style system needs no distinction between formulae and judgments; we make one here solely for comparison with the cases that follow.
Overview:
The price paid for the simple syntax of a Hilbert-style system is that complete formal proofs tend to get extremely long. Concrete arguments about proofs in such a system almost always appeal to the deduction theorem. This leads to the idea of including the deduction theorem as a formal rule in the system, which happens in natural deduction.
Overview:
Natural deduction systems In natural deduction, judgments have the shape A1,A2,…,An⊢B where the Ai 's and B are again formulae and n≥0 . Permutations of the Ai 's are immaterial. In other words, a judgment consists of a list (possibly empty) of formulae on the left-hand side of a turnstile symbol " ⊢ ", with a single formula on the right-hand side. The theorems are those formulae B such that ⊢B (with an empty left-hand side) is the conclusion of a valid proof.
Overview:
(In some presentations of natural deduction, the Ai s and the turnstile are not written down explicitly; instead a two-dimensional notation from which they can be inferred is used.) The standard semantics of a judgment in natural deduction is that it asserts that whenever A1 , A2 , etc., are all true, B will also be true. The judgments A1,…,An⊢B and ⊢(A1∧⋯∧An)→B are equivalent in the strong sense that a proof of either one may be extended to a proof of the other.
Overview:
Sequent calculus systems Finally, sequent calculus generalizes the form of a natural deduction judgment to A1,…,An⊢B1,…,Bk, a syntactic object called a sequent. The formulas on left-hand side of the turnstile are called the antecedent, and the formulas on right-hand side are called the succedent or consequent; together they are called cedents or sequents. Again, Ai and Bi are formulae, and n and k are nonnegative integers, that is, the left-hand-side or the right-hand-side (or neither or both) may be empty. As in natural deduction, theorems are those B where ⊢B is the conclusion of a valid proof.
Overview:
The standard semantics of a sequent is an assertion that whenever every Ai is true, at least one Bi will also be true. Thus the empty sequent, having both cedents empty, is false. One way to express this is that a comma to the left of the turnstile should be thought of as an "and", and a comma to the right of the turnstile should be thought of as an (inclusive) "or". The sequents A1,…,An⊢B1,…,Bk and ⊢(A1∧⋯∧An)→(B1∨⋯∨Bk) are equivalent in the strong sense that a proof of either sequent may be extended to a proof of the other sequent.
Overview:
At first sight, this extension of the judgment form may appear to be a strange complication—it is not motivated by an obvious shortcoming of natural deduction, and it is initially confusing that the comma seems to mean entirely different things on the two sides of the turnstile. However, in a classical context the semantics of the sequent can also (by propositional tautology) be expressed either as ⊢¬A1∨¬A2∨⋯∨¬An∨B1∨B2∨⋯∨Bk (at least one of the As is false, or one of the Bs is true) or as ⊢¬(A1∧A2∧⋯∧An∧¬B1∧¬B2∧⋯∧¬Bk) (it cannot be the case that all of the As are true and all of the Bs are false).
Overview:
In these formulations, the only difference between formulae on either side of the turnstile is that one side is negated. Thus, swapping left for right in a sequent corresponds to negating all of the constituent formulae. This means that a symmetry such as De Morgan's laws, which manifests itself as logical negation on the semantic level, translates directly into a left-right symmetry of sequents—and indeed, the inference rules in sequent calculus for dealing with conjunction (∧) are mirror images of those dealing with disjunction (∨).
Overview:
Many logicians feel that this symmetric presentation offers a deeper insight in the structure of the logic than other styles of proof system, where the classical duality of negation is not as apparent in the rules.
Overview:
Distinction between natural deduction and sequent calculus Gentzen asserted a sharp distinction between his single-output natural deduction systems (NK and NJ) and his multiple-output sequent calculus systems (LK and LJ). He wrote that the intuitionistic natural deduction system NJ was somewhat ugly. He said that the special role of the excluded middle in the classical natural deduction system NK is removed in the classical sequent calculus system LK. He said that the sequent calculus LJ gave more symmetry than natural deduction NJ in the case of intuitionistic logic, as also in the case of classical logic (LK versus NK). Then he said that in addition to these reasons, the sequent calculus with multiple succedent formulas is intended particularly for his principal theorem ("Hauptsatz").
Overview:
Origin of word "sequent" The word "sequent" is taken from the word "Sequenz" in Gentzen's 1934 paper. Kleene makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'."
Proving logical formulas:
Reduction trees
Sequent calculus can be seen as a tool for proving formulas in propositional logic, similar to the method of analytic tableaux. It gives a series of steps which allows one to reduce the problem of proving a logical formula to simpler and simpler formulas until one arrives at trivial ones. Consider the following formula:
((p→r)∨(q→r))→((p∧q)→r)
This is written in the following form, where the proposition that needs to be proven is to the right of the turnstile symbol ⊢:
⊢ ((p→r)∨(q→r))→((p∧q)→r)
Now, instead of proving this from the axioms, it is enough to assume the premise of the implication and then try to prove its conclusion. Hence one moves to the following sequent:
(p→r)∨(q→r) ⊢ (p∧q)→r
Again the right-hand side includes an implication, whose premise can further be assumed so that only its conclusion needs to be proven:
(p→r)∨(q→r), (p∧q) ⊢ r
Since the arguments in the left-hand side are assumed to be related by conjunction, this can be replaced by the following:
(p→r)∨(q→r), p, q ⊢ r
This is equivalent to proving the conclusion in both cases of the disjunction on the first argument on the left. Thus we may split the sequent in two, where we now have to prove each separately:
p→r, p, q ⊢ r
q→r, p, q ⊢ r
In the case of the first judgment, we rewrite p→r as ¬p∨r and split the sequent again to get:
¬p, p, q ⊢ r
r, p, q ⊢ r
The second sequent is done; the first sequent can be further simplified into:
p, q ⊢ p, r
This process can always be continued until there are only atomic formulas on each side. The process can be graphically described by a rooted tree graph, known as a reduction tree. The root of the tree is the formula we wish to prove; the leaves consist of atomic formulas only. The items to the left of the turnstile are understood to be connected by conjunction, and those to the right by disjunction. Therefore, when both consist only of atomic symbols, the sequent is accepted axiomatically (and always true) if and only if at least one of the symbols on the right also appears on the left.
Proving logical formulas:
Following are the rules by which one proceeds along the tree. Whenever one sequent is split into two, the tree vertex has two child vertices, and the tree is branched. Additionally, one may freely change the order of the arguments in each side; Γ and Δ stand for possible additional arguments. The usual term for the horizontal line used in Gentzen-style layouts for natural deduction is inference line.
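The full table of reduction rules is not reproduced here; the following sketch gives them in one common formulation (read downward, from the sequent to be proved towards simpler sequents, with each "and" marking a split of the tree into two branches):

```latex
% Requires amsmath. Gamma and Delta stand for possible additional arguments.
\begin{gather*}
% Right-hand side (succedent) reductions:
\Gamma \vdash \Delta, A \to B \;\Longrightarrow\; \Gamma, A \vdash \Delta, B \\
\Gamma \vdash \Delta, A \lor B \;\Longrightarrow\; \Gamma \vdash \Delta, A, B \\
\Gamma \vdash \Delta, A \land B \;\Longrightarrow\; \Gamma \vdash \Delta, A \;\text{ and }\; \Gamma \vdash \Delta, B \\
\Gamma \vdash \Delta, \lnot A \;\Longrightarrow\; \Gamma, A \vdash \Delta \\
% Left-hand side (antecedent) reductions:
\Gamma, A \land B \vdash \Delta \;\Longrightarrow\; \Gamma, A, B \vdash \Delta \\
\Gamma, A \lor B \vdash \Delta \;\Longrightarrow\; \Gamma, A \vdash \Delta \;\text{ and }\; \Gamma, B \vdash \Delta \\
\Gamma, A \to B \vdash \Delta \;\Longrightarrow\; \Gamma \vdash \Delta, A \;\text{ and }\; \Gamma, B \vdash \Delta \\
\Gamma, \lnot A \vdash \Delta \;\Longrightarrow\; \Gamma \vdash \Delta, A
\end{gather*}
```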
Proving logical formulas:
Starting with any formula in propositional logic, by a series of steps, the right side of the turnstile can be processed until it includes only atomic symbols. Then, the same is done for the left side. Since every logical operator appears in one of the rules above, and is removed by the rule, the process terminates when no logical operators remain: The formula has been decomposed.
Proving logical formulas:
Thus, the sequents in the leaves of the trees include only atomic symbols, which are either provable by the axiom or not, according to whether one of the symbols on the right also appears on the left.
Proving logical formulas:
It is easy to see that the steps in the tree preserve the semantic truth value of the formulas implied by them, with conjunction understood between the tree's different branches whenever there is a split. It is also obvious that an axiom is provable if and only if it is true for every assignment of truth values to the atomic symbols. Thus this system is sound and complete for classical propositional logic.
Proving logical formulas:
Relation to standard axiomatizations Sequent calculus is related to other axiomatizations of propositional calculus, such as Frege's propositional calculus or Jan Łukasiewicz's axiomatization (itself a part of the standard Hilbert system): Every formula that can be proven in these has a reduction tree.
Proving logical formulas:
This can be shown as follows: Every proof in propositional calculus uses only axioms and the inference rules. Each use of an axiom scheme yields a true logical formula, and can thus be proven in sequent calculus; examples for these are shown below. The only inference rule in the systems mentioned above is modus ponens, which is implemented by the cut rule.
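As an illustration of that last remark (a sketch rather than Gentzen's exact presentation), modus ponens - from ⊢ A and ⊢ A→B conclude ⊢ B - can be simulated using the left implication rule together with the cut rule:

```latex
% Requires amsmath. First derive  A -> B |- B  from the premise |- A and
% the identity axiom B |- B, then cut against the other premise |- A -> B.
\[
\frac{\vdash A \qquad B \vdash B}{A \to B \;\vdash\; B}\;(\to L)
\qquad\qquad
\frac{\vdash A \to B \qquad A \to B \;\vdash\; B}{\vdash B}\;(\mathrm{Cut})
\]
```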
The system LK:
This section introduces the rules of the sequent calculus LK (standing for Logistische Kalkül) as introduced by Gentzen in 1934. A (formal) proof in this calculus is a sequence of sequents, where each of the sequents is derivable from sequents appearing earlier in the sequence by using one of the rules below.
The system LK:
Inference rules
The following notation will be used:
⊢, known as the turnstile, separates the assumptions on the left from the propositions on the right;
A and B denote formulae of first-order predicate logic (one may also restrict this to propositional logic);
Γ, Δ, Σ, and Π are finite (possibly empty) sequences of formulae (in fact, the order of formulae does not matter; see § Structural rules), called contexts; when on the left of the ⊢, the sequence of formulas is considered conjunctively (all assumed to hold at the same time), while on the right of the ⊢, the sequence of formulas is considered disjunctively (at least one of the formulas must hold for any assignment of variables);
t denotes an arbitrary term;
x and y denote variables.
The system LK:
A variable is said to occur free within a formula if it is not bound by a quantifier ∀ or ∃. A[t/x] denotes the formula that is obtained by substituting the term t for every free occurrence of the variable x in formula A, with the restriction that the term t must be free for the variable x in A (i.e., no occurrence of any variable in t becomes bound in A[t/x]).
The system LK:
WL , WR , CL , CR , PL , PR : These six stand for the two versions of each of three structural rules; one for use on the left ('L') of a ⊢ , and the other on its right ('R'). The rules are abbreviated 'W' for Weakening (Left/Right), 'C' for Contraction, and 'P' for Permutation.Note that, contrary to the rules for proceeding along the reduction tree presented above, the following rules are for moving in the opposite directions, from axioms to theorems. Thus they are exact mirror-images of the rules above, except that here symmetry is not implicitly assumed, and rules regarding quantification are added.
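The complete table of LK rules is not reproduced here. A representative selection, in one common formulation consistent with the context-splitting convention discussed below (and with the freshness restriction on (∀R) stated next), reads:

```latex
% Requires amsmath. A representative subset of the LK rules.
\begin{gather*}
\frac{}{A \vdash A}\;(I)
\qquad
\frac{\Gamma \vdash \Delta, A \qquad \Sigma, A \vdash \Pi}{\Gamma, \Sigma \vdash \Delta, \Pi}\;(\mathrm{Cut}) \\[6pt]
\frac{\Gamma, A \vdash \Delta}{\Gamma, A \land B \vdash \Delta}\;(\land L_1)
\qquad
\frac{\Gamma \vdash \Delta, A \qquad \Sigma \vdash \Pi, B}{\Gamma, \Sigma \vdash \Delta, \Pi, A \land B}\;(\land R)
\qquad
\frac{\Gamma, A \vdash \Delta \qquad \Sigma, B \vdash \Pi}{\Gamma, \Sigma, A \lor B \vdash \Delta, \Pi}\;(\lor L) \\[6pt]
\frac{\Gamma \vdash \Delta, A}{\Gamma, \lnot A \vdash \Delta}\;(\lnot L)
\qquad
\frac{\Gamma, A \vdash \Delta}{\Gamma \vdash \Delta, \lnot A}\;(\lnot R)
\qquad
\frac{\Gamma \vdash \Delta, A[y/x]}{\Gamma \vdash \Delta, \forall x\, A}\;(\forall R)
\end{gather*}
```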
The system LK:
Restrictions: In the rules (∀R) and (∃L) , the variable y must not occur free anywhere in the respective lower sequents.
The system LK:
An intuitive explanation The above rules can be divided into two major groups: logical and structural ones. Each of the logical rules introduces a new logical formula either on the left or on the right of the turnstile ⊢ . In contrast, the structural rules operate on the structure of the sequents, ignoring the exact shape of the formulae. The two exceptions to this general scheme are the axiom of identity (I) and the rule of (Cut).
The system LK:
Although stated in a formal way, the above rules allow for a very intuitive reading in terms of classical logic. Consider, for example, the rule (∧L1) . It says that, whenever one can prove that Δ can be concluded from some sequence of formulae that contain A , then one can also conclude Δ from the (stronger) assumption that A∧B holds. Likewise, the rule (¬R) states that, if Γ and A suffice to conclude Δ , then from Γ alone one can either still conclude Δ or A must be false, i.e. ¬A holds. All the rules can be interpreted in this way.
The system LK:
For an intuition about the quantifier rules, consider the rule (∀R). Of course, concluding that ∀xA holds just from the fact that A[y/x] is true is not in general possible. If, however, the variable y is not mentioned elsewhere (i.e. it can still be chosen freely, without influencing the other formulae), then one may assume that A[y/x] holds for any value of y. The other rules should then be pretty straightforward.
The system LK:
Instead of viewing the rules as descriptions for legal derivations in predicate logic, one may also consider them as instructions for the construction of a proof for a given statement. In this case the rules can be read bottom-up; for example, (∧R) says that, to prove that A∧B follows from the assumptions Γ and Σ, it suffices to prove that A can be concluded from Γ and B can be concluded from Σ, respectively. Note that, given some antecedent, it is not clear how this is to be split into Γ and Σ. However, there are only finitely many possibilities to be checked since the antecedent by assumption is finite. This also illustrates how proof theory can be viewed as operating on proofs in a combinatorial fashion: given proofs for both A and B, one can construct a proof for A∧B. When looking for some proof, most of the rules offer more or less direct recipes of how to do this. The rule of cut is different: it states that, when a formula A can be concluded and this formula may also serve as a premise for concluding other statements, then the formula A can be "cut out" and the respective derivations are joined. When constructing a proof bottom-up, this creates the problem of guessing A (since it does not appear at all below). The cut-elimination theorem is thus crucial to the applications of sequent calculus in automated deduction: it states that all uses of the cut rule can be eliminated from a proof, implying that any provable sequent can be given a cut-free proof.
The system LK:
The second rule that is somewhat special is the axiom of identity (I). The intuitive reading of this is obvious: every formula proves itself. Like the cut rule, the axiom of identity is somewhat redundant: the completeness of atomic initial sequents states that the rule can be restricted to atomic formulas without any loss of provability.
Observe that all rules have mirror companions, except the ones for implication. This reflects the fact that the usual language of first-order logic does not include the "is not implied by" connective ↚ that would be the De Morgan dual of implication. Adding such a connective with its natural rules would make the calculus completely left-right symmetric.
Example derivations
Here is the derivation of ⊢ A∨¬A, known as the law of excluded middle (tertium non datur in Latin).
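The original presents the derivation as a proof tree; a linear rendering of one standard cut-free derivation, listing each sequent with the rule that produces it (up to the exact bookkeeping of the permutation rule), is:

```latex
% Requires amsmath. Each line lists a sequent and the rule that produces it.
\[
\begin{array}{ll}
1.\;\; A \vdash A                             & (I) \\
2.\;\; \vdash \lnot A,\ A                     & (\lnot R)\ \text{from 1} \\
3.\;\; \vdash A \lor \lnot A,\ A              & (\lor R)\ \text{from 2} \\
4.\;\; \vdash A,\ A \lor \lnot A              & (PR)\ \text{from 3} \\
5.\;\; \vdash A \lor \lnot A,\ A \lor \lnot A & (\lor R)\ \text{from 4} \\
6.\;\; \vdash A \lor \lnot A                  & (CR)\ \text{from 5}
\end{array}
\]
```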
The system LK:
Next is the proof of a simple fact involving quantifiers. Note that the converse is not true, and its falsity can be seen when attempting to derive it bottom-up, because an existing free variable cannot be used in substitution in the rules (∀R) and (∃L). For something more interesting, we shall prove ((A→(B∨C))→(((B→¬A)∧¬C)→¬A)). It is straightforward to find the derivation, which exemplifies the usefulness of LK in automated proving.
The system LK:
These derivations also emphasize the strictly formal structure of the sequent calculus. For example, the logical rules as defined above always act on a formula immediately adjacent to the turnstile, such that the permutation rules are necessary. Note, however, that this is in part an artifact of the presentation, in the original style of Gentzen. A common simplification involves the use of multisets of formulas in the interpretation of the sequent, rather than sequences, eliminating the need for an explicit permutation rule. This corresponds to shifting commutativity of assumptions and derivations outside the sequent calculus, whereas LK embeds it within the system itself.
The system LK:
Relation to analytic tableaux
For certain formulations (i.e. variants) of the sequent calculus, a proof in such a calculus is isomorphic to an upside-down, closed analytic tableau.
Structural rules
The structural rules deserve some additional discussion.
The system LK:
Weakening (W) allows the addition of arbitrary elements to a sequence. Intuitively, this is allowed in the antecedent because we can always restrict the scope of our proof (if all cars have wheels, then it's safe to say that all black cars have wheels); and in the succedent because we can always allow for alternative conclusions (if all cars have wheels, then it's safe to say that all cars have either wheels or wings).
The system LK:
Contraction (C) and Permutation (P) assure that neither the order (P) nor the multiplicity of occurrences (C) of elements of the sequences matters. Thus, one could instead of sequences also consider sets.
The extra effort of using sequences, however, is justified since part or all of the structural rules may be omitted. Doing so, one obtains the so-called substructural logics.
The system LK:
Properties of the system LK
This system of rules can be shown to be both sound and complete with respect to first-order logic, i.e. a statement A follows semantically from a set of premises Γ (Γ⊨A) if and only if the sequent Γ⊢A can be derived by the above rules. In the sequent calculus, the rule of cut is admissible. This result is also referred to as Gentzen's Hauptsatz ("Main Theorem").
Variants:
The above rules can be modified in various ways.
Minor structural alternatives
There is some freedom of choice regarding the technical details of how sequents and structural rules are formalized. As long as every derivation in LK can be effectively transformed to a derivation using the new rules and vice versa, the modified rules may still be called LK.
First of all, as mentioned above, the sequents can be viewed to consist of sets or multisets. In this case, the rules for permuting and (when using sets) contracting formulae are obsolete.
The rule of weakening will become admissible, when the axiom (I) is changed, such that any sequent of the form Γ,A⊢A,Δ can be concluded. This means that A proves A in any context. Any weakening that appears in a derivation can then be performed right at the start. This may be a convenient change when constructing proofs bottom-up.
Variants:
Independent of these one may also change the way in which contexts are split within the rules: In the cases (∧R),(∨L) , and (→L) the left context is somehow split into Γ and Σ when going upwards. Since contraction allows for the duplication of these, one may assume that the full context is used in both branches of the derivation. By doing this, one assures that no important premises are lost in the wrong branch. Using weakening, the irrelevant parts of the context can be eliminated later.
Variants:
Absurdity
One can introduce ⊥, the absurdity constant representing false, with the axiom:
⊥ ⊢
Or if, as described above, weakening is to be an admissible rule, then with the axiom:
Γ, ⊥ ⊢ Δ
With ⊥, negation can be subsumed as a special case of implication, via the definition (¬A) ⟺ (A→⊥).
Substructural logics
Alternatively, one may restrict or forbid the use of some of the structural rules. This yields a variety of substructural logic systems. They are generally weaker than LK (i.e., they have fewer theorems), and thus not complete with respect to the standard semantics of first-order logic. However, they have other interesting properties that have led to applications in theoretical computer science and artificial intelligence.
Variants:
Intuitionistic sequent calculus: System LJ
Surprisingly, some small changes in the rules of LK suffice to turn it into a proof system for intuitionistic logic. To this end, one has to restrict to sequents with at most one formula on the right-hand side, and modify the rules to maintain this invariant. For example, (∨L) is reformulated as follows (where C is an arbitrary formula): from the premises Γ, A ⊢ C and Σ, B ⊢ C, one infers Γ, Σ, A∨B ⊢ C. The resulting system is called LJ. It is sound and complete with respect to intuitionistic logic and admits a similar cut-elimination proof. This can be used in proving disjunction and existence properties.
Variants:
In fact, the only rules in LK that need to be restricted to single-formula consequents are (→R), (¬R) (which can be seen as a special case of (→R), as described above) and (∀R). When multi-formula consequents are interpreted as disjunctions, all of the other inference rules of LK are derivable in LJ, while the rules (→R) and (∀R) become: from Γ, A ⊢ B ∨ C, infer Γ ⊢ (A → B) ∨ C; and (when y does not occur free in the bottom sequent) from Γ ⊢ A[y/x] ∨ C, infer Γ ⊢ (∀x A) ∨ C.
Variants:
These rules are not intuitionistically valid.
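For comparison, the single-succedent forms of these two rules as used in LJ itself (a standard presentation, with the same side condition on y) are intuitionistically valid:

```latex
% Implication-right and universal-right in LJ: the succedent holds exactly one formula.
\frac{\Gamma, A \vdash B}{\Gamma \vdash A \to B}\;(\to R)
\qquad
\frac{\Gamma \vdash A[y/x]}{\Gamma \vdash \forall x\, A}\;(\forall R)
\quad\text{(y not free in the conclusion)}
```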
**Accommodation ladder**
Accommodation ladder:
An accommodation ladder is a foldable flight of steps down a ship's side. Accommodation ladders can be mounted parallel or perpendicular to the ship's side. If the ladder is parallel to the ship, it has to have an upper platform. Upper platforms are usually able to turn. The lower platform (or the ladder itself) hangs on a bail and can be lifted as required.
Accommodation ladder:
To prevent damage to boats going under the ladder as the water level rises and falls, a boat fender is fitted to the end of the ladder.
The ladder has handrails on both sides for safety. Accommodation ladders are constructed in such a way that the steps are horizontal whatever the angle of inclination of the ladder. The lower end of the ladder (or the lower platform) rests on a roller to compensate for the motion of the ship in relation to the quay.
**CarPlay**
CarPlay:
CarPlay is an Apple standard that enables a car radio or head unit to be a display and controller for an iOS device. It is available on iPhone 5 and later models running iOS 7.1 or later.
According to Apple, more than 800 car models support CarPlay through a USB connection. Some vehicles also allow devices to connect wirelessly; wireless support can also be added through aftermarket dongles. Vehicles without CarPlay can be fitted with audio products from automotive aftermarket suppliers.
Software:
Apple's CarPlay-enabled apps include Phone, Apple Music, Apple Maps, Calendar, Messages, Audiobooks (part of Apple Books), Podcasts, Settings, and News. Developers must request permission from Apple to develop CarPlay-enabled apps. Such apps fall into five categories: Audio: apps that primarily provide audio content, such as music or podcasts. Examples: Amazon Music, Audible, Google Play Music, iHeartRadio, QQ Music, Spotify, and Overcast.
Navigation: turn-by-turn guidance, including searching for points of interest and navigating to a destination. Examples: AutoNavi, Baidu Maps, Google Maps, and Waze.
Automaker-made apps allow a user to control vehicle-specific features such as climate controls, gas levels, or radio via CarPlay.
Messaging/Voice over IP (VoIP): listen to new messages and reply using dictation in an audio-only interface. Messaging apps on CarPlay integrate with third-party Siri support (known as SiriKit), while VoIP apps integrate with the iOS calling interface using CallKit. Examples: Telegram, WhatsApp, and Zoom.
Software:
Food-ordering and parking-services apps. To discourage distracted driving, Siri is used extensively, providing voice turn-by-turn navigation guidance and voice input for text messages. Newscast-style weather and stock results are announced instead of displayed visually. Requests that bring up visual information may be blocked when the car is in drive; most native CarPlay apps deliver audio content with minimal interaction. CarPlay-enabled apps installed on the device appear on the CarPlay home screen; a minimal sketch of such an app's entry point is shown below.
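To make the app model concrete, the following is a minimal, illustrative sketch of how a third-party CarPlay audio app typically exposes its interface on the car's screen using Apple's CarPlay framework. It assumes the app has been granted Apple's CarPlay audio entitlement and declares a CarPlay scene in its Info.plist; the class, title, and item names here are hypothetical, not taken from the article.

```swift
import UIKit
import CarPlay

// Illustrative scene delegate for a CarPlay audio app (a sketch, not Apple's sample code).
// CarPlay apps do not draw pixels directly; they hand template objects (lists, grids,
// now-playing screens) to the interface controller, and the system renders them on the
// vehicle's display using whatever input hardware the car provides.
class CarPlaySceneDelegate: UIResponder, CPTemplateApplicationSceneDelegate {
    var interfaceController: CPInterfaceController?

    func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                                  didConnect interfaceController: CPInterfaceController) {
        self.interfaceController = interfaceController

        // A single list of playable items, shown as the root screen in the car.
        let episode = CPListItem(text: "Episode 1", detailText: "Example Podcast")
        episode.handler = { _, completion in
            // Start playback on the phone here; the audio is routed to the car.
            completion()
        }
        let section = CPListSection(items: [episode])
        let library = CPListTemplate(title: "Library", sections: [section])
        interfaceController.setRootTemplate(library, animated: true, completion: nil)
    }

    func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                                  didDisconnectInterfaceController interfaceController: CPInterfaceController) {
        self.interfaceController = nil
    }
}
```

Which templates an app may use depends on its approved category; audio apps like this one are limited to list, grid, and now-playing style screens, in line with the minimal-interaction design described above.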
Hardware:
While most of the CarPlay software runs on the connected iPhone, the CarPlay interface provides the audio and display connection to the car's infotainment system. CarPlay adapts to various display sizes and control interfaces for each vehicle: touch screens, rotary dials, buttons, steering-wheel controls, and hands-free microphones. Aftermarket head units may support CarPlay and/or Android Auto; such units can be purchased from Alpine, Clarion, Kenwood, Pioneer, Sony, and JVC. The iPhone can connect to the car through a USB cable or wirelessly in two ways: by exchanging network credentials with a supporting CarPlay receiver over Bluetooth, establishing a two-way Wi-Fi connection; or by using a dongle adapter to enable a wireless connection to the system's USB port.
Manufacturers:
Most major automakers offer vehicles with CarPlay. Manufacturers with no CarPlay models include Wuling, Rivian, and Tesla Motors. Honda offers CarPlay on the Gold Wing motorcycle and on the Africa Twin.
History:
Predecessor: By 2008, a year after the iPhone's introduction (the 2009 model year for America), Mercedes-Benz vehicles were the first to sell an audio system incorporating both the iPod and the iPhone, equipped with 30-pin iOS input jacks. The new 2008 Harman Kardon NTG 2.5 featured full audio streaming, syncing, charging, and control integrated into the steering-wheel controls, instrument panel, and head unit. Apple had been working with Mercedes to develop iOS-compatible audio systems for their cars only a year after the iPhone's launch. With an Apple Lightning-to-30-pin adapter, iPhones and iPods remain backwards-compatible with the Harman Kardon NTG 2.5 and later models. This was the earliest audio system specifically engineered for iPod/iPhone integration, predating CarPlay and every other manufacturer's incorporation of iOS into vehicles. The concept of CarPlay was based on the iOS 4 feature called "iPod Out", which was produced through several years of joint development by Apple and the BMW Group's Technology Office USA. iPod Out enabled vehicles with the necessary infrastructure to "host" the analog video and audio from a supporting iOS device while receiving inputs, such as button presses and knob rotations, from the car's infotainment system, to drive the "hosted" user interface on the vehicle's built-in display. It was announced at WWDC 2010 and first shipped in BMW Group vehicles in early 2011. The BMW and Mini option was called "PlugIn" and paved the way for the first cross-OEM platforms, introducing the concept of requiring a car-specific interface for apps (as opposed to MirrorLink's simple and insufficient mirroring of what was shown on the smartphone's screen).
History:
Development: CarPlay's codename was Stark. Apple's Eddy Cue announced it as iOS in the Car at WWDC 2013. In January 2014 it was reported that Apple's hardware-oriented corporate culture had led to release delays. iOS in the Car was then rebranded and launched as "CarPlay", with significant design changes, at the Geneva Motor Show in March 2014, with Ferrari, Kia, Mercedes-Benz, and Volvo among the first car manufacturers. At WWDC 2022, Apple introduced an all-new version of CarPlay (informally known as CarPlay 2) which can control vehicle functions, access vehicle stats, and take over multiple vehicle screens completely. Apple's projected release date for this new CarPlay is late 2023. Manufacturers planning to adopt the new CarPlay include Audi, Acura, Ford, Honda, Infiniti, Jaguar, Land Rover, Lincoln, Mercedes-Benz, Nissan, Polestar, Porsche, Renault, and Volvo.
Timeline:
June 2013: Apple introduced iOS in the Car, an early version of CarPlay that never got a public release, at WWDC 2013.
June 2013: BMW officials announced their cars would not support iOS in the Car; they later changed their minds.
November 2013: Siri Eyes Free mode was offered as a dealer-installed accessory in the US to some Honda Accord and Acura RDX & ILX models. In December, Honda offered additional integration, featuring new HondaLink services, on some US and Canada models of the Civic and the Fit.
March 2014: Apple introduced CarPlay, renamed from iOS in the Car with significant design changes, at the 2014 Geneva Motor Show with automakers Ferrari, Mercedes-Benz and Volvo.
September 2014: A Ferrari FF was the first car with a full version of CarPlay.
November 2014: Hyundai announced the Sonata sedan would be their first model with available CarPlay by the end of the first quarter of 2015.
January 2015: Volkswagen announced CarPlay support would be coming later in 2015 and would be either standard or available on the majority of their 2016 model year lineup.
May 2015: General Motors announced CarPlay would be available starting with 14 different 2016 model year Chevrolet vehicles.
July 2015: Honda announced CarPlay would be available in their vehicles starting with the 2016 Honda Accord.
December 2015: Volvo implemented CarPlay in the 2016 Volvo XC90 as their first vehicle with CarPlay support.
December 2015: Mercedes-Benz confirmed that CarPlay would be available starting with select 2016 model year vehicles.
January 2016: Apple released a list detailing the car models which support CarPlay.
March 2016: Subaru announced the beginning of CarPlay and Android Auto support starting with the 2017 Impreza.
June 2016: Ford announced CarPlay would be available on all 2017 model year vehicles equipped with the Sync 3 infotainment system.
June 2016: Nissan announced CarPlay would be available in their vehicles beginning with the 2017 Nissan Maxima.
September 2016: BMW added CarPlay as a standalone option in most of their vehicles.
February 2017: Harman announced the first implementation of wireless CarPlay, which made its debut in the 2017 BMW 5 Series.
April 2017: The new generation Scania range became the first heavy-duty truck in Europe to support CarPlay.
July 2017: The new Volvo VNL became the first heavy-duty truck in the United States to support CarPlay.
October 2017: The 2018 Honda Gold Wing became the first motorcycle to support CarPlay.
January 2018: Toyota began to implement CarPlay starting with the 2019 Toyota Avalon.
July 2018: Mazda began to implement CarPlay starting with the 2018 Mazda6. Mazda also began offering a CarPlay retrofit for previous vehicles that are 2014 model year or newer and are equipped with the MZD-Connect system.
August 2018: Harley-Davidson CarPlay support was added to 2019 Touring models equipped with the Boom! Box GTS radio.
December 2019: BMW no longer requires a subscription to use CarPlay.
June 2022: Apple introduced an all-new version of CarPlay at WWDC 2022 which can control vehicle functions and take over multiple vehicle screens. The projected release date from Apple for the new CarPlay is late 2023.
Timeline:
March 2023: General Motors announced plans to phase out CarPlay support in their electric vehicles in favor of a new Android Automotive system, starting with the 2024 Chevrolet Blazer EV. GM vehicles shipped with CarPlay will not have CarPlay removed.
July 2023: Porsche announced tighter CarPlay integration with vehicle functions through the My Porsche app. The added functions include control of the vehicle's HVAC system, ambient lighting, radio and sound controls. While it has similar features, this is not yet the all-new CarPlay Apple showed at WWDC 2022.
Timeline:
Improvements by iOS version:
iOS 9 added the ability to link car and iPhone wirelessly, rather than only through a wired USB connector. It also enabled vehicle manufacturers to load apps that allow a user to control vehicle-specific features, such as climate controls or radio, via CarPlay.
iOS 10's Messages app enabled the user to listen to new messages and reply using dictation in an audio-only interface.
iOS 12 added turn-by-turn guidance, including searching for points of interest and navigating to a destination, as well as support for third-party navigation apps like Google Maps or Waze.
iOS 13 added Dashboard, an alternative to the app home screen, which presents a split layout of maps, media information, Calendar, or Siri Suggestions. It also added Calendar to the home screen, allowing suggested events to link to map directions to the event location. A new Settings app enabled users to configure certain CarPlay-specific settings, such as switching between light and dark modes, adjusting album art in CarPlay's Now Playing screen, or enabling Do Not Disturb While Driving during a CarPlay session. Third-party maps may also be displayed on Dashboard starting with iOS 13.4. It also added Apple's News app.
iOS 14 added new preset wallpapers and the ability to run food-ordering and parking-services apps.
iOS 15 improved Apple Maps and Focus modes, allowing users to customize (prioritize or postpone) notification delivery, particularly while driving.
iOS 16 removed the confirmation step when sending a message.
iOS 17 added SharePlay in the Car, allowing people in the vehicle to add songs to the music queue.
Competition:
The Open Automotive Alliance's Android Auto is a similar implementation used for Android devices.
Some vehicle manufacturers have their own systems for syncing the car with smartphones, for example: BMW ConnectedDrive, NissanConnect, Hyundai Blue Link, iLane, MyFord Touch, Ford SYNC, OnStar, and Toyota Entune.
General Motors has released an API to allow the development of apps which interact with vehicle software systems. MirrorLink is a standard for car-smartphone connectivity, currently implemented in vehicles by Honda, Volkswagen, SEAT, Buick, Skoda, Mercedes-Benz, Citroën, and Smart, with phones by manufacturers including Apple, HTC, Samsung, and Sony.
Competition:
Phaseout by General Motors: In April 2023, General Motors announced that it would gradually stop including Apple CarPlay and Android Auto in its electric vehicles so that it could collect and monetize more driver data and, in its view, deliver a better user experience. For instance, GM executive Scott Miller said that company-made software could warm up the electric vehicle's battery before driving, something Apple software cannot do. The company said drivers would still be able to connect smartphones to their car with Bluetooth. The announcement was widely panned by consumers; the Detroit Free Press reported that some longtime GM customers said the lack of CarPlay would lead them to consider buying a Ford vehicle instead. The move was widely interpreted by the press as promoting GM's partnership with Google and cutting off revenue streams to Apple at the expense of its customers. Some noted that the move would severely inhibit customers' data privacy. Ford, a GM rival, announced that its vehicles would continue to offer CarPlay. Ford noted that GM's announcement meant that Ford's inclusion of CarPlay further distinguished it among EV manufacturers, because Tesla and Rivian have historically not included CarPlay.
**I-No**
I-No:
This is a list of characters from the Guilty Gear fighting game series.
Creation and influences:
Daisuke Ishiwatari has cited Kazushi Hagiwara's manga Bastard!! and the fighting game Street Fighter II as influences on the Guilty Gear series. However, he noted that the majority of other fighting games were just recycling the same character skins or styles, and so he wanted every character "to be unique in their own way." Kazuhiko Shimamoto's characters were also noted as an inspiration for the male characters, with Ishiwatari saying they needed to be "chivalrous person-like characters", and citing Anji Mito as the closest to this type. The female characters, on the other hand, have not followed a standard, with Ishiwatari only remarking that they needed to look like real women. There are many musical references in the Guilty Gear series, including various characters' names and moves, which were inspired by rock and heavy metal bands like Queen, Guns N' Roses, and Metallica. For instance, the main character, Sol Badguy, was named after Queen's lead vocalist, Freddie Mercury. Both his real name, Frederick, and his last name were influenced by the singer, whose nickname was "Mr. Badguy".
Introduced in Guilty Gear:
Sol Badguy Voiced by (English): David Forseth (Guilty Gear Xrd – present) Voiced by (Japanese): Daisuke Ishiwatari (Guilty Gear, X and XX), Jouji Nakata (Guilty Gear 2 Overture (story mode & alternative voice in Guilty Gear XX Accent Core Plus only) – present). Introduced in the first installment of the series (1998), Frederick was one of the lead scientists of the Gear project, as well as being the prototypical Gear, dating from over a hundred years before the events of the Guilty Gear games. As a prototype, he is immune to the orders of Commander Gears. He was personally acquainted with Asuka R. Kreutz (typically known as That Man) prior to the Crusades. As Frederick, Sol created the "Outrage", which he called a supreme anti-Gear weapon. The Outrage has eight components called "Jinki" (Godlike Weapons), which greatly amplify their wielders' magical ability. Later, he was himself recruited into the order as a bounty hunter named Sol Badguy. Sol took part in the Crusades, during which he was a member of the Sacred Order of Holy Knights (Seikishidan), acquiring the nickname "Flame of Corruption" (背徳の炎, Haitoku no Honō). However, he later became disenchanted with the methods of the Sacred Order and fled, taking with him the Fūenken (封炎剣, Fireseal). The theft earned him Ky Kiske's enmity. In 2175, Sol faced Justice directly. During the fight, which Justice won, she discovered that Sol was a Gear. Justice attempted to assert her power as a Commander Gear to control Sol, but was unable to do so. Exploiting her confusion and weakness from the fight, the Holy Order, led by Ky, sealed Justice away, bringing the war to an end. However, a Gear named Testament began a plan to free Justice, and to stop it, the Union of Nations held a tournament. Canonically, Sol was the winner of the tournament, which also resulted in Justice discovering that Sol was, in fact, Frederick. Justice, in her dying words, commented that she wished that "...the three of us..." could talk one last time, and Sol swore to kill Asuka. In Guilty Gear X (2000), Sol has three endings, all of which involve a fight against Dizzy, who has a half-million-dollar bounty on her head. However, he spares her life in all of them, losing against her in his second ending, and judging that she is not a threat to the world in the other two. In the subsequent game, Guilty Gear X2 (2002), his storyline involved chasing down I-No. In his first ending, his defeat of I-No led to a direct confrontation with Asuka, who casually deflected all of Sol's attacks, saying that Sol was needed because a battle greater than the Crusades would soon occur. In Sol's second ending, Slayer informed him of the Post-War Administration Bureau's interest in Dizzy. In the third ending, he fought Dizzy, who had been possessed by Necro after I-No knocked her off the Mayship, and sent Dizzy on her way to meet Johnny and May. In Guilty Gear XX Accent Core Plus (2008), Sol had two endings. In one, I-No threw him back in time to fight his past self, Order-Sol. After both are weakened from the battle, I-No reappeared to murder Order-Sol, which, in turn, caused Sol's present form to cease existing. In his other ending, the same set of events played out, but Sol's present form, strangely, is unaffected by his past self's death. After escaping the time rift, Ky confronted and engaged him in battle. After the fight, Sol and Ky finally settled their differences and went their separate ways, with Ky asking Sol to promise that they will meet again.
Though it was referenced in both Sol and Ky's endings, only Sol's told the events directly after the battle, which implied that Sol was the victor. In Guilty Gear 2: Overture (2007), Sol Badguy took in a young man named Sin as his apprentice, and traveled the world with him as bounty hunters. During their journey, he met Izuna, who told him that a man called Vizel was seeking out and destroying Gears by order of Valentine, and that his next target was the kingdom of Illyria. Sol, Sin, and Izuna went to Illyria and found Ky Kiske trapped with a binding spell. With Dr. Paradigm's help they were able to release him. Eventually, they captured Valentine, who transformed herself into a monster. Sol fought her, and after the fight, he found himself in a white space, unable to return. However, he was confronted by That Man, who confirmed that Valentine was a copy of Aria, Sol's former lover. Sol was returned to the real world, where he reunited with his allies. In Guilty Gear Xrd -SIGN- (2014), Sol continued his journeys with Sin when Ramlethal Valentine declared war on humanity. Working alongside Ky, Sin, Leo Whitefang, and Elphelt Valentine, Sol fought to stop the Conclave from using the Cradle to resurrect Justice and destroy humanity. In the end, Sol and his allies were able to stop Justice from reviving, but Elphelt betrayed them and tried to kill them. She was eventually taken to the Backyard, and Sol, Sin and Ramlethal set out to find her. In Xrd -REVELATOR- (2016), Sol was confronted by Raven, who told him that That Man needed his help in preventing Elphelt from merging with Justice. Before the final battle against Ariels, Sol attempted to claim he is not related to Dizzy, her mother being Justice, who was once Aria. His allies began to theorize about his connection with the two female Gears, which caused Ky, Dizzy's husband, and Sol to scream out in horror at the supposed revelation that the two rivals are in-laws. After Ariels' defeat, Jack-O took Elphelt's place and fused with Justice, becoming a reincarnated Aria, and Sol revealed to his allies That Man's true identity: Asuka R. Kreutz.
Introduced in Guilty Gear:
In the epilogue storyline added in Xrd REV 2 (2017), Sol visited Ky's manor, where he challenged Ky to a duel. As Ky noticed Sol's fighting intent, they took their discussion outside the manor to speak privately. While recalling their days as members of the Sacred Order of Holy Knights, Sol mocked Ky, claiming that he held back every time they fought, despite not doing so when he battled Gears. He further remarked that Dizzy was a "monster", just like himself, angering Ky, who then entered a frenzied rage and managed to overpower Sol in combat. As Sol lay in a newly formed crater, he explained that he had a score to settle with Asuka, with Ky asking him about it. Unbeknownst to them, at the same time they spoke, Asuka turned himself in to the United States government.
Introduced in Guilty Gear:
Sol returns as a main protagonist in Guilty Gear -STRIVE- (2021). Despite having become a legendary hero after the previous events, Sol remains a bounty hunter, now accompanied by a fully recovered Jack-O, and nations often go through Ky to reach Sol for urgent matters. Having taken a request from a weakened Ariels to stop I-No, Sol realizes that Happy Chaos, a demon-like sorcerer who possessed Ariels' body, formerly Asuka's mentor and known as The Original, had been manipulating the entire chain of events alongside I-No herself. Sol's healing power as a Gear becomes slower after he is shot by Chaos with the same material used to create Nagoriyuki's katanas. He arrives in Washington as a guest of the White House during a peace meeting with Asuka, where the Gear Maker is targeted by Chaos and I-No for possession of the Tome of Origin. Sol's story concludes as he lets Asuka remove the Flame of Corruption from him to restore his humanity. Although the Flame of Corruption is removed at a bad moment, just as Chaos fuses with I-No into a godlike being, Sol manages to kill the godlike I-No with the help of Ky, Axl, and the redeemed Nightless samurai Nagoriyuki. He also saves Jack-O from sacrificing herself to stop I-No, declaring his love for her as Jack-O, not as Aria. With I-No's demise, Sol is officially declared dead by the U.S. government and is given a hero's burial. Now going by his original name, Frederick goes off the grid and opens a shop with Jack-O at the abandoned space station Iseo.
Introduced in Guilty Gear:
He was also a playable character in the spin-off games Guilty Gear Petit (2001), Isuka (2003), Dust Strikers (2006), and Judgment (2006). Along with Ky Kiske, Potemkin and Millia Rage, he is one of the only characters to appear in every Guilty Gear game.
Introduced in Guilty Gear:
Ky Kiske Voiced by (English): Sean Chiplock (Guilty Gear Strive) Voiced by (Japanese): Takeshi Kusao. Orphaned at the age of 10 during the Holy War, a 100-year war between mankind and bio-organic weapons called "Gears", he met the then-commander of the Sacred Order of Holy Knights (Seikishidan), Kliff Undersn. Ky was told to come back after five years of training if he really wanted to fight, and that is what he did. Upon Undersn's retirement, the 16-year-old Ky Kiske was named the new chief of the Order. With the appointment, he was given the Fuuraiken (封雷剣, Thunderseal Sword), one of the Order's holiest treasures and a weapon that allows the wielder to manipulate lightning. He led the Order to victory, ending the war, and with its aim achieved the Order was disbanded. Five years later, Ky entered the International Police Force, where he serves as a captain. Ky enters a tournament that will select members for a second Sacred Order of Holy Knights at the start of Guilty Gear (1998), after hearing rumors of the possible resurrection of Justice, a Commander Gear who led the Gears during the Holy War. In Guilty Gear X (2000), Ky hears new rumors, this time of a new Commander Gear that does not wish to harm humans: Dizzy. He sets out to find the flaws in his own concept of justice. In Guilty Gear X2 (2002), he returns to his normal duties as a captain of the IPF after rescuing a beaten Dizzy and entrusting her to Johnny's care. When he returns to work, Ky is thrust into a new conspiracy which involves robot clones of himself, called Robo-Ky (ロボカイ, Robokai), a secret organization, I-No, and his rival, Sol Badguy. In its sequel, Guilty Gear XX Accent Core Plus (2008), after discovering the Post-War Administration Bureau's interest in Dizzy, Ky abandons his post in the police force to protect Dizzy and help her control her power, the two of them eventually entering into a romantic relationship. In Guilty Gear 2: Overture (2007), Ky is the king of a land called Illyria. He wields a new sword, as he is keeping Dizzy, now his wife, sealed within the Thunderseal's power to preserve her existence. Having grown much more mature and composed over the years, his rivalry and animosity towards Sol have diminished, and he has entrusted his half-Gear son, Sin, to Sol. However, he is in constant conflict over his own ineptitude as a father and a husband, owing to the bias regarding Gears. Ky is a playable character in Guilty Gear Xrd (2014). He entrusts his old friend Leo with his place as a current king of Illyria. After Dizzy's return, his son Sin begins to call him "dad", much to Ky's happiness. In the final chapters of the story mode, during the Conclave's ambush using Justice to attack Illyria, Ky was shot by the Conclave member Axus, but got back on his feet immediately to finish Axus off. Upon awakening from Axus' seemingly fatal shots, Ky's left eye becomes red and bears a Gear mark, a result of his relationship with Dizzy and the birth of Sin: Ky had acquired a Uno Scale Gear Cell by exchanging half of his normal human eye for half of Sin's Gear eye shortly after his son's birth. It was explained, before he revealed himself to have half of a Uno Gear Cell, that his hair keeps growing unusually fast because of that cell's rapid progress, and that the cell was originally meant for Dizzy's return.
The main reason he and Sin exchanged half of their eyes is that his son's Gear power would become even more dangerous at full strength. In the standalone sequels -Revelator- and Rev 2, Ky eventually finds out about Sol, Dizzy and Justice's connections, much to both rivals' horrified dismay. Although his wife was exposed to the public and he had planned to move somewhere else with his family, he is glad to hear from a recently cured Elphelt that Dizzy is instead being praised as a savior after the last battle against Sanctus Maximus Ariels. Ky, who is very displeased with Sol's parenting of Sin, intercepts Sol to discuss something with him privately outside the manor. Sol provokes Ky into fighting him seriously by goading him with the claim that Dizzy is a monster. After Ky wins, he finds out that Sol planned this as preparation to settle matters with "That Man", Asuka R. Kreutz, the Gear Maker, who had recently turned himself in to the government after the last battle against Ariels. Ky returns as a playable character in Guilty Gear Strive (2021), where he eventually begins to use his implanted Gear Cell's power, which allows him to use a Dragon Install similar to Sol's, though, unlike his rival, only at low health. Ky had originally planned to attend a G4 summit with his family, but this was halted by I-No's raid on Ariels' prison, which spared his family from being held hostage during Happy Chaos' invasion of the U.S. White House, while the third Illyrian king, Daryl, took Ky's place as the kingdom's main representative at the summit. Whenever the summit needs Sol Badguy's urgent help, the letters often go to Ky, since Sol has returned to his usual bounty-hunting career. He and a fully recovered Jack-O interrogate I-No when she suddenly surrenders herself on purpose after releasing Happy Chaos from Ariels' body. Ky and Jack-O suspect there might be more than one trap in Chaos' invasion of the U.S. White House, or rather, the airship Tir Na Nog. Ky and Jack-O arrive in time, after Sol, Asuka and U.S. President Vernon get rid of Happy Chaos, and witness Asuka reverting Sol back to his former human self, Frederick Bulsara, by removing the Flame of Corruption from him. Unfortunately, Sol's reversion to Frederick comes at a bad moment, as Chaos is still on the airship, taking the form of one of Giovanna's superiors from the security agency and using the real Tome of Origin, which had fused with Asuka, to merge the Gear Maker's former master with I-No, restoring the full godlike power which Chaos had split from her. With the help of Axl Low and Chaos' former servant Nagoriyuki, Ky uses Uno's Dragon Install to weaken I-No, allowing Sol to kill her with the god-killing weapon Outrage, thereby saving the universe from being destroyed and reset multiple times. In the epilogue, he and Dizzy are visited by Testament, who is introduced to their new family. In the epilogue of Another Story A, set after the White House incident in the main story, Ky, his son Sin and his wife Dizzy are at the Illyrian Castle to celebrate Dizzy's ceremonial welcome as an official Gear representative leader to the world.
Introduced in Guilty Gear:
He is also a playable character in the spin-off games Guilty Gear Petit (2001), Isuka (2003), Dust Strikers (2006), and Judgment (2006). Along with Sol Badguy, he is one of the only characters to appear in every Guilty Gear game.
Introduced in Guilty Gear:
Axl Low Voiced by (English): Alexander Gross (Guilty Gear Strive) Voiced by (Japanese): Keiichi Nanba. Axl Low (アクセル=ロウ, Akuseru Rou) is a time traveller from 20th-century England, over 150 years before the Guilty Gear storyline, who was caught in a time slip that sent him into the future. In reality, Axl became a being similar to I-No, with the ability to manipulate reality. Axl made repeated attempts to return to his time and be reunited with his girlfriend, Megumi, first participating in the Sacred Knights Tournament in the belief that his wish could be granted as the winner. He later tried to contact I-No upon realizing she could send him back to his time, only for Jack-O to reveal the truth of his powers: that he could erase their reality in order to return to his own. After much deliberation, Axl finds the courage to use his powers to become someone Megumi could be proud of. At the end of the Strive storyline, Axl is reunited with Megumi after a dying I-No realizes the connection between herself and Axl, making Axl realize that I-No was in fact Megumi's fallen alternate future self, just as I-No discovers that Axl Low is an alternate-timeline version of William (ウィリアム, Wiriamu).
Introduced in Guilty Gear:
Baiken Voiced by (English): Patty Mattson (Guilty Gear -STRIVE-) Voiced by (Japanese): Satomi Kōrogi (Guilty Gear), Miho Sudo (Guilty Gear X), Chizu Yonemoto (Guilty Gear XX), Mayumi Asano (Guilty Gear Xrd – present). Baiken (梅喧) is among the few people born of Japanese descent. When she was young, her family was attacked by the Gears, a race of magical bioweapons that plunged the world into a hundred-year war known as the Crusades. After watching the death of her parents and losing her right arm and left eye, she swore revenge on the Gears' creator, "That Man". To this end, she trained herself in the use of the katana. She has many hidden surprise weapons within the sleeve of her severed arm: a Japanese mace, a fireworks cannon, a bladed fan, a rope dart, a hook, a claw, and a spear. As of Xrd, she wears an eyepatch goggle over her scarred eye and is brought by May to visit Kum Haehyun. In Strive, it turns out that Baiken's actual target, the one she referred to as "That Man", is not Asuka R. Kreutz but his former teacher, Happy Chaos. Upon learning from Chipp and Anji that Chaos cannot be killed easily, and fearing that Delilah would endanger herself in a suicidal attempt to avenge her late brother Bedman, Baiken decides to adopt Delilah, since the two share the similar situation of being left alone after losing the only family they had.
Introduced in Guilty Gear:
Outside of the Guilty Gear series, Baiken appears as a playable character in the 2019 SNK fighting game Samurai Shodown.
Introduced in Guilty Gear:
Chipp Zanuff Voiced by (English): Edward Bosco (Guilty Gear Xrd SIGN - present) Voiced by (Japanese): Takuya Morito (Guilty Gear), Takeshi Miura (Guilty Gear X – XX), Yoshihisa Kawahara (Guilty Gear Xrd SIGN – present). Chipp Zanuff (チップ=ザナフ, Chippu Zanafu) was a youth who struggled to live on the streets of America. Chipp was a drug trafficker who soon became a drug user. He found himself in a complicated situation that led to him fleeing from the Mafia. Chipp was outnumbered and nearly dying when his pursuers were dispatched by a man called Tsuyoshi (毅), who took Chipp into his care and trained him in the art of Ninjutsu. They lived peacefully together until an assassin syndicate ordered Tsuyoshi's killing. Chipp, in an attempt to pursue the culprits responsible, entered the second Sacred Order tournament in order to get a lead for his travels. Sometime during the events of XX onward, Chipp founded a kingdom called the East Chipp Kingdom in a once-lawless southern part of Africa. During Ariels' descent into madness, he is the first to discover that she is possessed by Happy Chaos, the true mastermind who manipulated events and used Asuka as a scapegoat for deeds he did not commit.
Introduced in Guilty Gear:
Faust Voiced by (English): Kaiji Tang (Guilty Gear Xrd – present) Voiced by (Japanese): Kaneto Shiozawa (Guilty Gear), Takashi Kondō (Guilty Gear X – present). Faust is introduced in the series' first, homonymous installment, Guilty Gear, where he is a renowned physician. One day, a girl comes under his care who needs an unauthorized, experimental surgery, one that had been denied repeatedly. In actuality, the procedure was denied because it held the secret to resurrection, which the Conclave would use in order to resurrect Justice. The Conclave had someone hire the assassin Zato-1 to sabotage the surgery and cause the death of Faust's patient. Faust blames himself and, consumed by guilt, goes insane and becomes a serial killer named Dr. Baldhead. After killing many people, he is arrested. However, he is allowed to enter a tournament, unknowingly killing more people for Justice's resurrection. Afterwards, he decides to atone for his crimes by committing suicide, but he receives a visit from the ghost of the dead girl, who tells him that her death is not his fault. He then abandons his Baldhead persona, assumes his real name, puts a paper bag on his head, and dedicates himself to saving as many lives as he can while he tries to find out the truth about the girl's death. In Guilty Gear X (2000), while Faust is trying to help people and fulfilling his duties as a doctor, he meets Dizzy and persuades her to abandon her life in the forest to prevent further attacks on her. In another possible ending, he heals Zato-1 of his illness and leaves him under the care of his right hand, Venom. In Guilty Gear X2 (2002), he wants to pursue I-No, since he knows she could cause people harm. This game features three possible endings for Faust: he finds I-No, but she confronts him with his past, and he admits that he still enjoys causing pain, yet vows to continue in his duty as a doctor; he meets Zappa, a man with spirits in his body, and does not know how to help him; and, in a fight alongside Venom against several Robo-Kys, he discovers that the Assassin's Guild participated in the death of the young girl he thought he had killed. In Guilty Gear XX Accent Core Plus (2008), Faust's storyline revolves around his attempt to find a cure for Zappa's condition. Depending on the player's decisions, he can discover a cure and perform surgery on Zappa. In Guilty Gear Xrd (2014), Faust confronts the resurrected Zato-1, who reveals that Faust was framed for his malpractice incident by the Conclave because his procedure held the secret of resurrection. The Conclave needed to keep this technique hidden, which is why they hired Zato-1 to kill Faust's patient. Later, Faust is found by Johnny, who reveals that May was having severe headaches and that she was Japanese. He provides May with medicine for the pain, but they are then attacked by Bedman. Faust and the Jellyfish Pirates are able to escape when Chipp Zanuff distracts the assassin. After helping Sol stop Justice's revival, Faust confronts Chronus of the Conclave and discovers that the Conclave was merely manipulated by someone with a higher authority. Faust and Chronus eventually go on a journey together to investigate the true culprit's next plot, while on the run from the second Illyrian king, Leo Whitefang. Upon discovering that Ariels is behind all the tragic events, Faust sneaks into the Illyrian library, where he reunites with Zappa and learns how the Japanese are related to the existence of "Antimatter" Gears.
He creates serums to prevent the "Antimatter" Gears deployed by Ariels from becoming suicidal time-bombs and destroying the world. Faust's weakened playable appearance in Guilty Gear Strive (2021) is eventually explained in one of the game's side stories, set after Happy Chaos activated the Tir ná nOg mode of the White House and the third Illyrian king, Daryl, convinced Chaos to release him and the other world leaders held hostage, barring U.S. President Vernon, who alongside Giovanna was involved with Sol, the Gear Maker Asuka R. Kreutz, and Potemkin in protecting the Tome of Origin from falling into Chaos' hands. It is revealed that Faust helped Sin, Ramlethal, Baiken and May stop Bedman's sister, Delilah, from losing control of her power in her attempt to kill Chaos, despite the fallen Original's current immortal nature. With the help of Bedman, whose soul temporarily inhabited his weaponized bed, Faust and his allies manage to cure Delilah, at the cost of the doctor's condition becoming drastically weakened, which explains his playable appearance. He is later seen in the main story's epilogue under Chronus' care, as they continue their journey some time after the White House incident.
Introduced in Guilty Gear:
Faust is also a playable character in the spin-off games Guilty Gear Petit 2 (2002), Isuka (2004), Dust Strikers (2006), and Judgment (2006).
Introduced in Guilty Gear:
Justice Voiced by (Japanese): Takuya Moritou (Guilty Gear), Wakana Sakuraba (Guilty Gear XX), Kazue Fujita (Guilty Gear Xrd SIGN). Justice (ジャスティス, Jasutisu) is the original Command Class Gear, created by Asuka from Aria as a means to end war. Through Happy Chaos's influence, the Universal Will took over Justice in an attempt to manifest in the physical world using her and the population of Japan. Asuka was forced to override Justice and force her to annihilate Japan to stop this. The ordeal shattered Aria's mind: she forgot her past and began the century-long conflict known as the Crusades, only regaining her memories as Aria when Sol killed her. However, Justice's body ended up in the possession of the Conclave, who sought to revive her so they could use her power to reform humanity according to their vision. Justice's body was taken by Ariels for her plan to merge with Elphelt Valentine, a genetic copy of Aria, to create a "complete humanity". Jack-O Valentine, possessing the fragmented half of Aria's soul, took Elphelt's place and merged with Justice to completely restore Aria.
Introduced in Guilty Gear:
Kliff Undersn Voiced by: Hatsuaki Takami (Guilty Gear), Shigeru Sakano (Guilty Gear XX). Kliff Undersn (クリフ=アンダーソン, Kurifu Andāson) was a commander of the Holy Order and wields a massive sword known as the Dragonslayer (斬竜刀, Kiryūtō). A hero of the Crusades who clashed with Justice multiple times, he became a great mentor to Ky Kiske and personally scouted Sol Badguy, who had saved a younger Kliff from a rampaging Gear long ago, to join the Holy Order. He is the foster father of Testament, whom he found as an orphan during the Crusades. After decades serving as commander, he resigned from his position and entrusted it to Ky. For the rest of the Crusades, Kliff remained an instructor, training new recruits. With the Crusades' end and the disbanding of the Order after Justice's defeat, Kliff finally retired. But only a couple of years later, he joined the Second Holy Order Selection Tournament and lost his life to his own child.
Introduced in Guilty Gear:
May Voiced by (English): Eden Riegel (Guilty Gear Xrd SIGN - present) Voiced by (Japanese): Satomi Kōrogi. May (メイ, Mei) is the young, cute, and spunky first mate of the Jellyfish air pirates, utterly dedicated to Johnny, the leader of the pirates and the man who raised her after she was orphaned. She entered the first tournament in order to bail Johnny out of prison, and fights in later tournaments for his benefit. She fights with a massive ship's anchor, which she is able to swing with ease. May is revealed to be one of the endangered citizens of Japan, who are cursed by the Universal Will with a seed that would transform them into living bombs bent on destroying the world. In Xrd, she begins to suffer an illness before being brought to Kum Haehyun for treatment. Thanks to the treatment, May carries Information Flares within her, which save her from becoming an Antimatter Gear.
Introduced in Guilty Gear:
Millia Rage Voiced by (English): Tara Platt (Guilty Gear Xrd SIGN - present) Voiced by (Japanese): Yuko Sumitomo. Millia is named after the American thrash-metal band Meliah Rage. Daisuke Ishiwatari created Millia's character to convey, through her relationship with Zato, the feelings of a person who loves someone who is rejected by society. After the death of her parents, she is adopted by a nearby assassin syndicate, the Assassin's Guild. There, she learns the sixth of the Hi-Deigokutsuipou (the "Six Forbidden Magics"), "Angra", which allows Millia to shape her hair as she wants. Due to Zato-1's rise in power within the Guild, Millia seals him within a dimensional portal and abandons the Guild shortly thereafter, finding no comfort in the cruel ways of an assassin. In Guilty Gear (1998), Millia uses the Second Holy Order's fighting tournament as a way of tracking down Zato, who has escaped from the dimensional prison, in order to kill him. However, she cannot do it, as she is manipulated like the rest of the cast, and the bloodshed from the tournament releases Justice from her slumber. She is still in search of Zato in Guilty Gear X (2000). Panicked about a Gear with free will, countries establish another tournament, awarding a prize to whoever captures the Gear. Millia uses this as another chance to find Zato. Canonically, she finds him and seemingly kills him. Unknown to her, a symbiotic creature named Eddie takes control of Zato's body. In Guilty Gear X2 (2002), she receives reports of sightings of a being similar to Zato's Forbidden Beast, Eddie. In XX, Millia has three different endings. In the first, she faces Slayer just before she is about to confront Eddie; after a fight in which Millia manages to hold her ground but is unable to defeat him, Slayer tells Millia that her hair is of the same origins as Eddie, though Millia says she already knows it. In the second ending, she defeats Slayer and subsequently kills Eddie. The third ending shows that, after killing Eddie, she buries Zato's body. In Guilty Gear XX Accent Core Plus (2008), Millia sets off to find and kill Eddie and destroy the Assassin's Guild. In her first ending, she finishes her vendetta, slaying Eddie. She continues to live on the run from the Assassin's Guild, but does not falter in her resolve and continues to keep her hair under control. In the second ending, however, she loses control of her hair and accidentally kills Bridget. As she stands horrified at her act, she is accidentally stabbed in the back by her fan, but she feels content as she dies as herself, not as a monster. By Guilty Gear Xrd (2014), Millia had made peace with Venom and worked together with him to find the resurrected Zato-1. Working with a reformed Assassin's Guild led by Zato-1, Millia becomes the new director of a reformed Post-War Administration Bureau in Guilty Gear Strive (2021). Both Millia and Zato are summoned by the third king of the Illyria Kingdom, Daryl, to investigate I-No's next plan after she took something from an imprisoned Ariels, later revealed to be the demon-like sorcerer Happy Chaos. With the help of Anji and Chipp, during Chaos' terrorist attack on the U.S. White House, or rather the airship Tir Na Nog, Millia and the rest of the world learn the true identity of I-No's accomplice, his past relation to Asuka, and the previous events. After I-No's demise at the hands of Sol Badguy (now a human named Frederick Bulsara), Millia and Zato are last seen as regular customers of Venom and Robo-Ky's bakery.
Introduced in Guilty Gear:
Millia is also a playable character in the spin-off games Guilty Gear Petit (2001), Isuka (2003), Dust Strikers (2006), and Judgment (2006).
Introduced in Guilty Gear:
Potemkin Voiced by (English): Armen Taylor (Guilty Gear Strive) Voiced by (Japanese): Hideyuki Anbe (Guilty Gear), Takashi Kondō (Guilty Gear X – present). Potemkin (ポチョムキン, Pochomukin) is a massive slave-soldier of Zepp, a floating continent controlled by a military dictatorship; he was forced into the first tournament by his superiors. However, during the tournament, the government of Zepp was overthrown in a revolt led by Gabriel, his mentor. Once Gabriel was made president of Zepp, Potemkin pledged his loyalty to the new government as a special agent. The mantle he wears was a slave collar used by his superiors to keep him in check; he decided to keep it as a memento of his past. From Xrd onwards, he wears a Zepp military uniform and a masked helmet.
Introduced in Guilty Gear:
Testament Voiced by (English): Kayleigh McKee (Guilty Gear -STRIVE-) Voiced by (Japanese): Takami Akkun (Guilty Gear), Katsuaki Kobayashi (Guilty Gear X – XX), Yu Kobayashi (Guilty Gear -STRIVE-). Testament (テスタメント, Tesutamento) was an orphan during the Crusades and was adopted by Kliff Undersn. When they were old enough, despite their foster father's wishes, they desired to inherit their father's name by joining the Holy Order. However, while carrying out a mission on behalf of the Order, Testament met an untimely end; their body was never found, and they were assumed dead. They had in fact been captured by government agents and converted into a Gear by the Post-War Administration Bureau for the purposes of experimenting and developing new weapons. Unlike most Gears, they still retained their sense of self. However, Justice turned them against humanity, and they found themself on the opposite side of the war from which they started, causing them to become cynical and filled with regret after being freed from Justice's mind control at the time of her death at the hands of Sol Badguy. While hiding from humanity, they acted as a guardian to Justice's daughter, Dizzy, when her Gear powers went awry and caused her to end up with a bounty on her head; that is, until the Jellyfish Pirates adopted her, before she met and married Ky Kiske. Testament was not seen after Dizzy began her relationship with Ky until their return in the ending of Strive many years later, when they are introduced to Dizzy and Ky's son, Sin, as well as to Elphelt and Ramlethal Valentine. When Testament was announced as the last Season 1 fighter of Strive, it was revealed that they now live peacefully with Dizzy's adoptive human parents.
Introduced in Guilty Gear:
Zato-1 and Eddie Voiced by (English): Matthew Mercer (Guilty Gear Xrd SIGN - present) Voiced by (Japanese): Kaneto Shiozawa (Guilty Gear – X), Takehito Koyasu (Guilty Gear XX – present). Zato-1 (ザトー=ONE, Zato Wan) was a Spanish member of the powerful Assassin's Guild who allowed himself to become the host of a symbiotic creature named Eddie (エディ, Edi) in exchange for his sight. Because of this, Zato-1 was able to take control of his shadow and use it as a weapon to gain great power. With this power, he made himself leader of the Assassin's Guild. However, as his body weakened, Eddie was able to take control, until Zato-1's death at the hands of Millia Rage. While his body was taken over by Eddie from Guilty Gear XX onward, following the death of his voice actor Kaneto Shiozawa, Zato-1 was later resurrected by the Conclave as part of their experiments, serving them until he is defeated by Faust. He later returns to the Assassin's Guild and plays a role in its reformation into a legitimate intelligence organization.
Introduced in Guilty Gear X:
Anji Mito Voiced by (English): Aleks Le (Guilty Gear -STRIVE-) Voiced by (Japanese): Toru Igarashi (Guilty Gear X – XX), Nobutoshi Canna (Guilty Gear -STRIVE-). Anji Mito (御津 闇慈, Mito Anji), real name unknown, is among the few people born of Japanese descent. Because of this, he is protected by the government, since full-blooded Japanese are an endangered race. While there are those who accept this lifestyle, Anji does not; he compares accepting the government's protection to living in a zoo cage. To regain his freedom, he escaped from his colony and pursued "That Man" for answers. He fights with a pair of hand-held fans called Zessen (絶扇). It is implied that Anji stole the Zessen wind fans, which had been stored in the Japanese colony, before escaping. However, after Asuka turned himself in to the government and joined the world peace project in Strive, it is revealed that Asuka is not the "That Man" whom Anji seeks; it was the Gear Maker's former master, Happy Chaos, the true mastermind who had been possessing Ariels' body, and the one who both destroyed Baiken's village and used Asuka as a scapegoat.
Introduced in Guilty Gear X:
Dizzy Voiced by (Japanese): Kazue Fujita. Dizzy (ディズィー, Dizī) was abandoned by her mother, Justice, and found as an infant, roughly three years before the events of Guilty Gear X (her first appearance), by an old couple. However, the other villagers became afraid, since Dizzy appeared to age from infancy to her late teens in three years. This was compounded when she discovered that she now had wings and a tail. Dizzy was thus identified as a Gear, essentially a living weapon of mass destruction, and a hundred-year war against Gears had ended only five years earlier. Dizzy's foster parents hid her in a grove in the woods, but she was soon discovered and subjected to abuse at the hands of her captors. She quickly escaped, however, and the government issued a 500,000 World-Dollar bounty for her death. She was later taken in by Johnny and May, joining the Jellyfish Pirates. Dizzy later married Ky, and they had a half-Gear child named Sin. Dizzy made an NPC appearance in Guilty Gear Xrd -Sign- before again becoming playable in Guilty Gear Xrd -Revelator-/Rev 2 as the result of a fan vote conducted by Arc System Works. Near the end of Revelator, her father was confirmed to be none other than the bounty hunter and prototype Gear, Sol Badguy. Her full birth name is Dizzy Hale-Bulsara (ディズィー=ヘイル-バルサラ, Dizī Heiru-Barusara), named after her parents' original identities. In the After Story of -Revelator-/Rev 2, the public finally embraces Dizzy as a symbol of hope, allowing her to restart her family life normally with Ky and Sin, as well as Ramlethal and Elphelt.
Introduced in Guilty Gear X:
Jam Kuradoberi Voiced by (English): Xanthe Huynh (River City Girls 2) Voiced by (Japanese): Manami Komori (Guilty Gear X – XX), Rei Matsuzaki (Guilty Gear Xrd REVELATOR). Jam Kuradoberi (蔵土縁 紗夢, Kuradoberi Jamu) is a master chef who longs to open her own restaurant but lacks the means to do so, and she seems to have terrible luck in this endeavor even once she gets it off the ground. She is a fairly docile character, and also relatively unimportant during the beginning arcs of the storyline. However, during XX, Jam's ability to wield Ki becomes a very notable aspect. She can be described as a bit of a flirt, having hit on both Bridget and Ky in her story. Jam returns in Guilty Gear Xrd -Revelator-/Rev 2, where her restaurant has somehow been destroyed for the third time; she becomes frustrated when she finds out that Ky has married Dizzy, leading her to return to bounty hunting. After Rev 2, as shown in the ending of -Strive-, Jam opens a fourth restaurant, where she is seen serving dishes to Kum Haehyun.
Introduced in Guilty Gear X:
Johnny Voiced by (Japanese): Norio Wakamoto. Johnny (ジョニー, Jonī) is the captain of the airship May Ship and leader of the Jellyfish Pirates. His first appearance was in Guilty Gear as a non-playable character in May's ending, and he became a playable character in Guilty Gear X. Johnny is a compulsive womanizer; his entire crew, including May, is composed of young women. But when it comes to the protection of his crew, Johnny is a selfless man who will protect the lives of his crewmates, as well as others if need be. He is protective of Dizzy, defending her from bounty hunters in Guilty Gear X and from I-No's attack in Guilty Gear XX. He fights with a wood-handled Japanese sword and uses the Iaidō style of swordsmanship. Johnny made an NPC appearance again in Guilty Gear Xrd -Sign- before returning as a playable character in Guilty Gear Xrd -Revelator- and Guilty Gear Strive.
Introduced in Guilty Gear X:
Venom Voiced by (Japanese): Mikio Yaeda (Guilty Gear X), Junichi Suwabe (Guilty Gear XX – present). Venom (ヴェノム, Venomu) is an orphan raised by the Assassin's Guild. Venom became an apprentice and the devoted right hand of Zato after Zato saved him from being executed by the Guild, which was displeased by Venom's reluctance to kill. Once Millia began to hunt Zato and the parasite Eddie began taking more control, Venom began his quest to save his beloved master. He fights using a cue stick.
Introduced in Guilty Gear X:
In the -Revelator-/Rev 2 storyline, he met and befriended Robo-Ky. After the storyline, he and the now-bodiless Robo-Ky are trying to adjust to new, normal lives while working to repair the latter's body. Venom started out as a street vendor in an alleyway before eventually being offered the chance to open a bakery in a main shopping district. He has also done this to hide his identity and clear his assassination record.
Introduced in Guilty Gear Petit:
Fanny Fanny (ファニー, Fanī) is a strange nurse with an unusual connection to Dr. Baldhead, who saved her life when she was sick. She fights her enemies in a style similar to Dr. Baldhead's, using a syringe that once belonged to her late mother. She appeared in the WonderSwan-exclusive game Guilty Gear Petit and its sequel, Guilty Gear Petit 2. Her endings in both games show a strong connection with Dr. Baldhead: in the first she wonders why Dr. Baldhead disappeared; in the second she says goodbye because she knows she will never see him again.
Introduced in Guilty Gear X2 and updates:
I-No Voiced by (English): Tara Platt (Guilty Gear Xrd Sign), Amber Lee Connors (Guilty Gear Strive) Voiced by (Japanese): Kikuko Inoue. I-No was first introduced in the third installment of the series, Guilty Gear X2 (2002), where she appears as the primary antagonist and final boss. She carries an electric guitar nicknamed Marlene (マレーネ, Marēne) that she uses in battle, both as a bludgeon and by playing it to create deadly sonic waves; she also fights with her hat, which can shoot projectiles out of a secret hole. I-No is one of Asuka R. Kreutz's servants, and she appears in every character's storyline, manipulating them against each other; for example, she gives Jam Kuradoberi and Bridget fake bounty lists, consisting entirely of other cast members, bearing the names of the people her master wants to kill. As she works for personal gain instead of being only a puppet, in Guilty Gear XX Accent Core Plus (2008) she forces Asuka into recruiting Anji Mito to capture her, eventually succeeding. She has three possible endings: she is captured by That Man and Raven to be punished, and argues that she only wanted to remove those who stood in her boss's way, but her master says they are beneficial to what he has in mind for the world; she fights and defeats Dizzy and May, either subsequently becoming overwhelmed by Dizzy's power or kidnapping May; or she is defeated by Baiken and stabbed repeatedly, leading to her death. I-No is a playable character in Guilty Gear Xrd (2014), where she helps Asuka deal with the Conclave and Ariels while becoming associated with Axl Low due to their similar powers. I-No is also a playable character in the spin-off games Guilty Gear Isuka (2003), Dust Strikers (2006), Judgment (2006), Medal Masters (2015), and Epic Seven (2018). I-No returns in the 2021 video game Guilty Gear Strive as its main antagonist. The game's storyline reveals that I-No was artificially created by The Original as a replacement for the Universal Will when it became humanity's enemy, I-No being created from humanity's collective desire for a future. But I-No's existence threatened to unravel reality, forcing The Original to absorb half of her power into himself to prevent that calamity, at the cost of his sanity. I-No allied herself with Asuka to capture the current Universal Will's vessel, Ariels, in hopes of releasing the one hidden within the Universal Will and regaining her full power. With Ariels imprisoned at the Illyrian Castle prison, she released The Original, now known as Happy Chaos, who helps her restore her true godly power. To do so, they made Nagoriyuki a pawn, used the material of his sword in an attempt to dispose of Sol Badguy, and stole the Tome of Origin from Asuka so that Chaos could return her godly power. I-No allows herself to be captured to ensure Chaos' plan goes well. Once she regains her power, I-No resolves to bestow it upon humanity while repeatedly recreating the universe until she finds what she seeks. But she is ultimately destroyed by Sol. In her final moments, she uses her power over the axis of time to reunite Axl with his beloved Megumi, who is revealed to be I-No's younger self from an alternate timeline, while leaving her weaponized guitar Marlene in Axl's care.
Introduced in Guilty Gear X2 and updates:
Robo-Ky Voiced by: Takeshi Kusao (Guilty Gear X – XX), Yutaka Terada (Guilty Gear XX #Reload – XX Slash), Takumi Inoue (Guilty Gear XX Accent Core), Shigeru Chiba (Guilty Gear Xrd REVELATOR). Robo-Ky (ロボカイ, Robokai) is not simply one character, but in fact a line of robotic copies of Ky Kiske created by the shadowy Post War Administration Bureau. For some reason, Robo-Ky is often mistaken for the real Ky Kiske and vice versa during the game's story mode, even though his face is obviously metallic, his voice is higher-pitched and robotic, and he constantly blurts out *GIGIGI* or *BZZZT* noises during story sequences. A variant, Robo-Ky Mk. II, has a moveset and technical parameters (such as attack strength and defense) that can be customized. Unlike the other Robo-Ky units, Mk. II was built by and is loyal to a mysterious scientist rather than the Post-War Administration Bureau; its type is kept secret. As of Xrd, only one Robo-Ky remains, the sentient first-model unit that survived its last appearance in XX Accent Core; now a con for hire and presumably homeless, it was eventually hired by Venom for some urgent emergencies. Sometime later, in the After Story of Xrd Rev 2, Robo-Ky, reduced to a head after sacrificing his body in his last battle against Bedman, accompanies Venom, who wants to repay the robot by earning enough money to build him a new body. The two start out as street vendors and eventually open a bakery.
Introduced in Guilty Gear X2 and updates:
Slayer Voiced by (Japanese): Iemasa Kayumi (Guilty Gear XX – Xrd SIGN), Takaya Hashi (Guilty Gear Xrd REVELATOR). Slayer (スレイヤー, Sureiyā) is one of the few survivors of the Nightless (ナイトレッスン, Naitoressu), a nearly extinct ancient vampire race, and the founder of the Assassins Guild. He comes out of hiding when the Guild descends into chaos after Zato's disappearance. Cultured and debonair, Slayer enjoys haiku and spends his time with his wife Sharon, another immortal. He also has a personal connection with Gabriel, the president of Zepp, but seems to be acquainted with all of the movers and shakers of the Guilty Gear universe. He has the apparent motive of either observing the cast, or warning them of being targeted by the Post-War Administration Bureau. He personally knows "That Man"/Asuka, and that character seems to hold him in high regard since he apologizes to Slayer. He was originally thought to be the only surviving Nightless, until another, known as Nagoriyuki, was found alive and unearthed by Happy Chaos during -STRIVE-.
Introduced in Guilty Gear X2 and updates:
Zappa Voiced by: Yūji Ueda. Zappa (ザッパ, Zappa) is an unlucky young man, looking for a wife and writing in his diary about a new "disease" he has, which, from his point of view, consists of fainting and then waking up somewhere else, possibly with alarming wounds and fractures and no memory of how he got there. He seeks the doctor Faust to cure his paranormal ailment. When entering a battle, he is unconscious, with S-Ko (S子)—his most powerful vengeful spirit—and the other ghosts having control of his actions. These ghosts consist of three giant centipedes, several will-o-wisp-like apparitions that manipulate a broken sword, three gray ghosts, a dark chihuahua-like dog, and a manifestation of lightning called Raou. As of Xrd, he is "cured" of S-Ko's possession and works for a paranormal investigation team in Illyria under the direction of the third king, Daryl. When he reunites with Faust, Zappa stumbles upon what Faust had been reading and realizes that the true masterminds who manipulated the Conclave into attacking humanity intend to annihilate the Japanese for their extraordinary Ki.
Introduced in Guilty Gear Isuka:
A.B.A Voiced by: Maki TakimotoA.B.A (アバ, Aba) is an artificial life-form, or homunculus, that was created by a scientist who lived within "Frasco" (フラスコ, Furasuko) mountain. However, before her birth, her creator was taken away by the military. A.B.A found herself alone within Frasco, and lived the first ten years of her life in total isolation until she managed to escape from Frasco. She began to collect keys to find relief from her sadness as they represented the opening of a bold new world and an escape from imprisonment. Eventually, she finds "Flament Nagel", an ancient war relic shaped like a key, and decides to keep it as her partner; whom she renamed "Paracelsus" (パラケルス, Parakerusu). Her new goal was to acquire an artificial body for her newfound partner, whom she refers to as her spouse.
Introduced in Guilty Gear Isuka:
Leopaldon Leopaldon (レオパルドン, Reoparudon) is the boss of Guilty Gear Isuka. He is a good man at heart who somehow manages to control a giant Gear, his faithful dog. A killer who shows absolutely no mercy, he is also a formidable beast.
Introduced in Guilty Gear Judgment:
Judgment Judgment was originally Raymond, a mad scientist working on the remote island of Isene and exploiting its inhabitants, trying to create a living weapon that would surpass even the Gears. He believed his work was the work of God. Raymond is devoured by Inus, a dark king of the underworld, who is subsequently killed. This allows Raymond to take control of Inus's power, transforming himself into Judgment. However, because Inus wished to remain dead, Judgment was subsequently consumed after being defeated.
Introduced in Guilty Gear 2: Overture:
Dr. Paradigm Voiced by (Japanese): Yuji Mikimoto. Dr. Paradigm (Dr.パラダイム, Dokutā Paradaimu) is yet another of Guilty Gear 2: Overture's seven playable characters. He is one of the Gears sealed in the dimensional plane known as the Backyard (バックヤード, Bakkuyādo) during the era of the Genocide Gear Justice in the Crusades; this was done so Justice could not control his mind using her Commander Gear abilities. Paradigm was later released when the 100-year war was over. Dr. Paradigm is a highly skilled magician with a fairly large book of spells. He has a protective bubble around him which appears to be permanent. After Justice's daughter Dizzy married Ky Kiske, the noble first king of Illyria, and had a son named Sin, Dr. Paradigm began to work for the kingdom.
Introduced in Guilty Gear 2: Overture:
Izuna Voiced by (Japanese): Toru Furusawa. Izuna (イズナ) is another playable character who made his debut in Guilty Gear 2: Overture. Along with Valentine, Izuna also comes from the Backyard. Izuna is a Japanese fox spirit from a mysterious race called the Yokai. Unlike most spirits of his kind, Izuna gained a physical form through the power of his will. Izuna states that there are other physical spirit demons like him, but that they were brainwashed by Valentine. He is a very skilled swordsman who wields a katana named "Namakura". He also possesses unique magical abilities such as potent teleportation. Izuna is also the one who teaches Sol Badguy and Sin how to use the tactics of the game's Ghost and Master Ghost system. Most fans believe Izuna's looks were based on Slayer, though this is unlikely: in various interviews Daisuke Ishiwatari has stated that Izuna, along with Valentine, Dr. Paradigm, and Sin, are all new creations of his.
Introduced in Guilty Gear 2: Overture:
Ever since the Cradle Incident in Xrd, his current whereabouts are not known, and no one has been able to reach him since.
Introduced in Guilty Gear 2: Overture:
Raven Voiced by (Japanese): Shigeru Sakano (Guilty Gear XX), Hiroki Yasumoto (later games). One of three servants to That Man. Little is known of him, but he does share some sort of connection with Axl Low, which That Man describes as the two being "parallel existences" of each other. In the Guilty Gear novel "Lightning the Argent", Raven shows unusual battle prowess by essentially ignoring Sol's fire attacks, via regeneration, and beating Faust in an inter-dimensional battle. Raven is also present in several endings in Guilty Gear XX, in one of which he is noted as the parallel existence of Axl Low. Raven appears as a boss character, and was later made playable via DLC, in Guilty Gear 2: Overture for the Xbox 360. In the second Xrd story, -Revelator-, he is entrusted by That Man to carry on his mission in his absence: while trying to stop Justice from destroying Earth, That Man was sent into the Nightmare Theater through Bedman's interference, and although immune to its effect, he was locked within an isolated space, unable to return to the real world. Raven's current tasks are to find Elphelt before she merges with Justice and to seek out Jack-O', but he first needs the assistance of Sol and his party, since their objectives coincide. Raven was announced as a playable character for Revelator in February 2016.
Introduced in Guilty Gear 2: Overture:
Sin Kiske Voiced by (English): Yuri Lowenthal (Guilty Gear 2: Overture), Lucien Dodge (Guilty Gear Xrd SIGN - present) Voiced by (Japanese): Issei Miyazaki. Sin Kiske (シン=キスク, Shin Kisuku) is another of the six unique playable characters in Guilty Gear 2: Overture. He is the son of the king of Illyria, Ky Kiske, and the Maiden of the Grove, who was confirmed to be Dizzy. He was left in Sol Badguy's care because the existence of his Gear cells could be exploited if it became known to the public. He bears a grudge against his father for neglecting him because of his duties as a king. Though Sin is at most five years old, he has the appearance of a tall boy in his late teens—making it hard to notice his rash, childlike behavior. Through Sol's training, Sin has grown into a very strong child. Sin is playable in the console versions of Guilty Gear Xrd, where he reconciles with his father after his mother is freed from her seal, and eventually realizes that Sol is related to him and his family through Justice, Sin's grandmother, who was once Aria. With his mother's reputation cleared by her heroic actions against a possessed Ariels, Sin returns home with his family to live peacefully, also welcoming Elphelt and Ramlethal Valentine into the family. With Dizzy's previous record as an outcast cleared and her being publicly praised as a heroine and the first non-human leader, a queen, Sin becomes a knight, just like his father before him.
Introduced in Guilty Gear 2: Overture:
Valentine Voiced by (Japanese): Chie Sawaguchi. Valentine (バレンタイン, Barentain) is one of six playable characters in Guilty Gear 2: Overture. Valentine is an exact copy of Aria, who is one form of the Genocide Gear Justice. Valentine's motives lead her to be defeated by Sol Badguy, who was once in love with Aria. Her weapon of choice is a talking balloon named Lucifero (ルシフェロ, Rushifero) with various amplified magic abilities. Valentine also has the power to brainwash enemies and turn them into her allies. Valentine's true form is an imitation of a Gear, while her human appearance resembles an unfinished Aria. Although she was destroyed, according to Raven and Asuka there can be multiple Valentines, which is eventually confirmed by the three Valentines who debut in Xrd.
Introduced in Guilty Gear Vastedge XT:
Baldias Voiced by (Japanese): Baldias (バルディウス, Barudiusu) is a member of the Conclave and one of the disciples of The Original (aka Happy Chaos), he fought Sol Badguy and Sin Kiske in their quest to upgrade Sol's Junkyard Dog Mk.II but ended up killed in battle.
Introduced in Guilty Gear Xrd and updates:
Answer Voiced by (Japanese): Tomokazu SekiAnswer (アンサー, Ansā) is a chief officer of Chipp Zanuff. Originally, he was an average street punk before Chipp appeared in his home to clean up crime and help the downtrodden. Finding his preaching annoying, Answer had challenged Chipp to a duel, but lost. After this loss, he was swayed by Chipp's words and joined him in creating the East Chipp Kingdom. Answer is said to have a photographic memory, and he essentially assists Chipp as a 'human database'.
Introduced in Guilty Gear Xrd and updates:
Bedman Voiced by (English): Yuri Lowenthal Voiced by (Japanese): Hikaru Midorikawa. Bedman (ベッドマン, Beddoman) is a character who made his debut in Guilty Gear Xrd. He is smart, fast-talking and known to ramble, despite saying he does not like long conversations. He is a mind-reading assassin who is kept in an induced coma, like his sister Delilah, due to a unique condition in which their bodies cannot handle their greatly enhanced thought processes; he creates a dream world to interact with others and uses a weaponized roll-away bed to physically move about. Bedman is hired by Ariels to aid in her scheme with the promise that he would be able to retrieve his sister from her dream and that they would thrive in the Absolute World, unaware of what it actually is. After being forcibly awakened by Venom and Robo-Ky, the latter immune to his mind reading, Bedman succumbs to his condition and dies just as he is about to kill Ariels, having learned she exploited him. Shortly after his death, an unknown person resembling Bedman, later revealed to be his recently awakened sister Delilah, approaches his crumbling, petrified corpse. Strive reveals that Bedman's soul is still alive within his weaponized bed frame, and Delilah discovers that Happy Chaos is responsible for Ariels' corruption and her brother's eventual downfall before the two are separated by I-No. Convinced by Sin to focus on Delilah's survival when her power proves dangerous enough to unintentionally destroy her surroundings in her attempt to kill Chaos, Bedman sacrifices his soul to buy time for Ramlethal and Baiken to give Delilah a cure developed by Faust. Sometime later, Bedman's weaponized bed frame is repaired off-screen by Delilah herself; it was announced as a playable DLC character for Season 2.
Introduced in Guilty Gear Xrd and updates:
Elphelt Valentine Voiced by (English): Cassandra Morris Voiced by (Japanese): Aya Suzaki. Elphelt Valentine (エルフェルト バレンタイン, Eruferuto Barentain) is a character introduced in the console version of Guilty Gear Xrd as DLC. She is first portrayed as an ally, before and after capturing her sister, Ramlethal, and during a strike against the Conclave and Justice. However, in the final chapters of Story Mode, after she far exceeds her fighting limit while defending the Illyrian castle, despite Dr. Paradigm's warning not to engage in combat, it is revealed that her "true" objective was concealed from her own mind and did not activate until after Justice's awakening. She was purposefully created not knowing her objective so that she could get close to Sol, Ky, and other major threats to "Mother." Even though Sol had much distrust for her in the beginning, he felt the need to save her from her programming as she started to remind him of Aria, not only in appearance, but personality as well. With no other way to stop her self-destruction, she is saved by her sister Ramlethal as thanks for helping her awaken to the concept of emotions. She is then brought back to the Backyard, where her fate is initially unknown. Apparently, she survived into the -Revelator- storyline, but has suddenly undergone a drastic change in her emotions and costume. She is used by the mastermind Ariels (or rather, by the Happy Chaos possessing her) as a vessel for Justice to destroy humanity, only to be saved by Jack-O', who switches places with her, thus reviving Aria. Afterwards, she and Ramlethal live at the Kiske estate.
Introduced in Guilty Gear Xrd and updates:
Jack-O' Valentine Voiced by (English): Nicole Tompkins Voiced by (Japanese): Hiromi Igarashi. Jack-O' Valentine (ジャックオー·バレンタイン, Jakkuō Barentain), more commonly known as simply Jack-O', is a playable character in Guilty Gear Xrd -Revelator- who uses Jack O'Lantern-themed weapons such as explosives. She is a Valentine that Asuka created from the remaining half of Aria's soul for the purpose of merging with Justice to restore her as Aria. Being incomplete makes her slightly unstable and unable to function without her mask and toffee, and she develops a second, childish persona. When Jack-O' succeeds in her mission, she becomes the dominant persona of the completed Aria and accompanies Sol as a bounty hunter. Despite having become a reincarnated Aria, Jack-O' still struggles with being the original Aria's replacement, but her Aria half insists she is better than she thinks: even though Sol never once calls Jack-O' "Aria", she already fully possesses the kindness her former self had before becoming Justice, which is a primary reason Sol is finally able to find peace and live.
Introduced in Guilty Gear Xrd and updates:
Kum Haehyun Voiced by: Hideaki Tezuka (Jeonryeok Kum)Kum Haehyun (琴慧弦, Kumu Hehyon) is the head of the Kum family and descendant of "Tuners" who can control the flow of energy. Kum Haehyun is a female, unknown to all, and normally rides inside of the humanoid artificial body Jeonryeok Kum ("Full Power Kum"). Only the Kum family are able to manipulate the flow of energy, but their presence in the world is scarce.
Introduced in Guilty Gear Xrd and updates:
Leo Whitefang Voiced by (English): Jamieson Price Voiced by (Japanese): Tetsu Inada. Leo Whitefang (レオ·ホワイトファング, Reo Howaitofangu) is a character who first appeared in the console version of Guilty Gear Xrd as DLC. He is one third of the triumvirate ruling Illyria. He's an old acquaintance of Ky's from the Crusades, during which they became friends and rivals. Once faced with the threat of annihilation on the frontlines, he proved his combat and leadership skills by leading his unit to survival. His raucous tone may give the impression that he lacks guile, but he's actually very discreet. He's proud and a sore loser, but he's also a hard worker, always willing to put in a little more effort. He created his very own dictionary, and privately enjoys adding people and incidents to the definitions of existing words. Leo is also an expert at surveillance. This is shown when he re-examines which person is the real Happy Chaos, whom Sol, Asuka and Vernon thought they had gotten rid of aboard the White House airship Tir Na Nog.
Introduced in Guilty Gear Xrd and updates:
Ramlethal Valentine Voiced by (English): Erin Fitzgerald (Guilty Gear Xrd -SIGN-), Laura Stahl (Guilty Gear Strive) Voiced by (Japanese): Megumi Han. Ramlethal Valentine (ラムレザル·バレンタイン, Ramurezaru Barentain) appears as a boss and later playable character in Guilty Gear Xrd. A lone girl who declared war on the entire world, she is a non-human life form born in the Backyard, which governs all of creation. Her relation to the Valentine who orchestrated the prior Baptisma 13 Incident (the Illyrian Invasion) is unknown. As an assassin of the Merciless Apocalypse, her objective is the extermination of the human race, and to that end she has formed an alliance of convenience with the United Nations Senate. Awakening the "Cradle" is her sole objective and mission, and her primary obstacle is Sol. However, her objective failed and she was easily captured because of Elphelt Valentine's sudden appearance. During her "imprisonment" under Sin and Elphelt's surveillance, she began to develop more emotions in place of her gloomy, sadistic and unfeeling demeanor. When Bedman is sent by "Mother" to eliminate Ramlethal, he agrees to do so as he sees her as a thing, not a person. He is quite shocked, however, to see she has gained emotions and thus can't bring himself to kill a "little girl". When Elphelt goes to help Leo counter the Conclave and Justice's invasion of Illyria's castle, she exceeds her fighting limit. This turns her into a mindless puppet, and she attacks her own allies after the Conclave and Justice are defeated. After Sol helps her re-assert her true consciousness, she decides to self-destruct to prevent herself from harming her allies, but Ramlethal deactivates her self-destruct sequence in return for her kindness and for introducing her to the concept of emotions. She is last seen in the company of Sol, Ky, and Sin as she watches her sister being summoned back into the Backyard. In -REVELATOR-, she accompanies Sol and Sin as they search for Elphelt and Justice. After Jack-O' switches places with Elphelt and is reborn as Aria, she and Elphelt begin living at the Kiske estate, around the same time that Dizzy's public reputation is cleared and she is praised as an ideal queen married to a kind king in Ky. As of STRIVE, Ramlethal has become a brigade commander in Illyria.
Introduced in Guilty Gear Strive:
Asuka R. Kreutz and Asuka R♯ Voiced by (English): Yuri Lowenthal (Guilty Gear 2: Overture), Derek Stephen Prince (Guilty Gear XRD -SIGN- - present) Voiced by (Japanese): Tomokazu Sugita. Asuka R. Kreutz (飛鳥=R=クロイツ, Asuka R. Kuroitsu), the Gear Maker, is the creator of the Gears. He was initially known only as "That Man" (あの男, Ano Otoko) and treated as the primary antagonist of the Guilty Gear games until Xrd revealed his past and how he had been turned into a scapegoat. Asuka was an apprentice of The Original (now Happy Chaos) who was entrusted with the Tome of Origin, which he fused into his body, along with the Flame of Corruption and the Scales of Juno. He bestowed the latter two powers upon Fredrick and Aria when saving Aria and honoring her request that Fredrick continue living. However, he comes to regret it when Aria's transformation into Justice and her easy corruption by the Universal Will cause the Gear Wars known as the Crusades; his reluctant initial firing on Japan, meant to prevent the Will from turning more Japanese into the living bombs called Antimatter Gears, unwittingly served as a beacon. In the Xrd storyline, while aiding the protagonists in stopping the Conclave and then a possessed Ariels, Asuka reveals that he only took the blame for the Universal Will's actions, and that he created Jack-O' to complete Justice's restoration to Aria. Once Ariels is defeated and Jack-O' is reincarnated as a fully human Aria, Asuka surrenders to the government so he can commence his World Peace Experiment, intending to remove the Flame of Corruption from Sol and to leave Earth to prevent the Tome of Origin from falling into the wrong hands. Asuka succeeds in removing the Flame of Corruption from Sol, but after he, Sol and U.S. president Vernon think they have gotten rid of Happy Chaos, it turns out Chaos escaped the ambush by swapping places with a brainwashed agent; Chaos extracts the real Tome of Origin from Asuka's body and fuses himself with I-No, until Sol destroys her with help from Nagoriyuki, Axl and Ky. In the days that follow, Asuka starts a new, normal life as a radio broadcaster in his own studio, his dream job, with Tir Na Nog as the main station, while the U.S. White House is moved to a safer location.
Introduced in Guilty Gear Strive:
He was non-playable in previous games until he was announced as the fourth playable DLC character of Strive's second season on May 17, 2023, with a release date of May 25, 2023. His decoy clone, dubbed Asuka R♯ (飛鳥=R♯, Asuka R♯), serves as the character's default palette.
Introduced in Guilty Gear Strive:
Giovanna Voiced by (English): Lilimar Hernandez Voiced by (Japanese): Mayumi Shintani. Giovanna (ジオヴァーナ) is a Brazilian officer in the special operations unit that protects the President of the United States, accompanied by a wolf spirit called Rei. She is a very talented agent, yet declined the privilege of a high-ranking badge because she hates its current design, made by president Vernon, which she finds too cartoony. When Happy Chaos' invasion of the White House is about to begin, Giovanna arrives in time and takes out most of Chaos' brainwashed agents and soldiers off-screen, before the White House is revealed to be the airship Tir Na Nog. Giovanna is looking for a suitable good man, particularly the Illyrian second king, Leo Whitefang. Her distaste for the unit's cheesy high-ranking badge design and her interest in Leo allow the second king to realize that the real Chaos was impersonating one of the new agents, Udos, and was still on the airship, whereas the "Chaos" whom Sol, Vernon and Asuka got rid of was actually one of her high-ranking superiors, Stryper.
Introduced in Guilty Gear Strive:
Goldlewis Dickinson Voiced by (English): Steven Barr Voiced by (Japanese): Masafumi Kimura. Goldlewis Dickinson (ゴールドルイス=ディキンソン, Gōrudorūisu Dikinson) is the heavyset right-hand man of the 76th US President, Colin Vernon E. Groubitz, and the Secretary of Defense; he fights alongside a coffin filled with an alien spirit, code-named "Area 51 - U.M.A.". Dickinson is personally invested in the ordeal with Asuka R. Kreutz, trying to protect his country from a possible threat of destruction. After Asuka turned himself in to the government and volunteered for the World Peace G4 Summit, Dickinson was skeptical about Asuka supposedly advocating for peace, and joked about whether "That Man" was actually two separate people. Unfortunately, his joke turns out to be the truth, and he was unknowingly right all along: Asuka was held responsible for the initial firing, but the person who exacerbated the conflict during the Crusades and framed Asuka as a scapegoat was actually his fallen master, Happy Chaos, the former Original, whom the public had also lumped under the title of "That Man", not knowing whom to blame.
Introduced in Guilty Gear Strive:
He was initially one of the major NPC characters in the Guilty Gear Strive's main storyline, prior to being released as the first downloadable playable character of Season 1. Dickinson also has a twin older brother who is a carefree sheriff in one of the American stages of Guilty Gear Xrd. Despite both sharing their love for eating fast foods (particularly Burgers), Goldlewis is dutiful, compared to his brother's carefree attitude.
Introduced in Guilty Gear Strive:
Happy Chaos Voiced by (English): Robbie Daymond Voiced by (Japanese): Makoto Takahashi. Happy Chaos (ハッピーケイオス, Happī Keiosu), formally introduced in Guilty Gear Strive, is an overarching antagonist who took the form of a secondary Jack-O' Valentine that Asuka created as a contingency should Sol Badguy fail to defeat Ariels or should I-No's power run rampant. His true identity is an Irish man known as The Original (第一の男, Daīchi no Otoko, First Man), a child prodigy of the late 20th century who discovered the Backyard and created the Universal Will; "The Father of All Magic." He gathered disciples in Asuka R. Kreutz and the Conclave, instructing them and the Sanctus Populi to safeguard the world and entrusting the artifacts he took from the Backyard to Asuka before departing into it to create I-No as a replacement for the Universal Will. But he mutates after absorbing half of I-No's power to prevent her from unraveling reality, losing his sense of morality while sealed within the Universal Will. By possessing the Universal Will, the Original orchestrated numerous disasters and tragedies with Asuka as his scapegoat. He later regains physical form after I-No extracts him from Ariels, and he convinces her to steal Asuka's Tome of Origin to restore her power. Though he fused into I-No to restore her full power, he later reconstitutes himself following her demise.
Introduced in Guilty Gear Strive:
He was later announced as the third playable DLC character of Strive Season 1 at Red Bull Kumite on 14 November 2021, slated for a November 30, 2021 release.
Introduced in Guilty Gear Strive:
Nagoriyuki Voiced by (English): Evan Michael Lee Voiced by (Japanese): Taiten Kusunoki. Nagoriyuki (名残雪) is a dark-skinned Nigerian Nightless vampire samurai and a war veteran of the Crusades who debuted in Guilty Gear Strive. In battle, he wields an enormous katana paired with a large wakizashi and can drain opponents of their blood, like his fellow surviving Nightless, Slayer. In contrast to Slayer's dandyism, Nagoriyuki is devoted to bushido. The material of his katana is powerful and dangerous, slowing the healing factor of any species, including godlike beings. It is also implied that he knew Chipp's master, Tsuyoshi, as Chipp recognizes some of his attacks, such as Gamma Blade. Having been sealed and meditating beneath a building in an Illyrian town, he was forced to serve Happy Chaos once again in invading the U.S. White House airship Tir Na Nog, until he met Sol Badguy, allowed himself to be freed of Happy Chaos' control, and aided Sol in return.
Non-playable characters:
Aria Hale Voiced by (English): Nicole Tompkins (Guilty Gear Strive) Voiced by (Japanese): Chie Sawaguchi. Aria Hale (アリア·ヘイル, Aria Heiru) is Sol's lover and an acquaintance of That Man/Asuka, first referred to during Guilty Gear 2: Overture. Aria was said to have been born with an incurable illness called TP infection. Initially, Aria refused Asuka's suggestion to go into cryosleep to keep the illness at bay until a cure was found, but she eventually agreed when Sol volunteered to be transformed into a prototype Gear. While Aria's body is preserved, four known Valentines are created: the original Valentine, Ramlethal, Elphelt and Jack-O', the last of whom contains half of her soul and memories. It is also hinted that she could be Justice, as when Justice is killed off, her final words are a wish that the three of them could talk one last time. However, this was false, as Justice is revealed to be one of Aria's clones, created from the cryogenic remains of her DNA, who only retains some of her memories after her conversion into a Gear. While Aria is recreated after Jack-O' switches places with Elphelt to merge with Justice in -Revelator-, she remains dormant, letting Jack-O' become the dominant persona.
Non-playable characters:
Ariels Voiced by (English): Valerie Arem Voiced by (Japanese): Junko Minagawa. Sanctus Maximus Populi Ariels is the current leader of the Sanctus Populi and the vessel of the Universal Will, an entity created by The Original to ensure eternal happiness for humans without harming them. But the Original made a critical error in not defining "humans": his creation reached its own conclusion that the humanity it is meant to serve has yet to come into being, and thus considers the current "human" race to be "redundancies" that must be exterminated. Using the Sanctus Populi left to it by its creator, the Universal Will attempted to gain physical form in the real world, first through the creation of Justice and then by possessing those named Sanctus Maximus Populi. As Ariels, she created the Valentines and recruited Bedman for her scheme to create an "Absolute World" while merging Justice with Elphelt to create the first of her ideal humans. But the plan is foiled and Ariels is incarcerated; the Universal Will's actions are later revealed to have been manipulated by The Original, Happy Chaos, when I-No extracts him from Ariels's body. She requests that Sol stop both I-No and Chaos before they warp the universe. After I-No's demise, Ariels fully recovers her true, benevolent self.
Non-playable characters:
Colin Vernon E. Groubitz Voiced by (English): Anthony Alabi (Guilty Gear Strive) Voiced by (Japanese): Kiyoyuki Yanada (Guilty Gear Xrd - Guilty Gear -Strive). The current, 76th President of the United States in the Guilty Gear timeline as of Xrd, with his predecessor Erica Bartholomew as his vice-president. He loved football before entering politics and keeps his football and a family photo at the White House as mementos. At some point in the past, Vernon lost his right arm and was given a mechanical one, which can be used as a weapon to defend himself and can switch between close-combat and long-range modes. He plays an important role in Strive.
Non-playable characters:
Daryl Voiced by (English): Kaiji Tang Voiced by (Japanese): Toshiki IzawaThe last of the three kings of the United Kingdoms of Illyria was the Third King. Unlike Ky and Leo, Daryl favors pragmatism, or in his own words; 'thinking objectively'. He is the least popular of the three kings as a result, but the people cannot deny his ability to rule, earning him the nickname King of Groundworks. He is a leader of a paranormal team where Zappa is a member.
Non-playable characters:
When I-No makes her move for the next terrorist attack after Ariels' defeat, Daryl takes over Ky's position to represent Illyria at the World Peace G4 Summit in Washington, D.C. Daryl's participation at the G4 directly saves Ky's Gear family, who were originally meant to appear alongside Ky as representatives of the Gear species shortly before I-No's next scheme unfolds. During Happy Chaos' invasion of the White House, Daryl remains calm and unafraid. When U.S. president Vernon becomes more involved with Sol, Asuka and Giovanna in stopping Chaos, Daryl learns Chaos' origin and his connection to I-No, and outsmarts him to ensure the safety of himself and the other world leaders (save for Vernon), secretly planting an emergency magic communicator in Chaos' cold coffee to provide backup. Daryl likes pudding and hot tea, and hates coffee and cold drinks.
Non-playable characters:
At some point in the After Story of Xrd Rev 2, before the events of Strive, Daryl regrettably pushed one of his subordinates into overexerting herself to finish the biggest pudding in the world, and was among the paranormal team members (barring Zappa) who ate the still-incomplete, yet unexpectedly cursed, giant dessert. By the time of Strive, Daryl and the subordinates who ate the incomplete pudding have been freed from its curse off-screen. By the end of Strive, after Sol (now a human named Frederick) defeats the Chaos-empowered I-No, Daryl's team celebrates the proper completion of the giant pudding they made together this time.
Non-playable characters:
Delilah Voiced by (English): Jessica DiCicco Voiced by (Japanese): Akane Fujikawa. The younger sister of the late Bedman and an unwitting antagonist in the Another Story of Guilty Gear Strive, who briefly appeared as a cameo character in the Guilty Gear Xrd sub-series. Due to her unstable physical condition, Delilah was placed in the Backyard while her brother worked to find a cure for her. Following Bedman's sacrifice to weaken the Happy Chaos-possessed Ariels before the heroes' final battle against the fallen Sanctus Populi, Delilah returns to the real world shortly afterwards, consumed by vengeance. Like Baiken, Delilah is fueled by vengeance against Happy Chaos, the true culprit who has gone by the name "That Man" since the Crusades, rather than against Asuka R. Kreutz. Once Baiken learns that Chaos cannot be killed easily, she has no choice but to adopt Delilah at Anji and Chipp's behest, to keep her from endangering herself.
Non-playable characters:
By the time Happy Chaos is freed by I-No and the two make their next moves during the events of Strive, Delilah decides to take her revenge on Chaos. However, her full power is too unstable to be used to destroy Chaos; instead it turns her into a living bomb that would destroy half of the location she is in. Thankfully, Bedman, whose soul somehow resides in his weaponized bed, manages to keep Delilah's power in check for a short period until Faust can administer a proper cure, at the cost of Bedman's soul and Faust's physical condition, allowing Baiken and Ramlethal to save her with help from Sin and the Jellyfish Pirates.
Non-playable characters:
Erica Bartholomew Voiced by (English): Sarah Anne Williams (Guilty Gear Strive) Voiced by (Japanese): Masumi TazawaA former President of the United States turned vice-president in the current Guilty Gear timeline and from the novel, The Butterfly and Her Gale. An orphan, she is a child prodigy who became president at the age of 17. Chipp decided to become her bodyguard after the Assassin Syndicate tried to kill her. The reason for the attempt on her life is because the US had been under the influence and corruption of the Syndicate for many years. One of Erica's goals was to rid the US of its influence by forming an alliance with President Gabriel of Zepp since his nation was the most highly technological and powerful nation in the world. Despite attempts by the syndicate to frame each nation of an assassination attempt on their leaders, blackmailing the Senate, and kidnapping her guardian and caretaker of her orphanage, the alliance was finally made thanks to the help of Chipp. She is succeeded by Colin Vernon and demoted to vice-president as of Xrd.
Non-playable characters:
Gabriel Voiced by (English): Richard Epcar Voiced by (Japanese): Takayuki SugōPotemkin's mentor and the current president of the Independent Airborne State of Zepp. Came to power leading the slave uprising in which he freed Potemkin. Though seldom seen in battle, when he does fight, his displayed feats indicate he's one of the most formidable characters in the series. Has a friendly rivalry with Slayer due to his aforementioned powers, being one of few people able to pose a challenge to Slayer. He is also a friend of the current U.S. president Vernon.
Non-playable characters:
Inus A dark king of the underworld, Inus is the fifth boss of Guilty Gear Judgment. Split into skeletal sections (two of them resembling skulls), he devours Raymond just before attacking the player character. After whatever character the player is controlling defeats Inus, he is subsequently killed, allowing Raymond to absorb his power to become Judgment. However, since Inus wished to remain dead, Judgment was consumed after he was defeated.
Non-playable characters:
Jellyfish Air Pirates Jellyfish Air Pirates (空賊のジェリーフィッシュ快賊団, Kūzoku no Jerīfisshu Kaizoku-dan) is Johnny's all-female air pirate crew, who travel with him on the May Ship. The members include eleven girls, one older woman, and a cat. There are several playable characters from the Jellyfish Pirates in the games; namely May, Dizzy and Johnny himself. Most of the members of the Jellyfish crew are orphaned girls adopted by Johnny, but there are exceptions; he took in Dizzy for her protection (and seclusion) from the larger world, and in one ending of Guilty Gear XX allows Bridget to live on the ship (although she is not officially a Jellyfish Pirate) when she has nowhere else to go.
Non-playable characters:
The crew, whose names are derived from the English names of the twelve months of the year, includes, Janis, a black cat with a shaggy white forelock; Febby, a tall busty blonde who is the record-keeper; March, a pink-haired baby girl with a stuffed penguin; April, the ship's pilot; May, one of the crew's most capable fighters and the ship's namesake; June, the navigator; July, who is said to be the fourth strongest fighter on board, after Dizzy, Johnny, and May; Augus, a dark complexion fighter known to be fast; Sephy, a brown-haired and gentle-expressioned girl; Octy, the crew's lookout; Novel, the ship's mechanic who rides a large red mecha; Leap, an enormous white-haired woman who is the ship's cook; Dizzy, whom Johnny helped fake her death for her own protection.
Non-playable characters:
Post-War Administration Bureau The Post-War Administration Bureau (終戦管理局, Shūsen Kanrikyoku) (or P.W.A.B.) is a fictional secret society in the Guilty Gear fighting game series, making its first appearance in Guilty Gear XX. It is the organization that created Robo-Ky, and changed Testament into a Gear.
Non-playable characters:
The organization was founded during the war between humans and Gears; as its name implies, it was intended to manage the affairs of the human race and support the reintegration of soldiers back into society once the Crusades was over. However, the war's end saw no need for them, and it was supposed to have been disbanded. Instead, they fell into the direct control of the Conclave and retreated to the shadows.
Non-playable characters:
The group demonstrably has access to relatively advanced technology, as it created Robo-Ky. However, its members can also use magic; an unidentified member of the group used a crystal ball to observe Jam Kuradoberi in one of her endings.
Non-playable characters:
The purpose of the organization has apparently shifted entirely to maintaining its own power and influence, as well as its own secrecy. Its members are willing to go to any lengths to do so, evidently lacking any ethics in how they go about this. Its interest in each character seems focused on whether they should be manipulated, killed, captured, or studied, as each character's Story Mode begins with the P.W.A.B.'s profile for that character, accompanied by a "risk rating" that apparently denotes how dangerous they are to the organization. Robo-Ky was created both to impersonate Ky Kiske and as an equalizer should they decide that direct confrontation is necessary. However, although most of the Robo-Ky units were destroyed, one of them became sentient and survived.
Non-playable characters:
After the Conclave was found to be the mastermind behind the Cradle Incident in Xrd, the group was due to be dismantled. However, Illyria's Third King Daryl restructured the Bureau into an intelligence agency that supports the Illyrian government, incorporating the former Assassin's Guild and installing Millia as its head.
Solaria A character from the novel Lightning the Argent. Solaria is a full-blooded Gear created by the Blackard Company. She was used by the company to awaken and control the world's dormant Gears as their weapons. She was later rescued by Ky Kiske where she now lives freely under the protection of the International Police Force.
Non-playable characters:
Tsuyoshi Chipp's sensei and the man who changed his life. Tsuyoshi was a ninja master who saved Chipp when he was about to be killed by the mafia. It was his tutelage that changed Chipp from a drug addict into the man he is now. He was killed by the Assassin Syndicate before the events of Guilty Gear. The novel The Butterfly and Her Gale reveals he was killed because he was an undercover agent of the International Police Force who infiltrated the Syndicate until they found out about his true identity. Tsuyoshi also had a mysterious connection with Nagoriyuki, as the latter can also use Gamma Blade, a technique Tsuyoshi taught to Chipp.
Non-playable characters:
Volf A member of the Assassin Syndicate from the novel The Butterfly and Her Gale. He is the man responsible for the death of Tsuyoshi. In the novel, he and the Syndicate try to prevent President Erica from forming the US/Zepp alliance by any means, and he himself was responsible for kidnapping her guardian. Ironically, his plans were thwarted by Tsuyoshi's student, Chipp. For his failure to kill Erica and Chipp and to stop the alliance, he was personally killed by Venom.
Reception:
The characters have often been noted as the best element of the Guilty Gear series. IGN said all the characters are very distinguishable and interesting, remarked that the roster "doesn't feel repetitive, even after dozens of hours of play", and cited the cast as the reason that separates Guilty Gear from other fighting games. IGN also mentioned that the characters' play styles are "even more divergent" than their appearances. Game Informer stated that "character complexity and unique visual design" have become hallmarks of Guilty Gear. GameSpy cited the characters as one of three reasons Guilty Gear X is "hands-down the best 2D fighting game" as of 2001, remarking that "[t]he difference in style for each character is profound". They also stated that it "has some of the coolest character designs ever seen in a game", and "one of the best casts of characters ever assembled in a fighter." GameSpot called the cast "unique" and described their move-sets as "sometimes-bizarre". IGN called them "the best [...] outside Capcom/SNK", and GameSpot found them "truly awesome", noting their diversity "keeps Guilty Gear fresh". Allgame declared "superb is the only way to describe them", asserting they are all "pretty original". GamePro praised the characters' uniqueness, as each has "distinct looks and strategies." While some characters have been criticized as "generic", "typical characters", and "unoriginal", the cast overall has generally been described with adjectives such as "bizarre", "quirky", and "crazy", with IGN noting that the series' cast makes Darkstalkers, Capcom's "biggest freak show", "look like a Saturday morning cartoon". Game Informer dubbed the cast "a roster of startling characters that would make Vincent Price whimper like a kitten." GameNOW even stated, "it makes me afraid to ponder the nature of the demons that have possessed the minds of the artists who created the characters ... They are among the wiliest and most violently flamboyant ever to grace a fighting game."
**Cystathionine gamma-lyase**
Cystathionine gamma-lyase:
The enzyme cystathionine γ-lyase (EC 4.4.1.1, CTH or CSE; also cystathionase; systematic name L-cystathionine cysteine-lyase (deaminating; 2-oxobutanoate-forming)) breaks down cystathionine into cysteine, 2-oxobutanoate (α-ketobutyrate), and ammonia:
L-cystathionine + H2O = L-cysteine + 2-oxobutanoate + NH3 (overall reaction)
(1a) L-cystathionine = L-cysteine + 2-aminobut-2-enoate
(1b) 2-aminobut-2-enoate = 2-iminobutanoate (spontaneous)
(1c) 2-iminobutanoate + H2O = 2-oxobutanoate + NH3 (spontaneous)
Pyridoxal phosphate is a prosthetic group of this enzyme. Cystathionine γ-lyase also catalyses the following elimination reactions: of L-homoserine, to form H2O, NH3 and 2-oxobutanoate; of L-cystine, producing thiocysteine, pyruvate and NH3; and of L-cysteine, producing pyruvate, NH3 and H2S. In some bacteria and mammals, including humans, this enzyme takes part in generating hydrogen sulfide. Hydrogen sulfide is one of a few gases that was recently discovered to have a role in cell signaling in the body.
Enzyme mechanism:
Cystathionase uses pyridoxal phosphate to facilitate the cleavage of the sulfur-gamma carbon bond of cystathionine, resulting in the release of cysteine. The lysine residue reforms the internal aldimine by kicking off α-iminobutyric acid. Afterwards the external ketimine is hydrolyzed, causing the formation of α-ketobutyrate. The amino group on cystathionine is deprotonated and undergoes a nucleophilic attack of the internal aldimine. An additional deprotonation by a general base results in the formation of the external aldimine and removal of the lysine residue. The basic lysine residue is then able to deprotonate the alpha carbon, pushing electron density into the nitrogen of the pyridine ring. Pyridoxal phosphate is necessary to stabilize this carbanionic intermediate; otherwise the proton's pKa would be too high. The beta carbon is then deprotonated, creating an alpha-beta unsaturation and pushing a lone pair onto the aldimine nitrogen. To reform the aldimine, this lone pair pushes back down, cleaving the sulfur-gamma carbon bond, resulting in the release of cysteine.A pyridoxamine derivative of vinyl glyoxylate remains after the gamma elimination. The lone pair from the pyridine nitrogen pushes electron density to the gamma carbon, which is protonated by lysine. Lysine then attacks the external aldimine, pushing electron density to the beta carbon, which is protonated by a general acid. The imine is then hydrolyzed to release α-ketobutyrate. Deprotonation of the lysine residue causes ammonia to leave, thus completing the catalytic cycle.Cystathionine gamma lyase also shows gamma-synthase activity depending on the concentrations of reactants present. The mechanisms are the same until they diverge after formation of the vinyl glyoxylate derivative. In the gamma synthase mechanism, the gamma carbon is attacked by a sulfur nucleophile, resulting in the formation of a new sulfur-gamma carbon bond.
Enzyme structure:
Cystathionine γ-lyase is a member of the Cys/Met metabolism PLP-dependent enzymes family. Other members include cystathionine γ synthase, cystathionine β lyase, and methionine γ lyase. It is also a member of the broader aspartate aminotransferase family. Like many other PLP-dependent enzymes, cystathionine γ-lyase is a tetramer with D2 symmetry.Pyridoxal phosphate is bound in the active site by Lys212.
Disease relevance:
Cysteine is the rate-limiting substrate in the synthetic pathway for glutathione in the eye. Glutathione is an antioxidant that protects crystallins in the eye from reactive oxygen species; denatured crystallins can lead to cataracts. Cystathionase is also a target for reactive oxygen species. Thus as cystathionase is oxidized, its activity decreases, causing a decrease in cysteine and, in turn, glutathione in the eye, leading to a decrease in antioxidant availability, causing a further decrease in cystathionase activity. Deficiencies in cystathionase activity have also been shown to contribute to glutathione depletion in patients with cancer and AIDS.
Disease relevance:
Mutations and deficiencies in cystathionase are associated with cystathioninuria. The mutations T67I and Q240E weaken the enzyme's affinity for pyridoxal phosphate, the co-factor vital to enzymatic function. Low levels of H2S have also been associated with hypertension in mice.Excessive levels of H2S, due to increased activity of cystathionase, are associated with endotoxemia, acute pancreatitis, hemorrhagic shock, and diabetes mellitus.Propargylglycine and β-cyanoalanine are two irreversible inhibitors of cystathionase used to treat elevated H2S levels. Mechanistically, the amino group of propargylglycine attacks the aldimine to form an external aldimine. The β position of the alkyne is then deprotonated to form the allene, which is then attacked by the phenol of Tyr114. The internal aldimine can regenerate, but the newly created vinyl ether sterically hinders the active site, blocking cysteine from attacking pyridoxal phosphate.
Regulation:
H2S decreases transcription of cystathionase at concentrations between 10 and 80μM. However, transcription is increased by concentrations near 120μM, and inhibited completely at concentrations in excess of 160μM.
**OptiSLang**
OptiSLang:
optiSLang is a software platform for CAE-based sensitivity analysis, multi-disciplinary optimization (MDO) and robustness evaluation. It is developed by Dynardo GmbH and provides a framework for numerical Robust Design Optimization (RDO) and stochastic analysis by identifying the variables which contribute most to a predefined optimization goal. This also includes the evaluation of robustness, i.e. the sensitivity towards scatter of design variables or random fluctuations of parameters. In 2019, Dynardo GmbH was acquired by Ansys.
Methodology:
Sensitivity analysis: By representing continuous optimization variables with uniform distributions and without variable interactions, variance-based sensitivity analysis quantifies the contribution of each optimization variable to a possible improvement of the model responses. In contrast to local, derivative-based sensitivity methods, the variance-based approach quantifies the contribution with respect to the defined variable ranges.
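As an illustration only (not part of optiSLang itself), the following Python sketch estimates first-order variance-based sensitivity indices for a hypothetical three-variable model by freezing one input at a time over its uniform design range; the model, the ranges and the sample sizes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # hypothetical response: x1 dominates, x2 contributes moderately, x3 is inert
    return 5.0 * x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.0 * x[:, 2]

n, n_inner = 2000, 200
bounds = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])  # uniform design ranges

x = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, 3))
var_total = model(x).var()

# First-order index S_i = Var_{x_i}( E[y | x_i] ) / Var(y),
# estimated by conditioning on a grid of x_i values.
for i in range(3):
    cond_means = []
    for xi in np.linspace(bounds[i, 0], bounds[i, 1], 50):
        sample = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_inner, 3))
        sample[:, i] = xi                      # freeze variable i
        cond_means.append(model(sample).mean())
    s_i = np.var(cond_means) / var_total
    print(f"first-order sensitivity of x{i + 1}: {s_i:.2f}")
```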
Methodology:
Coefficient of Prognosis (CoP): The CoP is a model-independent measure to assess the model quality and is defined as

CoP = 1 − SS_E^Pred / SS_T,

where SS_E^Pred is the sum of squared prediction errors and SS_T is the total variation of the model response. The prediction errors are estimated based on cross validation. In the cross-validation procedure, the set of support points is mapped to q subsets. Then the approximation model is built by removing subset i from the support points and approximating the model output y_i of that subset using the remaining point set. This means that the model quality is estimated only at those points which are not used to build the approximation model. Since the prediction error is used instead of the fit, this approach applies to regression and even interpolation models.
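The cross-validated CoP can be illustrated with a short Python sketch; the support points, the linear polynomial basis and the number of subsets q = 5 are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical support points: one informative input, one pure-noise input.
x = rng.uniform(-1, 1, size=(60, 2))
y = 3.0 * x[:, 0] + 0.3 * rng.normal(size=60)

def fit_predict(x_train, y_train, x_test):
    # linear polynomial basis [1, x1, x2] fitted by least squares
    A_train = np.column_stack([np.ones(len(x_train)), x_train])
    coef, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)
    return np.column_stack([np.ones(len(x_test)), x_test]) @ coef

# q-fold cross validation: predict each subset with a model built without it.
q = 5
folds = np.array_split(rng.permutation(len(y)), q)
y_pred = np.empty_like(y)
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    y_pred[test_idx] = fit_predict(x[train_idx], y[train_idx], x[test_idx])

ss_e_pred = np.sum((y - y_pred) ** 2)       # sum of squared prediction errors
ss_t = np.sum((y - y.mean()) ** 2)          # total variation of the response
cop = 1.0 - ss_e_pred / ss_t
print(f"CoP = {cop:.3f}")
```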
Methodology:
Metamodel of Optimal Prognosis (MOP): The prediction quality of an approximation model may be improved if unimportant variables are removed from the model. This idea is adopted in the Metamodel of Optimal Prognosis (MOP) which is based on the search for the optimal input variable set and the most appropriate approximation model (polynomial or Moving Least Squares with linear or quadratic basis). Due to the model independence and objectivity of the CoP measure, it is well suited to compare the different models in the different subspaces.
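A minimal sketch of the MOP idea, under the assumption of a hypothetical data set and only polynomial approximation models of degree 1 and 2 (Moving Least Squares is omitted for brevity): every input subset and model type is rated by its cross-validated CoP, and the best combination is kept.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(80, 3))
y = x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.2 * rng.normal(size=80)   # x3 is unimportant

def basis(x_sub, degree):
    # polynomial basis: constant, linear terms, and (for degree 2) all products
    cols = [np.ones(len(x_sub)), *x_sub.T]
    if degree == 2:
        cols += [x_sub[:, i] * x_sub[:, j]
                 for i in range(x_sub.shape[1]) for j in range(i, x_sub.shape[1])]
    return np.column_stack(cols)

def cop(x_sub, y, degree, q=5):
    # cross-validated Coefficient of Prognosis for one subspace/model combination
    folds = np.array_split(rng.permutation(len(y)), q)
    y_pred = np.empty_like(y)
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        coef, *_ = np.linalg.lstsq(basis(x_sub[train], degree), y[train], rcond=None)
        y_pred[test] = basis(x_sub[test], degree) @ coef
    return 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

# Search over all input subsets and both model types, keep the highest CoP.
best = max(
    ((subset, degree, cop(x[:, list(subset)], y, degree))
     for r in range(1, x.shape[1] + 1)
     for subset in itertools.combinations(range(x.shape[1]), r)
     for degree in (1, 2)),
    key=lambda t: t[2])
print(f"optimal subset {best[0]}, polynomial degree {best[1]}, CoP {best[2]:.3f}")
```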
Methodology:
Multi-disciplinary optimization: The optimal variable subspace and approximation model found by a CoP/MOP procedure can also be used for a pre-optimization before global optimizers (evolutionary algorithms, Adaptive Response Surface Methods, gradient-based methods, biologically inspired methods) are used for a direct single-objective optimization. After conducting a sensitivity analysis using MOP/CoP, a multi-objective optimization can also be performed to determine the optimization potential within opposing objectives and to derive suitable weighting factors for a subsequent single-objective optimization. Finally, this single-objective optimization determines an optimal design.
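A toy illustration of this two-stage workflow, with an assumed "expensive" solver and an assumed metamodel in the reduced subspace; SciPy's general-purpose minimize stands in for the optimizer families named above.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the real CAE run: smooth trend plus small numerical noise.
def solver(x):
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2 + 0.01 * np.sin(40 * x[0])

# Smooth approximation on the MOP subspace (assumed to be already fitted).
def metamodel(x):
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2

# Pre-optimization on the metamodel is cheap and yields a good start design ...
pre = minimize(metamodel, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])

# ... which is then refined by direct single-objective optimization on the solver.
final = minimize(solver, x0=pre.x, bounds=[(-1, 1), (-1, 1)])
print("pre-optimum:", pre.x, "final optimum:", final.x)
```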
Methodology:
Robustness evaluation: In variance-based robustness analysis, the variations of the critical model responses are investigated. In optiSLang, random sampling methods are used to generate discrete samples of the joined probability density function of the given random variables. Based on these samples, which are evaluated by the solver similarly as in the sensitivity analysis, the statistical properties of the model responses as mean value, standard deviation, quantiles and higher order stochastic moments are estimated.
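A minimal sketch of such a variance-based robustness evaluation, with assumed input distributions and an assumed response function standing in for the solver:

```python
import numpy as np

rng = np.random.default_rng(3)

def response(thickness, load):          # stand-in for the solver evaluation
    return load / thickness ** 2

# Scatter of the inputs modelled as random variables (hypothetical values).
thickness = rng.normal(loc=10.0, scale=0.5, size=10_000)
load = rng.lognormal(mean=np.log(200.0), sigma=0.1, size=10_000)

# Statistical properties of the model response estimated from the samples.
stress = response(thickness, load)
print(f"mean        : {stress.mean():.3f}")
print(f"std. dev.   : {stress.std(ddof=1):.3f}")
print(f"99% quantile: {np.quantile(stress, 0.99):.3f}")
```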
Methodology:
Reliability analysis: Within the framework of probabilistic safety assessment or reliability analysis, the scattering influences are modelled as random variables, which are defined by distribution type, stochastic moments and mutual correlations. The result of the analysis is the complement of reliability, the probability of failure, which can be represented on a logarithmic scale.
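A crude Monte Carlo sketch of a reliability analysis for an assumed limit state (failure when stress exceeds strength); for very small failure probabilities, more efficient estimation methods are typically required.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Hypothetical random variables defining the limit state.
strength = rng.normal(3.0, 0.3, size=n)   # resistance
stress = rng.normal(2.0, 0.4, size=n)     # loading effect

# Probability of failure = P(stress > strength), the complement of reliability.
p_f = np.mean(stress > strength)
print(f"P_f = {p_f:.2e}  (log10 scale: {np.log10(p_f):.2f})")
```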
Process integration:
optiSLang is designed to use several solvers to investigate mechanical, mathematical, technical and other quantifiable problems. To this end, optiSLang provides direct interfaces to external programs, including ANSYS, MATLAB, GNU Octave, Excel, OpenOffice Calc, Python, Abaqus, SimulationX, CATIA, LS-DYNA, Flownex and multiPlas, as well as any software with text-based input definition.
History:
Since the 1980s, research teams at the University of Innsbruck and Bauhaus-Universität Weimar had been developing algorithms for optimization and reliability analysis in conjunction with finite element simulations. As a result, the software "Structural Language (SLang)" was created. In 2000, CAE engineers first applied it to conduct optimization and robustness analysis in the automotive industry. In 2001, Dynardo GmbH was founded. In 2003, based on SLang, the software optiSLang was launched as an industrial solution for CAE-based sensitivity analysis, optimization, robustness evaluation and reliability analysis. In 2013, the current version optiSLang 4 was completely restructured with a new graphical user interface and extended interfaces to external CAE processes.
**Transcendental realism**
Transcendental realism:
Transcendental realism is a philosophy of science initially developed by Roy Bhaskar in his book A Realist Theory of Science (1975) as an argument against the epistemic realism of positivism and hermeneutics. The position is based on Bhaskar's transcendental arguments for certain ontological and epistemological positions, grounded in what reality must be like in order for scientific knowledge to be possible. The overview of transcendental realism that follows is largely based on Andrew Sayer's Realism and Social Science.
Transitive and intransitive domains:
A Realist Theory of Science starts with a proposed paradox: how it is that people create knowledge as a product of social activity while, at the same time, that knowledge is 'of' things that are not produced by people at all.
Transitive and intransitive domains:
The former is inspired by Kuhnian arguments of how scientific communities develop knowledge and asserts all observation is theory-laden based on previously acquired concepts. As such, it is not a naïve realist perspective that knowledge is a direct acquisition of facts through observation of the real world, but rather that knowledge is fallible. This ontological position is described as the transitive domain of knowledge, in that knowledge can change over time.
Transitive and intransitive domains:
The second part of the paradox is asserted to be based on a real world, which exists and behaves in the same manner regardless of whether or not people exist or whether they know about the real world. This is described as the intransitive domain of knowledge. Reducing ontology to epistemology is referred to as the epistemic fallacy, a fallacy that Bhaskar asserts has been made repeatedly over the last 300 years of philosophy of science.
Real, actual, and empirical:
The exposition of transcendental realism continues: not only is the world divided into a real world and our knowledge of it, but it is further divided into the real, the actual and the empirical. The real is the intransitive domain of things that exist (i.e. the real world): objects, their structures and their causal powers. It is important to note that even though these objects and structures may be able to perform certain actions, those actions may go unrealized. This gives rise to the actual, which comprises the events that actually occur, regardless of whether or not people are aware of them. The empirical contains the events that people have actually experienced.
Stratification and emergence:
Transcendental realism further argues for a stratified reality. The relationships between objects and the combinations of their causal powers may create entirely new structures with new causal powers. The typical example of this is of water, which has a causal power of extinguishing fire, but is made up of hydrogen and oxygen that have causal powers of combustion.
Stratification and emergence:
This stratification spans all the sciences: physics, chemistry, biology, sociology, etc. This implies that the objects of sociology – labor markets, capitalism, etc. – are just as real as those of physics. This is not a reductionist position: while each stratum depends on the objects and their relationships in the strata below it, the difference in causal powers means that they are necessarily different objects.
Causality and mechanisms:
Other philosophies of science based on the Humean tradition assert that causality is based on regularity among sequences of events. For transcendental realism, this explanation of causation holds little weight — "what causes something to happen has nothing to do with the number of times we have observed it happening" (Sayer, 2000, p. 14). Instead of referring to events, transcendental realism refers to causal mechanisms, the internal processes of objects which give rise to events. These mechanisms may lie dormant or may counteract each other and prevent events from occurring. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reflexive monism**
Reflexive monism:
Reflexive monism is a philosophical position developed by Max Velmans, in his books Understanding Consciousness (2000, 2009) and Toward a Deeper Understanding of Consciousness (2017), to address the problems of consciousness. It is a modern version of an ancient view that the basic stuff of the universe manifests itself both physically and as conscious experience (a dual-aspect theory in the traditions of Spinoza and Fechner). The argument is that the mind and, ultimately, the universe is psycho-physical.Monism is the view that the universe, at the deepest level of analysis, is composed of one fundamental kind of stuff. This is usually contrasted with substance dualism, the view found in the writings of Plato and Descartes that the universe is composed of two kinds of stuff, the physical and the stuff of soul, mind or consciousness.
Reflexive monism:
Reflexive monism maintains that, in its evolution from some primal undifferentiated state, the universe differentiates into distinguishable physical entities, at least some of which have the potential for conscious experience, such as human beings. While remaining embedded within and dependent on the surrounding universe and composed of the same fundamental stuff, each human, equipped with perceptual and cognitive systems, has an individual perspective on, or view of, the rest of the universe and themself. In this sense, each human participates in a process whereby the universe differentiates into parts and becomes conscious of itself, making the process reflexive. Donald Price and James Barrell write that, according to reflexive monism, experience and matter are two complementary (first- and third-person viewable) sides of the same reality, and neither can be reduced to the other. That brain states are causes and correlates of consciousness, they write, does not mean that they are ontologically identical to it, and they develop the use of complementary first- and third-person perspectives into a non-reductive, empirical program for investigating the relationship of conscious experience to neuroscience.A similar combination of monism and reflexivity is found in later Vedic writings such as the Upanishads, as well as the Buddhist views of Chittamatra and Dzogchen. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Motion sickness**
Motion sickness:
Motion sickness occurs due to a difference between actual and expected motion. Symptoms commonly include nausea, vomiting, cold sweat, headache, dizziness, tiredness, loss of appetite, and increased salivation. Complications may rarely include dehydration, electrolyte problems, or a lower esophageal tear. The cause of motion sickness is either real or perceived motion. This may arise from car travel, air travel, sea travel, space travel, or reality simulation. Risk factors include pregnancy, migraines, and Ménière's disease. The diagnosis is based on symptoms. Treatment may include behavioral measures or medications. Behavioral measures include keeping the head still and focusing on the horizon. Three types of medications are useful: antimuscarinics such as scopolamine, H1 antihistamines such as dimenhydrinate, and amphetamines such as dexamphetamine. Side effects, however, may limit the use of medications. A number of medications used for nausea such as ondansetron are not effective for motion sickness. Nearly all people are affected with sufficient motion and most people will experience motion sickness at least once in their lifetime. Susceptibility, however, is variable, with about one-third of the population being highly susceptible while most other people are affected under extreme conditions. Women are more easily affected than men. Motion sickness has been described since at least the time of Homer (c. eighth century BC).
Signs and symptoms:
Symptoms commonly include nausea, vomiting, cold sweat, headache, dizziness, tiredness, loss of appetite, and increased salivation. Occasionally, tiredness can last for hours to days after an episode of motion sickness, known as "sopite syndrome". Rarely, severe symptoms such as the inability to walk, ongoing vomiting, or social isolation may occur, while rare complications may include dehydration, electrolyte problems, or a lower esophageal tear from severe vomiting.
Cause:
Motion sickness can be divided into three categories: Motion sickness caused by motion that is felt but not seen, as in terrestrial motion sickness; Motion sickness caused by motion that is seen but not felt, as in space motion sickness; Motion sickness caused when both systems detect motion but they do not correspond, as in either terrestrial or space motion sickness.
Cause:
Motion felt but not seen In these cases, motion is sensed by the vestibular system and hence the motion is felt, but no motion or little motion is detected by the visual system, as in terrestrial motion sickness.
Cause:
Carsickness A specific form of terrestrial motion sickness, being carsick is quite common and evidenced by disorientation while reading a map, a book, or a small screen during travel. Carsickness results from the sensory conflict arising in the brain from differing sensory inputs. Motion sickness is caused by a conflict between signals arriving in the brain from the inner ear, which forms the base of the vestibular system, the sensory apparatus that deals with movement and balance, and which detects motion mechanically. If someone is looking at a stationary object within a vehicle, such as a magazine, their eyes will inform their brain that what they are viewing is not moving. Their inner ears, however, will contradict this by sensing the motion of the vehicle.Varying theories exist as to cause. The sensory conflict theory notes that the eyes view motion while riding in the moving vehicle while other body sensors sense stillness, creating conflict between the eyes and inner ear. Another suggests the eyes mostly see the interior of the car which is motionless while the vestibular system of the inner ear senses motion as the vehicle goes around corners or over hills and even small bumps. Therefore, the effect is worse when looking down but may be lessened by looking outside of the vehicle.
Cause:
In the early 20th century, Austro-Hungarian scientist Róbert Bárány observed the back and forth movement of the eyes of railroad passengers as they looked out the side windows at the scenery whipping by. He called this "railway nystagmus", also known as "optokinetic nystagmus". His findings were published in the journal Laeger, 83:1516, Nov.17, 1921.
Cause:
Airsickness Air sickness is a kind of terrestrial motion sickness induced by certain sensations of air travel. It is a specific form of motion sickness and is considered a normal response in healthy individuals. It is essentially the same as carsickness but occurs in an airplane. An airplane may bank and tilt sharply, and unless passengers are sitting by a window they are likely to see only the stationary interior of the plane, an effect made worse by small window sizes and by flights at night. Another factor is that while in flight, the view out of the windows may be blocked by clouds, preventing passengers from seeing the moving ground or passing clouds.
Cause:
Seasickness Seasickness is a form of terrestrial motion sickness characterized by a feeling of nausea and, in extreme cases, vertigo experienced after spending time on a boat. It is essentially the same as carsickness, though the motion of a watercraft tends to be more regular. It is typically brought on by the rocking motion of the craft or movement while the craft is immersed in water. As with airsickness, it can be difficult to visually detect motion even if one looks outside the boat since water does not offer fixed points with which to visually judge motion. Poor visibility conditions, such as fog, may worsen seasickness. The greatest contributor to seasickness is the tendency for people being affected by the rolling or surging motions of the craft to seek refuge below decks, where they are unable to relate themselves to the boat's surroundings and consequent motion. Some people with carsickness are resistant to seasickness and vice versa. Adjusting to the craft's motion at sea is called "gaining one's sea legs"; it can take a significant portion of the time spent at sea after disembarking to regain a sense of stability "post-sea legs".
Cause:
Centrifuge motion sickness Rotating devices such as centrifuges used in astronaut training and amusement park rides such as the Rotor, Mission: Space and the Gravitron can cause motion sickness in many people. While the interior of the centrifuge does not appear to move, one will experience a sense of motion. In addition, centrifugal force can cause the vestibular system to give one the sense that downward is in the direction away from the center of the centrifuge rather than the true downward direction.
Cause:
Dizziness due to spinning When one spins and stops suddenly, fluid in the inner ear continues to rotate causing a sense of continued spinning while one's visual system no longer detects motion.
Cause:
Virtual reality VR programs usually detect the motion of the user's head and adjust the rotation of the displayed view to avoid dizziness. However, system lag or software crashes can delay screen updates. In such cases, even small head motions can trigger motion sickness through the defense mechanism mentioned below: the inner ear transmits to the brain that it senses motion, but the eyes tell the brain that everything is still.
Cause:
Motion seen but not felt In these cases, motion is detected by the visual system and hence the motion is seen, but no motion or little motion is sensed by the vestibular system. Motion sickness arising from such situations has been referred to as "visually induced motion sickness" (VIMS).
Cause:
Space motion sickness Zero gravity interferes with the vestibular system's gravity-dependent operations, so that the two systems, vestibular and visual, no longer provide a unified and coherent sensory representation. This causes unpleasant disorientation sensations often quite distinct from terrestrial motion sickness, but with similar symptoms. The symptoms may be more intense because a condition caused by prolonged weightlessness is usually quite unfamiliar.Space motion sickness was effectively unknown during the earliest spaceflights because the very cramped conditions of the spacecraft allowed for only minimal bodily motion, especially head motion. Space motion sickness seems to be aggravated by being able to freely move around, and so is more common in larger spacecraft. Around 60% of Space Shuttle astronauts experienced it on their first flight; the first case of space motion sickness is now thought to be the Soviet cosmonaut Gherman Titov, in August 1961 onboard Vostok 2, who reported dizziness, nausea, and vomiting. The first severe cases were in early Apollo flights; Frank Borman on Apollo 8 and Rusty Schweickart on Apollo 9. Both experienced identifiable and quite unpleasant symptoms—in the latter case causing the mission plan to be modified.
Cause:
Screen images This type of terrestrial motion sickness is particularly prevalent when susceptible people are watching films presented on very large screens such as IMAX, but may also occur in regular format theaters or even when watching TV or playing games. For the sake of novelty, IMAX and other panoramic type theaters often show dramatic motions such as flying over a landscape or riding a roller coaster. This type of motion sickness can be prevented by closing one's eyes during such scenes.In regular-format theaters, an example of a movie that caused motion sickness in many people is The Blair Witch Project. Theaters warned patrons of its possible nauseating effects, cautioning pregnant women in particular. Blair Witch was filmed with a handheld camcorder, which was subjected to considerably more motion than the average movie camera, and lacks the stabilization mechanisms of steadicams.Home movies, often filmed with a cell phone camera, also tend to cause motion sickness in those who view them. The person holding the cell phone or other camera usually is unaware of this as the recording is being made since the sense of motion seems to match the motion seen through the camera's viewfinder. Those who view the film afterward only see the movement, which may be considerable, without any sense of motion. Using the zoom function seems to contribute to motion sickness as well since zooming is not a normal function of the eye. The use of a tripod or a camera or cell phone with image stabilization while filming can reduce this effect.
Cause:
Virtual reality Motion sickness due to virtual reality is very similar to simulation sickness and motion sickness due to films. In virtual reality the effect is made more acute because all external reference points are blocked from vision, the simulated images are three-dimensional, and in some cases stereo sound may also give a sense of motion. The NADS-1, a simulator located at the National Advanced Driving Simulator, is capable of accurately stimulating the vestibular system with a 360-degree horizontal field of view and a motion base with 13 degrees of freedom. Studies have shown that exposure to rotational motions in a virtual environment can cause significant increases in nausea and other symptoms of motion sickness. In a study conducted by the U.S. Army Research Institute for the Behavioral and Social Sciences in a report published May 1995 titled "Technical Report 1027 – Simulator Sickness in Virtual Environments", out of 742 pilot exposures from 11 military flight simulators, "approximately half of the pilots (334) reported post-effects of some kind: 250 (34%) reported that symptoms dissipated in less than one hour, 44 (6%) reported that symptoms lasted longer than four hours, and 28 (4%) reported that symptoms lasted longer than six hours. There were also four (1%) reported cases of spontaneously occurring flashbacks." Motion that is seen and felt When moving within a rotating reference frame such as in a centrifuge or an environment where gravity is simulated with centrifugal force, the Coriolis effect causes a sense of motion in the vestibular system that does not match the motion that is seen.
Pathophysiology:
There are various hypotheses that attempt to explain the cause of the condition.
Pathophysiology:
Sensory conflict theory Contemporary sensory conflict theory, referring to "a discontinuity between either visual, proprioceptive, and somatosensory input, or semicircular canal and otolith input", is probably the most thoroughly studied. According to this theory, when the brain presents the mind with two incongruous states of motion; the result is often nausea and other symptoms of disorientation known as motion sickness. Such conditions happen when the vestibular system and the visual system do not present a synchronized and unified representation of one's body and surroundings.According to sensory conflict theory, the cause of terrestrial motion sickness is the opposite of the cause of space motion sickness. The former occurs when one perceives visually that one's surroundings are relatively immobile while the vestibular system reports that one's body is in motion relative to its surroundings. The latter can occur when the visual system perceives that one's surroundings are in motion while the vestibular system reports relative bodily immobility (as in zero gravity.) Neural mismatch A variation of the sensory conflict theory is known as neural mismatch, implying a mismatch occurring between ongoing sensory experience and long-term memory rather than between components of the vestibular and visual systems. This theory emphasizes "the limbic system in the integration of sensory information and long-term memory, in the expression of the symptoms of motion sickness, and the impact of anti-motion-sickness drugs and stress hormones on limbic system function. The limbic system may be the neural mismatch center of the brain." Defense against poisoning It has also been proposed that motion sickness could function as a defense mechanism against neurotoxins. The area postrema in the brain is responsible for inducing vomiting when poisons are detected, and for resolving conflicts between vision and balance. When feeling motion but not seeing it (for example, in the cabin of a ship with no portholes), the inner ear transmits to the brain that it senses motion, but the eyes tell the brain that everything is still. As a result of the incongruity, the brain concludes that the individual is hallucinating and further concludes that the hallucination is due to poison ingestion. The brain responds by inducing vomiting, to clear the supposed toxin. Treisman's indirect argument has recently been questioned via an alternative direct evolutionary hypothesis, as well as modified and extended via a direct poison hypothesis. The direct evolutionary hypothesis essentially argues that there are plausible means by which ancient real or apparent motion could have contributed directly to the evolution of aversive reactions, without the need for the co-opting of a poison response as posited by Treisman. Nevertheless, the direct poison hypothesis argues that there still are plausible ways in which the body's poison response system may have played a role in shaping the evolution of some of the signature symptoms that characterize motion sickness.
Pathophysiology:
Nystagmus hypothesis Yet another theory, known as the nystagmus hypothesis, has been proposed based on stimulation of the vagus nerve resulting from the stretching or traction of extra-ocular muscles co-occurring with eye movements caused by vestibular stimulation. There are three critical aspects to the theory: first is the close linkage between activity in the vestibular system, i.e., semicircular canals and otolith organs, and a change in tonus among various of each eye's six extra-ocular muscles. Thus, with the exception of voluntary eye movements, the vestibular and oculomotor systems are thoroughly linked. Second is the operation of Sherrington's Law describing reciprocal inhibition between agonist-antagonist muscle pairs, and by implication the stretching of extraocular muscle that must occur whenever Sherrington's Law is made to fail, thereby causing an unrelaxed (contracted) muscle to be stretched. Finally, there is the critical presence of afferent output to the Vagus nerves as a direct result of eye muscle stretch or traction. Thus, tenth nerve stimulation resulting from eye muscle stretch is proposed as the cause of motion sickness. The theory explains why labyrinthine-defective individuals are immune to motion sickness; why symptoms emerge when undergoing various body-head accelerations; why combinations of voluntary and reflexive eye movements may challenge the proper operation of Sherrington's Law, and why many drugs that suppress eye movements also serve to suppress motion sickness symptoms.A recent theory argues that the main reason motion sickness occurs is due to an imbalance in vestibular outputs favoring the semicircular canals (nauseogenic) vs. otolith organs (anti-nauseogenic). This theory attempts to integrate previous theories of motion sickness. For example, there are many sensory conflicts that are associated with motion sickness and many that are not, but those in which canal stimulation occurs in the absence of normal otolith function (e.g., in free fall) are the most provocative. The vestibular imbalance theory is also tied to the different roles of the otoliths and canals in autonomic arousal (otolith output more sympathetic).
Diagnosis:
The diagnosis is based on symptoms. Other conditions that may present similarly include vestibular disorders such as benign paroxysmal positional vertigo and vestibular migraine and stroke.
Treatment:
Treatment may include behavioral measures or medications.
Treatment:
Behavioral measures Behavioral measures to decrease motion sickness include holding the head still and lying on the back. Focusing on the horizon may also be useful. Listening to music, mindful breathing, being the driver, and not reading while moving are other techniques. Habituation is the most effective technique but requires significant time. It is often used by the military for pilots. These techniques must be carried out at least every week to retain effectiveness. A head-worn computer device with a transparent display can be used to mitigate the effects of motion sickness (and spatial disorientation) if visual indicators of the wearer's head position are shown. Such a device functions by providing the wearer with digital reference lines in their field of vision that indicate the horizon's position relative to the user's head. This is accomplished by combining readings from accelerometers and gyroscopes mounted in the device, as in the sketch below. This technology has been implemented in both standalone devices and Google Glass. One promising treatment is to wear LCD shutter glasses that create stroboscopic vision at 4 Hz with a dwell of 10 milliseconds.
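One generic way to fuse accelerometer and gyroscope readings into a head-relative horizon estimate is a complementary filter. The sketch below is a textbook version of that idea, not the algorithm of Google Glass or any specific commercial device; the sensor data are synthetic.

```python
# Complementary-filter sketch: blend integrated gyro rate with the pitch implied
# by the gravity vector measured by the accelerometer.
import math

def complementary_filter(gyro_rates, accel_samples, dt=0.01, alpha=0.98):
    """Estimate pitch (rad) from gyro rate (rad/s) and accelerometer (ax, ay, az) in g."""
    pitch = 0.0
    estimates = []
    for rate, (ax, ay, az) in zip(gyro_rates, accel_samples):
        gyro_pitch = pitch + rate * dt                                # integrate angular rate
        accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))   # pitch from gravity direction
        pitch = alpha * gyro_pitch + (1.0 - alpha) * accel_pitch      # blend the two estimates
        estimates.append(pitch)
    return estimates

# Synthetic example: head slowly tilting forward at 0.1 rad/s for one second
gyro = [0.1] * 100
accel = [(-math.sin(0.1 * i * 0.01), 0.0, math.cos(0.1 * i * 0.01)) for i in range(100)]
print(f"estimated pitch after 1 s: {complementary_filter(gyro, accel)[-1]:.3f} rad")
```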
Treatment:
Medication Three types of medications are sometimes prescribed to improve symptoms of motion sickness: antimuscarinics such as scopolamine, H1 antihistamines such as dimenhydrinate, and amphetamines such as dexamphetamine. Benefits are greater if used before the onset of symptoms or shortly after symptoms begin. Side effects, however, may limit the use of medications. A number of medications used for nausea such as ondansetron and metoclopramide are not effective in motion sickness.
Treatment:
Scopolamine (antimuscarinic) Scopolamine is the most effective medication. Evidence is best for when it is used preventatively. It is available as a skin patch. Side effects may include blurry vision.
Treatment:
Antihistamines Antihistamine medications are sometimes given to prevent or treat motion sickness. This class of medication is often effective at reducing the risk of getting motion sickness while in motion, however, the effectiveness of antihistamines at treating or stopping motion sickness once a person is already experiencing it has not been well studied. Effective first generation antihistamines include doxylamine, diphenhydramine, promethazine, meclizine, cyclizine, and cinnarizine. In pregnancy meclizine, dimenhydrinate and doxylamine are generally felt to be safe. Side effects include sleepiness. Second generation antihistamines have not been found to be useful.
Treatment:
Amphetamines Dextroamphetamine may be used together with an antihistamine or an antimuscarinic. Concerns include their addictive potential.Those involved in high-risk activities, such as SCUBA diving, should evaluate the risks versus the benefits of medications. Promethazine combined with ephedrine to counteract the sedation is known as "the Coast Guard cocktail".
Alternative medicine Alternative treatments include acupuncture and ginger, although their effectiveness against motion sickness is variable. Providing smells does not appear to have a significant effect on the rate of motion sickness.
Epidemiology:
Roughly one-third of people are highly susceptible to motion sickness, and most of the rest get motion sick under extreme conditions. Around 80% of the general population is susceptible to cases of medium to high motion sickness. The rates of space motion sickness have been estimated at between forty and eighty percent of those who enter weightless orbit. Several factors influence susceptibility to motion sickness, including sleep deprivation and the cubic footage allocated to each space traveler. Studies indicate that women are more likely to be affected than men, and that the risk decreases with advancing age. There is some evidence that people with Asian ancestry may develop motion sickness more frequently than people of European ancestry, and there are situational and behavioral factors, such as whether a passenger has a view of the road ahead, and diet and eating behaviors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Diethylstilbestrol dipropionate**
Diethylstilbestrol dipropionate:
Diethylstilbestrol dipropionate (DESDP) (brand names Agostilben, Biokeral, Clinestrol, Cyclen, Estilbin, Estril, Neobenzoestrol, Orestol, Oroestrol, Ostregenin, Prostilbene, Stilbestriol DP, Stilboestrolum Dipropionicum, Stilboestrol, Synestrin, Willestrol, others), or diethylstilbestrol dipropanoate, also known as stilboestrol dipropionate (BANM), is a synthetic nonsteroidal estrogen of the stilbestrol group that was formerly marketed widely throughout Europe. It is an ester of diethylstilbestrol with propionic acid, and is more slowly absorbed in the body than diethylstilbestrol. The medication has been said to be one of the most potent estrogens known.The medication has been available in both oral and intramuscular formulations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vascular remodelling in the embryo**
Vascular remodelling in the embryo:
Vascular remodelling is a process which occurs when an immature heart begins contracting, pushing fluid through the early vasculature. The process typically begins at day 22, and continues to the tenth week of human embryogenesis. This first passage of fluid initiates a signal cascade and cell movement based on physical cues including shear stress and circumferential stress, which is necessary for the remodelling of the vascular network, arterial-venous identity, angiogenesis, and the regulation of genes through mechanotransduction. This embryonic process is necessary for the future stability of the mature vascular network.Vasculogenesis is the initial establishment of the components of the blood vessel network, or vascular tree. This is dictated by genetic factors and has no inherent function other than to lay down the preliminary outline of the circulatory system. Once fluid flow begins, biomechanical and hemodynamic inputs are applied to the system set up by vasculogenesis, and the active remodelling process can begin.
Vascular remodelling in the embryo:
Physical cues such as pressure, velocity, flow patterns, and shear stress are known to act on the vascular network in a number of ways, including branching morphogenesis, enlargement of vessels in high-flow areas, angiogenesis, and the development of vein valves. The mechanotransduction of these physical cues to endothelial and smooth muscle cells in the vascular wall can also trigger the promotion or repression of certain genes which are responsible for vasodilation, cell alignment, and other shear stress-mitigating factors. This relationship between genetics and environment is not clearly understood, but researchers are attempting to clarify it by combining reliable genetic techniques, such as genetically-ablated model organisms and tissues, with new technologies developed to measure and track flow patterns, velocity profiles, and pressure fluctuations in vivo.Both in vivo study and modelling are necessary tools to understand this complex process. Vascular remodelling is pertinent to wound healing and proper integration of tissue grafting and organ donations. Promoting an active remodelling process in some cases could help patients recover faster and retain functional use of donated tissues. However, outside of wound healing, chronic vascular remodelling in the adult is often symptomatic of cardiovascular disease. Thus, increased understanding of this biomedical phenomenon could aid in the development of therapeutics or preventative measures to combat diseases such as atherosclerosis.
Historical view:
Over 100 years ago, Thoma observed that increases in local blood flow cause widening of the vessel diameter and he even went so far as to postulate that blood flow might be responsible for the growth and development of blood vessels. Subsequently, Chapman in 1918 discovered that removing a chick embryo's heart disrupted the remodelling process, but the initial vessel patterns laid down by vasculogenesis remained undisturbed. Next, in 1926 Murray proposed that vessel diameter was proportional to the amount of shear stress at the vessel wall; that is, that vessels actively adapted to flow patterns based on physical cues from the environment, such as shear stress.
Historical view:
"The chemical basis of morphogenesis", written in 1952 by the mathematician and computer scientist Alan Turing, advocated for various biological models based on molecular diffusion of nutrients. However, a diffusive model of vascular development would seem to fall short of the complexity of capillary beds and the interwoven network of arteries and veins. In 2000, Fleury proposed that instead of diffusive molecules bearing responsibility for the branching morphogenesis of the vascular tree, a long-range morphogen may be implicated. In this model, a traveling pressure wave would act upon the vasculature via shear stress to rearrange branches into the lowest-energy configuration by widening vessels carrying increased blood flow and rearranging networks upon the initiation of fluid flow. It is known that mechanical forces can have a dramatic impact on the morphology and complexity of the vascular tree. However, these forces have comparatively little impact on the diffusion of nutrients, and it therefore seems unlikely that acquisition of nutrients and oxygen plays a significant role in embryonic vascular remodelling. It is now widely accepted that vascular remodelling in the embryo is a process distinct from vasculogenesis; however, these two processes are inextricably linked. Vasculogenesis occurs prior to vascular remodelling, but is a necessary step in the development of the blood vessel network and has implications for the identification of vessels as either arterial or venous. Once contraction of the heart begins, vascular remodelling progresses via the interplay of forces resulting from biomechanical cues and fluid dynamics, which are translated by mechanotransduction to changes at cellular and genetic levels.
Vasculogenesis:
Vasculogenesis is the formation of early vasculature, which is laid down by genetic factors. Structures called blood islands form in the mesoderm layer of the yolk sac by cellular differentiation of hemangioblasts into endothelial and red blood cells. Next, the capillary plexus forms as endothelial cells migrate outward from blood islands and form a random network of continuous strands. These strands then undergo a process called lumenization, the spontaneous rearrangement of endothelial cells from a solid cord into a hollow tube. Inside the embryo, the dorsal aorta forms and eventually connects the heart to the capillary plexus of the yolk sac. This forms a closed-loop system of rigid endothelial tubing. Even this early in the process of vasculogenesis, before the onset of blood flow, sections of the tube system may express ephrins or neuropilins, genetic markers of arterial or venous identities, respectively. These identities are still somewhat flexible, but the initial characterization is important to the embryonic remodelling process. Angiogenesis also contributes to the complexity of the initial network; sprouting endothelial buds form by an extrusion-like process which is prompted by the expression of vascular endothelial growth factor (VEGF). These endothelial buds grow away from the parent vessel to form smaller, daughter vessels reaching into new territory. Intussusception, the phenomenon of a single tube splitting to form two branching tubes, also contributes to angiogenesis. Angiogenesis is generally responsible for colonizing individual organ systems with blood vessels, whereas vasculogenesis lays down the initial pipelines of the network. Angiogenesis is also known to occur during vascular remodelling.
Arterial-venous identity:
The classification of angioblasts into arterial- or venous-identified cells is essential to form the proper branching morphology. Arterial segments of the early vasculature express ephrinB2 and DLL4 whereas venous segments express neuropilin-2 and EPHB4; this is believed to assist in guidance of flow from arterial-venous sections of the loop. However, mechanical cues provided by the heart's first contractions are still necessary for complete remodelling.The first event of biomechanical-driven hierarchal remodelling occurs just after the onset of heart beat, when the vitelline artery forms by the fusion of several smaller capillaries. Subsequently, side branches may disconnect from the main artery and reattach to the venous network, effectively changing their identity. This is thought to be due to the high luminal pressure in the arterial lines, which prevents reattachment of the branches back onto arterial vessels. This also prevents the formation of shunts between the two components of the network. Moyon et al. showed that arterial endothelial cells could become venous and vice versa. They grafted sections of quail endothelial tubing which had previously expressed arterial markers onto chick veins (or vice versa), showcasing the plasticity of the system. Reversing flow patterns in arteries and/or veins can also have the same effect, although it is unclear whether this is due to differences in physical or chemical properties of venous vs. arterial flow (i.e. pressure profile and oxygen tension).Another example of the fluidity of arterial-venous identity is that of the intersomitic vessel. At early stages, this vessel is connected to the aorta, making it part of the arterial network. However, sprouts from the cardiac vein may fuse with the intersomitic vessel, which slowly disconnects from the aorta and becomes a vein. This process is not fully understood, but may occur out of a need to balance mechanical forces such as pressure and perfusion.Arterial-venous identity in the early stages of embryonic vascular remodelling is flexible, with arterial segments often being recycled to venous lines and the physical structure and genetic markers of segments being actively remodelled along with the network itself. This indicates that the system as a whole exhibits a degree of plasticity which allows it to be shaped by transitory flow patterns and hemodynamic signals, however genetic factors do play a role in the initial specification of vessel identity.
Biomechanics:
Once the heart begins to beat, mechanical forces start acting upon the early vascular system, which rapidly expands and reorganizes to serve tissue metabolism. In embryos devoid of blood flow, endothelial cells retain an undifferentiated morphology similar to angioblasts (compared to flattened epithelial cells found in mature vasculature). Once the heart begins beating, the morphology and behaviour of endothelial cells change. By changing the heart rate, the heart can also control perfusion or pressure acting upon the system in order to trigger sprouting of new vessels. In turn, new vessel sprouting is balanced by the expansion of other embryo tissues, which compress blood vessels as they grow. The equilibrium of these forces plays a major role in vascular remodelling, but although the angiogenic mechanisms required to trigger the sprouting of new vessels have been studied, little is known about the remodelling processes required to curb the growth of unnecessary branches.
Biomechanics:
As blood perfuses the system, it exerts shear and pressure forces on the vessel walls. At the same time, tissue growth outside the cardiovascular system pushes back on the outside of the vessel walls. These forces must be balanced to obtain an efficient energy state for low-cost delivery of nutrients and oxygen to all tissues of the embryo body. When growth of the yolk sac (external tissue) is constrained, the balance between vascular forces and tissue forces is shifted and some vascular branches may be disconnected or diminished during the remodelling process because they are unable to forge new paths through the compressed tissue. In general, the stiffness and resistance of these tissues dictate the degree to which they can be deformed and the way in which biomechanical forces can affect them. The development of the vascular network is self-organized at each point in the tissue due to the balance between compressive forces of tissue expansion and circumferential stretch of the vessel walls. Over time, this means that migrating lines become straight rather than curving; this is akin to imagining two moving boundaries pushing on each other. Straight vessels are usually parallel to isopressure lines because the boundaries have acted to equilibrate pressure gradients. In addition, vessel direction tends to follow the direction of the normal to the steepest stress gradient. Additionally, biomechanical forces inside embryonic vessels have important remodelling effects. Pressure fluctuations lead to stress and strain fluctuations, which can "train" the vessels to bear loads later in the organism's development. The fusion of several small vessels can also generate large vessels in areas of the vascular tree where blood pressure and flow rate are larger. Murray's law is a relation between the radius of a parent vessel and the radii of its branches which holds true for the circulatory system (see the formula below). This outlines the balance between the lowest resistance to flow presented by vessel size (because large-diameter vessels exhibit a low pressure drop) and the maintenance of the blood itself as a living tissue which cannot diffuse ad infinitum. Therefore, complex branching is required to supply blood to organ systems, as diffusion alone cannot be responsible for this. Biomechanics also act on the vascular network connections. Luminal pressure has been shown to direct the recycling of vessel segments to high-pressure areas, and govern the disconnection of vessel segments from arterial lines and reattachment to venous lines in order to shape the network. This type of vessel breakage may even be indirectly responsible for the development of some organ systems and the evolution of larger organisms, as without detachment and migration, large masses of tissue in the embryo would remain disconnected from the blood supply. Once vessels break away from the parent artery, they may also undergo angiogenesis to infest tissues distal to the rest of the network.
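For reference, Murray's law is usually stated as a cube law relating the radius of a parent vessel to the radii of its daughter branches; the standard textbook form is reproduced here and is not specific to the embryonic measurements discussed in this article.

```latex
% Murray's law: parent vessel of radius r_0 branching into daughters of radii r_1, ..., r_n
r_0^{3} = r_1^{3} + r_2^{3} + \dots + r_n^{3}
```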
Fluid dynamics:
Fluid dynamics also plays an important role in vascular remodelling. The shear stress applied to vessel walls is proportional to the viscosity and flow patterns of the fluid. Disturbed flow patterns can promote the formation of valves and increasing pressure can affect the radial growth of vessels. The primitive heart within the first few days of contraction is best described as a peristaltic pump, however after three days the flow becomes pulsatile. Pulsatile flow plays an important role in vascular remodelling, as flow patterns can affect the mechanotransduction of stress to endothelial cells.Dimensionless relations such as the Reynolds number and Womersley number can be used to describe flow in early vasculature. The low Reynolds number present in all early vessels means that flow can be considered creeping and laminar. A low Womersley number means that viscous effects dominate flow structure and that boundary layers can be considered to be non-existent. This allows the fluid dynamic computations to rest upon certain assumptions which simplify the mathematics.During the first stages of embryonic vascular remodelling, high-velocity flow is not present solely in large-diameter vessels, but this corrects itself due to the effects of vascular remodelling over the first two days of blood flow. It is known that embryonic vessels respond to increases in pressure by increasing the diameter of the vessel. Due to the absence of smooth muscle cells and the glycocalyx, which provide elastic support in adult vessels, blood vessels in the developing embryo are much more resistant to flow. This means that increases in flow or pressure can only be answered by rapid, semi-permanent expansion of the vessel diameter, rather than by more gradual stretch and expansion experienced in adult blood vessels.Rearranging the Laplace and Poiseuille relations suggests that radial growth occurs as a result of circumferential stretch and circumferential growth occurs as a result of shear stress. Shear stress is proportional to the speed inside the vessel as well as the pressure drop between two fixed points on the vessel wall. The precise mechanism of vessel remodelling is believed to be high stress on the inner wall of the vessel which can induce growth, which heads toward uniform compressive and tensile stress on both sides of the vessel wall. Generally, it has been found that circumferential residual stress is compressive and tensile, indicating that inner layers of the endothelial tube grow more than outer layers.
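The dimensionless groups and the wall shear stress referred to above have standard definitions, reproduced below in their usual textbook form; the choice of characteristic radius, mean velocity and pulsation frequency for a particular embryonic vessel is an assumption of any given analysis.

```latex
% Reynolds number: rho = density, v = mean velocity, D = vessel diameter, mu = dynamic viscosity
\mathrm{Re} = \frac{\rho v D}{\mu}

% Womersley number: R = vessel radius, omega = angular frequency of the pulsatile flow
\alpha = R\sqrt{\frac{\omega\rho}{\mu}}

% Wall shear stress for fully developed Poiseuille flow with volumetric flow rate Q
\tau_w = \frac{4\mu Q}{\pi R^{3}}
```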
Mechanotransduction and genetic regulation:
The mechanism by which different types of flow patterns and other physical cues have different effects on vascular remodelling in the embryo is called mechanotransduction. Turbulent flow, which is commonplace in the developing vasculature, plays a role in the formation of cardiac valves which prevent backflows associated with turbulence. It has also been shown that heterogeneous flow patterns in large vessels can create asymmetry, perhaps by preferentially activating genes such as PITX2 on one side of the vessel, or perhaps by inducing circumferential stretch on one side, promoting regression on the other side. Laminar flow also has genetic effects, such as reducing apoptosis, inhibiting proliferation, aligning cells in direction of flow, and regulating many cell signalling factors. Mechanotransduction may act either by positive or negative feedback loops, which may activate or repress certain genes to respond to the physical stress or strain placed on the vessel.
Mechanotransduction and genetic regulation:
The cell "reads" flow patterns through integrin sensing, receptors which provide a mechanical link between the extracellular matrix and the actin cytoskeleton. This mechanism dictates how a cell will respond to flow patterns and can mediate cell adhesion, which is especially relevant to the sprouting of new vessels. Through the process of mechanotransduction, shear stress can regulate the expression of many different genes. The following examples have been studied in the context of vascular remodelling by biomechanics: Endothelial nitric oxide synthase (eNOS), promotes unidirectional flow at the onset of heart beats and is upregulated by shear stress Platelet-derived growth factor (PDGF), transforming growth factor beta (TGFβ), and Kruppel-like factor 2 (Klf-2) are induced by shear stress and may have up-regulating effects on genes which deal with endothelial response to turbulent flow Shear stress induces phosphorylation of VEGF receptors, which are responsible for vascular development, especially the sprouting of new vessels Hypoxia can trigger the expression of hypoxia inducible factor 1 (HIF-1) or VEGF in order to pioneer the growth of new sprouts into oxygen-deprived areas of the embryo PDGF-β, VEGFR-2, and connexion43 are upregulated by abnormal flow patterns Shear stress upregulates NF-κB, which induces matrix metalloproteinases to trigger the enlargement of blood vesselsDifferent flow patterns and their duration can elicit very different responses based on the shear-stress-regulated genes. Both genetic regulation and physical forces are responsible for the process of embryonic vascular remodelling, yet these factors are rarely studied in tandem.,
In vivo study:
The main difficulty in the in vivo study of embryonic vascular remodelling has been to separate the effects of physical cues from the delivery of nutrients, oxygen, and other signalling factors which may have an effect on vascular remodelling. Previous work has involved control of blood viscosity in early cardiovascular flow, such as preventing the entry of red blood cells into blood plasma, thereby lowering viscosity and associated shear stresses. Starch can also be injected into the blood stream in order to increase viscosity and shear stress. Studies have shown that vascular remodelling in the embryo proceeds without the presence of erythrocytes, which are responsible for oxygen delivery. Therefore, vascular remodelling does not depend on the presence of oxygen and in fact occurs before perfused tissues require oxygen delivery. However, it is still unknown whether or not other nutrients or genetic factors may have promotional effects on vascular remodelling.Measurement of parabolic velocity profiles in live embryo vessels indicate that vessel walls are exposed to levels of laminar and shear stress which can have a bioactive effect. Shear stress on embryonic mouse and chicken vasculature ranges between 1 – 5 dyn/cm2. This can be measured by either cutting sections of blood vessels and observing the angle of the opening, which bends to relieve residual stress, or by measuring the hematocrit present in blood vessels and calculating the apparent viscosity of the fluid.Due to the difficulties involved with imaging live embryo development and accurately measuring small values of viscosity, pressure, velocity, and flow direction, increased importance has been placed on developing an accurate model of this process. This way, an effective method for studying these effects in vitro may be found.
Modelling:
A number of models have been proposed to describe fluid effects on vascular remodelling in the embryo. One point which is often missed in these analogies is the fact that the process occurs within a living system; dead ends can break off and reattach elsewhere, branches close and open at junctions or form valves, and vessels are extremely deformable, able to quickly adapt to new conditions and form new pathways. Theoretically, the formation of the vascular tree can be thought of in terms of percolation theory. The network of tubes arises randomly and will eventually establish a path between two separate and unconnected points. Once some critical number of sprouting tubes have migrated into a previously unoccupied area, a path called a fractal can be established between these two points. Fractals are biologically useful constructions, as they rely on an infinite increase in surface area, which in biological terms translates to a vast increase in the transport efficiency of nutrients and wastes. The fractal path is flexible; if one connection is broken, another forms to re-establish the path. This is a useful illustration of how the vascular tree forms, although it cannot be used as a model.
Modelling:
The diffusion-limited aggregation model has given simulated results which are closest to vascular trees observed in vivo. This model suggests that vascular growth occurs along a gradient of shear stress at the vessel wall, which results in the growth of vessel radii. Diffusion-limited aggregation proposes that an aggregate grows by the fusion of random walkers, which themselves walk along a pressure gradient; a random walk is simply a probability-based version of the diffusion equation (a toy sketch of this growth rule is given below). Thus, in applying this model to the vascular tree, small, resistant vessels must be replaced with large, conducting vessels in order to balance the pressure across the entire system. This model yields a structure which is more random at the tips than in the major lines, which is related to the fact that Laplacian formulations are stable when speed is negative with respect to the pressure gradient. In major lines, this is always so, but in small sprouts the speed fluctuates around 0, leading to unstable, random behaviour. Another large component of the remodelling process is the disconnection of branched vessels, which then migrate to distal areas in order to supply blood homogeneously. Branching morphogenesis has been found to follow the dielectric breakdown model, in that only the vessels with sufficient flow will enlarge, while others will close off. At locations inside the vessel where two tubes split off from one, one arm of the split is likely to close, detach, and migrate towards the venous line, where it will re-attach. The result of the closure of a branch is that flow increases and becomes less turbulent in the main line, while blood also begins to flow towards areas which are lacking. Which branch will close depends on the flow rate, direction, and branching angle; in general, a branching angle of 75° or more will necessitate the closing of the smaller branch. Thus, several important parameters of vascular remodelling can be described using the combined models of diffusion-limited aggregation and dielectric breakdown: the probability that a branch will close off (plasticity of vessel splitting), that a vessel will reconnect to the venous line (plasticity of sprout regrowth), shrinkage resistance of sprouting tips (a balance between external compression and internal shear stress), and the ratio of external tissue growth to internal vessel expansion. However, this model does not take into account the diffusion of oxygen or signalling factors which may play a role in embryonic vascular remodelling. These models consistently reproduce most aspects of the vasculature seen in vivo in several different specialized cases.
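To make the random-walker picture concrete, the toy sketch below grows a small on-lattice diffusion-limited aggregate. It illustrates only the generic DLA growth rule and omits everything specific to the vascular model (pressure gradients, shear stress, branch closure); the lattice size and walker count are arbitrary illustrative choices.

```python
# Toy on-lattice DLA: walkers start at the lattice border, random-walk, and
# stick to the aggregate when they become adjacent to an occupied site.
import random

SIZE = 61                       # lattice is SIZE x SIZE, seed at the centre
grid = [[False] * SIZE for _ in range(SIZE)]
cx = cy = SIZE // 2
grid[cy][cx] = True             # seed particle

def has_occupied_neighbour(x, y):
    return any(
        0 <= x + dx < SIZE and 0 <= y + dy < SIZE and grid[y + dy][x + dx]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
    )

random.seed(1)
for _ in range(400):            # release 400 walkers
    # start each walker on the border of the lattice
    x, y = random.choice(
        [(random.randrange(SIZE), 0), (random.randrange(SIZE), SIZE - 1),
         (0, random.randrange(SIZE)), (SIZE - 1, random.randrange(SIZE))]
    )
    while True:
        if has_occupied_neighbour(x, y):
            grid[y][x] = True   # stick to the aggregate
            break
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = x + dx, y + dy
        if not (0 <= x < SIZE and 0 <= y < SIZE):
            break               # walker left the lattice; discard it

occupied = sum(row.count(True) for row in grid)
print(f"aggregate contains {occupied} particles")
```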
Application to study of disease progression:
Vascular remodelling in non-embryonic tissues is considered to be symptomatic of disease progression. Cardiovascular disease remains one of the most common causes of death globally and is often associated with the blockage or stenosis of blood vessels, which can have dramatic biomechanical effects. In acute and chronic remodelling, the increase in shear stress due to the decreased diameter of a blocked vessel can cause vasodilation, thereby restoring typical shear stress levels. However, dilation also leads to increased blood flow through the vessel, which can result in hyperaemia, affect physiological regulatory actions downstream of the afflicted vessel, and place increased pressure on atherosclerotic plaques which may lead to rupture. Blockage of blood vessels is currently treated by surgically inserting stents to force vessel diameters open and restore normal blood flow. By understanding the implication of increased shear stress on homeostatic regulators, alternative, less-invasive methods may be developed to treat vessel blockage.
Application to study of disease progression:
The growth of tumours often results in reactivation of blood vessel growth and vascular remodelling in order to perfuse the new tissue with blood and sustain its proliferation. Tumour growth has been shown to be self-organizing and to behave more similarly to embryonic tissues than to adult tissues. As well, vessel growth and flow dynamics in tumours are thought to recapitulate the vessel growth in developing embryos. In this sense, embryonic vascular remodelling can be considered a model of the same pathways which are activated in tumour growth, and increased understanding of these pathways can lead to novel therapeutics which may inhibit tumour formation.Conversely, angiogenesis and vascular remodelling is an important aspect of wound healing and the long-term stability of tissue grafts. When blood flow is disrupted, angiogenesis provides sprouting vessels which migrate into deprived tissues and restore perfusion. Thus, the study of vascular remodelling may also provide important insight into the development of new techniques to improve wound healing and benefit the integration of tissues from transplants by lowering the incidence of rejection. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**100K Pathogen Genome Project**
100K Pathogen Genome Project:
The 100K Pathogen Genome Project was launched in July 2012 by Bart Weimer (UC Davis) as an academic, public, and private partnership. It aims to sequence the genomes of 100,000 infectious microorganisms to create a database of bacterial genome sequences for use in public health, outbreak detection, and bacterial pathogen detection. This will speed up the diagnosis of foodborne illnesses and shorten infectious disease outbreaks.The 100K Pathogen Genome Project is a public-private collaborative project to sequence the genomes of 100,000 infectious microorganisms. The 100K Genome Project will provide a roadmap for developing tests to identify pathogens and trace their origins more quickly.
100K Pathogen Genome Project:
Partners announced in the launch of the project were UC Davis, Agilent Technologies, and the US Food and Drug Administration, with the US Centers for Disease Control and Prevention and the US Department of Agriculture noted as collaborators. As the project has proceeded, the partnership has evolved to include or replace these founding partners. The 100K Pathogen Genome Project was selected by the IBM/Mars Food Safety Consortium for metagenomic sequences.The 100K Pathogen Genome Project is conducting high-throughput next-generation sequencing (NGS) to investigate the genomes of targeted microorganisms, with whole genome sequencing to be carried out on a small number of microorganisms for use as a reference genome. Most bacterial strains will be sequenced and assembled as draft genomes; however, the project has also produced closed genomes for a variety of enteric pathogens in the 100K bioproject.This strategy enables worldwide collaboration to identify sets of genetic biomarkers associated with important pathogen traits. This five-year microbial pathogen project will result in a free, public database containing the sequence information for each pathogen's genome. The completed gene sequences will be stored in the National Institutes of Health (NIH)'s National Center for Biotechnology Information (NCBI)'s public database. Using the database, scientists will be able to develop new methods of controlling disease-causing bacteria in the food chain. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spelljammer**
Spelljammer:
Spelljammer is a campaign setting originally published for the Advanced Dungeons & Dragons (2nd edition) role-playing game, which features a fantastic (as opposed to scientific) outer space environment. Subsequent editions have included Spelljammer content; a Dungeons & Dragons 5th edition setting update was released on August 16, 2022.
Spelljammer:
Spelljammer introduced into the AD&D universe a comprehensive system of fantasy astrophysics, including the Ptolemaic concept of crystal spheres. Crystal spheres may contain multiple worlds and are navigable using ships equipped with "spelljamming helms". Ships powered by spelljamming helms are capable of flying into not only the sky but into space. With their own fields of gravity and atmosphere, the ships have open decks and tend not to resemble the spaceships of science fiction, but instead look more like galleons, animals, birds, fish or even more wildly fantastic shapes.
Spelljammer:
The Spelljammer setting is designed to allow the usual sword and sorcery adventures of Dungeons & Dragons to take place within the framework of outer space tropes. Flying ships travel through the vast expanses of interplanetary space, visiting moons and planets and other stellar objects.
Spelljammer:
Like the Planescape setting, Spelljammer unifies most of the other AD&D settings and provides a canonical method for allowing characters from one setting (such as Dragonlance) to travel to another (such as the Forgotten Realms). However, unlike Planescape, it keeps all of the action on the Prime Material Plane and uses the crystal spheres, and the "phlogiston" between them, to form natural barriers between otherwise incompatible settings. Though the cosmology is derived largely from the Ptolemaic system of astronomy, many of the ideas owe much to the works of Jules Verne and his contemporaries, and to related games and fiction with a steampunk or planetary romance flavor. A strong Age of Sail flavor is also present.
Publication history:
Shannon Appelcline, in the book Designers & Dragons (2011), highlighted that in 1989 Spelljammer was the first of a host of new campaign settings published by TSR. It was created by Jeff Grubb and "introduced a universe of magical starships traversing the 'crystal spheres' that contained all the earthbound AD&D campaign worlds. It suggested a method to connect together all of TSR's settings and at the same time introduced fun new Jules Verne-esque technology that had never before been seen in the game. It was innovative and popular." (p. 26) Appelcline commented that Spelljammer "offered a way to connect every single D&D fantasy world, was thus one of the first true crossovers" in role-playing games (p. 219). Advanced Dungeons & Dragons (2nd edition): The Spelljammer: AD&D Adventures in Space space fantasy boxed set was released in 1989. Several of TSR's other campaign worlds had their own sections in the Spelljammer Boxed Set - Realmspace for the Forgotten Realms, Krynnspace for Dragonlance, and Greyspace for Greyhawk. Along with the new sphere - Clusterspace - they were known as the "Big Three and Astromundi". Dark Sun, Ravenloft and Mystara were not included, as the first two did not fit with the setting and Mystara used only the D&D rules, not the AD&D rules. The product line was expanded with a number of boxed sets and accessories such as Lost Ships (1990), Realmspace (1991) and The Astromundi Cluster (1993). Appelcline commented that The Astromundi Cluster acted as "a soft reboot of the Spelljammer line" and was more of a setting-focused sourcebook than previous Spelljammer books, which acted more "as a conduit between all of the other AD&D settings". The first adventure module, titled Wildspace, was released in 1990; four connected adventure modules followed it. A longer campaign module, Heart of the Enemy, was published in 1992, followed by an adventure anthology, Space Lairs, in 1993. The monsters of Spelljammer were detailed in two installments of the Monstrous Compendium series, Spelljammer Appendix in 1990 and Spelljammer Appendix II in 1991. In 1993, Space Lairs and The Astromundi Cluster were the final products of the line. Appelcline commented on the end of the setting in the Advanced Dungeons & Dragons era: "TSR's fifth second-edition campaign world, Planescape (1993), was released to replace Spelljammer, which had just then ended. TSR wanted a new world-spanning setting and Slade Henson came up with the answer by suggesting a new setting built on Jeff Grubb's first-edition Manual of the Planes (1987). [...] Unlike Spelljammer this new setting had a strong geographical centre, the City of Sigil, resolving a flaw in the Spelljammer setting that denied players a good home base." (p. 26) Dungeons & Dragons (3rd edition): The Spelljammer line of products was discontinued by TSR before the company was acquired by Wizards of the Coast in 1997. In May 2002, Paizo published an article for Spelljammer in Dungeon #92 titled "Spelljammer: Shadow of the Spider Moon". Using the D20 system, it provided new rules for firearms and spelljamming, as well as skills, feats and prestige classes. Spelljammer monsters such as neogi and giff were not used. Instead, it featured creatures from the Monster Manual such as drow, formians and yuan-ti. In May 2005, Wizards of the Coast updated the neogi to the 3.5 edition rules in the supplement Lords of Madness (2005). The book included a chapter with a sample map of a crashed spelljamming vessel, cultural habits of the neogi, and the monster's stat blocks.
Publication history:
Dungeons & Dragons (4th edition) A Spelljammer homage appears in the 4th edition Manual of the Planes; the sourcebook highlights Spelljammer ships as one method of traveling between planes and provides information for in-game use of Spelljammer vessels.
Publication history:
Dungeons & Dragons (5th edition) Spelljammer content also appears in the 5th Edition adventure module Waterdeep: Dungeon of the Mad Mage (2018). In the adventure, a spelljamming ship and its illithid captain appear stranded on level 19 of the titular dungeon. Then in October 2021, Wizards released the PDF Travelers of the Multiverse, which is part of the "Unearthed Arcana" public playtest series. Of the six player races it included, four (autognome, giff, hadozee, and plasmoid) are closely associated with the Spelljammer setting. Both Polygon and Bleeding Cool highlighted that this playtest could indicate a future Spelljammer reboot. In April 2022, Wizards of the Coast announced a new boxed set titled Spelljammer: Adventures in Space, which was released on August 16, 2022; this release updates the Spelljammer setting for the 5th Edition. The box set includes a Dungeon Master's screen, a double-sided poster map and three 64-page hardcover books: Astral Adventurer's Guide (a Dungeon Master guide), Boo's Astral Menagerie (a bestiary), and Light of Xaryxis (an adventure module). A special edition, with cover art by Hydro74, was also released. A prequel adventure module, titled Spelljammer Academy, was released for free on the Wizards of the Coast website and on D&D Beyond in July 2022. Monstrous Compendium Vol 1: Spelljammer Creatures introduced ten creatures from the Spelljammer setting to the 5th Edition in April 2022.
Fictional setting:
Spelljamming helms Spelljamming helms are the central setting concept: magical helms which allow interplanetary and interstellar space travel for vessels which would otherwise not be spaceworthy. Any spellcaster may sit on a spelljamming helm to move the ship. The mysterious race known as the Arcane is the sole manufacturer and distributor of spelljamming helms. Within the Dungeons & Dragons universe, the helms are a method of converting magical energy into motive power.
Fictional setting:
Gravity and air All bodies of a sufficiently large size have gravity. This gravity usually (but not always) exerts a force equal to the standard gravitational attraction on the surface of an Earth-sized planetary body. Gravity in the Spelljammer universe is also an exceptionally convenient force, and almost always works such that "down" orients itself in a manner most humanoids would find sensible.
Fictional setting:
All bodies of any size carry with them an envelope of air whenever they leave the surface of a planet or other stellar object. Unlike real-world astrophysics, this air envelope is not dispersed by the vacuum of space. These bubbles of air provide breathable atmosphere for varying lengths of time, but 3 months is considered "standard".
Crystal spheres A crystal sphere (also known as a crystal shell) is a gigantic spherical shell which contains an entire planetary system. Each sphere varies in size but typically they are twice the diameter of the orbit of the planet that is farthest from the sun or planet at the center of the sphere (the system's primary).
Fictional setting:
The surface of the sphere is called the "sphere wall" and separates the void of "wildspace" (within the sphere) from the "phlogiston" (that surrounds and flows outside the sphere). The sphere wall has no gravity and appears to be impossible to damage by any normal or magical means. Openings in the sphere wall called "portals" allow spelljamming ships or wildspace creatures to pass through and enter or exit from a crystal sphere. Portals can spontaneously open and close anywhere on the sphere wall. Magical spells (or magical items that reproduce their effects) can allow a portal to be located. Other magic can open a new portal or collapse an existing one. Ships or creatures passing through a portal when it closes may be cut in two.
Fictional setting:
Note that unlike the Ptolemaic system, the crystal spheres are not nested within each other.
Fictional setting:
Wildspace Wildspace is similar to the outer space of science fiction, with planets, asteroids and stars, but with different physics. Gravity is either absent or equal to that of Earth, and is directed towards the center of planet-sized bodies; on large objects in space such as spacecraft and enormous creatures, gravity is directed towards a flat plane running through the object's long axis, allowing characters to stand on the decks of ships.
Fictional setting:
The Phlogiston The phlogiston is essentially a big ocean of a unique element that is neither air, fire, water, nor earth. The phlogiston (also known as "the Flow") is a bright, extremely combustible gas-like medium that exists between the Crystal Spheres. A signature property of the substance is that it does not exist within the boundaries of a crystal sphere, to the degree that it cannot be brought into a crystal sphere by any known means up to and including the direct will of deities. Every crystal sphere floats in the phlogiston, very slowly bobbing up and down over time. Travel between Crystal Spheres is facilitated by the formation of "Flow rivers" — sections of the phlogiston which have a current and greatly reduce travel time. Travel through the "slow flow" (i.e. off the Flow rivers) is possible, but very dangerous.
Fictional setting:
The Spelljammer The Spelljammer is a legendary ship which looks like a gigantic manta ray, and houses an entire city on its back. All spacefarers (people who live in wildspace) have heard of the Spelljammer but very few have ever seen it themselves. It is this ship that gives its name to "spelljamming", "spelljamming helms" and anything else connected with spelljamming. The ship has been reported to have been seen in countless spheres for as long as records go back. Even some groundlings (people who live on planets that have very little or no commerce with spelljamming communities) have legends about it. There are hundreds of conflicting legends about this ship, and a mythology has developed about the ship that is similar to the legends surrounding The Flying Dutchman.
Fictional setting:
As a living thing (although it does not consume any matter, it does absorb heat and light through its ventral (or under) side and uses them to produce air and food for its inhabitants), the Spelljammer has a complex life cycle and means of procreation. Normally the ship has no captain and wanders the cosmos seemingly aimlessly. When the Spelljammer has a captain, obtained through another complex process, it will create Smalljammers (miniature versions of the Spelljammer) that go forth as its spawn. Apparently there can only be one Spelljammer at any one time. One Smalljammer will mature into a full Spelljammer ship if its predecessor is ever destroyed.
Fictional setting:
Races Alien races inhabiting the Spelljammer universe include humans, dwarves, xenophobic beholders, rapacious neogi, militant giff (humanoid hippopotami), centaur-like dracons, hubristic elf armadas, spacefaring orcs called "scro", the mysterious Arcane, thri-kreen insectoids, and bumbling tinker gnomes. Illithids were another major race, but were presented as more mercantile and less overtly evil than in other D&D settings. The Monstrous Compendium series added many more minor races. The simian hadozee were also introduced into the setting, and later incorporated into the 3.5 rules in the supplemental book Stormwrack.
Official products:
Spelljammer has acted as the official campaign setting for multiple Dungeons & Dragons roleplaying adventure modules, sourcebooks and accessories.
In other media:
Comics Fifteen comics set in the Spelljammer universe were published by DC Comics between September 1990 and November 1991 with the creative team of Barbara Kesel, Michael Collins and Dan Panosian (p. 21). The Spelljammer comics also use Jasmine, a winged human character originally introduced in Forgotten Realms comics, as one of the lead characters.
In other media:
Novels Six novels set in the Spelljammer universe were published by TSR, before TSR was incorporated into Wizards of the Coast. The novels were interconnected and formed "The Cloakmaster Cycle". The novels tell the story of Teldin Moore, a 'groundling' farmer on Krynn who has a powerful and apparently cursed magical cloak that was given to him. He then ends up on a quest, which takes him first into wildspace and then away from his home sphere to distant crystal spheres. The series showcases the wonders and perils of the Spelljammer universe. The novels are now out of print.
In other media:
Beyond the Moons by David Cook (July 1991) (ISBN 1-56076-153-9)
Into the Void by Nigel Findley (October 1991) (ISBN 1-56076-154-7)
The Maelstrom's Eye by Roger E. Moore (May 1992) (ISBN 1-56076-344-2)
The Radiant Dragon by Elaine Cunningham (November 1992) (ISBN 1-56076-346-9)
The Broken Sphere by Nigel Findley (May 1993) (ISBN 1-56076-596-8)
The Ultimate Helm by Russ T. Howard (September 1993) (ISBN 1-56076-651-4)
Computer games The only Spelljammer computer game ever produced was Spelljammer: Pirates of Realmspace, published by SSI in 1992.
In other media:
In 2002 a team of freelance game modification developers created "The Arcane Space Tileset" for Neverwinter Nights. This tileset included Spelljamming ships, space and atmospheric terrains, along with monsters and NPCs, all set within the Spelljammer Campaign setting.
In other media:
Web series Legends of the Multiverse (2022) is an official actual play streaming series broadcast on the Dungeons & Dragons channels which premiered on April 27, 2022 and is set in the Spelljammer campaign setting. It stars Deborah Ann Woll, B. Dave Walters, Gina Darling, Meagan Kenreck, and Todd Kenreck. It will also feature guest stars such as Brennan Lee Mulligan, Aabria Iyengar, Ginny Di, Anna Prosser, Deejay Knight, Emme Montgomery, Travis McElroy, SungWon Cho, and Jim Zub.
Reception:
In the January 1990 edition of Games International (Issue 12), James Wallis was not a fan of the initial release, Spelljammer: AD&D Adventures in Space, finding inconsistencies in the combat rules, saying, "The cumulative effect of these inconsistencies is to make space combat unplayable." He did find the background "imaginative and consistent, but unfortunately there is little of it." Although he admired the production values of the components, he found the book disorganized to the point of "disarray and confusion." He concluded by giving the game a poor rating of only 2 out of 5, saying, "Spelljammer may score well physically but fails mentally [...] Scavenging AD&D players who enjoy stripping tasty ideas from the carcasses of dying games may find it of interest, but I cannot recommend it to anyone else." Alexander Sowa, for CBR in October 2021, commented that Spelljammer should be one of the classic settings Wizards of the Coast brings back for the 5th Edition. Sowa wrote, "players have been asking for Spelljammer to be introduced to 5e since the release of the first setting sourcebook. Wizards tossed them a bone with the Dream of The Blue Veil spell added in Tasha's Cauldron of Everything, but it's not a replacement for the niche Spelljammer previously filled. It's not just a way to travel between different campaign settings; it's a simultaneous fulfillment of sci-fi and fantasy dreams of exploration, venturing deep into unknown depths and contending with the strange and otherworldly". Spelljammer was #3 on The Gamer's 2022 "The 8 Best Dungeons & Dragons Settings Ever" list — the article states that "Spelljammer is one of the most unique settings on this list, with endless possibilities brought up in its planet-hopping realms. The Spelljammer setting can almost best be surmised as 'pirates meets sci-fi fantasy' with its blend of magical worlds and galaxy-traversing galleons". In a separate article for The Gamer, in February 2022, Paul DiSalvo commented that "while D&D's second edition was home to a wide range of Spelljammer books including several adventure modules, the setting has since faded into obscurity, with it not being prominently featured within the game's third, fourth, and fifth editions". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MapWindow GIS**
MapWindow GIS:
MapWindow GIS is a lightweight open-source GIS (mapping) desktop application and set of programmable mapping components.
History:
MapWindow GIS and its associated MapWinGIS ActiveX Control were originally developed by Daniel P. Ames and a team of professors and students at Utah State University in 2002-2003 as part of a research project with the Idaho National Laboratory in Idaho Falls, Idaho, as a GIS mapping framework for watershed modelling tools in conjunction with source water assessments conducted by the laboratory. In 2004 the first open source version of the software was released as MapWindow GIS 3.0, after which it was adopted by the United States Environmental Protection Agency as the primary GIS platform for its BASINS (Better Assessment Science Integrating Point and Nonpoint Sources) watershed analysis and modeling software. As the project has grown, much of the day-to-day management of the code and associated website has been handled by Paul Meems and a group of volunteer user-developers from around the world.
Conferences:
The 1st International MapWindow GIS Users and Developers Conference was held in Orlando, Florida from March 31 - April 2, 2010 and included 60 participants from multiple countries and government, private, and educational institutions.
The 2nd International MapWindow GIS and DotSpatial Conference included the newly developed DotSpatial GIS programming environment and was held in San Diego, California, June 13–15, 2011.
The 2012 International Open Source GIS Conference was held in Velp, The Netherlands, from July 9–11, 2012. This was the first joint meeting of MapWindow GIS users and developers together with the broader regional open source GIS community.
Later MapWindow GIS users and developers meetings have largely been held in conjunction with other communities and conferences including the American Water Resources Association, American Geophysical Union, OSGEO, and the International Environmental Modelling & Software Society.
Technical details:
MapWindow GIS is distributed as an open source application under the Mozilla Public License. Because the source is open, MapWindow GIS can be reprogrammed to perform different or more specialized tasks. There are also plug-ins available to expand compatibility and functionality.
Technical details:
The core component of MapWindow GIS is the MapWinGIS ActiveX Control. This component (MapWinGIS.ocx) is written in the C++ programming language and includes all of the core mapping, data management, and data analysis functions required by the MapWindow GIS desktop application. A user manual for the MapWinGIS ActiveX Control, written by Daniel P. Ames and Dinesh Grover, was released in 2007. The MapWindow GIS desktop application is built upon Microsoft .NET technology. Originally written using Visual Basic .NET, the application was re-written using C# .NET.
Technical details:
Project source code was originally hosted and maintained on a local SVN server at www.mapwindow.org. Later it was ported to the Microsoft open source code repository, codeplex.com. Presently all project code is hosted on GitHub.
Updates for MapWindow GIS are regularly released by a group of student and volunteer developers.
MapWindow GIS in scientific literature:
MapWindow GIS has found much adoption in the water resources and modelling community. Some example research projects using the software include:
MapWindow GIS and its watershed delineation tool were used to generate terrain curvature networks by Burgholzer.
Fujisawa used MapWindow GIS in conjunction with Google Earth for data preparation.
MapWindow GIS was extended with several plug-ins and custom datasets for the United Nations WaterBase project.
MapWindow GIS was extended with a large number of watershed analysis plugins and was completely rebranded as BASINS by the United States Environmental Protection Agency.
A "cost efficient" modelling tool for distributed hydrologic modelling was created by Lei et al.
Fan et al. coupled a large scale water quality model with MapWindow GIS. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**George Santangelo**
George Santangelo:
George M. Santangelo is an American genomicist and data scientist. He is the director of the Office of Portfolio Analysis at the National Institutes of Health.
Education and career:
Santangelo received his bachelor's degree from the University of Pennsylvania, and his Ph.D. from Yale University. In 2011, he was appointed as director of the newly formed Office of Portfolio Analysis at the National Institutes of Health. Santangelo oversees a team of analysts, data scientists, and software developers to enable data-driven decision-making.
Selected works:
Hoppe, Travis; Litovitz, Aviva; Willis, Kristine; Meseroll, Rebecca; Perkins, Matthew; Hutchins, B. Ian; Davis, Alison; Lauer, Michael; Valantine, Hannah; Anderson, James; Santangelo, George (October 9, 2019). "Topic choice contributes to the lower rate of NIH awards to African-American/black scientists". Science Advances. 5 (10): eaaw7238. Bibcode:2019SciA....5.7238H. doi:10.1126/sciadv.aaw7238. PMC 6785250. PMID 31633016.
Menon, B. B.; Sarma, N. J.; Pasula, S.; Deminoff, S. J.; Willis, K. A.; Barbara, K. E.; Andrews, B.; Santangelo, G. M. (April 2005). "Reverse recruitment: The Nup84 nuclear pore subcomplex mediates Rap1/Gcr1/Gcr2 transcriptional activation". Proceedings of the National Academy of Sciences. 102 (16): 5749–5754. Bibcode:2005PNAS..102.5749M. doi:10.1073/pnas.0501768102. ISSN 0027-8424. PMC 556015. PMID 15817685.
Santangelo, G. M. (March 2006). "Glucose Signaling in Saccharomyces cerevisiae". Microbiology and Molecular Biology Reviews. 70 (1): 253–282. doi:10.1128/MMBR.70.1.253-282.2006. ISSN 1092-2172. PMC 1393250. PMID 16524925.
Hutchins, B. Ian; Yuan, Xin; Anderson, James M.; Santangelo, George M. (September 2016). Vaux, David L (ed.). "Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level". PLOS Biology. 14 (9): e1002541. doi:10.1371/journal.pbio.1002541. ISSN 1545-7885. PMC 5012559. PMID 27599104. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Thermal management (electronics)**
Thermal management (electronics):
All electronic devices and circuitry generate excess heat and thus require thermal management to improve reliability and prevent premature failure. The amount of heat output is equal to the power input, if there are no other energy interactions. There are several techniques for cooling including various styles of heat sinks, thermoelectric coolers, forced air systems and fans, heat pipes, and others. In cases of extreme low environmental temperatures, it may actually be necessary to heat the electronic components to achieve satisfactory operation.
Overview:
Thermal resistance of devices This is usually quoted as the thermal resistance from junction to case of the semiconductor device. The units are °C/W. For example, a heatsink rated at 10 °C/W will get 10 °C hotter than the surrounding air when it dissipates 1 Watt of heat. Thus, a heatsink with a low °C/W value is more efficient than a heatsink with a high °C/W value.
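As a rough illustration of this degrees-per-watt arithmetic, the sketch below (Python, with purely illustrative resistance values, not data for any particular device) sums a series chain of thermal resistances to estimate how far a junction rises above ambient.

```python
# Minimal sketch of the degrees-per-watt arithmetic described above.
# All resistance values are illustrative assumptions.

def junction_temperature(t_ambient_c, power_w, resistances_c_per_w):
    """Junction temperature for a series chain of thermal resistances.

    Each resistance (junction-to-case, case-to-sink, sink-to-ambient, ...)
    adds power_w * R degrees Celsius on top of the ambient temperature.
    """
    return t_ambient_c + power_w * sum(resistances_c_per_w)

if __name__ == "__main__":
    # A heatsink rated at 10 C/W dissipating 1 W runs 10 C above ambient,
    # as stated in the text.
    print(junction_temperature(25.0, 1.0, [10.0]))            # -> 35.0

    # Hypothetical stack: 1.5 C/W junction-to-case, 0.5 C/W interface,
    # 4.0 C/W heatsink-to-air, dissipating 20 W at 25 C ambient.
    print(junction_temperature(25.0, 20.0, [1.5, 0.5, 4.0]))  # -> 145.0
```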
Overview:
Given two semiconductor devices in the same package, a lower junction-to-case resistance (RθJ-C) indicates a more efficient device. However, when comparing two devices with different die-free package thermal resistances (e.g. DirectFET MT vs wirebond 5x6mm PQFN), their junction to ambient or junction to case resistance values may not correlate directly to their comparative efficiencies. Different semiconductor packages may have different die orientations, different copper (or other metal) mass surrounding the die, different die attach mechanics, and different molding thickness, all of which could yield significantly different junction to case or junction to ambient resistance values, and could thus obscure overall efficiency numbers.
Overview:
Thermal time constants A heatsink's thermal mass can be considered as a capacitor (storing heat instead of charge) and the thermal resistance as an electrical resistance (giving a measure of how fast stored heat can be dissipated). Together, these two components form a thermal RC circuit with an associated time constant given by the product of R and C. This quantity can be used to calculate the dynamic heat dissipation capability of a device, in an analogous way to the electrical case.
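A minimal sketch of that analogy, assuming illustrative values for the thermal resistance R and heat capacity C, is shown below; it reproduces the exponential approach of the temperature rise to its steady-state value, governed by the time constant τ = R·C.

```python
import math

# Minimal sketch of the thermal RC analogy described above.
# The resistance and heat-capacity values are illustrative assumptions.

def temperature_rise(power_w, r_c_per_w, c_j_per_c, t_seconds):
    """Temperature rise above ambient of a lumped thermal RC network.

    Analogous to charging a capacitor through a resistor:
    dT(t) = P * R * (1 - exp(-t / (R * C))).
    """
    tau = r_c_per_w * c_j_per_c          # thermal time constant in seconds
    return power_w * r_c_per_w * (1.0 - math.exp(-t_seconds / tau))

if __name__ == "__main__":
    # Hypothetical heatsink: 5 C/W, 200 J/C of thermal mass -> tau = 1000 s.
    for t in (0, 500, 1000, 3000, 10000):
        print(t, round(temperature_rise(10.0, 5.0, 200.0, t), 1))
    # The rise approaches the steady-state value P * R = 50 C.
```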
Overview:
Thermal interface material A thermal interface material or mastic (aka TIM) is used to fill the gaps between thermal transfer surfaces, such as between microprocessors and heatsinks, in order to increase thermal transfer efficiency.
Some thermal interface materials are engineered to be anisotropic, with a higher thermal conductivity in the Z-direction (through the joint) than in the XY-plane.
Applications:
Personal computers Due to recent technological developments and public interest, the retail heat sink market has reached an all-time high. In the early 2000s, CPUs were produced that emitted more and more heat than earlier models, escalating requirements for quality cooling systems.
Applications:
Overclocking has always meant greater cooling needs, and the inherently hotter chips meant more concerns for the enthusiast. Efficient heat sinks are vital to overclocked computer systems because the higher a microprocessor's cooling rate, the faster the computer can operate without instability; generally, faster operation leads to higher performance. Many companies now compete to offer the best heat sink for PC overclocking enthusiasts. Prominent aftermarket heat sink manufacturers include: Aero Cool, Foxconn, Thermalright, Thermaltake, Swiftech, and Zalman.
Applications:
Soldering Temporary heat sinks were sometimes used while soldering circuit boards, preventing excessive heat from damaging sensitive nearby electronics. In the simplest case, this means partially gripping a component using a heavy metal crocodile clip or similar clamp. Modern semiconductor devices, which are designed to be assembled by reflow soldering, can usually tolerate soldering temperatures without damage. On the other hand, electrical components such as magnetic reed switches can malfunction if exposed to higher powered soldering irons, so this practice is still very much in use.
Applications:
Batteries For batteries used in electric vehicles, nominal battery performance is usually specified for working temperatures somewhere in the +20 °C to +30 °C range; however, the actual performance can deviate substantially from this if the battery is operated at higher or, in particular, lower temperatures, so some electric cars have heating and cooling for their batteries.
Methodologies:
Heat sinks Heat sinks are widely used in electronics and have become essential to modern microelectronics. In common use, it is a metal object brought into contact with an electronic component's hot surface—though in most cases, a thin thermal interface material mediates between the two surfaces. Microprocessors and power handling semiconductors are examples of electronics that need a heat sink to reduce their temperature through increased thermal mass and heat dissipation (primarily by conduction and convection and to a lesser extent by radiation). Heat sinks have become almost essential to modern integrated circuits like microprocessors, DSPs, GPUs, and more.
Methodologies:
A heat sink usually consists of a metal structure with one or more flat surfaces to ensure good thermal contact with the components to be cooled, and an array of comb or fin like protrusions to increase the surface contact with the air, and thus the rate of heat dissipation.
A heat sink is sometimes used in conjunction with a fan to increase the rate of airflow over the heat sink. This maintains a larger temperature gradient by replacing warmed air faster than convection would. This is known as a forced air system.
Methodologies:
Cold plate Placing a conductive thick metal plate, referred to as a cold plate, as a heat transfer interface between a heat source and a cold flowing fluid (or any other heat sink) may improve the cooling performance. In such an arrangement, the heat source is cooled under the thick plate instead of being cooled in direct contact with the cooling fluid. It has been shown that the thick plate can significantly improve the heat transfer between the heat source and the cooling fluid by conducting the heat current in an optimal manner. The two most attractive advantages of this method are that it requires no additional pumping power and no extra heat transfer surface area, which is quite different from fins (extended surfaces).
Methodologies:
Principle Heat sinks function by efficiently transferring thermal energy ("heat") from an object at high temperature to a second object at a lower temperature with a much greater heat capacity. This rapid transfer of thermal energy quickly brings the first object into thermal equilibrium with the second, lowering the temperature of the first object, fulfilling the heat sink's role as a cooling device. Efficient function of a heat sink relies on rapid transfer of thermal energy from the first object to the heat sink, and the heat sink to the second object.
Methodologies:
The most common design of a heat sink is a metal device with many fins. The high thermal conductivity of the metal combined with its large surface area result in the rapid transfer of thermal energy to the surrounding, cooler, air. This cools the heat sink and whatever it is in direct thermal contact with. Use of fluids (for example coolants in refrigeration) and thermal interface material (in cooling electronic devices) ensures good transfer of thermal energy to the heat sink. Similarly, a fan may improve the transfer of thermal energy from the heat sink to the air.
Methodologies:
Construction and materials A heat sink usually consists of a base with one or more flat surfaces and an array of comb or fin-like protrusions to increase the heat sink's surface area contacting the air, and thus increasing the heat dissipation rate. While a heat sink is a static object, a fan often aids a heat sink by providing increased airflow over the heat sink—thus maintaining a larger temperature gradient by replacing the warmed air more quickly than passive convection achieves alone—this is known as a forced-air system.
Methodologies:
Ideally, heat sinks are made from a good thermal conductor such as silver, gold, copper, or aluminum alloy. Copper and aluminum are among the most-frequently used materials for this purpose within electronic devices. Copper (401 W/(m·K) at 300 K) is significantly more expensive than aluminum (237 W/(m·K) at 300 K) but is also roughly twice as efficient as a thermal conductor. Aluminum has the significant advantage that it can be easily formed by extrusion, thus making complex cross-sections possible. Aluminum is also much lighter than copper, offering less mechanical stress on delicate electronic components. Some heat sinks made from aluminum have a copper core as a trade off. The heat sink's contact surface (the base) must be flat and smooth to ensure the best thermal contact with the object needing cooling. Frequently a thermally conductive grease is used to ensure optimal thermal contact; such compounds often contain colloidal silver. Further, a clamping mechanism, screws, or thermal adhesive hold the heat sink tightly onto the component, but specifically without pressure that would crush the component.
Methodologies:
Performance Heat sink performance (including free convection, forced convection, liquid cooled, and any combination thereof) is a function of material, geometry, and overall surface heat transfer coefficient. Generally, forced convection heat sink thermal performance is improved by increasing the thermal conductivity of the heat sink materials, increasing the surface area (usually by adding extended surfaces, such as fins or foam metal) and by increasing the overall area heat transfer coefficient (usually by increasing fluid velocity, for example by adding fans, pumps, etc.).
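To make the dependence on area and heat transfer coefficient concrete, here is a hedged first-order sketch (all numbers are illustrative assumptions, and it is not a substitute for the calculators or CFD tools discussed below): the sink-to-ambient resistance falls roughly as 1/(h·A·η), where h is the heat transfer coefficient, A the exposed fin area, and η a fin efficiency.

```python
# First-order sketch of the relationship described above: performance
# improves with surface area and with the overall heat transfer
# coefficient. All numbers are illustrative assumptions.

def sink_to_ambient_resistance(h_w_per_m2k, area_m2, fin_efficiency=1.0):
    """Approximate sink-to-ambient thermal resistance in C/W.

    R ~= 1 / (h * A * eta): raising the heat transfer coefficient h
    (faster airflow) or the exposed area A (more fins) lowers R.
    """
    return 1.0 / (h_w_per_m2k * area_m2 * fin_efficiency)

if __name__ == "__main__":
    # Natural convection (h ~ 10 W/m^2K) vs. forced convection (h ~ 50),
    # for a hypothetical heatsink with 0.05 m^2 of fin area.
    print(round(sink_to_ambient_resistance(10.0, 0.05, 0.9), 2))  # ~2.22 C/W
    print(round(sink_to_ambient_resistance(50.0, 0.05, 0.9), 2))  # ~0.44 C/W
```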
Methodologies:
Online heat sink calculators from companies such as Novel Concepts, Inc. and at www.heatsinkcalculator.com can estimate forced and natural convection heat sink performance. For more complex heat sink geometries, or heat sinks with multiple materials or multiple fluids, computational fluid dynamics (CFD) analysis is recommended.
Convective air cooling This term describes device cooling by the convection currents of the warm air being allowed to escape the confines of the component to be replaced by cooler air. Since warm air normally rises, this method usually requires venting at the top or sides of the casing to be effective.
Forced air cooling If there is more air being forced into a system than being pumped out (due to an imbalance in the number of fans), this is referred to as a 'positive' airflow, as the pressure inside the unit is higher than outside.
Methodologies:
A balanced or neutral airflow is the most efficient, although a slightly positive airflow can result in less dust build-up if filtered properly. Heat pipes A heat pipe is a heat transfer device that uses evaporation and condensation of a two-phase "working fluid" or coolant to transport large quantities of heat with a very small difference in temperature between the hot and cold interfaces. A typical heat pipe consists of a sealed hollow tube made of a thermoconductive metal such as copper or aluminium, and a wick to return the working fluid from the evaporator to the condenser. The pipe contains both saturated liquid and vapor of a working fluid (such as water, methanol or ammonia), all other gases being excluded. The most common heat pipe for electronics thermal management has a copper envelope and wick, with water as the working fluid. Copper/methanol is used if the heat pipe needs to operate below the freezing point of water, and aluminum/ammonia heat pipes are used for electronics cooling in space.
Methodologies:
The advantage of heat pipes is their great efficiency in transferring heat. The effective thermal conductivity of heat pipes can be as high as 100,000 W/(m·K), in contrast to copper, which has a thermal conductivity of around 400 W/(m·K).
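A back-of-the-envelope comparison of those figures: the temperature drop needed to push the same heat load through a solid copper bar versus a heat pipe of the same cross-section, treating the heat pipe as a conductor with the quoted effective conductivity. The power and geometry values below are illustrative assumptions.

```python
# Worked comparison of the conductivities quoted above, using the
# one-dimensional conduction relation dT = Q * L / (k * A).
# Heat load and geometry are illustrative assumptions.

def conduction_delta_t(power_w, length_m, k_w_per_mk, area_m2):
    """Temperature difference across a 1-D conductor: dT = Q * L / (k * A)."""
    return power_w * length_m / (k_w_per_mk * area_m2)

if __name__ == "__main__":
    q, length, area = 50.0, 0.2, 5e-5   # 50 W over 20 cm, 50 mm^2 cross-section
    print(round(conduction_delta_t(q, length, 400.0, area), 1))     # copper bar: ~500 C
    print(round(conduction_delta_t(q, length, 100000.0, area), 1))  # heat pipe: ~2 C
```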
Methodologies:
Peltier cooling plates Peltier cooling plates take advantage of the Peltier effect to create a heat flux at the junction of two different electrical conductors by applying an electric current. This effect is commonly used for cooling electronic components and small instruments. In practice, many such junctions may be arranged in series to increase the effect to the amount of heating or cooling required.
Methodologies:
There are no moving parts, so a Peltier plate is maintenance free. It has a relatively low efficiency, so thermoelectric cooling is generally used for electronic devices, such as infra-red sensors, that need to operate at temperatures below ambient. For cooling these devices, the solid state nature of the Peltier plates outweighs their poor efficiency. Thermoelectric junctions are typically around 10% as efficient as the ideal Carnot cycle refrigerator, compared with 40% achieved by conventional compression cycle systems.
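Those percentages can be put into rough numbers using the ideal refrigeration coefficient of performance, COP_Carnot = Tc/(Th - Tc); the sketch below simply scales the ideal figure by the quoted 10% and 40%. The operating temperatures are illustrative assumptions.

```python
# Rough arithmetic behind the efficiency comparison above: the ideal
# (Carnot) refrigeration coefficient of performance, scaled by the ~10%
# figure quoted for thermoelectric junctions and the ~40% figure for
# compression-cycle systems. Temperatures are illustrative assumptions.

def carnot_cop(t_cold_k, t_hot_k):
    """Ideal refrigeration coefficient of performance: Tc / (Th - Tc)."""
    return t_cold_k / (t_hot_k - t_cold_k)

if __name__ == "__main__":
    t_cold, t_hot = 263.0, 300.0        # cooling a sensor to -10 C in a 27 C ambient
    ideal = carnot_cop(t_cold, t_hot)   # ~7.1
    print(round(ideal, 2))
    print(round(0.10 * ideal, 2))       # thermoelectric, ~10% of Carnot -> ~0.71
    print(round(0.40 * ideal, 2))       # compression cycle, ~40% of Carnot -> ~2.84
```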
Methodologies:
Synthetic jet air cooling A synthetic jet is produced by a continual flow of vortices that are formed by alternating brief ejection and suction of air across an opening such that the net mass flux is zero. A unique feature of these jets is that they are formed entirely from the working fluid of the flow system in which they are deployed, and can produce a net momentum in the flow of a system without net mass injection to the system.
Methodologies:
Synthetic jet air movers have no moving parts and are thus maintenance free. Due to their high heat transfer coefficients and high reliability but lower overall flow rates, synthetic jet air movers are usually used at the chip level rather than at the system level for cooling. However, depending on the size and complexity of the system, they can at times be used for both.
Methodologies:
Electrostatic fluid acceleration An electrostatic fluid accelerator (EFA) is a device which pumps a fluid such as air without any moving parts. Instead of using rotating blades, as in a conventional fan, an EFA uses an electric field to propel electrically charged air molecules. Because air molecules are normally neutrally charged, the EFA has to create some charged molecules, or ions, first. Thus there are three basic steps in the fluid acceleration process: ionize air molecules, use those ions to push many more neutral molecules in a desired direction, and then recapture and neutralize the ions to eliminate any net charge.
Methodologies:
The basic principle has been understood for some time, but only in recent years have there been developments in the design and manufacture of EFA devices that may allow them to find practical and economical applications, such as in micro-cooling of electronics components.
Methodologies:
Recent developments More recently, heat sinks made from high thermal conductivity materials such as synthetic diamond and boron arsenide are being researched to provide better cooling. Boron arsenide has been reported to have high thermal conductivity and high thermal boundary conductance with gallium nitride transistors, and thus better performance than diamond and silicon carbide cooling technologies. Also, some heat sinks are constructed of multiple materials with desirable characteristics, such as phase change materials, which can store a great deal of energy due to their heat of fusion.
Thermal simulation of electronics:
Thermal simulations give engineers a visual representation of the temperature and airflow inside the equipment. Thermal simulations enable engineers to design the cooling system; to optimise a design to reduce power consumption, weight and cost; and to verify the thermal design to ensure there are no issues when the equipment is built. Most thermal simulation software uses computational fluid dynamics techniques to predict the temperature and airflow of an electronics system.
Thermal simulation of electronics:
Design Thermal simulation is often required to determine how to effectively cool components within design constraints. Simulation enables the design and verification of the thermal design of the equipment at a very early stage and throughout the design of the electronic and mechanical parts. Designing with thermal properties in mind from the start reduces the risk of last minute design changes to fix thermal issues.
Thermal simulation of electronics:
Using thermal simulation as part of the design process enables the creation of an optimal and innovative product design that performs to specification and meets customers' reliability requirements.
Thermal simulation of electronics:
Optimise It is easy to design a cooling system for almost any equipment if there is unlimited space, power and budget. However, the majority of equipment will have a rigid specification that leaves a limited margin for error. There is constant pressure to reduce power requirements, system weight and the cost of parts, without compromising performance or reliability. Thermal simulation allows experimentation with optimisation, such as modifying heatsink geometry or reducing fan speeds in a virtual environment, which is faster, cheaper and safer than physical experiment and measurement.
Thermal simulation of electronics:
Verify Traditionally, the first time the thermal design of the equipment is verified is after a prototype has been built. The device is powered up, perhaps inside an environmental chamber, and temperatures of the critical parts of the system are measured using sensors such as thermocouples. If any problems are discovered, the project is delayed while a solution is sought. A change to the design of a PCB or enclosure part may be required to fix the issue, which will take time and cost a significant amount of money. If thermal simulation is used as part of the design process of the equipment, thermal design issues will be identified before a prototype is built. Fixing an issue at the design stage is both quicker and cheaper than modifying the design after a prototype is created.
Thermal simulation of electronics:
Software A wide range of software tools is designed for thermal simulation of electronics, including 6SigmaET, Ansys' IcePak and Mentor Graphics' FloTHERM.
Telecommunications environments:
Thermal management measures must be taken to accommodate high heat release equipment in telecommunications rooms. Generic supplemental/spot cooling techniques, as well as turnkey cooling solutions developed by equipment manufacturers are viable solutions. Such solutions could allow very high heat release equipment to be housed in a central office that has a heat density at or near the cooling capacity available from the central air handler.
Telecommunications environments:
According to Telcordia GR-3028, Thermal Management in Telecommunications Central Offices, the most common way of cooling modern telecommunications equipment internally is by utilizing multiple high-speed fans to create forced convection cooling. Although direct and indirect liquid cooling may be introduced in the future, the current design of new electronic equipment is geared towards maintaining air as the cooling medium. A well-developed "holistic" approach is required to understand current and future thermal management problems. Space cooling on one hand, and equipment cooling on the other, cannot be viewed as two isolated parts of the overall thermal challenge. The main purpose of an equipment facility's air-distribution system is to distribute conditioned air in such a way that the electronic equipment is cooled effectively. The overall cooling efficiency depends on how the air distribution system moves air through the equipment room, how the equipment moves air through the equipment frames, and how these airflows interact with one another. High heat-dissipation levels rely heavily on a seamless integration of equipment-cooling and room-cooling designs.
Telecommunications environments:
The existing environmental solutions in telecommunications facilities have inherent limitations. For example, most mature central offices have limited space available for large air duct installations that are required for cooling high heat density equipment rooms. Furthermore, steep temperature gradients develop quickly should a cooling outage occur; this has been well documented through computer modeling and direct measurements and observations. Although environmental backup systems may be in place, there are situations when they will not help. In a recent case, telecommunications equipment in a major central office was overheated, and critical services were interrupted by a complete cooling shut down initiated by a false smoke alarm.
Telecommunications environments:
A major obstacle for effective thermal management is the way heat-release data is currently reported. Suppliers generally specify the maximum (nameplate) heat release from the equipment. In reality, equipment configuration and traffic diversity will result in significantly lower heat release numbers.
Equipment cooling classes As stated in GR-3028, most equipment environments maintain cool front (maintenance) aisles and hot rear (wiring) aisles, where cool supply air is delivered to the front aisles and hot air is removed from the rear aisles. This scheme provides multiple benefits, including effective equipment cooling and high thermal efficiency.
Telecommunications environments:
In the traditional room cooling class utilized by the majority of service providers, equipment cooling would benefit from air intake and exhaust locations that help move air from the front aisle to the rear aisle. The traditional front-bottom to top-rear pattern, however, has been replaced in some equipment with other airflow patterns that may not ensure adequate equipment cooling in high heat density areas.
Telecommunications environments:
A classification of equipment (shelves and cabinets) into Equipment-Cooling (EC) classes serves the purpose of classifying the equipment with regard to the cooling air intake and hot air exhaust locations, i.e., the equipment airflow schemes or protocols.
Telecommunications environments:
The EC-Class syntax provides a flexible and important “common language.” It is used for developing Heat-Release Targets (HRTs), which are important for network reliability, equipment and space planning, and infrastructure capacity planning. HRTs take into account physical limitations of the environment and environmental baseline criteria, including the supply airflow capacity, air diffusion into the equipment space, and air-distribution/equipment interactions. In addition to being used for developing the HRTs, the EC Classification can be used to show compliance on product sheets, provide internal design specifications, or specify requirements in purchase orders.
Telecommunications environments:
The Room-Cooling classification (RC-Class) refers to the way the overall equipment space is air-conditioned (cooled). The main purpose of RC-Classes is to provide a logical classification and description of legacy and non-legacy room-cooling schemes or protocols in the central office environment. In addition to being used for developing HRTs, the RC-classification can be used in internal central office design specifications or in purchase orders.
Telecommunications environments:
Supplemental-Cooling classes (SC-Class) provide a classification of supplemental cooling techniques. Service providers use supplemental/spot-cooling solutions to supplement the cooling capacity (e.g., to treat occurrences of “hot spots”) provided by the general room-cooling protocol as expressed by the RC-Class.
Telecommunications environments:
Economic impact Energy consumption by telecommunications equipment currently accounts for a high percentage of the total energy consumed in central offices. Most of this energy is subsequently released as heat to the surrounding equipment space. Since most of the remaining central office energy use goes to cool the equipment room, the economic impact of making the electronic equipment energy-efficient would be considerable for companies that use and operate telecommunications equipment. It would reduce capital costs for support systems, and improve thermal conditions in the equipment room. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Clover (telescope)**
Clover (telescope):
Clover would have been an experiment to measure the polarization of the Cosmic Microwave Background. It was approved for funding in late 2004, with the aim of having the full telescope operational by 2009. The project was jointly run by Cardiff University, Oxford University, the Cavendish Astrophysics Group and the University of Manchester.
History:
The Clover Project was meant to consist of two independent telescopes, one operating at 95 GHz with the other operating at both 150 and 225 GHz. Both telescopes were to be sited near the CBI site in the Atacama Desert, Chile. The two telescope receivers would have been large format focal plane arrays of either 100 or 200 bolometric detectors. The aim of the experiment was to measure the B-mode polarization of the Cosmic Microwave Background between multipoles of 20 and 1000, down to a sensitivity limited by the foreground contamination due to lensing. This would have allowed the detection of primordial gravitational waves in the universe so long as the ratio of tensor perturbations (caused by gravitational waves) to scalar perturbations (caused by density fluctuations in the early universe) was greater than 0.01. It was hoped that the telescope would have spent around 2 years observing a total of around 1,000 square degrees of sky, made up of several patches of sky where polarized foregrounds (synchrotron and thermal dust emission) are at a minimum. Clover was canceled in March 2009 as STFC was unable to provide the requested additional funds of 2.55 million pounds to finish the project. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Small nucleolar RNA SNORA69**
Small nucleolar RNA SNORA69:
In molecular biology, Small nucleolar RNA SNORA69 (also known as U69) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a "guide RNA".
Small nucleolar RNA SNORA69:
ACA69 was originally cloned from HeLa cells and belongs to the H/ACA box class of snoRNAs, as it has the predicted hairpin-hinge-hairpin-tail structure, has the conserved H/ACA-box motifs, and is found associated with the GAR1 protein. snoRNA ACA69 is predicted to guide the pseudouridylation of U36 of 18S and U69 of 5.8S ribosomal RNA (rRNA). Pseudouridylation is the isomerisation of the nucleoside uridine to its isomeric form pseudouridine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Brass ring**
Brass ring:
A brass ring is a small grabbable ring that a dispenser presents to a carousel rider during the course of a ride. Usually there are a large number of iron rings and one brass one, or just a few. It takes some dexterity to grab a ring from the dispenser as the carousel rotates. The iron rings can be tossed at a target as an amusement. Typically, getting the brass ring gets the rider some sort of prize when presented to the operator. The prize often is a free repeat ride.
Brass ring:
The figurative phrase to grab the brass ring is derived from this device.
Background:
Brass ring devices were developed during the heyday of the carousel in the U.S.—about 1880 to 1921. At one time, the riders on the outside row of horses were often given a little challenge, perhaps as a way to draw interest or build excitement, more often as an enticement to sit on the outside row of horses which frequently did not move up and down and were therefore less enticing by themselves. Most rings were iron, but one or two per ride were made of brass; if a rider managed to grab a brass ring, it could be redeemed for a free ride.
Background:
References to a literal brass ring go back into the 1890s. As the carousel began to turn, rings were fed to one end of a wooden arm that was suspended above the riders. Riders hoped that the timing of the carousel rotation (and the rise-and-fall motion of their seat, when movable seats were included in the outer circle of the carousel) would place them within reach of the dispenser when a ring (and preferably a brass ring) was available.
Background:
Another system had mostly steel rings of no value and one brass ring, and a target into which the rings were to be thrown (for example the Santa Cruz Beach Boardwalk Looff Carousel uses a clown target shown in the photo above, and the Knoebel's Amusement Resort Grand Carousel uses a lion target), discouraging retention of the rings as souvenirs.
Cultural references:
"Grabbing the brass ring" or getting a "shot at the brass ring" also means striving for the highest prize (especially a championship ring in sports), or living life to the fullest. It is not clear when the phrase came into wide use but has been found in dictionaries as far back as the late 19th century.The term has been used as the title of at least two books.
Cultural references:
The final scene of The Catcher in the Rye features a carousel with a brass ring, which Holden Caulfield's sister Phoebe reaches for. The brass ring is symbolic of adulthood, the transition to which is a preoccupation of Holden throughout the book.
The Four Seasons song "Beggin'" references "now that big brass ring is a shade of black", in reference to having missed an important opportunity.
Dispatch song “Flying Horses” references stealing a ring from The Flying Horses Merry Go Round located in Martha’s Vineyard Massachusetts.
The Barenaked Ladies song "Get Back Up" references "getting fitted for a new brass ring", in reference to continuing to strive for success.
"Brass Rings And Daydreams" is a song written by Richard M. Sherman and Robert B. Sherman for the 1978 motion picture musical The Magic of Lassie and performed by Debby Boone.
In professional wrestling, Tyson Kidd and Cesaro formed an alliance and called themselves "The Brass Ring Club" in 2015.
Cultural references:
At the climax of the film Sneakers, all of the main characters have the opportunity to receive anything they want in exchange for handing over a crucial piece of technology to the NSA. When River Phoenix's character requests something with no monetary value, he is admonished by Robert Redford's character to think bigger, as "this is the brass ring." In the big blowup argument in the film Fools Rush In, the main character, Alex Whitman, says, "Look, this is the brass ring. I've worked my entire life for this kind of opportunity and I am not gonna throw it all away just because one night I put a five-dollar ring on your finger in front of Elvis as a witness!" The International Association of Amusement Parks and Attractions (IAAPA) presents "Brass Ring Awards" annually, to recognize achievement in the global attractions industry and to "honor excellence in food and beverage, games and retail, human resources, live entertainment, marketing, new products, and exhibits." The Grateful Dead song "Crazy Fingers" includes the lyrics "Midnight on a carousel ride / Reaching for the gold ring down inside," in which the brass ring is called a "gold ring" by means of poetic license.
Cultural references:
The American Music Club song "If I Had a Hammer" (from the Mercury album) includes the lyrics "The love cry of the traveling man goes / "No one knows who I am, / but I'm as priceless as a brass ring / that's losing the heat from your hand."..."
Brass ring carousels today:
Although there are many carousels extant, only a handful of carousels still have brass rings.
Rings removed The following carousels are no longer running rings: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aubertite**
Aubertite:
Aubertite is a mineral with the chemical formula CuAl(SO4)2Cl·14H2O. It is colored blue. Its crystals are triclinic pedial. It is transparent. It has vitreous luster. It is not radioactive. Aubertite is rated 2-3 on the Mohs Scale. The sample was collected by J. Aubert (born 1929), assistant director, National Institute of Geophysics, France, in the year 1961. Its type locality is Queténa Mine, Toki Cu deposit, Chuquicamata District, Calama, El Loa Province, Antofagasta Region, Chile. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Human uses of scorpions**
Human uses of scorpions:
Humans use scorpions both practically, for medicine, food, and pets, and symbolically, whether as gods, to ward off harm, or to associate a product or business with the evident power of the small but deadly animal.
Practical uses:
Medicine Short-chain scorpion toxins constitute the largest group of potassium (K+) channel-blocking peptides. An important physiological role of the KCNA3 channel, also known as KV1.3, is to help maintain large electrical gradients for the sustained transport of ions such as Ca2+ that controls T lymphocyte (T cell) proliferation. Thus KV1.3 blockers could be potential immunosuppressants for the treatment of autoimmune disorders (such as rheumatoid arthritis, inflammatory bowel disease, and multiple sclerosis).
Practical uses:
The venom of Uroplectes lineatus is clinically important in dermatology. Several scorpion venom toxins have been investigated for medical use: chlorotoxin, from the deathstalker scorpion (Leiurus quinquestriatus), blocks small-conductance chloride channels, while maurotoxin, from the venom of the Tunisian Scorpio maurus, blocks potassium channels.
The venom of Mesobuthus eupeus contains several antimicrobial peptides: meucin-13 and meucin-18 have extensive cytolytic effects on bacteria, fungi, and yeasts, while meucin-24 and meucin-25 selectively kill Plasmodium falciparum and inhibit the development of Plasmodium berghei, both malaria parasites, but do not harm mammalian cells.
Food Fried scorpion is traditionally eaten in Shandong, China.
As pets:
Scorpions are sometimes kept as pets, in the same way as other dangerous animals like snakes and tarantula spiders. Popular Science Monthly carried an article entitled "My pet scorpion" as early as 1899.
Symbolic uses:
Middle Eastern culture The scorpion is a significant animal culturally, appearing as a motif in art, especially in Islamic art in the Middle East. A scorpion motif is often woven into Turkish kilim flat-weave carpets, for protection from their sting. The scorpion is perceived both as an embodiment of evil and a protective force such as a dervish's powers to combat evil. In another context, the scorpion portrays human sexuality. Scorpions are used in folk medicine in South Asia, especially in antidotes for scorpion stings. One of the earliest occurrences of the scorpion in culture is its inclusion, as Scorpio, in the 12 signs of the Zodiac by Babylonian astronomers during the Chaldean period.
Symbolic uses:
In ancient Egypt, the goddess Serket was often depicted as a scorpion, one of several goddesses who protected the Pharaoh. Alongside serpents, scorpions are used to symbolize evil in the New Testament. In Luke 10:19 it is written, "Behold, I give unto you power to tread on serpents and scorpions, and over all the power of the enemy: and nothing shall by any means hurt you." Here, scorpions and serpents symbolize evil. Revelation 9:3 speaks of "the power of the scorpions of the earth." Western culture The scorpion with its powerful sting has been used as the name or symbol of various products and brands, including Italy's Abarth racing cars. In the Roman army, the scorpio was a torsion siege engine used to shoot a projectile. The British Army's FV101 Scorpion was an armoured reconnaissance vehicle or light tank in service from 1972 to 1994. It holds the Guinness world record for the fastest production tank. A version of the Matilda II tank, fitted with a flail to clear mines, was named the Matilda Scorpion.
Symbolic uses:
Several ships of the Royal Navy have been named HMS Scorpion, including an 18-gun sloop in 1803, a turret ship in 1863, and a destroyer in 1910.
Symbolic uses:
A hand- or forearm-balancing asana in modern yoga as exercise, with the back arched and one or both legs pointing forwards over the head, is called Scorpion pose; it belongs to yoga, a practice that originated in ancient India, was influential in classical Hinduism, and is currently becoming popular in the West. A variety of martial arts films and video games have been entitled Scorpion King. A Montesa scrambler motorcycle was named Scorpion. Scorpions have also appeared in Western art forms, including film and poetry: the surrealist filmmaker Luis Buñuel made symbolic use of scorpions in his 1930 classic L'Age d'or (The Golden Age), while Stevie Smith's last collection of poems was entitled Scorpion and other Poems.
Symbolic uses:
Other cultures Scorpions are among the many animals modelled in the art of the Moche culture of Peru. Mimbres artists in the south of New Mexico created painted ceramics of scorpions and many other symbolic and mythological animals on funerary bowls. A hole was ritually punched through the bottom of the bowl to "kill" it during a funeral. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Senior ice hockey**
Senior ice hockey:
Senior hockey refers to amateur or semi-professional ice hockey competition. There are no age restrictions for Senior players, who typically consist of those whose Junior eligibility has expired. Senior hockey leagues operate under the jurisdiction of Hockey Canada or USA Hockey. They are not affiliated in any way with professional hockey leagues. Many former professional players play Senior hockey after their pro careers are over. The top Senior AAA teams in Canada compete annually for the Allan Cup.
History:
From the beginning of the 1900s until the 1970s, Senior hockey was immensely popular across Canada, particularly in rural towns. At a time when most households didn't have a television and few hockey games were broadcast, local arenas were filled to capacity to watch the local team take on a rival.
The popularity of Senior hockey declined in the 1980s and 1990s. A number of long-running leagues and teams vanished. Today, many players choose to play organized recreational hockey, sometimes referred to as "commercial hockey." The popularity of the National Hockey League and Junior hockey has also supplanted Senior hockey in many towns across Canada.
Senior AAA hockey leagues:
Allan Cup Hockey (Ontario Sr. AAA); Allan Cup Hockey West (Alberta Sr. AAA)
Other leagues:
Canada: North Peace Hockey League, Highway Hockey League, Big 6 Hockey League, Qu'Appelle Valley Hockey League, Carillon Senior Hockey League, South Eastern Manitoba Hockey League, Western Ontario Athletic Association Senior Hockey League, Avalon East Senior Hockey League, Central West Senior Hockey League, Eastern Ontario Super Hockey League. United States: Great Lakes Hockey League, Mountain West Hockey League, Black Diamond Hockey League. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rewrite engine**
Rewrite engine:
In web applications, a rewrite engine is a software component that performs rewriting on URLs (Uniform Resource Locators), modifying their appearance. This modification is called URL rewriting. It is a way of implementing URL mapping or routing within a web application. The engine is typically a component of a web server or web application framework. Rewritten URLs (sometimes known as short, pretty or fancy URLs, search engine friendly - SEF URLs, or slugs) are used to provide shorter and more relevant-looking links to web pages. The technique adds a layer of abstraction between the files used to generate a web page and the URL that is presented to the outside world.
Usage:
Web sites with dynamic content can use URLs that generate pages from the server using query string parameters. These are often rewritten to resemble URLs for static pages on a site with a subdirectory hierarchy. For example, the URL to a wiki page with title Rewrite_engine might be:
http://example.com/w/index.php?title=Rewrite_engine
but can be rewritten as:
http://example.com/wiki/Rewrite_engine
A blog might have a URL that encodes the dates of each entry:
http://www.example.com/Blog/Posts.php?Year=2006&Month=12&Day=19
It can be altered like this:
http://www.example.com/Blog/2006/12/19/
which also allows the user to change the URL to see all postings available in December, simply by removing the text encoding the day '19', as though navigating "up" a directory:
http://www.example.com/Blog/2006/12/
A site can pass specialized terms from the URL to its search engine as a search term. This would allow users to search directly from their browser. For example, the URL as entered into the browser's location bar:
http://example.com/search term
will be URL-encoded by the browser before it makes the HTTP request. The server could rewrite this to:
http://example.com/search.php?q=search%20term
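The mapping from a "pretty" URL to an internal query-string URL can be expressed as a handful of pattern-based rules. Below is a minimal sketch in Python of such a rule table for the wiki and blog examples above; the rule patterns and the rewrite function are illustrative only and are not tied to any particular server or framework.

```python
import re

# Illustrative rewrite rules: each regular expression maps a "pretty" path
# to the internal URL (with query-string parameters) that actually serves it.
REWRITE_RULES = [
    (re.compile(r"^/wiki/(?P<title>[^/]+)$"),
     lambda m: f"/w/index.php?title={m.group('title')}"),
    (re.compile(r"^/Blog/(?P<y>\d{4})/(?P<m>\d{2})/(?P<d>\d{2})/$"),
     lambda m: f"/Blog/Posts.php?Year={m.group('y')}&Month={m.group('m')}&Day={m.group('d')}"),
]

def rewrite(path: str) -> str:
    """Return the internal URL for a pretty path, or the path unchanged."""
    for pattern, build in REWRITE_RULES:
        match = pattern.match(path)
        if match:
            return build(match)
    return path

print(rewrite("/wiki/Rewrite_engine"))  # -> /w/index.php?title=Rewrite_engine
print(rewrite("/Blog/2006/12/19/"))     # -> /Blog/Posts.php?Year=2006&Month=12&Day=19
```

A real rewrite engine applies rules of this general shape before the request reaches the application, so the underlying script continues to receive ordinary query-string parameters.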
Benefits and drawbacks:
There are several benefits to using URL rewriting: The links are "cleaner" and more descriptive, improving their "friendliness" to both users and search engines.
They prevent undesired "inline linking", which can waste bandwidth.
Benefits and drawbacks:
The site can continue to use the same URLs even if the underlying technology used to serve them is changed (for example, switching to a new blogging engine). There can, however, be drawbacks as well; if a user wants to modify a URL to retrieve new data, URL rewriting may hinder the construction of custom queries due to the lack of named variables. For example, it may be difficult to determine the date from the following format:
http://www.example.com/Blog/06/04/02/
In this case, the original query string was more useful, since the query variables indicated month and day:
http://www.example.com/Blog/Posts.php?Year=06&Month=04&Day=02
Web frameworks:
Many web frameworks include URL rewriting, either directly or through extension modules.
Apache HTTP Server has URL rewriting provided by the mod_rewrite module.
URL Rewrite is available as an extension to Microsoft IIS.
Ruby on Rails has built-in URL rewriting via Routes.
Jakarta Servlet has extendable URL rewriting via the OCPsoft URLRewriteFilter and Tuckey UrlRewriteFilter.
Jakarta Server Faces has simplified URL rewriting via the PrettyFaces: URLRewriteFilter.
Django uses a regular-expressions-based system. This is not strictly URL rewriting since there is no script to 'rewrite' to, nor even a directory structure; but it provides the full flexibility of URL rewriting.
Java Stripes Framework has had integrated functionality since version 1.5.
Many Perl frameworks, such as Mojolicious and Catalyst, have this feature.
CodeIgniter has URL rewriting provided.
lighttpd has a mod_rewrite module.
nginx has a rewrite module. For example, it can generate a multi-link, multi-variable page from a URI like /f101,n61,o56,d/ifconfig, where individual parts such as f101 are expanded with the help of regular expressions into variables (here signifying FreeBSD 10.1-RELEASE), and so forth.
Hiawatha HTTP server has a URL Toolkit which supports URL rewriting.
Cherokee HTTP server supports regular-expression-based URL rewriting and redirections. From a software development perspective, URL rewriting can aid in code modularization and control flow, making it a useful feature of modern web frameworks. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rim (crater)**
Rim (crater):
The rim or edge of an impact crater is the part that extends above the height of the local surface, usually in a circular or elliptical pattern. In a more specific sense, the rim may refer to the circular or elliptical edge that represents the uppermost tip of this raised portion. If there is no raised portion, the rim simply refers to the inside edge of the curve where the flat surface meets the curve of the crater bottom.
Simple craters:
Smaller, simple craters retain rim geometries similar to the features of many craters found on the Moon and the planet Mercury.
Complex craters:
Large craters are those with a diameter greater than 2.3 km, and are distinguished by central uplifts within the impact zone. These larger (also called “complex”) craters can form rims up to several hundred meters in height.
Complex craters:
A process to consider when determining the exact height of a crater rim is that melt may have been pushed over the crest of the initial rim from the initial impact, thereby increasing its overall height. When combined with potential weathering due to atmospheric erosion over time, determining the average height of a crater rim can be somewhat difficult. It has also been observed that the slope along the excavated interior of many craters can facilitate a spur-and-gully morphology, including mass wasting events occurring due to slope instability and nearby seismic activity. Complex crater rims observed on Earth have a 5 to 8 times greater height-to-diameter ratio compared to those observed on the Moon, which can likely be attributed to the greater gravitational acceleration between the two planetary bodies that collide. Additionally, crater depth and the volume of melt produced in the impact are directly related to the gravitational acceleration between the two bodies. It has been proposed that “reverse faulting and thrusting at the final crater rim [is] one of the main contributing factors [to] forming the elevated crater rim”. When an impact crater is formed on a sloped surface, the rim will form in an asymmetric profile. As the impacted surface's angle of repose increases, the crater's profile becomes more elongate.
Classification:
The rim type classifications are full-rim craters, broken-rim craters, and depressions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sophorolipid**
Sophorolipid:
A sophorolipid is a surface-active glycolipid compound that can be synthesized by a selected number of non-pathogenic yeast species. They are potential bio-surfactants due to their biodegradability and low eco-toxicity.
Structure and properties:
Sophorolipids are glycolipids consisting of a hydrophobic fatty acid tail of 16 or 18 carbon atoms and a hydrophilic carbohydrate head, sophorose, a glucose-derived disaccharide with an unusual β-1,2 bond that can be acetylated on the 6′- and/or 6′′-positions. One terminal or subterminal hydroxylated fatty acid is β-glycosidically linked to the sophorose module. The carboxylic end of this fatty acid is either free (acidic or open form) or internally esterified at the 4′′- or, in some rare cases, at the 6′- or 6′′-position (lactonic form). The physicochemical and biological properties of sophorolipids are significantly influenced by the distribution of the lactonic vs. acidic forms produced in the fermentative broth. In general, lactonic sophorolipids are more efficient in reducing surface tension and are better antimicrobial agents, whereas acidic sophorolipids display better foaming properties. Acetyl groups can also lower the hydrophilicity of sophorolipids and enhance their antiviral and cytokine-stimulating effects.
Structure and properties:
Sophorolipids are produced by various non-pathogenic yeast species such as Candida apicola, Rhodotorula bogoriensis, Wickerhamiella domercqiae, and Starmerella bombicola. Recent research has shown that sophorolipids can be recovered during a fermentation using a gravity separator in a loop with the bioreactor, enabling the production of >770 g/L sophorolipid at a productivity of 4.24 g/L/h, some of the highest values seen in a fermentation process. Desirable properties of biosurfactants are biodegradability and low toxicity; examples include the sophorolipids produced by several yeasts belonging to Candida and the Starmerella clade, and the rhamnolipids produced by Pseudomonas aeruginosa.
Structure and properties:
Besides biodegradability, low toxicity, and high production potential, sophorolipids have high surface and interfacial activity. Sophorolipids are reported to lower the surface tension (ST) of water from 72 to 30–35 mN/m and the interfacial tension (IT) of water/hexadecane from 40 to 1 mN/m. In addition, sophorolipids are reported to function over wide ranges of temperature, pressure, and ionic strength; they also possess a number of other useful biological activities, including antimicrobial, virucidal, anticancer, and immunomodulatory properties.
Research:
A detailed and comprehensive literature review on the various aspects of sophorolipid production (e.g. producing micro-organisms, biosynthetic pathway, effect of medium components and other fermentation conditions, and downstream processing of sophorolipids) is available in the published work of Van Bogaert et al. This work also discusses potential applications of sophorolipids (and their derivatives) as well as the potential for genetically engineering strains to enhance sophorolipid yields. Researchers have focused on optimization of sophorolipid production in submerged fermentation, but some efforts have also investigated the possibility of sophorolipid production using solid-state fermentation (SSF). The production process can be significantly impacted by the specific properties of the carbon and oil substrates used, and several inexpensive alternatives to more traditional substrates have been investigated. These potential substrates include: biodiesel by-product streams, waste frying oil, restaurant waste oil, industrial fatty acid residues, mango seed fat, and soybean dark oil. The use of most of these substrates has resulted in lower yields compared to traditional fermentation substrates.
Chemical modifications of sophorolipids, and polysophorolipids:
To enhance the surfactant properties of natural sophorolipids, chemical modification methods have been actively pursued. Recently, researchers demonstrated the possibility of applying sophorolipids as building blocks via ring-opening metathesis polymerization for a new type of polymer, known as polysophorolipids, which shows promising potential in biomaterials applications. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Polly Sy**
Polly Sy:
Polly Wee Sy is a Filipino mathematician specializing in functional analysis. She is a professor emeritus of mathematics at the University of the Philippines Diliman, the former head of the mathematics department at the university, and the former president of the Southeast Asia Mathematical Society.
Polly Sy:
Sy has a bachelor's degree, master's degree, and Ph.D. in mathematics from the University of the Philippines Diliman, earned in 1974, 1977, and 1982 respectively. Her doctoral dissertation, Köthe duals and matrix transformations, was supervised by Singaporean mathematician Peng Yee Lee. She also has a second doctorate, a 1992 D.Sc. from Nagoya University. Sy chaired the mathematics department at the University of the Philippines Diliman twice, from 1994 to 1996 and 1999 to 2002, and served as president of the Southeast Asia Mathematical Society from 1998 to 1999. She became a full professor at the university in 2000, and retired to become a professor emeritus in 2019. In 1988 the Philippine National Academy of Science and Technology gave Sy their Outstanding Young Scientist Award and in 1992 they gave her their Science Prize. In 2013 the Institute of Mathematics of the University of the Philippines Diliman held a workshop in honor of Sy's 60th birthday. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multivariate ENSO index**
Multivariate ENSO index:
The multivariate ENSO index, abbreviated as MEI, is a method used to characterize the intensity of an El Niño Southern Oscillation (ENSO) event. Given that ENSO arises from a complex interaction of a variety of climate systems, MEI is regarded as the most comprehensive index for monitoring ENSO since it combines analysis of multiple meteorological and oceanographic components.
Overview:
MEI is determined as the first principal component of six different parameters: sea level pressure, zonal and meridional components of the surface wind, sea surface temperature, surface air temperature, and cloudiness, using data from the International Comprehensive Ocean-Atmosphere Data Set (ICOADS). MEI is calculated twelve times per year for each “sliding bi-monthly season”, characterized as January–February, February–March, March–April, and so on. Large positive MEI values indicate the occurrence of El Niño conditions, while large negative MEI values indicate the occurrence of La Niña conditions.
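Since the MEI is defined as the leading principal component of six co-varying fields, a rough sketch of the computation can be given with NumPy. The data below are random placeholders standing in for bi-monthly anomalies of the six components; this is not the official MEI procedure or ICOADS data, only an illustration of extracting a first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder anomalies for one sliding bi-monthly season: rows are years,
# columns are the six components (sea level pressure, zonal wind,
# meridional wind, SST, surface air temperature, cloudiness).
data = rng.standard_normal((70, 6))

# Standardize each variable, then project onto the leading eigenvector
# of the covariance matrix (the first principal component).
z = (data - data.mean(axis=0)) / data.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))  # eigenvalues ascending
leading = eigvecs[:, -1]                                     # largest-eigenvalue vector
index = z @ leading                                          # one index value per year

print(index[:5])  # strongly positive ~ El Niño-like, strongly negative ~ La Niña-like
```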
Extended MEI:
While the National Oceanic and Atmospheric Administration (NOAA) has recorded MEI values from 1950 to the present, various researchers have cited the need for data before 1950 in order to better characterize typical ENSO behavior versus unusual occurrences that may be a result of climate change. According to some sources, it is inadvisable to attempt to calculate MEI before 1950 given that environmental measurements were unreliable during the World Wars and there was a revolution in virtually all meteorological measurement methods on ships during the 1940s. However, a module known as “Extended MEI” or “MEI.ext” has been created that estimates MEI values from as far back as 1871. This was accomplished by using reconstructed data on sea level pressure and sea surface temperature, the two components thought to be most influential to determining MEI. Plots comparing MEI and MEI.ext values have shown that data from both methods are highly correlated, supporting the accuracy and effectiveness of MEI.ext.
Alternate indexes:
Southern Oscillation Index The Southern Oscillation Index (SOI) is calculated based on the sea level pressure difference between Tahiti and Darwin, Australia. Despite being used frequently in ENSO studies, it is not considered as reliable as MEI given that it only takes into account one environmental variable.
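For comparison, the SOI reduces to a single standardized pressure difference. One widely used formulation (the Troup SOI) scales the Tahiti–Darwin difference by its climatological mean and standard deviation; the sketch below assumes that formulation and uses made-up pressure values purely for illustration.

```python
import numpy as np

def troup_soi(tahiti_slp: float, darwin_slp: float, base_diffs: np.ndarray) -> float:
    """Standardized Tahiti-Darwin sea level pressure difference (Troup SOI).

    base_diffs holds historical monthly differences for the climatological
    base period; the values used below are placeholders, not observations.
    """
    diff = tahiti_slp - darwin_slp
    return 10.0 * (diff - base_diffs.mean()) / base_diffs.std()

base = np.array([0.8, 1.5, -0.3, 2.1, 0.2, -1.0, 1.3, 0.9])  # hypothetical base period
print(troup_soi(1012.3, 1010.1, base))  # positive values lean toward La Niña conditions
```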
Niño 3.4 SST Similar to SOI, Niño 3.4 SST uses one parameter – sea surface temperature – to characterize ENSO. The Niño 3.4 SST region consists of temperature measurements from between 5° N – 5° S and 120° – 170° W.
Coupled ENSO Index The Coupled ENSO Index (CEI) uses a combination of both the SOI and Niño 3.4 SST to account for both an atmospheric and oceanic component.
Alternate indexes:
Proxy-based ENSO Index Developed by Braganza, et al., 2009, this index uses coral, tree ring and ice core data to characterize ENSO events from 1525 to 1982. The proxy ENSO index covers a wide area across the Pacific, and includes data from the western and central Pacific, New Zealand, and subtropical North America. Although the proxy ENSO index covers a large time scale of over four centuries, it shows high correlation (> 40%) with the SOI, Niño 3.4 SST and CEI, indicating its accuracy and utility. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Apple IIe**
Apple IIe:
The Apple IIe (styled as Apple //e) is the third model in the Apple II series of personal computers produced by Apple Computer. The e in the name stands for enhanced, referring to the fact that several popular features were now built-in that were formerly only available as upgrades or add-ons in earlier models. Improved expandability combined with the new features made for a very attractive general-purpose machine to first-time computer shoppers. As the last surviving model of the Apple II computer line before discontinuation, and having been manufactured and sold for nearly 11 years with relatively few changes, the IIe earned the distinction of being the longest-lived computer in Apple's history.
History:
Apple Computer planned to discontinue the Apple II series after the introduction of the Apple III in 1980; the company intended to clearly establish market segmentation by designing the Apple III to appeal to the business market, leaving the Apple II for home and education users. Management believed that "once the Apple III was out, the Apple II would stop selling in six months", cofounder Steve Wozniak later said. By the time IBM released the rival IBM PC in 1981, the Apple II's technology was already four years old. In September 1981 InfoWorld reported—below the PC's announcement—that Apple was secretly developing three new computers "to be ready for release within a year": Lisa, Macintosh, and "Diana". Describing the last as a software-compatible Apple II replacement—"A 6502 machine using custom LSI" and a simpler motherboard—it said that Diana "was ready for release months ago" but decided to improve the design to better compete with the Xerox 820. "Now it appears that when Diana is ready for release, it will offer features and a price that will make the Apple II uncompetitive", the magazine wrote. "Apple's plans to phase out the Apple II have also been delayed by complications in the design of the Apple III", the article also said. After the Apple III initially struggled, management decided in 1981 that the further continuation of the Apple II was in the company's best interest. After 3+1⁄2 years of the Apple II Plus, essentially at a standstill, came the introduction of a new Apple II model — the Apple IIe (codenamed "Diana" and "Super II"). The Apple IIe was released in January 1983, the successor to the Apple II Plus. The Apple IIe was the first Apple computer with a custom ASIC chip, which reduced much of the old discrete IC-based circuitry to a single chip. This change resulted in reducing the cost and size of the motherboard. Some of the hardware features of the Apple III (e.g. bank-switched memory) were borrowed in the design of the Apple IIe, and some from incorporating the Apple II Plus Language card. The culmination of these changes led to increased sales and greater market share of home, education, and small business use.
New features:
One of the most notable improvements of the Apple IIe is the addition of a full ASCII character set and keyboard. The most important addition is the ability to input and display lower-case letters. Other keyboard improvements include four-way cursor control and standard editing keys (Del and Tab ↹), two special Apple modifier keys (Open and Solid Apple), and a safe off-to-side relocation of the Reset key. The auto-repeat function (any key held down to repeat same character continuously) is now automatic, no longer requiring the REPT key found on the keyboards of previous models.
New features:
The machine came standard with 64 KB RAM, with the equivalent of a built-in Apple Language Card in its circuitry, and had a new special "Auxiliary slot" (replacing slot 0, though electronically mapped to slot 3 for compatibility with earlier third-party 80-column cards) for adding more memory via bank-switching RAM cards. Through this slot it also includes built-in support for an 80-column text display on monitors (with the addition of a plug-in 1K memory card, via bank-switching of 40 columns) and could be easily doubled to 128 KB RAM by alternatively plugging in Apple's Extended 80-Column Text Card. As time progressed, even more memory could be added through third-party cards using the same bank-switching slot or, alternatively, general-purpose slot cards that addressed memory 1 byte at a time (i.e. Slinky RAM cards). A new ROM diagnostic routine could be invoked to test the motherboard for faults and also test its main bank of memory.
New features:
The Apple IIe lowered production costs and improved reliability by merging the function of several off-the-shelf ICs into single custom chips, reducing total chip count to 31 (previous models used 120 chips). The IIe also switched to using newer single-voltage 64x1 DRAM chips instead of the triple-voltage 16x1 DRAM in the II/II+. For this reason the motherboard design is much cleaner and runs cooler as well, with enough room to add a pin-connector for an (optional) external numeric keypad. Also added was a backport-accessible DE-9 joystick connector, making it far easier for users to add and remove game and input devices (previous models requiring plugging the joystick/paddles directly into a 16-pin DIP socket on the motherboard; the IIe retained this connector for backwards compatibility). Also improved were port openings for expansion cards. Rather than cutout V-shaped slot openings as in the Apple II and II Plus, the IIe has a variety of different-sized openings, with thumb-screw holes, to accommodate mounting interface cards with DB-xx and DE-xx connectors (removable plastic covers filled the cutouts if not used).
New features:
Although the lower IC count improved reliability over previous Apple II models, Apple still retained the practice of socketing all ICs so that servicing and replacement could be performed more easily. Later-production IIe models had the RAM soldered to the system board rather than socketed.
Despite the hardware changes, the IIe maintained a high degree of backwards compatibility with the previous models, allowing most hardware and software from those systems to be used. Apple provided technical information on the IIe to hundreds of developers before its release, and claimed that, as a result, 85 to 90% of Apple II software worked with it.
Reception:
BYTE wrote in February 1983 that the IIe was "like having an Apple II with all the extras built in ... with a variety of exciting new features and capabilities" for about the same price as the Apple II. It found the computer to be highly compatible with the Apple II and praised the quality of the documentation for developers and beginners. The review concluded, "Congratulations, Apple Computer, you've produced another winner". InfoWorld's reviewers, Apple II Plus owners for four years, wished that the IIe's price were lower but stated that it "does give you more for your money, however". They also found compatibility to be very high, and concluded that "we are generally pleased with the changes Apple has provided with the IIe". Creative Computing said in December 1984 that the IIe and IIc were the best home computers with prices above $500, with the IIe better for those wanting expansion cards, color graphics, and educational and entertainment software. The magazine also chose the IIe as the best educational computer above $1000, citing Apple's strong early commitment to the market and large number of third-party education-related peripherals.
Specifications:
Microprocessor: 6502 or 65C02 running at 1.023 MHz; 8-bit data bus. Memory: 64 KB RAM built-in; 16 KB ROM built-in; expandable from 64 KB up to 1 MB RAM or more. Video modes: 40 and 80 columns text, white-on-black, with 24 lines; Low-Resolution: 40×48 (16 colors); High-Resolution: 280×192 (6 colors); Double-Low-Resolution: 80×48 (16 colors); Double-High-Resolution: 560×192 (16 colors). Audio: built-in speaker, 1-bit toggling; built-in cassette recorder interface, 1-bit toggle output, 1-bit zero-crossing input. Expansion: seven Apple II Bus slots (50-pin card-edge); Auxiliary slot (60-pin card-edge). Internal connectors: Game I/O socket (16-pin DIP); RF modulation output (4-pin Molex); numeric keypad (11-pin Molex). External connectors: NTSC composite video output (RCA connector); cassette in/out (two 1⁄8-inch mono phono jacks); joystick (DE-9).
Revisions:
In production from January 1983 to November 1993, the Apple IIe remained relatively unchanged through the years. However, there was one significant motherboard update, a major firmware update and two cosmetically revised machines. These revisions are detailed below.
Revisions:
Revision A motherboard At the time of the Apple IIe's introduction, and well into the first few months of production, this motherboard shipped with all units. Graphics modes supported are identical to, and limited to, those of the Apple II Plus before it (Double-Low/Double-High resolution is not supported). This revision logic board is also incompatible with a small number of newer plug-in expansion slot cards. Under a free service upgrade program, Apple advised owners of the revision A to have authorized dealers replace it with the revision B motherboard.
Revisions:
Revision B motherboard Shortly after the "Revision A" motherboard's release in 1983, engineers discovered that the bank-switching feature (which used a paralleled 64 KB of RAM on the Extended 80-Column Card or 1 KB to produce 80 columns using bank-switching) could also be used to produce a new graphics mode, Double-High-Resolution, which doubles the horizontal resolution and increases the number of colors from the 6 of standard High-Resolution to 16. In order to support this, some modifications had to be made to the motherboard, which became the Revision B. In addition to supporting Double-High-Resolution and a rarely used Double-Low-Resolution mode (see specifications above) it also added a special video signal accessible in slot 7.
Revisions:
Apple upgraded the motherboard free of charge. In later years Apple labeled newer IIe motherboards with a "-A" suffix once again, although in terms of functionality they were Revision B motherboards.
Revisions:
New case and keyboard In 1984, Apple revised the case and keyboard. The original IIe uses a case very similar to the Apple II Plus, painted and with Velcro-type clips to secure the lid with a strip of metal mesh along the edge to eliminate radio frequency interference. The new case is made of dyed plastic mold in a slightly darker beige with a simplified snap-case lid. The other noticeable change is a new keyboard, with more-professional-looking print on darker keycaps (small black lettering, versus large white print). This was the first cosmetic change.
Revisions:
Enhanced IIe In March 1985, the company replaced the original machine with a new revision called the Enhanced IIe. It is completely identical to the previous machine except for four chips changed on the motherboard (and a small "Enhanced" or "65C02" sticker placed over the keyboard power indicator). The purpose of the update was to make the Apple IIe more compatible with the Apple IIc (released the previous year) and, to a smaller degree, the Apple II Plus. This change involved a new processor, the CMOS-based 65C02 CPU, a new character ROM for the text modes, and two new ROM firmware chips. The 65C02 added more CPU instructions, the new character ROM added 32 special "MouseText" characters (which allowed the creation of a GUI-like display in text mode, similar to IBM code page 437), and the new ROM firmware fixed problems and speed issues with 80-column text, introduced the ability to use lowercase in Applesoft BASIC and Monitor, and contained some other smaller improvements (and fixes) in the latter two (including the return of the Mini-Assembler—which had vanished with the introduction of the II Plus firmware).
Revisions:
Although it affected compatibility with a small number of software titles (particularly those that did not follow Apple programming guidelines and rules, used illegal opcodes that were no longer available in the new CMOS-based CPU, or used the alternate 80-column character set that MouseText now occupied) a fair bit of newer software — mostly productivity applications and utilities — required the Enhanced chipset to run at all. An official upgrade kit, consisting of the four replacement chips and an "Enhanced" sticker badge, was made available for purchase to owners of the original Apple IIe. An alternative at the time, which some users chose as a cost-cutting measure, was to simply purchase their own 65C02 CPU and create (unlicensed and illegal) duplicates of the updated ROMs using re-rewritable EPROM chips. When Apple phased out the Enhancement kit in the early 1990s, this became the only available method for users looking to upgrade their IIe, and remains so right up until the present day. An Enhanced machine identifies itself with the name "Apple //e" on its start-up splash screen (as opposed to the less-specific "Apple ][").
Revisions:
Platinum IIe In January 1987 came the final revision of the Apple IIe, often referred to as the Platinum IIe, due to the color change of its case to the light-grey color scheme that Apple dubbed "Platinum". Changes to this revision were mostly cosmetic to modernize the look of the machine. Besides the color change, there was a new keyboard layout with built-in numeric keypad. The keyboard was changed to match the layout of the Apple IIGS, with the reset key moved above the ESC and '1' keys, the Open and Solid Apple modifier keys replaced by Command and Option and the power LED relocated above the numeric keypad. Gone were the recessed metal ID badges (showing the Apple logo and name, with "//e" beside it) replaced with a simpler "Apple IIe" silk screened on the case lid in the Apple Garamond font. A smaller Apple logo badge remained, which was moved to the right side of the case.
Revisions:
Internally, a (reduced in size) Extended 80-Column Card was factory-installed, making the Platinum IIe come standard with 128 KB RAM and Double-Hi-Res graphics enabled. The motherboard has a reduced chip count by merging the two system ROM chips into one and using higher-density memory chips so its 64 KB RAM can be made up of two (64 Kbx4) chips rather than eight (64 Kbx1) chips, bringing the count down to a total of 24 chips. A solder pad location on the motherboard, present since the original IIe, for (optionally) making presses of the "Shift" keys detectable in software, is now shorted by default so that the feature is always active. Next, in a move to reduce radio frequency interference when a joystick plugs into the motherboard's game I/O socket, filtering capacitors were added. While this made no difference to the average user, it had the negative effect of lowering the available bandwidth to the socket, which is often used by specialized devices for such purposes as measuring temperature, controlling a robotic device, or even simplistic networking for data transfer to another computer. In such cases, the specialized devices were rendered useless on the Platinum IIe unless the user removed the capacitors from the board.
Revisions:
There were no firmware changes present, and functionally the motherboard was otherwise identical to the Enhanced IIe. This final model of the Apple IIe (which was not sold in Europe) was quietly discontinued on November 15, 1993, which (following the discontinuation of the Apple IIGS a year earlier) effectively marked the end of the Apple II family line.
Apple IIe Card for Macintosh:
In March 1991, shortly after the release of the Macintosh LC series, Apple released the PDS slot-based Apple IIe Card for the Macintosh. By plugging this card into a Macintosh LC (and later models incorporating an LC PDS slot), through hardware and (some) software emulation, the Macintosh can run most software written for the 8-bit Apple IIe computer. This miniaturized computer on a card was made possible by a chip called the Gemini, which is heavily based on the Mega II, first used in the Apple IIGS computer to emulate the Apple IIe. The Gemini duplicates most of the functions of a standard Apple IIe, minus RAM, ROM, video generation and CPU.
Apple IIe Card for Macintosh:
Many of the built-in Macintosh peripherals can be "borrowed" by the card when in Apple II mode (i.e. extra RAM, 3.5-inch floppy, AppleTalk networking, clock, hard disk). It can run at either standard 1 MHz speed or an accelerated 1.9 MHz. As video is emulated using Macintosh QuickDraw routines, it is sometimes unable to keep up with the speed of a real Apple IIe, especially in the case of slower host machines. With a specialized Y-cable, the card can use an actual Apple 5.25, Apple UniDisk 3.5 and Apple II joystick or paddles. The Apple IIe Card is thought of as an Apple II compatibility solution or emulator rather than as an extension of the Apple II line.
International versions:
Regional differences The Apple IIe keyboard differed depending on what region of the world it was sold in. Sometimes the differences were very minor, such as extra local language characters and symbols printed on certain keycaps (e.g. French accented characters on the Canadian IIe such as "à", "é", "ç", etc., or the British Pound "£" symbol on the UK IIe) while other times the layout and shape of keys greatly differed (e.g. a European IIe). In order to access the local character set and keyboard layout, a user-accessible switch is found on the underside of the keyboard — flipping it will instantly switch the video output and keyboard input from the US character set to the local set. To support this, special double-capacity video and keyboard ROMs are used; in early motherboards they had to reside on a tiny circuit card that plugged into the socket. In some countries these localized IIes also support 50 Hz PAL video instead of the standard 60 Hz NTSC video and the different 220/240 volt power of that region. An equivalent of the "PAL color card" for the earlier Apple II Europlus model was integrated into the motherboard of these IIes, so that color graphics are available without the addition of a slot card.
International versions:
Another difference with the European IIe is that the Auxiliary slot is physically moved so that it sits in line with, and in front of, slot 3, preventing both slots from being used simultaneously for full-sized cards. A few third-party cards are affected by this; some European cards plug into both slots simultaneously and are thus unusable on American IIes, and some American cards do not fit into the case of European IIes because the European location of the Auxiliary slot leaves less room for them.
International versions:
European Platinum IIe (hybrid) During approximately the same time period that the Platinum IIe was being produced (1987), Apple released an alternative machine for the European market. It reused the original Apple IIe case mold and keyboard, but both were redyed in the platinum color scheme—including the metal ID badges which were recolored from dark brown to platinum, blending them into the case lid. Additionally, the sticker over the keyboard power indicator was labeled "65C02" rather than "Enhanced". Internally it used the same (newer) motherboard found in the Platinum IIe with reduced chip count. Notably absent is the numeric keypad and standardized keyboard layout found on the Platinum IIe.
International versions:
This cosmetic reissue of the classic IIe, with new motherboard and new coloring scheme, was only available in Europe, and therefore also had regional differences mentioned above. It has been rumored that a small number of these machines were made available in the Canadian and US markets, using the standard North American keyboard and motherboard (photographic evidence of this North American variant can be found in some period Apple II magazines). This hybrid platinum model is somewhat rare.
Upgrades:
Apple IIGS upgrade kit When the Apple IIGS computer was introduced in September 1986, Apple announced it would be making an upgrade kit for the IIe. The upgrade cost US$500, plus the trade-in of the user's existing Apple IIe motherboard and baseplate.
Upgrades:
Users would bring their Apple IIe machines in to an authorized dealership, where the 65C02-based IIe motherboard and lower baseboard of the case were swapped for a 65C816-based Apple IIGS motherboard with a new baseboard. New metal sticker ID badges replaced those on the front of the Apple IIe, rebranding the machine. Retained were the upper half of the IIe case, the keyboard, speaker, and power supply. Original IIGS motherboards (those produced between 1986 and mid-1989) had electrical connections for the IIe power supply and keyboard present, although only about half of the units produced had the physical plug connectors factory-soldered in.
Upgrades:
The upgrade kit proved unpopular as it did not include a mouse; the keyboard did not mimic all the features of the Apple Desktop Bus keyboard; and some cards designed for the Apple IIGS did not fit in the Apple IIe's slanted case. In the end, most users found they were not saving much, once they had to purchase a 3.5-inch floppy drive, analog RGB monitor, and mouse. For a time, the Western Design Center (the company that designed the 16-bit 65C816 processor used in the Apple IIGS) also sold a 16-bit 65C802 processor that was a drop-in, pin-compatible replacement for the 65C02 that made the full 16-bit 65C816 instruction set available to the IIe, but using the same 8-bit data bus as the 65C02; however, this upgrade was insufficient, by itself, to allow IIGS software to run, as IIGS software additionally required the IIGS's firmware and specialized hardware. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Thioacetic acid**
Thioacetic acid:
Thioacetic acid is an organosulfur compound with the molecular formula CH3C(O)SH. It is the sulfur analogue of acetic acid (CH3C(O)OH), as implied by the thio- prefix. It is a yellow liquid with a strong thiol-like odor. It is used in organic synthesis for the introduction of thiol groups (−SH) in molecules.
Synthesis and properties:
Thioacetic acid is prepared by the reaction of acetic anhydride with hydrogen sulfide:
(CH3CO)2O + H2S → CH3C(O)SH + CH3CO2H
It has also been produced by the action of phosphorus pentasulfide on glacial acetic acid, followed by distillation:
4 CH3CO2H + P4S10 → 4 CH3C(O)SH + P4S6O4
Thioacetic acid is typically contaminated by acetic acid.
The compound exists exclusively as the thiol tautomer, consistent with the strength of the C=O double bond. Reflecting the influence of hydrogen-bonding, the boiling point (93 °C) and melting point are, respectively, 20 and 75 K lower than those of acetic acid.
Reactivity:
Acidity With a pKa near 3.4, thioacetic acid is about 15 times more acidic than acetic acid. The conjugate base is thioacetate:
CH3C(O)SH ⇌ CH3C(O)S− + H+
In neutral water, thioacetic acid is fully ionized.
Reactivity:
Reactivity of thioacetate Most of the reactivity of thioacetic acid arises from the conjugate base, thioacetate. Salts of this anion, e.g. potassium thioacetate, are used to generate thioacetate esters. Thioacetate esters undergo hydrolysis to give thiols. A typical method for preparing a thiol from an alkyl halide using thioacetic acid proceeds in four discrete steps, some of which can be conducted sequentially in the same flask:
CH3C(O)SH + NaOH → CH3C(O)SNa + H2O
CH3C(O)SNa + RX → CH3C(O)SR + NaX (X = Cl, Br, I, …)
CH3C(O)SR + 2 NaOH → CH3CO2Na + RSNa + H2O
RSNa + HCl → RSH + NaCl
In an application that illustrates the use of its radical behavior, thioacetic acid is used with AIBN in a free radical mediated nucleophilic addition to an exocyclic alkene, forming a thioester. Reductive acetylation Salts of thioacetic acid such as potassium thioacetate can be used to convert nitroarenes to aryl acetamides in one step. This is particularly useful in the preparation of pharmaceuticals, e.g., paracetamol. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Beast Man**
Beast Man:
Beast Man is a supervillain in the toy line and cartoon series Masters of the Universe; the savage right-hand man of Skeletor, he can control many wild creatures and has brute strength.
Character history:
The 1980s The original design sketch of Beast Man by Mattel toy designer Mark Taylor was rejected by Mattel for looking too much like Chewbacca.
Character history:
Figure Beast Man was one of the first eight characters to be created for the Masters of the Universe toy line by Mattel in the early 1980s, and one of the first four to be completed and released (the other three being He-Man, Man-At-Arms and Skeletor). When the character was developed by Mattel, the name of Beast Man was reused from a figure in Mattel's earlier Flash Gordon toy line.
Character history:
In an early story for the toy line (then called 'The Fighting Foe Men') written by the first mini-comics author Don Glut, Beast Man was at one stage planned to be the line's main villain, but this role ended up being given back to Skeletor (prototype name De-Man), with Beast Man as his main henchman. The character was also known as 'Tree Man' in the original conceptual drawings by Mark Taylor. The figure came with red removable chest and arm armor, and was armed with a string whip, which was recycled from Mattel's earlier Big Jim toy line. Being one of the early figures to be continually re-issued with each successive wave, late examples of the figure can be found with a hard, solid head as opposed to the more common hollow, 'squeezable' rubber one. The 'solid head' version is far rarer.
Character history:
The Weapons Pak, which consisted of existing weapons and armor, mostly in different colours to their original, included yellow versions of Beast Man's torso and arm armor (as well as his whip, in its original black). As a result, many examples of the Beast Man figure found on the second-hand market can be found to be wearing this yellow version of the armor. Some sellers even promote this as a variant version of the figure, but in actuality it is just down to previous owners mixing the parts up over the years, as Beast Man figures only ever came wearing the red version of the armor.
Character history:
Later in the original toy-line's run, Beast Man also has the unfortunate distinction of being the character most often depicted being trapped and covered with evil green slime in the Evil Horde's Slime Pit, ending up as a slime-monster who willingly obeyed Hordak's commands.
Character history:
Filmation cartoon series Beast Man appears frequently in the toy line's accompanying cartoon series by Filmation, introduced in the first episode "Diamond Ray of Disappearance". Although toned down slightly for the younger-child friendly series, as were many of the characters, his cartoon portrayal is generally consistent with his mini-comic portrayal, although in some early episodes the show's writers added extra dimensions to his character in that despite his loyalty, he clearly resents being bossed around by Skeletor and secretly desires to someday overthrow his master. This side of his character is brought to the forefront in the episode "Prince Adam No More", in which he is finally thrown out of Skeletor's crew. Feeling useless without the power of Snake Mountain behind him, he sets out to prove his worth by capturing King Randor by himself and imprisoning him within Snake Mountain. Although he succeeded in capturing the King, when He-Man comes to the rescue he is subjected once again to Skeletor's wrath and admitted back into his ranks purely so Skeletor has someone to vent his anger on. But his final line in the episode "It's kind of nice to be home" indicates he now feels he belongs as Skeletor's underling, and subsequent episodes portray him mostly for comedy value, willingly succumbing to Skeletor's abuse and constantly bungling his schemes. Notable episodes for Beast Man in the show's later stages include "The Shadow of Skeletor" and "Orko's Return" which restore him to his original, darker portrayal, working independently and craftily to achieve his aims. The powers of Beast Man are shown to be effective in some earlier episodes, such as "Creatures From The Tar Swamp", "A Beastly Sideshow" and "The Dragon Invasion". His ability to control animals is not impeccable, however. For instance, he cannot control Cringer, Battle Cat, a dragon defending her young, or Panthor (although he does trick Cringer into a trap in "A Beastly Sideshow").
Character history:
Beast Man remained a fairly regular character throughout the run of the 1980s series, while some other earlier figures like Zodac, Mer-Man, Tri-Klops and Stratos gradually dropped out of sight when newer characters were released. He generally held his position as Skeletor's right-hand man throughout the cartoon's run, although in some later second-season episodes this position was occasionally filled by characters such as Clawful or Whiplash, as writers attempted to promote newer characters more prominently. Beast Man was often teamed up with Trap Jaw, one of the other earlier characters to remain consistent through the show's life.
Character history:
Beast Man's background is never mentioned in the cartoon, although the series bible states a surprising origin for him, explaining he was once a thuggish human from Earth called Biff Beastman who owned a farmyard on which he constantly abused the animals. He was recruited as chief technician on the spacecraft piloted by Marlena Glenn, which crashlanded on Eternia, but he wound up on Skeletor's homeworld of Infinita, where he was mutated into Beast Man and recruited by Skeletor. This origin story appears in a storybook entitled "New Champions of Eternia" but was unpopular with most of the show's writers and therefore excluded from the series.
Character history:
Other media Beast Man is included in numerous MOTU storybooks throughout the 1980s. One such range of storybooks is the UK Ladybird Books which reveals he was the leader of a tribe of Beast People from the Vine Jungle. Although this background has never been mentioned in any of the more prominent MOTU incarnations (except for the DC Comics, which features the "Beastmen"), it is generally a popular concept amongst fans that he hails from a jungle tribe.
Character history:
The 1987 live-action movie Beast Man also appears in the live action Masters of the Universe movie in 1987. Although credited as 'Beastman' (all one word), he is presented as "the Beastman" within the movie. Played by Tony Carroll, he is portrayed somewhat differently from other incarnations, appearing as a savage minion of Skeletor's, who merely growls instead of speaking. Although his lack of speech might indicate a lower level of intelligence than his usual depiction, the character is shown as capable of using high-tech weapons, working in a team and following orders. (He is also seen to carry a rather battered, simple sword at his waist, although is not seen actually using it in the movie.) When Skeletor incinerates Saurod for the broader team's failure, Beast Man clutches at his master's hand and makes a great show of begging for his life.
Character history:
Redesigned by European comic artist Moebius, Beast Man has the same concepts as his familiar version, yet at the same time has a noticeably different appearance, with longer, browner fur rather than his usual orange, no blue face markings and a possibly Samurai-influenced new design for his chest armor.
Although drawn to resemble his film counterpart, Beast Man of the movie's comic book adaptation has more in common with the cartoon and toy versions. He talks and even replaces Blade as Evil-Lyn's helper during the scene of her interrogating Kevin Corrigan.
Character history:
One of the original drafts from the script by David Odell (whose previous writing credits include Supergirl and The Dark Crystal) was reviewed in the third episode of the He-Man and She-Ra podcast, Masters Cast. The original draft included more time spent on Eternia and Snake Mountain, had Beast Man in a speaking role, and even revealed that He-Man's mother was originally from Earth, as per the character Queen Marlena from the Filmation animated series He-Man and the Masters of the Universe, thus linking the two planets.
2002 revamp and Mike Young Productions animated series:
Beast Man returns in the 2002 relaunch of the MOTU toy line and series. Possessing essentially the same design as the classic version of the character, the 2002 Beast Man is depicted as being a physically much larger creature with a hunched back. He is one of the largest revamped villains, rivaled in size only by Whiplash and Clawful. The figure's colour scheme is darkened down slightly, with deeper orange-red fur instead of the vintage figure's bright orange, and dark brown armor in place of the original's red. The figure's arm armor is now molded on (whereas the original's was removable), and now also sports similar armor on his lower legs. The action feature of this new version of the figure is his arms, which swing downwards when a button on his back is pressed. When the Four Horsemen originally designed the new version, they had planned for the figure to have a vocal 'roaring' feature, but this was eventually dropped due to production budget restraints.
2002 revamp and Mike Young Productions animated series:
His portrayal in the new cartoon series is much the same as the old, although in this incarnation he never shows any signs of desire to overthrow Skeletor, remaining permanently loyal to his "pal". Beast Man is the only character shown to fully trust Skeletor as a friend, and this trust (if not the respect) is returned in "The Mystery of Anwat Gar" when his master grants him a superweapon. Although he can still control all wild animals, he has difficulty controlling dragons, as is showcased in the episode "Dragon's Brood". Beast Man still carries a whip, but his relations with his animals are characterized by mutual affection.
2002 revamp and Mike Young Productions animated series:
Although his background is not mentioned in the show, the accompanying MVCreations comic series published an origin story for him (as Icons of Evil #1, written by Robert Kirkman and drawn by Tony Moore) in which he is revealed to originate from the Berserker Islands, where he first encounters Keldor before his transformation into Skeletor. He has remained subservient to Skeletor ever since he saved his life for the sake of recruiting him as his servant.
Masters of the Universe Classics:
The MOTU Classics toyline that started in 2008 includes short character biographies on the backs of the packaging. These merge elements from various incarnations of the franchise with some newly developed information to form a new, distinct "Classics" continuity. Additionally, there are several mini-comics and posters which further add to this new canon.
He-Man and the Masters of the Universe (2012):
The 2012 DC comic borrows a concept created for MOTU Classics, that there is an entire species of creatures called Beast Men. Beast Man is a member of that species. Born Raqquill Rquazz, Beast Man was banished to the Vine Jungle by his kind for his evil deeds where he met up with Keldor during a skirmish in the Berserker Islands. His animal-controlling abilities prove to be useful in Skeletor's campaign to take over Eternia.
Live Action He-Man movie:
Beast Man will appear in the live action He-Man movie. In the film he is a savage and powerful foe who brings something different to the table: he is a shapeshifter with the ability to turn into any type of beast, making him a useful spy for Skeletor as well as a formidable fighter.
Reception:
Comic Book Resources lists the character as part of He-Man: 15 Most Powerful Masters of the Universe. Beast Man was rated the 2nd most useless character.
Notes:
In the German audio-book series a character biography was given in the episode "Nacht über Castle Grayskull" (Night over Castle Grayskull). It is said that Beast Man was once an intelligent scientist. He found a powerful magic plate and changed it so that Skeletor could not use it any more. For that Skeletor tortured him and gave him a toxin to destroy his intelligence. This made Beast Man Skeletor's loyal, yet stupid, servant and slave. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Naugahyde**
Naugahyde:
Naugahyde is an American brand of artificial leather. Naugahyde is a composite of a knit fabric backing and an expanded polyvinyl chloride (PVC) coating. It was developed by Byron A. Hunter, a senior chemist at the United States Rubber Company, and is now manufactured and sold by the corporate spin-off Uniroyal Engineered Products LLC.
Its name, first used as a trademark in 1936, comes from the name of Naugatuck, Connecticut, where it was first produced. It is now manufactured in Stoughton, Wisconsin.
Uses:
The primary use for Naugahyde is as a substitute for leather in upholstery. In this application it is very durable and can be easily maintained by wiping with a damp sponge or cloth. Being a synthetic product, it is supplied in long rolls, allowing large sections of furniture to be covered seamlessly, unlike animal hides.
For several decades, General Motors used the material in several of its vehicles under the names "Cordaveen" and later "Madrid-grain vinyl" for Buick, "Morocceen" for Oldsmobile, and "Morrokide" for Pontiac, while Chevrolet did not use a brand name and simply listed it in sales brochures as "vinyl interior".
Marketing:
A marketing campaign of the 1960s and 1970s asserted humorously that Naugahyde was obtained from the skin of an animal called a "Nauga". The claim became an urban myth. The campaign emphasized that, unlike other animals, which must typically be slaughtered to obtain their hides, Naugas can shed their skin without harm to themselves. The Nauga doll, a squat, horned monster with a wide, toothy grin, became popular in the 1960s and is still sold today. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Energy transformation**
Energy transformation:
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work (e.g. lifting an object) or provide heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
Energy transformation:
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy:
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
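As a rough illustration of the near-lossless potential-to-kinetic conversion described above, the following sketch (with arbitrary example values, not figures from the article) checks that the gravitational potential energy lost by an object falling in a vacuum equals the kinetic energy it gains:

```python
# Illustrative check (not from the article): for an object falling in vacuum,
# the gravitational potential energy lost equals the kinetic energy gained,
# i.e. the conversion efficiency is essentially 100%.
g = 9.81      # gravitational acceleration, m/s^2 (assumed constant near Earth's surface)
m = 2.0       # mass in kg (arbitrary example value)
h = 10.0      # drop height in metres (arbitrary example value)

potential_energy_lost = m * g * h          # E_p = m*g*h
impact_speed = (2 * g * h) ** 0.5          # v = sqrt(2*g*h) from kinematics
kinetic_energy_gained = 0.5 * m * impact_speed ** 2

print(f"PE lost:   {potential_energy_lost:.2f} J")
print(f"KE gained: {kinetic_energy_gained:.2f} J")   # matches PE lost to rounding
```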
Limitations in the conversion of thermal energy:
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because thermal energy represents a particularly disordered form of energy; it is spread out randomly among many available states of a collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low entropy state (like a low-temperature substance, with correspondingly lower energy), without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else.
Limitations in the conversion of thermal energy:
Thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy between all possible states, and so it is not entirely convertible to a "useful" form, i.e. one that can do more than just affect temperature. The second law of thermodynamics states that the entropy of a closed system can never decrease. For this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content. Otherwise, only a part of that thermal energy may be converted to other kinds of energy (and thus useful work), because the remainder of the heat must be reserved to be transferred to a thermal reservoir at a lower temperature. The increase in entropy for this process is greater than the decrease in entropy associated with the transformation of the rest of the heat into other types of energy.
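The requirement to reserve part of the heat for a colder reservoir is commonly quantified by the Carnot limit, which the article does not name explicitly; the sketch below (with assumed reservoir temperatures) shows the best-case fraction of thermal energy convertible to work:

```python
# Hypothetical illustration of the thermal-conversion limit discussed above,
# using the Carnot efficiency (not mentioned by name in the article):
# at best, a heat engine operating between a hot and a cold reservoir can
# convert the fraction eta = 1 - T_cold / T_hot of the input heat into work.
def carnot_efficiency(t_hot_kelvin: float, t_cold_kelvin: float) -> float:
    """Maximum fraction of heat convertible to work between two reservoirs."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# Example values (assumed): a hot source at 800 K rejecting heat to surroundings at 300 K.
print(carnot_efficiency(800.0, 300.0))  # 0.625 -> at most 62.5% of the heat becomes work
```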
Limitations in the conversion of thermal energy:
In order to make energy transformation more efficient, it is desirable to avoid thermal conversion. For example, the efficiency of nuclear reactors, where the kinetic energy of the nuclei is first converted to thermal energy and then to electrical energy, lies at around 35%. By direct conversion of kinetic energy to electric energy, effected by eliminating the intermediate thermal energy transformation, the efficiency of the energy transformation process can be dramatically improved.
History of energy transformation:
Energy transformations in the universe over time are usually characterized by various kinds of energy, which have been available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy) by a triggering mechanism.
History of energy transformation:
Release of energy from gravitational potential: A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, in a process during which part of the gravitational potential energy is converted directly into heat. In Jupiter, Saturn, and Neptune, for example, such heat from the continued collapse of the planets' large gas atmospheres continues to drive most of the planets' weather systems. These systems, consisting of atmospheric bands, winds, and powerful storms, are only partly powered by sunlight; on Uranus, however, little of this process occurs. On Earth, a significant portion of the heat output from the planet's interior, estimated at a third to half of the total, is caused by the slow collapse of planetary materials to a smaller size, generating heat.
History of energy transformation:
Release of energy from radioactive potential: Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, which releases energy that was originally "stored" in heavy isotopes, such as uranium and thorium. This energy was stored at the time of the nucleosynthesis of these elements. This process uses the gravitational potential energy released from the collapse of Type II supernovae to create these heavy elements before they are incorporated into star systems such as the Solar System and the Earth. The energy locked into uranium is released spontaneously during most types of radioactive decay, and can be suddenly released in nuclear fission bombs. In both cases, a portion of the energy binding the atomic nuclei together is released as heat.
History of energy transformation:
Release of energy from hydrogen fusion potential: In a similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to one theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This resulted in hydrogen representing a store of potential energy which can be released by nuclear fusion. Such a fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into starlight. Considering the solar system, starlight, overwhelmingly from the Sun, may again be stored as gravitational potential energy after it strikes the Earth. This occurs in the case of avalanches, or when water evaporates from oceans and is deposited as precipitation high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity).
History of energy transformation:
Sunlight also drives many weather phenomena on Earth. One example is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as a chemical potential energy via photosynthesis, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. The release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism when these molecules are ingested, and catabolism is triggered by enzyme action.
History of energy transformation:
Through all of these transformation chains, the potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in several different ways for long periods between releases, as more active energy. All of these events involve the conversion of one kind of energy into others, including heat.
Examples:
Examples of sets of energy conversions in machines: A coal-fired power plant involves these energy transformations: (1) chemical energy in the coal is converted into thermal energy in the exhaust gases of combustion; (2) thermal energy of the exhaust gases is converted into thermal energy of steam through heat exchange; (3) kinetic energy of the steam is converted to mechanical energy in the turbine; and (4) mechanical energy of the turbine is converted to electrical energy by the generator, which is the ultimate output. In such a system, the first and fourth steps are highly efficient, but the second and third steps are less efficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency. Oil- and coal-fired stations are less efficient.
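A minimal sketch of the point made above: the overall efficiency of such a conversion chain is the product of the per-stage efficiencies, so the less efficient boiler and turbine stages dominate. The stage figures below are illustrative assumptions, not values from the article:

```python
# Rough sketch with assumed, illustrative stage efficiencies (not figures from
# the article): chaining conversions multiplies their efficiencies, which is
# why the thermal steps dominate a coal-fired plant's overall figure.
stages = {
    "combustion (chemical -> thermal)": 0.95,          # assumed
    "boiler heat exchange (thermal -> steam)": 0.85,    # assumed
    "turbine (steam -> mechanical)": 0.45,              # assumed
    "generator (mechanical -> electrical)": 0.98,       # assumed
}

overall = 1.0
for name, eta in stages.items():
    overall *= eta
    print(f"{name}: {eta:.0%}")

print(f"overall chain efficiency: {overall:.1%}")  # roughly a third with these numbers
```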
Examples:
In a conventional automobile, the following energy transformations occur: (1) chemical energy in the fuel is converted into kinetic energy of expanding gas via combustion; (2) kinetic energy of the expanding gas is converted to linear piston movement; (3) linear piston movement is converted to rotary crankshaft movement; (4) rotary crankshaft movement is passed into the transmission assembly; (5) rotary movement is passed out of the transmission assembly; (6) rotary movement is passed through a differential; (7) rotary movement is passed out of the differential to the drive wheels; and (8) rotary movement of the drive wheels is converted to linear motion of the vehicle. Other energy conversions: There are many different machines and transducers that convert one energy form into another. A short list of examples follows: ATP hydrolysis (chemical energy in adenosine triphosphate → mechanical energy); battery (chemical energy → electrical energy); electric generator (kinetic energy or mechanical work → electrical energy); electric heater (electric energy → heat); fire (chemical energy → heat and light); friction (kinetic energy → heat); fuel cell (chemical energy → electrical energy); geothermal power (heat → electrical energy); heat engines, such as the internal combustion engine used in cars or the steam engine (heat → mechanical energy); hydroelectric dam (gravitational potential energy → electrical energy); electric lamp (electrical energy → heat and light); microphone (sound → electrical energy); ocean thermal power (heat → electrical energy); photosynthesis (electromagnetic radiation → chemical energy); piezoelectrics (strain → electrical energy); thermoelectric (heat → electrical energy); wave power (mechanical energy → electrical energy); windmill (wind energy → electrical energy or mechanical energy). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sacubitril/valsartan**
Sacubitril/valsartan:
Sacubitril/valsartan, sold under the brand name Entresto, is a fixed-dose combination medication for use in heart failure. It consists of the neprilysin inhibitor sacubitril and the angiotensin receptor blocker valsartan. The combination is sometimes described as an "angiotensin receptor-neprilysin inhibitor" (ARNi).
Sacubitril/valsartan:
In 2016, the American College of Cardiology/American Heart Association Task Force recommended it as a replacement for an ACE inhibitor or an angiotensin receptor blocker in people with heart failure with reduced ejection fraction. Potential side effects include angioedema, kidney problems, and low blood pressure. It was approved for medical use in the United States and in the European Union in 2015, and in Australia in 2016. In 2020, it was the 219th most commonly prescribed medication in the United States, with more than 2 million prescriptions.
Medical uses:
Sacubitril/valsartan can be used instead of an ACE inhibitor or an angiotensin receptor blocker in people with heart failure and a reduced left ventricular ejection fraction (LVEF), alongside other standard therapies (e.g. beta-blockers) for heart failure. To investigate its use for heart failure in those with a preserved LVEF (HFpEF), Novartis funded the PARAGON-HF trial, which was designed to investigate the use of sacubitril/valsartan in the treatment of HFpEF patients with a LVEF of 45% or more. Concluding in 2019, it failed to show a significant reduction in hospitalisation related to heart failure or in death from cardiovascular causes, and therefore appears to show limited benefit to those with HFpEF. A Cochrane systematic review of data from 37 trials investigating treatments for HFpEF suggested that evidence is also lacking to support the use of ACE inhibitors, ARBs or ARNIs in patients with HFpEF at this time, and that the mainstay pharmacological therapy for HFpEF remains the treatment of co-morbidities such as hypertension or other triggers for decompensation. Patients who exhibit symptoms of NYHA Class II or III heart failure and are still symptomatic despite a maximally tolerated dose of an ACE inhibitor or ARB alone may be considered for sacubitril/valsartan dual therapy to decrease the risk of cardiovascular-related and all-cause mortality. Mortality benefits have only been observed to date in those with LVEF less than 35%. Changing 100 people from an ACE inhibitor or angiotensin II receptor antagonist to sacubitril/valsartan for 2.3 years would prevent three deaths, five hospitalizations for heart failure, and eleven hospitalizations overall.
Adverse effects:
Common adverse effects [>1%] include hyperkalaemia [high potassium levels in the blood, a known side effect of valsartan], hypotension [low blood pressure, common in vasodilators and extracellular fluid volume reducers], a persistent dry cough and renal impairment [reduced kidney function]. Angioedema, a rare but more serious reaction, can occur in some patients [<1%] and involves swelling of the face and lips. Angioedema is more common in black patients. Sacubitril/valsartan should not be taken within 36 hours of an angiotensin converting enzyme inhibitor, to reduce the risk of developing angioedema. The side effect profile in trials of sacubitril/valsartan compared to valsartan alone or enalapril [an angiotensin converting enzyme inhibitor] is very similar, with the incidence of hypotension slightly higher for sacubitril/valsartan, the risk of angioedema comparable, and the chance of hyperkalaemia, renal impairment and cough slightly lower. Sacubitril/valsartan is contraindicated in pregnancy because it contains valsartan, a known risk for birth defects.
Pharmacology:
Valsartan blocks the angiotensin II receptor type 1 (AT1). This receptor is found on both vascular smooth muscle cells, and on the zona glomerulosa cells of the adrenal gland which are responsible for aldosterone secretion. In the absence of AT1 blockade, angiotensin causes both direct vasoconstriction and adrenal aldosterone secretion, the aldosterone then acting on the distal tubular cells of the kidney to promote sodium reabsorption which expands extracellular fluid (ECF) volume. Blockade of AT1 thus causes blood vessel dilation and reduction of ECF volume. Sacubitril is a prodrug that is activated to sacubitrilat (LBQ657) by de-ethylation via esterases. Sacubitrilat inhibits the enzyme neprilysin, a neutral endopeptidase that degrades vasoactive peptides, including natriuretic peptides, bradykinin, and adrenomedullin. Thus, sacubitril increases the levels of these peptides, causing blood vessel dilation and reduction of ECF volume via sodium excretion. Despite these actions, neprilysin inhibitors have been found to have limited efficacy in the treatment of hypertension and heart failure when taken on their own. This is attributed to a reduction in enzymatic breakdown of angiotensin II by the reduction of neprilysin activity, which results in an increase in systemic angiotensin II levels and the negation of the positive effects of this drug family in cardiovascular disease treatment. Combined treatment with a neprilysin inhibitor and an angiotensin converting enzyme (ACE) inhibitor has been shown to be effective in reducing angiotensin II levels, and demonstrated superiority in lowering blood pressure compared to ACE inhibition alone. However, due to an increase in bradykinins from the inhibition of both ACE and neprilysin, there was a threefold increase in relative risk of angioedema compared with ACE inhibition alone following this combination treatment. The combination of a neprilysin inhibitor with an angiotensin receptor blocker instead of the ACE inhibitor has been shown to have a comparable risk of angioedema, whilst also demonstrating superiority in treating moderate-severe heart failure compared to ACE inhibitor treatment. Neprilysin also has a role in clearing the protein amyloid beta from the cerebrospinal fluid, and its inhibition by sacubitril has shown increased levels of Aβ1-38 in healthy subjects (Entresto 194/206 for two weeks). Amyloid beta is considered to contribute to the development of Alzheimer's disease, and there exist concerns that sacubitril may promote the development of Alzheimer's disease.
Structure activity relationship:
Sacubitril is metabolically activated by de-ethylation via esterases. The active form of the molecule, sacubitrilat, is responsible for the drug's pharmacological effects.
Chemistry:
Sacubitril/valsartan is co-crystallized sacubitril and valsartan, in a one-to-one molar ratio. One sacubitril/valsartan complex consists of six sacubitril anions, six valsartan dianions, 18 sodium cations, and 15 molecules of water, resulting in the molecular formula C288H330N36Na18O48·15H2O and a molecular mass of 5748.03 g/mol. The substance is a white powder consisting of thin hexagonal plates. It is stable in solid form as well as in aqueous (water) solution with a pH of 5 to 7, and has a melting point of about 138 °C (280 °F).
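As a rough cross-check (not an official calculation), the quoted molecular mass can be approximated from the stated formula using standard average atomic weights:

```python
# Rough arithmetic check of the quoted molecular mass of the
# sacubitril/valsartan complex C288H330N36Na18O48 . 15 H2O, using standard
# average atomic weights; this is illustrative only.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "Na": 22.990, "O": 15.999}

complex_counts = {"C": 288, "H": 330, "N": 36, "Na": 18, "O": 48}
water_counts = {"H": 2, "O": 1}

def molar_mass(counts: dict) -> float:
    return sum(ATOMIC_WEIGHTS[element] * n for element, n in counts.items())

total = molar_mass(complex_counts) + 15 * molar_mass(water_counts)
print(f"{total:.1f} g/mol")  # ~5748 g/mol, in line with the 5748.03 g/mol quoted above
```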
History:
During its development by Novartis, Entresto was known as LCZ696. It was approved under the FDA's priority review process on 7 July 2015. It was also approved in Europe in 2015. In 2022, Novartis sold its India marketing rights of Sacubitril Valsartan to JB Pharma, under the brand name Azmarda.
Society and culture:
Trial design: There was controversy over the PARADIGM-HF trial, the Phase III trial on the basis of which the drug was approved by the FDA. For example, both Richard Lehman, a physician who writes a weekly review of key medical articles for the BMJ Blog, and a December 2015 report from the Institute for Clinical and Economic Review (ICER) found that the risk–benefit ratio was not adequately determined because the design of the clinical trial was too artificial and did not reflect the people with heart failure that doctors usually encounter. In 2019, the PIONEER-HF and PARAGON-HF trials studied the effect of sacubitril/valsartan in 800 patients recently hospitalised with severe heart failure and 4800 patients with less severe symptoms of heart failure, respectively. The medication consistently demonstrated similar levels of safety, with higher rates of very low blood pressure, compared to current treatments across all three trials in a variety of patients; however, it has only shown effectiveness in those with more advanced heart failure. In December 2015, Steven Nissen and other thought leaders in cardiology said that the approval of sacubitril/valsartan had the greatest impact on clinical practice in cardiology in 2015, and Nissen called the drug "truly a breakthrough approach." One 2015 review stated that sacubitril/valsartan represents "an advancement in the chronic treatment of heart failure with reduced ejection fraction" but that widespread clinical success with the drug will require taking care to use it in appropriate patients, specifically those with characteristics similar to those in the clinical trial population. Another 2015 review called the reductions in mortality and hospitalization conferred by sacubitril/valsartan "striking", but noted that its effects in heart failure patients with hypertension, diabetes, chronic kidney disease, and in the elderly needed to be evaluated further.
Society and culture:
Economics: The wholesale cost to the National Health Service (NHS) in the UK is approximately £1,200 per person per year as of 2017. The wholesale cost in the United States is US$4,560 per year as of 2015. Similar-class generic drugs without sacubitril, such as valsartan alone, cost approximately US$48 a year. One industry-funded analysis found a cost of US$45,017 per quality-adjusted life year (QALY).
Research:
The PARADIGM-HF trial (in which Milton Packer was one of the principal investigators) compared treatment with sacubitril/valsartan to treatment with enalapril. People with heart failure and reduced LVEF (10,513) were sequentially treated on a short-term basis with enalapril and then with sacubitril/valsartan. Those who were able to tolerate both regimens (8,442, 80%) were randomly assigned to long-term treatment with either enalapril or sacubitril/valsartan. Participants were mainly white (66%), male (78%), middle aged (median 63.8 ± 11 years) with NYHA stage II (71.6%) or stage III (23.1%) heart failure. The trial was stopped early after a prespecified interim analysis revealed a reduction in the primary endpoint of cardiovascular death or heart failure in the sacubitril/valsartan group relative to those treated with enalapril. Taken individually, the reductions in cardiovascular death and heart failure hospitalizations retained statistical significance. Relative to enalapril, sacubitril/valsartan provided reductions in: the composite endpoint of cardiovascular death or hospitalization for heart failure (incidence 21.8% vs 26.5%); cardiovascular death (incidence 13.3% vs 16.5%); first hospitalization for worsening heart failure (incidence 12.8% vs 15.6%); and all-cause mortality (incidence 17.0% vs 19.8%). Limitations of the trial include scarce experience with initiation of therapy in hospitalized patients and in those with NYHA heart failure class IV symptoms. Additionally, the trial compared a maximal dose of valsartan (plus sacubitril) with a sub-maximal dose of enalapril, and was thus not directly comparable with current gold-standard use of ACE inhibitors in heart failure, diminishing the validity of the trial results. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Inverse filter**
Inverse filter:
Signal processing is an electrical engineering subfield that focuses on analysing, modifying, and synthesizing signals such as sound, images, and scientific measurements. For example, with a filter g, an inverse filter h is one such that the sequence of applying g then h to a signal results in the original signal. Software or electronic inverse filters are often used to compensate for the effect of unwanted environmental filtering of signals.
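A minimal sketch of this definition, assuming a simple one-pole filter g whose exact inverse h is a two-tap FIR filter; applying g and then h recovers the original signal (direct inversion of this kind only works when the inverse filter is itself stable):

```python
# Minimal sketch (assumed example, not from the article) of the inverse-filter
# relationship: applying a filter g and then its inverse h returns the
# original signal.  Here g is a one-pole IIR filter with transfer function
# 1 / (1 - 0.5 z^-1); its exact inverse is the FIR filter 1 - 0.5 z^-1.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)        # arbitrary test signal

b_g, a_g = [1.0], [1.0, -0.5]        # filter g
y = lfilter(b_g, a_g, x)             # apply g

b_h, a_h = a_g, b_g                  # inverse filter h: swap numerator and denominator
x_rec = lfilter(b_h, a_h, y)         # apply h after g

print(np.max(np.abs(x - x_rec)))     # ~0: the original signal is recovered
```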
In speech science:
In all proposed models for the production of human speech, an important variable is the waveform of the airflow, or volume velocity, at the glottis. The glottal volume velocity waveform provides the link between movements of the vocal folds and the acoustical results of such movements, in that the glottis acts approximately as a source of volume velocity. That is, the impedance of the glottis is usually much higher than that of the vocal tract, and so glottal airflow is controlled mostly (but not entirely) by glottal area and subglottal pressure, and not by vocal-tract acoustics. This view of voiced speech production is often referred to as the source-filter model.
In speech science:
A technique for obtaining an estimate of the glottal volume velocity waveform during voiced speech is the “inverse-filtering” of either the radiated acoustic waveform, as measured by a microphone having a good low frequency response, or the volume velocity at the mouth, as measured by a pneumotachograph at the mouth having a linear response, little speech distortion, and a response time of under approximately 1/2 ms. A pneumotachograph having these properties was first described by Rothenberg and termed by him a circumferentially vented mask or CV mask.
In speech science:
As practiced, inverse-filtering is usually limited to non-nasalized or slightly nasalized vowels, and the recorded waveform is passed through an “inverse-filter” having a transfer characteristic that is the inverse of the transfer characteristic of the supraglottal vocal tract configuration at that moment. The transfer characteristic of the supraglottal vocal tract is defined with the input to the vocal tract considered to be the volume velocity at the glottis. For non-nasalized vowels, assuming a high-impedance volume velocity source at the glottis, the transfer function of the vocal tract below about 3000 Hz contains a number of pairs of complex-conjugate poles, more commonly referred to as resonances or formants. Thus, an inverse-filter would have a pair of complex-conjugate zeroes, more commonly referred to as an anti-resonance, for every vocal tract formant in the frequency range of interest.
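A hedged sketch of this idea: each formant is modelled as a complex-conjugate pole pair, so the inverse filter cascades one complex-conjugate zero pair (anti-resonance) per formant. The sampling rate, formant frequencies, and bandwidths below are made-up example values, not measurements from the article:

```python
# Sketch of building an inverse filter from assumed formant estimates.
# Each vocal-tract formant is modelled as a complex-conjugate pole pair; the
# inverse filter therefore cascades one complex-conjugate zero pair
# (an anti-resonance) per formant in the frequency range of interest.
import numpy as np
from scipy.signal import lfilter

fs = 10000.0                                                   # sampling rate in Hz (assumed)
formants = [(700.0, 80.0), (1200.0, 100.0), (2600.0, 120.0)]   # (frequency, bandwidth) in Hz, assumed

def antiresonance(freq_hz: float, bw_hz: float, fs_hz: float) -> np.ndarray:
    """FIR coefficients [1, -2*r*cos(theta), r^2] that cancel one formant pole pair."""
    r = np.exp(-np.pi * bw_hz / fs_hz)
    theta = 2.0 * np.pi * freq_hz / fs_hz
    return np.array([1.0, -2.0 * r * np.cos(theta), r * r])

def inverse_filter(signal: np.ndarray) -> np.ndarray:
    out = signal
    for freq, bw in formants:
        out = lfilter(antiresonance(freq, bw, fs), [1.0], out)  # apply each zero pair
    return out

# glottal_flow_estimate = inverse_filter(mouth_volume_velocity)  # hypothetical usage
```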
In speech science:
If the input is from a microphone, and not a CV mask or its equivalent, the inverse filter also must have a pole at zero frequency (an integration operation) to account for the radiation characteristic that connects volume velocity with acoustic pressure. Inverse filtering the output of a CV mask retains the level of zero flow, while inverse filtering a microphone signal does not.
In speech science:
Inverse filtering depends on the source-filter model and on a vocal tract filter that is a linear system; however, the source and filter need not be independent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Call setup**
Call setup:
In telecommunication, call setup is the process of establishing a virtual circuit across a telecommunications network. Call setup is typically accomplished using a signaling protocol.
The term call set-up time has the following meanings: The overall length of time required to establish a circuit-switched call between users.
Call setup:
For data communication, the overall length of time required to establish a circuit-switched call between terminals; i.e., the time from the initiation of a call request to the beginning of the call message. Note: Call set-up time is the summation of: (a) call request time, the time from initiation of a calling signal to the delivery to the caller of a proceed-to-select signal; (b) selection time, the time from the delivery of the proceed-to-select signal until all the selection signals have been transmitted; and (c) post-selection time, the time from the end of the transmission of the selection signals until the delivery of the call-connected signal to the originating terminal.
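A trivial worked example of the note above, with assumed component times (not values from the text):

```python
# Simple illustration (example figures assumed) of the note above: call
# set-up time is the sum of call request time, selection time, and
# post-selection time.
call_request_time = 0.8    # seconds, assumed
selection_time = 1.5       # seconds, assumed
post_selection_time = 2.2  # seconds, assumed

call_setup_time = call_request_time + selection_time + post_selection_time
print(f"call set-up time: {call_setup_time:.1f} s")  # 4.5 s with these example values
```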
Success rate:
In telecommunications, the call setup success rate (CSSR) is the fraction of the attempts to make a call that result in a connection to the dialled number (due to various reasons not all call attempts end with a connection to the dialled number). This fraction is usually measured as a percentage of all call attempts made.
Success rate:
In telecommunications a call attempt invokes a call setup procedure, which, if successful, results in a connected call. A call setup procedure may fail due to a number of technical reasons. Such calls are classified as failed call attempts. In many practical cases, this definition needs to be further expanded with a number of detailed specifications describing exactly which calls are counted as successfully set up and which are not. This is determined to a great degree by the stage of the call setup procedure at which a call is counted as connected. In modern communications systems, such as cellular (mobile) networks, the call setup procedure may be very complex, and the point at which a call is considered successfully connected may be defined in a number of ways, thus influencing the way the call setup success rate is calculated. If a call is connected successfully but the dialled number is busy, the call is counted as successful.
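A minimal sketch of the calculation, using assumed counts; per the definition above, a call that connects but finds the dialled number busy still counts as successful:

```python
# Minimal sketch (assumed counts) of the call setup success rate (CSSR):
# the fraction of call attempts that end in a connection to the dialled
# number, usually quoted as a percentage.
call_attempts = 12500          # assumed
connected_calls = 12120        # assumed; includes calls that reached a busy number
blocked_calls = call_attempts - connected_calls

cssr_percent = 100.0 * connected_calls / call_attempts
print(f"CSSR: {cssr_percent:.2f}%")          # 96.96% with these example numbers
print(f"blocked calls: {blocked_calls}")
```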
Success rate:
Another term, used to denote call attempts that fail during the call setup procedure, is blocked calls.
Success rate:
The call setup success rate in conventional (so-called land-line) networks is extremely high and is significantly above 99.9%. In mobile communication systems using radio channels the call setup success rate is lower and may range for commercial networks between 90% and 98% or higher. The main reasons for unsuccessful call setups in mobile networks are lack of radio coverage (either in the downlink or the uplink), radio interference between different subscribers, imperfections in the functioning of the network (such as failed call setup redirect procedures), overload of the different elements of the network (such as cells), etc.
Success rate:
The call setup success rate is one of the key performance indicators (KPIs) used by the network operators to assess the performance of their networks. It is assumed to have direct influence on the customer satisfaction with the service provided by the network and its operator. The call setup success rate is usually included, together with other technical parameters of the network, in a key performance indicator known as service accessibility.
Success rate:
The operators of telecommunication networks aim at increasing the call setup success rate as much as practical and affordable. In mobile networks this is achieved by improving radio coverage, expanding the capacity of the network and optimising the performance of its elements, all of which may require considerable effort and significant investments on the part of the network operator. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dosage Index**
Dosage Index:
The Dosage Index is a mathematical figure used by breeders of Thoroughbred race horses, and sometimes by bettors handicapping horse races, to quantify a horse's ability, or inability, to negotiate the various distances at which horse races are run. It is calculated based on an analysis of the horse's pedigree.
Dosage Index:
Interest in determining which sires of race horses transmit raw speed, and which sires transmit stamina (defined as the ability to successfully compete at longer distances) to their progeny dates back to the early 20th century, when a French researcher, Lt. Col. J. J. Vullier, published a study on the subject (called Dosage), which was subsequently modified by an Italian breeding expert, Dr. Franco Varola, in two books he authored, entitled Typology Of The Race Horse and The Functional Development Of The Thoroughbred.
Dosage Index:
However, these observations attracted little interest from the general public until 1981, when Daily Racing Form breeding columnist Leon Rasmussen published a new version of Dosage developed by an American scientist and horse owner, Steven A. Roman, Ph.D., in his analysis of the upcoming Kentucky Derby for that year. The new approach, which was more accessible to owners, breeders and handicappers and was supported by solid statistical data, rapidly caught on, and the term "Dosage Index" has been a fixture in the lexicon of horse racing ever since. The details of Dosage methodology have been summarized in Dr. Roman's book entitled Dosage: Pedigree & Performance published in 2002.
Dosage Index:
The index itself is compiled by noting the presence of certain influential sires, known as chefs-de-race (French for "chiefs of racing", or, more esoterically, "masters of the breed") in the first four generations of a horse's pedigree. Based on what distances the progeny of the sires so designated excelled in during their racing careers (the distance preferences displayed by the sires themselves while racing being irrelevant), each chef-de-race (the list released in the early 1980s identified 120 such sires, and 85 more have been added as of April 2005) is placed in one or two of the following categories, or "aptitudinal groups": Brilliant, Intermediate, Classic, Solid or Professional, with "Brilliant" indicating that the sire's progeny fared best at very short distances and "Professional" denoting a propensity for very long races on the part of the sire's offspring, the other three categories ranking along the same continuum in the aforementioned order. If a chef-de-race is placed in two different aptitudinal groups, in no case can the two groups be more than two positions apart; for example, Classic-Solid or Brilliant-Classic are permissible, but Brilliant-Solid, Intermediate-Professional and Brilliant-Professional are not.
Dosage Index:
If a horse's sire is on the chef-de-race list, it counts 16 points for the group to which the sire belongs (or eight in each of two categories if the sire was placed in two groups); a grandsire counts eight points, a great-grandsire four, and a great-great-grandsire two (female progenitors do not count directly, but if any of their sires etc. are on the chef-de-race list points would accrue via such sires).
Dosage Index:
This results in a Dosage Profile consisting of five separate figures, listed in order of Brilliant-Intermediate-Classic-Solid-Professional. Secretariat, the 1973 Triple Crown winner, for example, had a Dosage Profile of 20-14-7-9-0. To arrive at the Dosage Index, the first two figures plus one-half the value of the third figure are added together, and then divided by one-half of the third figure plus the sum of the last two figures. In this case, it would be 37.5 (20 + 14+ 3.5) divided by 12.5 (3.5 + 9 + 0), giving Secretariat a Dosage Index of exactly 3.00 (the figure almost always being expressed with two places to the right of the decimal point and rounded to the nearest 0.01).
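The calculation can be written compactly; the sketch below re-computes the worked example from the text (Secretariat's profile of 20-14-7-9-0):

```python
# Re-computation of the worked example above, following the Dosage Index
# formula as described in the text: (B + I + C/2) / (C/2 + S + P).
def dosage_index(profile):
    brilliant, intermediate, classic, solid, professional = profile
    speed = brilliant + intermediate + classic / 2.0
    stamina = classic / 2.0 + solid + professional
    return speed / stamina  # undefined ("infinity") if the stamina points are zero

secretariat = (20, 14, 7, 9, 0)
print(f"{dosage_index(secretariat):.2f}")  # 3.00, matching the article's example
```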
Dosage Index:
A second mathematical value, called the Center of Distribution, can also be computed from the Dosage Profile. To determine this value, the number of Brilliant points in the profile is doubled, and added to the number of Intermediate points; from this is then subtracted the number of Solid points and twice the number of Professional points. The result is then divided by the total number of points in the entire profile, including the Classic points. In Secretariat's case, this would work out as 54 (40 + 14) minus 9 (9 + 0) divided by 50 (20 + 14 + 7 + 9 + 0), yielding a Center of Distribution of 0.90 (the figure nearly always being rounded to the nearest 100th of a point, as with the Dosage Index).
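A companion sketch for the Center of Distribution, again checked against the Secretariat example:

```python
# Center of Distribution as described above: (2B + I - S - 2P) divided by the
# total number of points in the profile, checked against Secretariat's
# profile 20-14-7-9-0 (CD = 0.90).
def center_of_distribution(profile):
    brilliant, intermediate, classic, solid, professional = profile
    weighted = 2 * brilliant + intermediate - solid - 2 * professional
    return weighted / sum(profile)

secretariat = (20, 14, 7, 9, 0)
print(f"{center_of_distribution(secretariat):.2f}")  # 0.90
```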
Dosage Index:
High Dosage Index (and Center of Distribution) figures are associated with a tendency to perform best over shorter distances, while low numbers signify an inherent preference for longer races. The median Dosage Index of contemporary North American thoroughbreds is estimated at 2.40 (the average figure being impossible to calculate because some horses have a Dosage Index of "infinity," a scenario which arises when a horse has only Brilliant and/or Intermediate chef-de-race influences in its Dosage Profile). The average Center of Distribution for modern-day North American race horses is believed to be approximately 0.70 (both Dosage Index and Center of Distribution figures tend to be lower for European thoroughbreds because in Europe the races are longer on aggregate and European breeders thus place greater emphasis on breeding their horses for stamina rather than speed).
Dosage Index:
Retroactive research conducted at the time the term "Dosage Index" first became common knowledge revealed that at that time no horse having a Dosage Index of higher than 4.00 had won the Kentucky Derby since at least 1929 (a year chosen because by then the number of available chefs-de-race on which to base the figures was thought to have reached a critical mass), and that over the same period only one Belmont Stakes winner (Damascus in 1967) had such a Dosage figure. It was also determined at that time that few horses with no chef-de-race influences in the two most stamina-laden groups, Solid and Professional, had won major races at distances of 1¼ miles or longer even if the horse had a sufficient Classic presence in its pedigree to keep the Dosage Index from being over 4.00 (when Affirmed won the Triple Crown in 1978, for instance, he became the first horse with no Solid or Professional points in his Dosage Profile to win either the Kentucky Derby or the Belmont Stakes since the 1930s). In recent years, however, several horses with no Solid or Professional chefs-de-race in the first four generations of their pedigrees (and indeed, a few with Dosage Indexes above 4.00) have managed to win the Kentucky Derby and Belmont Stakes, highlighting the issue of increasing speed and decreasing stamina in contemporary American thoroughbred pedigrees. For example, 1998 Kentucky Derby winner Real Quiet had a Dosage Index of 6.02, while 2005 Kentucky Derby winner Giacomo had a Dosage Index of 4.33 and no Solid or Professional points in his Dosage Profile. Triple Crown winner American Pharoah has a Dosage Index of 4.33.
Dosage Index:
As a result of these anomalies, the theory's usefulness has been questioned by some, at least with regard to the Kentucky Derby. The system's defenders, however, point out that in recent times a large proportion of U.S.-bred horses with low Dosage figures have been sent to race in foreign countries where the distances of races are longer, resulting in most horses competing in the Kentucky Derby and similar American races having relatively high Dosage numbers and/or lacking Solid or Professional chef-de-race representation. Yet the statistical foundation of Dosage remains compelling, and the theory accurately differentiates Thoroughbred pedigree type for large populations of horses competitively performing over a range of distances, track surfaces and ages. With regard to the Kentucky Derby, however, only results from 1981 onward reflect a method without retrofitting or using information unavailable at the time. Many of the chefs-de-race who "predicted" the 1929-1981 Derby winners were made that way because of the Derby winners themselves, making the logic circular. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Arsenite-transporting ATPase**
Arsenite-transporting ATPase:
In enzymology, an arsenite-transporting ATPase (EC 3.6.3.16) is an enzyme that catalyzes the chemical reaction ATP + H2O + arsenite(in) ⇌ ADP + phosphate + arsenite(out). The 3 substrates of this enzyme are ATP, H2O, and arsenite, whereas its 3 products are ADP, phosphate, and arsenite.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (arsenite-exporting).
Structural studies:
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1IHU, 1II0, and 1II9. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Instrument mechanic**
Instrument mechanic:
Instrument mechanics in engineering are tradesmen who specialize in installing, troubleshooting, and repairing instrumentation, automation and control systems. The term "Instrument Mechanic" came about because the work combines light mechanical and specialised instrumentation skills. The term is still used in certain industries, predominantly in industrial process control.
History:
Instrumentation has existed for hundreds of years in one form or another; the oldest manometer was invented by Evangelista Torricelli in 1643, and the thermometer has been credited to many scientists of about the same period. Over that time, small and large scale industrial plants and manufacturing processes have always needed accurate and reliable process measurements. Originally the demand would only be for measurement instruments, but as process complexity grew, automatic control became more common.
History:
The huge growth in process control instrumentation was boosted by the use of pneumatic controllers, which were used widely after 1930 when Clesson E Mason of the Foxboro Company invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier with negative feedback in a completely mechanical device. The repair and calibration of these devices required both fine mechanical skills and an understanding of the control operation. Likewise the use of control valves with positioners appeared, which required a similar combination of skills.
History:
World War II also brought about a revolution in the use of instrumentation. More advanced processes required tighter control than people could provide, and advanced instruments were required to provide measurements in modern processes. The war also left industry with a substantially reduced workforce. Industrial instrumentation solved both problems, leading to a rise in its use. Pipe fitters had to learn more about instrumentation and control theory, and a new trade was born. Today, instrument mechanics combine the skills of repair and calibration with a theoretical understanding of how the instrumentation and control works, a specialised combination of electronic and mechanical disciplines. Although almost all new instrumentation is now electronic, using either 4-20 mA control signals or digital signalling standards, the term instrument mechanic is still used colloquially in some cases.
Terminology:
In Canada, journeyman tradesmen who work with instrumentation are called "Instrument Mechanics". In the United States, Australia and elsewhere, they can be called "instrument fitters". The term may have originated because early instrument-qualified workers were mechanically trained machinists (also known as fitters and turners) rather than electricians or "pure" instrument fitters (with no secondary trade), as is now the norm.
Terminology:
In the United Kingdom a particular trend has been to call them Electrical/instrument (E/I) craftsmen, with progression to technician level.
Training and regulation of trade:
In most countries, the job of an instrument mechanic is a regulated trade for safety reasons due to the many hazards of working with electricity, as well as the dangers posed by incorrectly installed or calibrated instrumentation. The training requires testing, registration, or licensing. Licensing of instrument mechanics is usually controlled through government or educational bodies, and/or professional societies.
The apprenticeship period has been reduced in some cases for Instrumentation Engineering Technologists, who can get their apprenticeship in 2 years rather than 4, depending on the college. In the United Kingdom, the "modern apprenticeship" is 42 months, and requires theory training to National Vocational Qualification (NVQ) level 3.
Training and regulation of trade:
Canada: In Canada, the trade of Instrumentation and Control technician is included in the Red Seal inter-provincial journeyman program. The trade itself is called different things in different provinces. The two most popular names are "Industrial Instrument Mechanic" and "Instrumentation and Control Technician", though Alberta and the Northwest Territories call the certification "Instrument Technician", and Saskatchewan and Nunavut call their certification "Industrial Instrument Technician". The 1995 Agreement on Internal Trade, agreed upon by all provinces except Nunavut, states that each party to the agreement will provide automatic recognition and free access to all workers holding an inter-provincial standards (Red Seal) program qualification. Although there is a federal agreement, each province implements the program with its own legislation (note that these are all provincial Acts): Prince Edward Island's Journeyman program is regulated by the Apprenticeship and Trades Qualification Act; Nova Scotia's Journeyman program is regulated by the Apprenticeship and Trades Qualification Act; Newfoundland's Journeyman program is regulated by the Apprenticeship and Certification Act; New Brunswick's Journeyman program is regulated by the Apprenticeship and Occupational Certification Act; Quebec's Journeyman program is regulated by the Manpower Vocational Training and Qualification Act; and Ontario's Journeyman program is regulated by the Ontario College of Trades Act.
Training and regulation of trade:
Manitoba's Journeyman program is regulated by the Apprenticeship and Certification Act.
Training and regulation of trade:
Saskatchewan's Journeyman program is regulated by the Apprenticeship and Trade Certification Act; Alberta's Journeyman program is regulated by the Apprenticeship and Industry Training Act; British Columbia's Journeyman program is regulated by the Industry Training and Apprenticeship Act; Nunavut's Journeyman program is regulated by the Trade and Occupations Certification Act; the Yukon Territories' Journeyman program is regulated by the Apprenticeship Training Act; and the Northwest Territories' Journeyman program is regulated by the Apprenticeship, Trade, and Occupation Certification Act. Recipients receive a "Certificate of Qualification". Different provincial jurisdictions may have different regulations. In Ontario, the Instrumentation and Control apprenticeship program does not contain any restricted skill sets as per Ontario Regulation 565/99, Restricted skill sets. This means that a worker does not need a certificate of apprenticeship or a certificate of qualification to practice the trade. Training of instrument mechanics follows an apprenticeship model, taking four or five years to progress to fully qualified journeyman level. Typical apprenticeship programs emphasize hands-on work under the supervision of journeymen, but also include a substantial component of classroom training and testing. Training and licensing of instrument mechanics is by province, and some provinces don't have an instrument mechanic licensing program, but provinces recognize qualifications received in others.
Training and regulation of trade:
Different provincial jurisdictions may have different regulations regarding certification. In Ontario, the on-the-job training duration for apprentices is 8000 hours, and the in-school training duration is generally 720 hours. One person of journeyman or equivalent status must be working for every apprentice. Prior to receiving their Journeyman designation, candidates seeking their certificate of qualification must complete a trade exam, testing knowledge of a number of essential skill sets: Safe Working Practices and Procedures; Occupational Skills; Process Measurement and Indicating Devices; Process Analyzers, Quality Control Analyzers, and Environmental Emission Analyzers; Safety Systems and Security Systems; Energy Development Systems; Communication Systems and Devices; Final Control Devices; and Process Control Systems. The trade exam consists of a number of questions in each of these skill sets.
Training and regulation of trade:
Australia: Australian instrument fitters are usually re-qualified electricians who complete a 2-year conversion course at an accredited technical college, such as a TAFE, or start as new apprentices with no prior qualifications and complete a 3-year course and a 4-year apprenticeship, in combination with workplace experience of the material studied. The first of the 3 years is a basic electrical module, covering AC and DC principles, plus some workshop practicals. The 4th year generally consists of an apprentice choosing a post-trade qualification to study for.
Training and regulation of trade:
As there is no journeyman accreditation in Australia, at the completion of their trade course, and collection of the required workplace experience, aspirant instrument fitters must pass a "capstone" test, which involves theoretical testing and practical exercises to determine competency. Qualification is recognised with a craft certificate, but not a license in any form.
Other names:
Instrument mechanics are sometimes known as: instrument artificers; instrument fitters; instrumentation techs (not to be confused with an Instrumentation Engineering technician); instrument technicians; or E/I craftsmen (United Kingdom).
Fields of study:
Instrument mechanics are required to study a large body of knowledge. This includes information on: process control; measurement; instrumentation; final control elements; motors; electronics; industrial networks; signalling standards; chemistry; and fluid dynamics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mixter**
Mixter:
Mixter is a computer security specialist. Mixter first made the transition out of the computer underground into large-scale public awareness in 2000, when newspapers and magazines worldwide mentioned a link to massively destructive and effective distributed denial of service (DDoS) attacks which crippled and shut down major websites (including Yahoo!, Buy.com, eBay, Amazon, E-Trade, MSN.com, Dell, ZDNet and CNN). Early reports stated that the FBI-led National Infrastructure Protection Center (NIPC) was questioning Mixter regarding a tool called Stacheldraht (Barbed Wire). Although Mixter himself was not a suspect, his tool, the Tribe Flood Network (TFN), and an update called TFN2K were ultimately discovered as being the ones used in the attacks, causing an estimated $1.7 billion USD in damages. In 2002 Mixter returned to the public eye as the author of Hacktivismo's Six/Four System. The Six/Four System is a censorship-resistant network proxy. It works by using "trusted peers" to relay network connections over SSL encrypted links. As an example, the distribution includes a program which will act as a web proxy, but all of the connections will be hidden until they reach the far-end trusted peer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Model rocket motor classification**
Model rocket motor classification:
Motors for model rockets and high-powered rockets (together, consumer rockets) are classified by total impulse into a set of letter-designated ranges, from ⅛A up to O.
The total impulse is the integral of the thrust over burn time.
$$P_T = \int_0^t F_{\text{thrust}}(t')\,dt' = F_{\text{ave}}\, t$$
where $t$ is the burn time in seconds, $F_{\text{thrust}}$ is the instantaneous thrust in newtons, $F_{\text{ave}}$ is the average thrust in newtons, and $P_T$ is the total impulse in newton-seconds.
Model rocket motor classification:
Class A is from 1.26 newton-seconds (conversion factor 4.448 N per pound-force) to 2.50 N·s, and each class is then double the total impulse of the preceding class, with Class B being 2.51 to 5.00 N·s. The letter M, for example, represents a total impulse of between 5,120.01 and 10,240.00 N·s. Motors E and below are considered low-power rocket motors, motors F and G are considered mid-power, and motors H and above are high-powered rocket motors. Motors which would be classified beyond O are in the realm of amateur rocketry (in this context, the term amateur refers to the rocketeer's independence from an established commercial or government organization). Professional organizations use the nomenclature of average thrust and burning time.
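A small sketch of the doubling rule described above, mapping a total impulse to its letter class (the fractional ⅛A–½A classes below class A are omitted for brevity):

```python
# Sketch following the doubling rule described above: class A tops out at
# 2.5 N*s of total impulse and every subsequent class doubles the upper bound.
def impulse_class(total_impulse_ns: float) -> str:
    classes = "ABCDEFGHIJKLMNO"
    upper = 2.5                      # upper bound of class A in N*s
    for letter in classes:
        if total_impulse_ns <= upper:
            return letter
        upper *= 2.0
    return "beyond O (amateur rocketry)"

print(impulse_class(8.5))     # "C"  (5.01 - 10.00 N*s)
print(impulse_class(6000.0))  # "M"  (5,120.01 - 10,240.00 N*s)
```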
Rocket motor codes:
The designation for a specific motor looks like C6-3. In this example, the letter (C) represents the total impulse range of the motor, the number (6) before the dash represents the average thrust in newtons, and the number (3) after the dash represents the delay in seconds from propelling charge burnout to the firing of the ejection charge (a gas generator composition, usually black powder, designed to deploy the recovery system). A C6-3 motor would have between 5.01 and 10 N·s of impulse, produce 6 N average thrust, and fire an ejection charge 3 seconds after burnout.
Rocket motor codes:
An attempt was made by motor manufacturers in 1982 to further clarify the motor code by writing the total impulse in newton-seconds before the code. This allowed the burn duration to be computed from the provided numbers. Additionally, the motor code was followed by a letter designation denoting the type of propellant. The propellant designations are manufacturer specific. This standard is still not fully adopted, with some manufacturers adopting parts or all of the additional nomenclature.
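A hedged sketch of reading a designation such as C6-3 and, when the total impulse is also printed ahead of the code as in the 1982 convention, recovering the approximate burn time from total impulse = average thrust × burn time (the example impulse value is assumed):

```python
# Sketch of parsing a single-letter motor designation like "C6-3".  The
# 8.8 N*s total impulse used in the example is an assumed figure, not a
# value taken from the article.
import re

def parse_motor_code(code, total_impulse_ns=None):
    m = re.fullmatch(r"([A-O])(\d+)-(\d+)", code)
    if m is None:
        raise ValueError(f"unrecognised motor code: {code}")
    impulse_class = m.group(1)            # total impulse range letter
    avg_thrust = int(m.group(2))          # average thrust in newtons
    delay = int(m.group(3))               # ejection-charge delay in seconds
    burn_time = total_impulse_ns / avg_thrust if total_impulse_ns else None
    return impulse_class, avg_thrust, delay, burn_time

# Example: a C6-3 with an assumed 8.8 N*s total impulse burns for ~1.5 s and
# fires its ejection charge 3 s after burnout.
print(parse_motor_code("C6-3", total_impulse_ns=8.8))
```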
Governmental regulation:
In many countries, the sale, possession, and use of model rocket motors is subject to governmental rules and regulations. High-power rockets in the United States are only federally regulated in their flight guidelines by the FAA. These regulations are codified in FAA FAR Part 101. Rockets under 125g propellant and 1500g liftoff mass are exempt from most requirements. Beyond that a free "Waiver" is required from a FAA field office.
Governmental regulation:
However, some of the consumer motor manufacturers and two U.S. national rocketry organizations have established a self-regulating industry and codified it in National Fire Protection Association (NFPA) "model" code documents, which are adopted only in specific circumstances and jurisdictions, largely in conjunction with fire and building codes. Under this self-regulation, a user must become certified before a manufacturer will sell them a motor. In the United States, the two recognized organizations that provide high-power certifications are the Tripoli Rocketry Association and the National Association of Rocketry. Both these organizations have three levels of certification, which involve building progressively more complex and higher-powered rockets and taking a test on safety rules and regulations, with the national member associations using published safety codes. In Canada, the Canadian Association of Rocketry has a four-step certification process, but all three organizations accept the others' certifications if a flyer shows up at a high-power launch and wishes to fly under their sanction. Level 1 certification from NAR or TRA qualifies one to purchase and use an H or I motor, Level 2 certification J, K, and L motors, and Level 3 certification M, N, and O motors. Canada adds another step in between, and has a Level 4 which is the same as US Level 3.
Governmental regulation:
In the late 1990s, the U.S. Bureau of Alcohol, Tobacco, Firearms and Explosives began requiring that individuals obtain a Low Explosives Users Permit (LEUP) to possess and use high-powered motors. On February 11, 2000, Tripoli Rocketry Association and the National Association of Rocketry filed suit in the United States District Court for the District of Columbia claiming that the BATF applied "onerous and prohibitive civil regulations" against sport rocketry hobbyists due to the Bureau's improper designation of ammonium perchlorate composite propellant (APCP) as an explosive. APCP is used in most high-power rocket motors. The commentary by BATFE staff in response to objections to adding new enforcement against hobby rocket motors is quite instructive. In 2009, the court ruled in favor of the hobby organizations and ordered the BATF to remove APCP and other slow burning materials from its list of regulated explosives. That judgement established 1 meter per second burning rate ("ATFE’s own burn rate threshold for deflagration is 1000 millimeters (or one meter) per second." Tripoli Rocketry Ass’n, 437 F.3d at 81–82) as the threshold for a material on the BATFE list of explosive materials.
Vendors:
The largest vendor of model rocket motors in the world is Estes Industries. The largest vendors of high-power rocket motors in the world are Cesaroni Technology Inc. and RCS Rocket Motor Components, Inc.
The first certified model rocket motor was made by Model Missiles Inc. (Orville Carlisle), circa 1958. The first certified high-power rocket motor was made by U.S. Rockets (Jerry Irvine), circa 1985. The first APCP-propellant model rocket motor was made by Rocket Development Corporation (Irv Wait), circa 1970.
The largest vendor of professional solid rockets in the world is Orbital ATK. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**VE-Suite**
VE-Suite:
VE-Suite is an open-source virtual engineering software toolkit that simplifies information management so users can simultaneously interact with engineering analyses and graphical models to create a virtual decision-making environment. It is available under the GNU Lesser General Public License (LGPL) and is composed of four main software engines:
VE-CE is the software engine responsible for the synchronization of the data between the various analysis and process models and the engineer
VE-Xplorer is the decision-making environment that allows the engineer to visually interact with the equipment models
VE-Conductor, the graphical user interface, is the engineer's mechanism to control models and other information
VE-Open connects the core engines of VE-Suite and transfers data from user-defined information sources to VE-Suite software engines
These software engines coordinate the flow of data from the engineer to the virtual components being designed.
VE-Suite:
In nearly all aspects of the engineering process—design, manufacturing, and maintenance—the tools employed at each phase rely on virtual models (e.g., software tools) to reduce cost and shorten development time. This results in a variety of software tools being used across a wide range of vendors and engineering firms. In this environment, engineers are required to manually move information from one software package to another. VE-Suite was designed to support real-time, collaborative design using disparate software tools so engineers, designers, and managers can obtain an intuitive feel for a product's performance in real time.
VE-Suite:
VE-Suite's features include: Information Management: In engineering decision making, it is necessary to understand the vast amounts of information regarding a particular product. VE-Suite enables users to interact with objects in a virtual space without being concerned with technical information such as costing.
VE-Suite:
Component Manipulation: Product components are viewable at any scale and can be modified in real time without having to go back to the analysis and modeling process. They can be virtually assembled, much like building a physical model, but without the time and expense; they can be combined to create new components; and they can be distributed across computational resources.
VE-Suite:
Visualization: VE-Suite provides a virtual reality environment in which users can immerse themselves in the data and better understand it. The ability to visually interact with information allows users to analyze complex patterns, synthesize opportunities, and evaluate alternative processes.
VE-Suite:
Collaboration: VE-Suite is designed with an open interface to allow the integration of other open-source and commercial software packages. Combining various simulation programs, data from diverse sources, and high fidelity visualization throughout the product development lifecycle produces an experience similar to physical inspection of an actual device. In such an environment, people from various disciplines with diverse but complementary experience can collaborate.
Workflow:
Following is an illustration of the VE-Suite workflow.
Workflow:
The first VE-Suite tool the engineer works with is VE-Conductor. He or she first double-clicks a particular icon on the right hand tree view, which publishes the object to be investigated on the design canvas in VE-Conductor. The engineer can then double-click on this object to cause a customized graphical user interface (GUI) of this object to appear. Through this interface, the engineer can modify specific input parameters for the particular object under investigation. Once the appropriate values have been set by the engineer, the job is submitted to VE-CE, which schedules the appropriate models for execution and sends the input data to the respective models. Once the models have been executed, the data generated by the models is accessible in VE-Xplorer within the graphical decision-making environment.
Workflow:
Everything that has occurred up to this point has occurred without user intervention; the software tools contained within VE-Suite have handled the information integration and model execution. Once the model execution is complete, the engineer can then choose to interrogate the high fidelity data by requesting volume renders, vector planes, contour planes, streamlines, animated massless particles, or transient animations if the data is transient. During this workflow process, the engineer interacts with VE-Conductor and visually interacts with the data in the VE-Xplorer-generated graphical decision-making environment. The complexity of information integration and execution of the distributed models is handled without input from the engineer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Armourstone**
Armourstone:
Armourstone is a generic term for broken stone with stone masses between 100 and 10,000 kilograms (220 and 22,050 lb) (very coarse aggregate) that is suitable for use in hydraulic engineering. Dimensions and characteristics are laid down in European Standard EN13383.
Stone classes:
Armourstone is available in standardised stone classes, which are described by a lower and an upper value of the stone mass in these classes. Class 60-300 means that up to 10% of the stones are lighter than 60 kg (130 lb) and up to 30% of the stones are heavier than 300 kg (660 lb). The standard also gives values that cannot be exceeded by 5% or 3%.
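As an illustration of the light/heavy limits just described, the following sketch checks a sample against a class such as 60-300. It follows the simplified stone-count reading given above and is not the full EN 13383 acceptance procedure; the function and parameter names are made up for illustration.

```python
# Illustrative check of a stone-class grading such as 60-300: at most 10% of
# the stones lighter than the lower bound and at most 30% heavier than the
# upper bound (counts of stones, per the simplified description above).
def passes_class(masses_kg, lower_kg, upper_kg,
                 max_light_frac=0.10, max_heavy_frac=0.30):
    n = len(masses_kg)
    light = sum(m < lower_kg for m in masses_kg) / n
    heavy = sum(m > upper_kg for m in masses_kg) / n
    return light <= max_light_frac and heavy <= max_heavy_frac

sample = [55, 80, 90, 120, 150, 200, 250, 280, 310, 330]
print(passes_class(sample, 60, 300))   # True: 1 of 10 light, 2 of 10 heavy
```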
Stone classes:
For specific use as a top layer for a breakwater or bank protection, the median stone mass, the M50, is often required; this applies to category A stone but not to category B stone. There are two groups, divided into classes HM and LM (Heavy and Light). Thus, a stone class is defined according to EN 13383 as, for example, HMA300-1000. The attached graphs give an overview of all stone classes. Any distribution between the two indicated curves meets the requirements for category B; in addition, in order to comply with category A, the MEM must pass through the small horizontal line. The MEM is the mean stone mass, i.e. the total mass of the sample divided by the number of stones in the sample. Note that for wide ranges (especially 15-300 and 40-400) there is quite some difference; for the 15-300 class, M50 = 1.57 times MEM.
Stone classes:
In addition, a stone class CP (Coarse) has been defined. The class CP is smaller than LM, although the name suggests otherwise; this is because the class is identical to the coarse class in the standard for fractional stone used as aggregate. In the stone class CP, the class is not indicated in kg but in mm. On the basis of the information in standard EN 13383, the following table can be drawn up:
Median stone mass M50:
For fine-grained materials such as sand, the size is usually given by the median diameter, which is determined by sieving. It is not possible to make a sieve curve for armourstone, because the stones are too large to sieve; this is why the M50 is used. It is determined by taking a sample of stones, determining the mass of each stone, sorting these masses by size, and making a cumulative mass curve. The M50 can be read from this curve. Note that the term median stone mass is strictly a misnomer: the stone with mass M50 is not necessarily the middle stone of the sample.
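A minimal sketch of that procedure, assuming the masses of a sample have already been measured (the function name and the sample values are made up for illustration):

```python
# Sort the individual stone masses, accumulate them, and report the mass at
# which the cumulative curve reaches 50% of the total sample mass (the M50).
def m50(masses_kg):
    masses = sorted(masses_kg)
    total = sum(masses)
    cumulative = 0.0
    for m in masses:
        cumulative += m
        if cumulative >= 0.5 * total:
            return m            # first stone at which the curve crosses 50%
    return masses[-1]

sample = [2, 3, 5, 8, 10, 15, 20, 30, 40, 60]
print(m50(sample))  # 40 kg, while the middle stone of the sample weighs 10-15 kg
```

Note how the result differs from the middle stone of the sample, which is exactly the point made above about the term "median stone mass" being a misnomer.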
Median stone mass M50:
As an example, consider a sample of 50 stones from a quarry in Bulgaria (the blue rectangle in the photograph is A4 size). All stones are individually weighed, and their masses are plotted in the attached graph: horizontally the individual stone mass, and vertically the cumulative mass as a percentage of the total mass of the sample. At 50% the M50 can be read; here it is 24 kg. The real median of this sample is the average mass of stones 25 and 26. In this particular example the M50 happens to be nearly equal to that median mass (26 kg). This sample satisfies the requirements for LMA5-40, apart from the fact that the sample is too small: according to EN 13383 such a sample must consist of at least 200 stones.
Nominal diameter:
Since many design formulas contain a diameter rather than a stone mass, a conversion method is needed. The nominal diameter dn is the edge length of a cube with the same mass as the stone, so dn = (M/ρ)^(1/3). Usually the median value is used here as well: dn50. In general, the relation dn50 = Fs·d50 ≈ 0.84·d50 can be used for conversion, where Fs is the shape factor. The shape factor varies quite a bit; its range is between 0.7 and 0.9.
Nominal diameter:
For the above example from Bulgaria, the dn50 has also been determined. Since the density of the local stone (a limestone) is 2284 kg/m³, the dn50 is 22 cm. Notice that the sample's stone size appears much larger visually; that is because a couple of big stones in it give a misleading impression.
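The Bulgarian example can be reproduced directly from the cube-root relation above; the sketch below is only a convenience wrapper around that formula.

```python
# Nominal diameter from stone mass and density: d_n = (M / rho) ** (1/3).
def nominal_diameter_m(mass_kg: float, density_kg_m3: float) -> float:
    return (mass_kg / density_kg_m3) ** (1.0 / 3.0)

# M50 = 24 kg and rho = 2284 kg/m^3 give roughly 0.22 m, i.e. about 22 cm
print(round(nominal_diameter_m(24.0, 2284.0), 2))
```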
Other parameters:
Standard EN13383 describes many more parameters that capture the quality of armourstone, such as a shape parameter (Length/Thickness), resistance to breakage, water absorption capacity etc. It is important to realise that the standard indicates how to define the quality of armourstone, but not what quality is required for a particular application. The latter is contained in design manuals and design guidelines, such as the Rock Manual.
Determination of required stone weight:
To calculate the required weight under the influence of waves, the (outdated) Hudson Formula or the Van der Meer formula can be used. For calculation of the stone weight in flow, the Izbash formula is recommended. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Value-driven design**
Value-driven design:
Value-driven design (VDD) is a systems engineering strategy based on microeconomics which enables multidisciplinary design optimization. Value-driven design is being developed by the American Institute of Aeronautics and Astronautics, through a program committee of government, industry and academic representatives. In parallel, the U.S. Defense Advanced Research Projects Agency has promulgated an identical strategy, calling it value-centric design, on the F6 Program. At this point, the terms value-driven design and value-centric design are interchangeable. The essence of these strategies is that design choices are made to maximize system value rather than to meet performance requirements.
Value-driven design:
This is also similar to the value-driven approach of agile software development where a project's stakeholders prioritise their high-level needs (or system features) based on the perceived business value each would deliver. Value-driven design is controversial because performance requirements are a central element of systems engineering. However, value-driven design supporters claim that it can improve the development of large aerospace systems by reducing or eliminating cost overruns which are a major problem, according to independent auditors.
Concept:
Value-driven design creates an environment that enables and encourages design optimization by providing designers with an objective function and eliminating those constraints which have been expressed as performance requirements. The objective function inputs all the important attributes of the system being designed, and outputs a score. The higher the score, the better the design. Describing an early version of what is now called value-driven design, George Hazelrigg said, "The purpose of this framework is to enable the assessment of a value for every design option so that options can be rationally compared and a choice taken." At the whole system level, the objective function which performs this assessment of value is called a "value model." The value model distinguishes value-driven design from Multi-Attribute Utility Theory applied to design. Whereas in Multi-Attribute Utility Theory, an objective function is constructed from stakeholder assessments, value-driven design employs economic analysis to build a value model. The basis for the value model is often an expression of profit for a business, but economic value models have also been developed for other organizations, such as government. To design a system, engineers first take system attributes that would traditionally be assigned performance requirements, like the range and fuel consumption of an aircraft, and build a system value model that uses all these attributes as inputs. Next, the conceptual design is optimized to maximize the output of the value model. Then, when the system is decomposed into components, an objective function for each component is derived from the system value model through a sensitivity analysis. A workshop exercise implementing value-driven design for a GPS satellite was conducted in 2006, and may serve as an example of the process.
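To make the idea concrete, here is a purely illustrative value model, not drawn from any cited VDD program: attributes that would traditionally carry performance requirements (range, fuel consumption) feed a single profit-like score, and candidate designs are ranked by that score. Every coefficient, name, and number here is an assumption made for the sake of the sketch.

```python
# Toy value model: a profit-like surrogate built from two design attributes.
# All coefficients are assumptions, not data from any real program.
def value_score(range_km: float, fuel_per_km: float,
                revenue_per_km: float = 0.12, fuel_price: float = 0.8) -> float:
    # revenue earned over the design range minus fuel cost over the same range
    return range_km * (revenue_per_km - fuel_price * fuel_per_km)

candidates = {"design A": (3000, 0.05), "design B": (4500, 0.09)}
best = max(candidates, key=lambda name: value_score(*candidates[name]))
print(best)   # the design with the higher score wins; no requirement thresholds
```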
History:
The dichotomy between designing to performance requirements versus objective functions was raised by Herbert Simon in an essay called "The Science of Design" in 1969. Simon played both sides, saying that, ideally, engineered systems should be optimized according to an objective function, but realistically this is often too hard, so that attributes would need to be satisficed, which amounted to setting performance requirements. But he included optimization techniques in his recommended curriculum for engineers, and endorsed "utility theory and statistical decision theory as a logical framework for rational choice among given alternatives".
History:
Utility theory was given most of its current mathematical formulation by von Neumann and Morgenstern, but it was the economist Kenneth Arrow who proved the Expected Utility Theorem most broadly, which says in essence that, given a choice among a set of alternatives, one should choose the alternative that provides the greatest probabilistic expectation of utility, where utility is value adjusted for risk aversion. Ralph Keeney and Howard Raiffa extended utility theory in support of decision making, and Keeney developed the idea of a value model to encapsulate the calculation of utility. Keeney and Raiffa also used "attributes" to describe the inputs to an evaluation process or value model.
History:
George Hazelrigg put engineering design, business plan analysis, and decision theory together for the first time in a framework in a paper written in 1995, which was published in 1998. Meanwhile, Paul Collopy independently developed a similar framework in 1997, and Harry Cook developed the S-Model for incorporating product price and demand into a profit-based objective function for design decisions. The MIT Engineering Systems Division produced a series of papers from 2000 on, many co-authored by Daniel Hastings, in which many utility formulations were used to address various forms of uncertainty in making engineering design decisions. Saleh et al. is a good example of this work.
History:
The term value-driven design was coined by James Sturges at Lockheed Martin while he was organizing a workshop that would become the Value-Driven Design Program Committee at the American Institute of Aeronautics and Astronautics (AIAA) in 2006. Meanwhile, value centric design was coined independently by Owen Brown and Paul Eremenko of DARPA in the Phase 1 Broad Agency Announcement for the DARPA F6 satellite design program in 2007.
History:
Castagne et al. provides an example where value-driven design was used to design fuselage panels for a regional jet.
Value-based acquisition:
Implementation of value-driven design on large government systems, such as NASA or European Space Agency spacecraft or weapon systems, will require a government acquisition system that directs or incentivizes the contractor to employ a value model. Such a system is proposed in some detail in an essay by Michael Lippitz, Sean O'Keefe, and John White. They suggest that "A program office can offer a contract in which price is a function of value", where the function is derived from a value model. The price function is structured so that, in optimizing the product design in accordance with the value model, the contractor will maximize its own profit. They call this system Value Based Acquisition. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multiway branch**
Multiway branch:
Multiway branch is the change to a program's control flow based upon a value matching a selected criterion. It is a form of conditional statement. A multiway branch is often the most efficient method of passing control to one of a set of program labels, especially if an index has been created beforehand from the raw data.
Examples:
Branch table
Switch statement - see also alternatives below
Multiple dispatch - where a subroutine is invoked and a return is made
Alternatives:
A multiway branch can frequently be replaced with an efficient indexed table lookup, using the data value itself (or a calculated derivative of the data value) as the index of an array. As "A Superoptimizer Analysis of Multiway Branch Code Generation" by Roger Anthony Sayle puts it: "...the implementation of a switch statement has been equated with that of a multiway branch. However, for many uses of the switch statement in real code, it is possible to avoid branching altogether and replace the switch with one or more table look-ups. For example, the Has30Days example [presented earlier] can be implemented as the following: [C example]" The switch can be replaced, using a "safe-hashing" technique, with - or it can be replaced, using an index mapping table lookup, with - (in view of the simplicity of the latter case, it would be preferable to implement it in-line, since the overhead of using a function call may be greater than the indexed lookup itself).
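The original quotation gives the Has30Days example in C; since that code is not reproduced here, the following is a Python rendering of the same idea (names are assumed, not taken from the paper): the chain of comparisons, or switch, is replaced by a single indexed lookup into a precomputed table.

```python
# Table lookup in place of a multiway branch: one boolean per month,
# indexed directly by the month number instead of branching on it.
HAS_30_DAYS = [False, False, False, True, False, True,   # Jan .. Jun
               False, False, True, False, True, False]   # Jul .. Dec

def has_30_days(month: int) -> bool:
    return HAS_30_DAYS[month - 1]     # single indexed lookup, no branching

print(has_30_days(4), has_30_days(12))   # True False
```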
Quotations:
Multiway branching is an important programming technique which is all too often replaced by an inefficient sequence of if tests. Peter Naur recently wrote me that he considers the use of tables to control program flow as a basic idea of computer science that has been nearly forgotten; but he expects it will be ripe for rediscovery any day now. It is the key to efficiency in all the best compilers I have studied. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Testosterone phosphate**
Testosterone phosphate:
Testosterone phosphate (brand name Telipex Aquosum) is an androgen and anabolic steroid and a testosterone ester. Its structure is contained within polytestosterone phloretin phosphate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Exceptional isomorphism**
Exceptional isomorphism:
In mathematics, an exceptional isomorphism, also called an accidental isomorphism, is an isomorphism between members ai and bj of two families, usually infinite, of mathematical objects, which is incidental, in that it is not an instance of a general pattern of such isomorphisms. These coincidences are at times considered a matter of trivia, but in other respects they can give rise to consequential phenomena, such as exceptional objects. In the following, coincidences are organized according to the structures where they occur.
Groups:
Finite simple groups
The exceptional isomorphisms between the series of finite simple groups mostly involve projective special linear groups and alternating groups, and are:
PSL2(4) ≅ PSL2(5) ≅ A5, the smallest non-abelian simple group (order 60) – icosahedral symmetry;
PSL2(7) ≅ PSL3(2), the second-smallest non-abelian simple group (order 168) – PSL(2,7);
PSL2(9) ≅ A6;
PSL4(2) ≅ A8;
PSU4(2) ≅ PSp4(3), between a projective special unitary group and a projective symplectic group.
Groups:
Alternating groups and symmetric groups
There are coincidences between symmetric/alternating groups and small groups of Lie type/polyhedral groups:
S3 ≅ PSL2(2) ≅ dihedral group of order 6,
A4 ≅ PSL2(3) ≅ tetrahedral group,
S4 ≅ PGL2(3) ≅ PSL2(Z/4) ≅ full tetrahedral group ≅ octahedral group,
A5 ≅ PSL2(4) ≅ PSL2(5) ≅ icosahedral group,
S5 ≅ PGL2(5),
A6 ≅ PSL2(9) ≅ Sp4(2)′,
S6 ≅ Sp4(2),
A8 ≅ PSL4(2) ≅ O6+(2)′,
S8 ≅ O6+(2).
Groups:
These can all be explained in a systematic way by using linear algebra (and the action of Sn on affine n-space) to define the isomorphism going from the right side to the left side. (The above isomorphisms for A8 and S8 are linked via the exceptional isomorphism SL4/μ2 ≅ SO6.) There are also some coincidences with symmetries of regular polyhedra: the alternating group A5 agrees with the icosahedral group (itself an exceptional object), and the double cover of the alternating group A5 is the binary icosahedral group.
Groups:
Trivial group
The trivial group arises in numerous ways. The trivial group is often omitted from the beginning of a classical family. For instance:
C1, the cyclic group of order 1;
A0 ≅ A1 ≅ A2, the alternating group on 0, 1, or 2 letters;
S0 ≅ S1, the symmetric group on 0 or 1 letters;
GL(0, K) ≅ SL(0, K) ≅ PGL(0, K) ≅ PSL(0, K), linear groups of a 0-dimensional vector space;
SL(1, K) ≅ PGL(1, K) ≅ PSL(1, K), linear groups of a 1-dimensional vector space;
and many others.
Groups:
Spheres
The spheres S0, S1, and S3 admit group structures, which can be described in many ways:
S0 ≅ Spin(1) ≅ O(1) ≅ Z/2Z ≅ Z×, the last being the group of units of the integers;
S1 ≅ Spin(2) ≅ SO(2) ≅ U(1) ≅ R/Z, the circle group;
S3 ≅ Spin(3) ≅ SU(2) ≅ Sp(1), the unit quaternions.
Spin groups
In addition to Spin(1), Spin(2) and Spin(3) above, there are isomorphisms for higher dimensional spin groups:
Spin(4) ≅ Sp(1) × Sp(1) ≅ SU(2) × SU(2)
Spin(5) ≅ Sp(2)
Spin(6) ≅ SU(4)
Also, Spin(8) has an exceptional order 3 triality automorphism.
Coxeter–Dynkin diagrams:
There are some exceptional isomorphisms of Dynkin diagrams, yielding isomorphisms of the corresponding Coxeter groups and of polytopes realizing the symmetries, as well as isomorphisms of Lie algebras whose root systems are described by the same diagrams. These are: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**V-hull (boat)**
V-hull (boat):
A V-hull is the shape of a boat or ship in which the contours of the hull come in a straight line to the keel. V-hull designs are usually used in smaller boats and are useful in providing space for ballast inside the boat.
**Triphenyl phosphite ozonide**
Triphenyl phosphite ozonide:
Triphenyl phosphite ozonide (TPPO) is a chemical compound with the formula PO3(C6H5O)3 that is used to generate singlet oxygen. When TPPO is mixed with amines, the ozonide breaks down into singlet oxygen and leaves behind triphenyl phosphite. Pyridine is the only known amine that can effectively cause the breakdown of TPPO while not quenching any of the produced oxygen.
Synthesis:
Triphenyl phosphite ozonide is created by bubbling dry ozone through dichloromethane with triphenyl phosphite being added dropwise at -78 °C. If triphenyl phosphite is added in excess in the synthesis, TPPO can be reduced to triphenyl phosphite oxide, PO(C6H5O)3, and oxygen gas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Open Message Queue**
Open Message Queue:
Open Message Queue (OpenMQ or Open MQ) is an open-source message-oriented middleware project by Oracle (formerly Sun Microsystems) that implements the Java Message Service 2.0 API (JMS). It is the default JMS provider integrated into GlassFish.
In addition to support for the JMS API, OpenMQ provides additional enterprise features including clustering for scalability and high availability, a C API, and a full JMX administration API. It also includes an implementation of the Java EE Connector Architecture (JCA) called the JMSRA, that allows OpenMQ to be used by a Java EE compliant application server. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Levator ani nerve**
Levator ani nerve:
The levator ani nerve is a nerve to the levator ani muscles. It originates from sacral spinal nerve 4. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Interior algebra**
Interior algebra:
In abstract algebra, an interior algebra is a certain type of algebraic structure that encodes the idea of the topological interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and ordinary propositional logic. Interior algebras form a variety of modal algebras.
Definition:
An interior algebra is an algebraic structure with the signature ⟨S, ·, +, ′, 0, 1, I⟩ where ⟨S, ·, +, ′, 0, 1⟩ is a Boolean algebra and postfix I designates a unary operator, the interior operator, satisfying the identities:
xI ≤ x
xII = xI
(xy)I = xIyI
1I = 1
xI is called the interior of x.
Definition:
The dual of the interior operator is the closure operator C defined by xC = ((x′)I)′. xC is called the closure of x. By the principle of duality, the closure operator satisfies the identities:
xC ≥ x
xCC = xC
(x + y)C = xC + yC
0C = 0
If the closure operator is taken as primitive, the interior operator can be defined as xI = ((x′)C)′. Thus the theory of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one considers closure algebras of the form ⟨S, ·, +, ′, 0, 1, C⟩, where ⟨S, ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic instances of "Boolean algebras with operators." The early literature on this subject (mainly Polish topology) invoked closure operators, but the interior operator formulation eventually became the norm following the work of Wim Blok.
Open and closed elements:
Elements of an interior algebra satisfying the condition xI = x are called open. The complements of open elements are called closed and are characterized by the condition xC = x. An interior of an element is always open and the closure of an element is always closed. Interiors of closed elements are called regular open and closures of open elements are called regular closed. Elements which are both open and closed are called clopen. 0 and 1 are clopen.
Open and closed elements:
An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful additional structure. A special case is the class of trivial interior algebras which are the single element interior algebras characterized by the identity 0 = 1.
Morphisms of interior algebras:
Homomorphisms Interior algebras, by virtue of being algebraic structures, have homomorphisms. Given two interior algebras A and B, a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying Boolean algebras of A and B, that also preserves interiors and closures. Hence: f(xI) = f(x)I; f(xC) = f(x)C.
Morphisms of interior algebras:
Topomorphisms Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f : A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B, that also preserves the open and closed elements of A. Hence: If x is open in A, then f(x) is open in B; If x is closed in A, then f(x) is closed in B.(Such morphisms have also been called stable homomorphisms and closure algebra semi-homomorphisms.) Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra homomorphism.
Morphisms of interior algebras:
Boolean homomorphisms Early research often considered mappings between interior algebras which were homomorphisms of the underlying Boolean algebras but which did not necessarily preserve the interior or closure operator. Such mappings were called Boolean homomorphisms. (The terms closure homomorphism or topological homomorphism were used in the case where these were preserved, but this terminology is now redundant as the standard definition of a homomorphism in universal algebra requires that it preserves all operations.) Applications involving countably complete interior algebras (in which countable meets and joins always exist, also called σ-complete) typically made use of countably complete Boolean homomorphisms also called Boolean σ-homomorphisms - these preserve countable meets and joins.
Morphisms of interior algebras:
Continuous morphisms The earliest generalization of continuity to interior algebras was Sikorski's based on the inverse image map of a continuous map. This is a Boolean homomorphism, preserves unions of sequences and includes the closure of an inverse image in the inverse image of the closure. Sikorski thus defined a continuous homomorphism as a Boolean σ-homomorphism f between two σ-complete interior algebras such that f(x)C ≤ f(xC). This definition had several difficulties: The construction acts contravariantly producing a dual of a continuous map rather than a generalization. On the one hand σ-completeness is too weak to characterize inverse image maps (completeness is required), on the other hand it is too restrictive for a generalization. (Sikorski remarked on using non-σ-complete homomorphisms but included σ-completeness in his axioms for closure algebras.) Later J. Schmid defined a continuous homomorphism or continuous morphism for interior algebras as a Boolean homomorphism f between two interior algebras satisfying f(xC) ≤ f(x)C. This generalizes the forward image map of a continuous map - the image of a closure is contained in the closure of the image. This construction is covariant but not suitable for category theoretic applications as it only allows construction of continuous morphisms from continuous maps in the case of bijections. (C. Naturman returned to Sikorski's approach while dropping σ-completeness to produce topomorphisms as defined above. In this terminology, Sikorski's original "continuous homomorphisms" are σ-complete topomorphisms between σ-complete interior algebras.)
Relationships to other areas of mathematics:
Topology
Given a topological space X = ⟨X, T⟩ one can form the power set Boolean algebra of X: ⟨P(X), ∩, ∪, ′, ø, X⟩ and extend it to an interior algebra A(X) = ⟨P(X), ∩, ∪, ′, ø, X, I⟩, where I is the usual topological interior operator. For all S ⊆ X it is defined by
SI = ∪ {O | O ⊆ S and O is open in X}
For all S ⊆ X the corresponding closure operator is given by
SC = ∩ {C | S ⊆ C and C is closed in X}
SI is the largest open subset of S and SC is the smallest closed superset of S in X. The open, closed, regular open, regular closed and clopen elements of the interior algebra A(X) are just the open, closed, regular open, regular closed and clopen subsets of X respectively in the usual topological sense.
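For a small finite space these two operators can be computed directly from the definitions above; the sketch below is only a finite illustration of A(X), and the particular topology chosen is an arbitrary example.

```python
# Finite illustration of A(X): interiors as unions of open subsets, closures
# obtained through the duality S^C = ((S')^I)'.
X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]   # a topology on X

def interior(S):
    # union of all open sets contained in S
    return frozenset().union(*(O for O in opens if O <= S))

def closure(S):
    # duality with respect to complement in X
    return X - interior(X - S)

print(sorted(interior(frozenset({2, 3}))))   # []     - only the empty set fits inside {2, 3}
print(sorted(closure(frozenset({2}))))       # [2, 3] - the smallest closed superset of {2}
```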
Relationships to other areas of mathematics:
Every complete atomic interior algebra is isomorphic to an interior algebra of the form A(X) for some topological space X. Moreover, every interior algebra can be embedded in such an interior algebra giving a representation of an interior algebra as a topological field of sets. The properties of the structure A(X) are the very motivation for the definition of interior algebras. Because of this intimate connection with topology, interior algebras have also been called topo-Boolean algebras or topological Boolean algebras.
Relationships to other areas of mathematics:
Given a continuous map between two topological spaces f : X → Y we can define a complete topomorphism A(f) : A(Y) → A(X) by A(f)(S) = f−1[S] for all subsets S of Y. Every complete topomorphism between two complete atomic interior algebras can be derived in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete atomic interior algebras and complete topomorphisms then Top and Cit are dually isomorphic and A : Top → Cit is a contravariant functor that is a dual isomorphism of categories. A(f) is a homomorphism if and only if f is a continuous open map.
Relationships to other areas of mathematics:
Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in particular connectedness properties correspond to irreducibility properties:
X is empty if and only if A(X) is trivial
X is indiscrete if and only if A(X) is simple
X is discrete if and only if A(X) is Boolean
X is almost discrete if and only if A(X) is semisimple
X is finitely generated (Alexandrov) if and only if A(X) is operator complete, i.e. its interior and closure operators distribute over arbitrary meets and joins respectively
X is connected if and only if A(X) is directly indecomposable
X is ultraconnected if and only if A(X) is finitely subdirectly irreducible
X is compact ultra-connected if and only if A(X) is subdirectly irreducible
Generalized topology
The modern formulation of topological spaces in terms of topologies of open subsets motivates an alternative formulation of interior algebras: A generalized topological space is an algebraic structure of the form ⟨B, ·, +, ′, 0, 1, T⟩ where ⟨B, ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (subset of B) such that:
0, 1 ∈ T
T is closed under arbitrary joins (i.e. if a join of an arbitrary subset of T exists then it will be in T)
T is closed under finite meets
For every element b of B, the join Σ{a ∈ T | a ≤ b} exists
T is said to be a generalized topology in the Boolean algebra.
Relationships to other areas of mathematics:
Given an interior algebra its open elements form a generalized topology. Conversely given a generalized topological space ⟨B, ·, +, ′, 0, 1, T⟩we can define an interior operator on B by bI = Σ{a ∈T | a ≤ b} thereby producing an interior algebra whose open elements are precisely T. Thus generalized topological spaces are equivalent to interior algebras.
Relationships to other areas of mathematics:
Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomorphisms of Boolean algebras with added relations, so that standard results from universal algebra apply.
Relationships to other areas of mathematics:
Neighbourhood functions and neighbourhood lattices
The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra is said to be a neighbourhood of an element x if x ≤ yI. The set of neighbourhoods of x is denoted by N(x) and forms a filter. This leads to another formulation of interior algebras: A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such that:
For all x ∈ B, max{y ∈ B | x ∈ N(y)} exists
For all x, y ∈ B, x ∈ N(y) if and only if there is a z ∈ B such that y ≤ z ≤ x and z ∈ N(z).
The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean algebra with underlying set B, we can define an interior operator by xI = max{y ∈ B | x ∈ N(y)}, thereby obtaining an interior algebra. N(x) will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior algebras are equivalent to Boolean algebras with specified neighbourhood functions.
Relationships to other areas of mathematics:
In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N(x). In terms of open elements x ∈ N(y) if and only if there is an open element z such that y ≤ z ≤ x.
Neighbourhood functions may be defined more generally on (meet)-semilattices producing the structures known as neighbourhood (semi)lattices. Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra.
Relationships to other areas of mathematics:
Modal logic Given a theory (set of formal sentences) M in the modal logic S4, we can form its Lindenbaum–Tarski algebra: L(M) = ⟨M / ~, ∧, ∨, ¬, F, T, □⟩where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent in M, and M / ~ is the set of equivalence classes under this relation. Then L(M) is an interior algebra. The interior operator in this case corresponds to the modal operator □ (necessarily), while the closure operator corresponds to ◊ (possibly). This construction is a special case of a more general result for modal algebras and modal logic.
Relationships to other areas of mathematics:
The open elements of L(M) correspond to sentences that are only true if they are necessarily true, while the closed elements correspond to those that are only false if they are necessarily false.
Because of their relation to S4, interior algebras are sometimes called S4 algebras or Lewis algebras, after the logician C. I. Lewis, who first proposed the modal logics S4 and S5.
Relationships to other areas of mathematics:
Preorders Since interior algebras are (normal) Boolean algebras with operators, they can be represented by fields of sets on appropriate relational structures. In particular, since they are modal algebras, they can be represented as fields of sets on a set with a single binary relation, called a modal frame. The modal frames corresponding to interior algebras are precisely the preordered sets. Preordered sets (also called S4-frames) provide the Kripke semantics of the modal logic S4, and the connection between interior algebras and preorders is deeply related to their connection with modal logic.
Relationships to other areas of mathematics:
Given a preordered set X = ⟨X, «⟩ we can construct an interior algebra B(X) = ⟨P(X), ∩, ∪, ′, ø, X, I⟩ from the power set Boolean algebra of X, where the interior operator I is given by
SI = {x ∈ X | for all y ∈ X, x « y implies y ∈ S} for all S ⊆ X.
The corresponding closure operator is given by
SC = {x ∈ X | there exists a y ∈ S with x « y} for all S ⊆ X.
SI is the set of all worlds inaccessible from worlds outside S, and SC is the set of all worlds accessible from some world in S. Every interior algebra can be embedded in an interior algebra of the form B(X) for some preordered set X, giving the above-mentioned representation as a field of sets (a preorder field).
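These two set-level formulas are easy to evaluate on a small preorder; the sketch below, using a made-up three-world chain as the preorder, is only meant to show how the definitions read.

```python
# B(X) on a tiny preorder: (x, y) in acc is read "y is accessible from x".
worlds = {1, 2, 3}
acc = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}   # the chain 1 « 2 « 3

def interior(S):
    # worlds from which every accessible world stays inside S
    return {x for x in worlds if all(y in S for y in worlds if (x, y) in acc)}

def closure(S):
    # worlds from which some world of S is accessible
    return {x for x in worlds if any((x, y) in acc for y in S)}

print(interior({2, 3}))   # {2, 3}: world 1 can leave S, so it is excluded
print(closure({3}))       # {1, 2, 3}: every world can reach 3
```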
Relationships to other areas of mathematics:
This construction and representation theorem is a special case of the more general result for modal algebras and modal frames. In this regard, interior algebras are particularly interesting because of their connection to topology. The construction provides the preordered set X with a topology, the Alexandrov topology, producing a topological space T(X) whose open sets are: {O ⊆ X | for all x ∈ O and all y ∈ X, x « y implies y ∈ O}. The corresponding closed sets are: {C ⊆ X | for all x ∈ C and all y ∈ X, y « x implies y ∈ C}. In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets), and the closed sets are the ones for which every outside world is inaccessible from inside (the down-sets). Moreover, B(X) = A(T(X)).
Relationships to other areas of mathematics:
Monadic Boolean algebras Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the variety of interior algebras satisfying the identity xIC = xI. In other words, they are precisely the interior algebras in which every open element is closed or equivalently, in which every closed element is open. Moreover, such interior algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal logic S5, and so have also been called S5 algebras.
Relationships to other areas of mathematics:
In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is an equivalence relation, reflecting the fact that such preordered sets provide the Kripke semantics for S5. This also reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide an algebraic description) and S5 where the modal operators □ (necessarily) and ◊ (possibly) can be interpreted in the Kripke semantics using monadic universal and existential quantification, respectively, without reference to an accessibility relation.
Relationships to other areas of mathematics:
Heyting algebras The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra. The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra, and the latter may be chosen to be an interior algebra generated by its open elements - such interior algebras correspond one to one with Heyting algebras (up to isomorphism), being the free Boolean extensions of the latter.
Relationships to other areas of mathematics:
Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and Boolean algebras play for propositional logic. The relation between Heyting algebras and interior algebras reflects the relationship between intuitionistic logic and S4, in which one can interpret theories of intuitionistic logic as S4 theories closed under necessity. The one to one correspondence between Heyting algebras and interior algebras generated by their open elements reflects the correspondence between extensions of intuitionistic logic and normal extensions of the modal logic S4.Grz.
Relationships to other areas of mathematics:
Derivative algebras Given an interior algebra A, the closure operator obeys the axioms of the derivative operator, D. Hence we can form a derivative algebra D(A) with the same underlying Boolean algebra as A by using the closure operator as a derivative operator.
Thus interior algebras are derivative algebras. From this perspective, they are precisely the variety of derivative algebras satisfying the identity xD ≥ x. Derivative algebras provide the appropriate algebraic semantics for the modal logic WK4. Hence derivative algebras stand to topological derived sets and WK4 as interior/closure algebras stand to topological interiors/closures and S4.
Relationships to other areas of mathematics:
Given a derivative algebra V with derivative operator D, we can form an interior algebra I(V) with the same underlying Boolean algebra as V, with interior and closure operators defined by xI = x·x ′ D ′ and xC = x + xD, respectively. Thus every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A, we have I(D(A)) = A. However, D(I(V)) = V does not necessarily hold for every derivative algebra V.
Stone duality and representation for interior algebras:
Stone duality provides a category theoretic duality between Boolean algebras and a class of topological spaces known as Boolean spaces. Building on nascent ideas of relational semantics (later formalized by Kripke) and a result of R. S. Pierce, Jónsson, Tarski and G. Hansoul extended Stone duality to Boolean algebras with operators by equipping Boolean spaces with relations that correspond to the operators via a power set construction. In the case of interior algebras the interior (or closure) operator corresponds to a pre-order on the Boolean space. Homomorphisms between interior algebras correspond to a class of continuous maps between the Boolean spaces known as pseudo-epimorphisms or p-morphisms for short. This generalization of Stone duality to interior algebras based on the Jónsson–Tarski representation was investigated by Leo Esakia and is also known as the Esakia duality for S4-algebras (interior algebras) and is closely related to the Esakia duality for Heyting algebras.
Stone duality and representation for interior algebras:
Whereas the Jónsson–Tarski generalization of Stone duality applies to Boolean algebras with operators in general, the connection between interior algebras and topology allows for another method of generalizing Stone duality that is unique to interior algebras. An intermediate step in the development of Stone duality is Stone's representation theorem which represents a Boolean algebra as a field of sets. The Stone topology of the corresponding Boolean space is then generated using the field of sets as a topological basis. Building on the topological semantics introduced by Tang Tsao-Chen for Lewis's modal logic, McKinsey and Tarski showed that by generating a topology equivalent to using only the complexes that correspond to open elements as a basis, a representation of an interior algebra is obtained as a topological field of sets - a field of sets on a topological space that is closed with respect to taking interiors or closures. By equipping topological fields of sets with appropriate morphisms known as field maps C. Naturman showed that this approach can be formalized as a category theoretic Stone duality in which the usual Stone duality for Boolean algebras corresponds to the case of interior algebras having redundant interior operator (Boolean interior algebras).
Stone duality and representation for interior algebras:
The pre-order obtained in the Jónsson–Tarski approach corresponds to the accessibility relation in the Kripke semantics for an S4 theory, while the intermediate field of sets corresponds to a representation of the Lindenbaum–Tarski algebra for the theory using the sets of possible worlds in the Kripke semantics in which sentences of the theory hold. Moving from the field of sets to a Boolean space somewhat obfuscates this connection. By treating fields of sets on pre-orders as a category in its own right this deep connection can be formulated as a category theoretic duality that generalizes Stone representation without topology. R. Goldblatt had shown that with restrictions to appropriate homomorphisms such a duality can be formulated for arbitrary modal algebras and modal frames. Naturman showed that in the case of interior algebras this duality applies to more general topomorphisms and can be factored via a category theoretic functor through the duality with topological fields of sets. The latter represent the Lindenbaum–Tarski algebra using sets of points satisfying sentences of the S4 theory in the topological semantics. The pre-order can be obtained as the specialization pre-order of the McKinsey-Tarski topology. The Esakia duality can be recovered via a functor that replaces the field of sets with the Boolean space it generates. Via a functor that instead replaces the pre-order with its corresponding Alexandrov topology, an alternative representation of the interior algebra as a field of sets is obtained where the topology is the Alexandrov bico-reflection of the McKinsey-Tarski topology. The approach of formulating a topological duality for interior algebras using both the Stone topology of the Jónsson–Tarski approach and the Alexandrov topology of the pre-order to form a bi-topological space has been investigated by G. Bezhanishvili, R.Mines, and P.J. Morandi. The McKinsey-Tarski topology of an interior algebra is the intersection of the former two topologies.
Metamathematics:
Grzegorczyk proved the elementary theory of closure algebras undecidable. Naturman demonstrated that the theory is hereditarily undecidable (all its subtheories are undecidable) and demonstrated an infinite chain of elementary classes of interior algebras with hereditarily undecidable theories. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lego in popular culture**
Lego in popular culture:
The acknowledgement of Lego in popular culture is demonstrated by the toy's wide representation in publication, television and film, and its common usage in artistic and cultural works.
Online:
In 2001, Elbe Spurling started an online web project to create an illustrated version of the Bible using Lego bricks, called The Brick Testament. The project has grown to cover over 400 stories, with over 4000 images, each of which is a photograph of a hand-built Lego scene. The web project drew international media attention, and has been published as three hardcover books.
Online:
The search engine Google paid tribute to the 50th anniversary of the Lego patent by replacing its usual logo on the Google homepage with one made from Lego bricks, along with the Lego figure on one of the letters. Some of the hardware Google's founders had used during their early research was housed in custom-made enclosures constructed from Lego bricks.There are also several online webcomics that feature art illustrated with Lego, such as the Irregular Webcomic!, Brick House, Legostar Galactica, Tranquility Base, The Adventures of the S-Team, Brickworld Saga, Glomshire Knights, and Bricks of the Dead. Many of these webcomics make frequent jokes about the strange abbreviations, pet peeves and complaints often found in the LEGO community.
Books:
Lego bricks played a role and were featured on some covers of Douglas Coupland's novel Microserfs, published in June 1995. Several unofficial books have been written about Lego. The Unofficial LEGO Builder's Guide was written by Allan Bedford, targeted at children, with the aim of teaching a variety of building techniques at various scales (including minifigure scale and Legoland 'Miniland' scale), as well as including a small encyclopedia of some of the most common different types of Lego brick available. Lego has also released some official Lego books, such as the Ultimate LEGO Book, in 1999. There have also been many different books published about the Lego Mindstorms robotics product, some of which focus on its use as an educational toy within schools.
Films:
There are a number of short movies or recreations of feature films that have been made using Lego bricks, either using stop motion animation or computer-generated imagery (CGI). Making these is a popular fan activity and is supported by community websites such as BrickFilms; these films are often known as brickfilms. Other examples include the award-winning music video for the song "Fell in Love with a Girl" by The White Stripes, in which director Michel Gondry filmed a live version of the video, digitized the result and then recreated it entirely with Lego bricks.
Films:
Lego and Miramax Films partnered to create a trilogy of direct-to-DVD films for Lego's highly popular Bionicle series. The films Bionicle: Mask of Light, Bionicle 2: Legends of Metru Nui and Bionicle 3: Web of Shadows were released between 2003 and 2005 respectively. A fourth film made in association with Universal was released in 2009 as Bionicle: The Legend Reborn.
Films:
A feature film based on Lego toys, The Lego Movie, was released in 2014 and became a critical and commercial success.
In 2017, The Lego Batman Movie was released, featuring popular characters from the DC universe as well as fictional characters from other franchises such as Harry Potter, The Wizard of Oz, The Lord of the Rings, Gremlins and more.
A sequel to the 2014 film, The Lego Movie 2: The Second Part, was released in 2019 to lesser box-office success. The film introduced plotlines relating to Duplo, Lego's brand for younger children.
Music:
In 1995–96, the Danish composer Frederik Magle composed a symphonic LEGO Fantasia in three movements for piano and symphony orchestra, commissioned by the Lego Group. The LEGO Fantasia was premiered on 24 August 1997 at a concert in St George's Chapel, Windsor Castle with the London Philharmonic Orchestra, David Parry and Frederik Magle. In 1998 the work was recorded by the same performers and released on a CD by the Lego group.
Music:
In 2002, the American rock band The White Stripes used Lego to produce an animated music video for their single "Fell in Love with a Girl". The video was directed by Michel Gondry and won three MTV Video Music Awards. The 2011 pop song "Lego House" by British singer-songwriter Ed Sheeran references Lego in its name, even though Lego is only mentioned once at the beginning of the song, as a metaphor for a breakup.
Music:
The video for a 2014 remix of Avicii's song "Addicted to You" was made out of Lego, referencing the lobby scene from The Matrix.
Art:
Artists have used Lego to create artwork, which is sometimes referred to as Lego art or brick art.
Art:
As of 2021, twenty people around the world have become Lego Certified Professionals: certified artists who use Lego bricks as their medium. The Lego Group recognizes their efforts, and they not only may use the Lego name and copyrighted logo but have earned a special, in-depth relationship with the company. They are, in the Americas: Robin Sather, Graeme Dymond and Nathan Sawaya; in Europe and the Middle East: Georg Schmitt, Matija Puzar, Rene Hoffmeister, Kevin Hall, Riccardo Zangelmi, Caspar Bennedsen, Balazs Doczy and Vladimir Golubev; in Asia-Pacific: Prince (Shenghui) Jiang, Jumpei Mitsui, Wani Kim, Jae Won Lee, Wei Wei Shannon Gluckman, Nicholas Foo, Yenchih Huang, Andy Hung and Ryan McNaught.
Art:
Lego bricks have been employed to replicate famous works of art in a mosaic motif, often for the promotion of a Lego event or relating to the replicated artwork. There have been many art-related records (especially mosaics) set by using Lego bricks. The record for the largest Lego mosaic was set on May 5-7, 2012, consisting of over 660,000 pieces and measuring 143.91 square metres. Another world record attempt, to build a Lego mosaic of over 2,000,000 pieces, appears to be under way as of January 2014. A 2011 exhibition titled Da Vinci, The Genius at the Frazier History Museum in Louisville, Kentucky attracted attention by having a brick-art Mona Lisa replica constructed by Lego artist Brian Korte.
Art:
Lego builders such as Eric Harshbarger have made multiple replicas of Mona Lisa. Matching the approximate 21 by 30 inch size (535 x 760+ mm) of Leonardo's original requires upwards of 5,000 standard Lego bricks, but replicas measuring 6 by 8 feet have been built, requiring more than 30,000 bricks.
The Little Artists (John Cake and Darren Neave) have created an entire Modern Art collection in Lego form. Their exhibition 'Art Craziest Nation' was shown at the Walker Art Gallery in Liverpool, UK.
A giant Lego figure called Ego Leonard washed ashore at several beaches in the Netherlands, the UK and Siesta Key, Florida. Polish artist Zbigniew Libera created "Lego Concentration Camp", a collection of mock Lego sets with a concentration camp theme.
Danish artist Jørn Rønnau created a sculpture called The Walker out of 120,000 Lego bricks for the travelling exhibition 'Homo Futurus' at the end of the 1980s. The sculpture later went on display in the Danish pavilion at Expo 2000.
The Lego-Brücke (Lego Bridge) is situated in Wuppertal, Germany. It received an award in 2012.
In December 2013, Romanian Raul Oaida and Australian Steve Sammartino completed construction of a Lego Car. The car is constructed of over half a million Lego pieces and runs on compressed air.
Television:
Lego was the subject of Episode 5 of the 2009 British TV series James May's Toy Stories, in which presenter James May built a full-sized two-story house from 3.3 million Lego bricks in a vineyard of the Denbies Wine Estate in Dorking, Surrey. The house was later dismantled, as the space was needed for wine-making and the house lacked planning permission, and the bricks were taken to Legoland Windsor for use as part of an annual building event. An episode of The Simpsons, "Hungry, Hungry Homer", involved the Simpson family going to Blockoland, a parody of Legoland that is completely made of blocks. Bart buys a T-shirt made of bricks, accidentally calling it a "Lego shirt" before Marge corrects him. During the same scene, Lisa is seen with a model of the Eiffel Tower, which was released as an official Lego set in 2007. The most extensive use of Lego on the show is in the Season 25 episode "Brick Like Me", in which Springfield is made from Lego bricks; it is revealed to be Homer's dream. During that episode, Bart appears in a robot suit to save Homer from the evil Comic Book Guy, and the robot starts throwing up Lego lightsabers.
Television:
Legoland is also mentioned in several episodes of the TV show Arrested Development.
Television:
In 2009, Lego was featured in an episode of MythBusters (Episode 117 - YouTube Special). The build team tested a myth related to a YouTube video that showed a ball of Lego being rolled down a street and into a car, where it caused major damage. The myth was declared busted when the ball started to lose pieces while being rolled down a hill and then smashed into thousands of pieces when it hit a barrier. Lego was briefly seen in the intro for Happy Endings.
Television:
In the anime Shaman King, a character called Brocken Meyer, whose body was crippled, wears Lego-like armor covering him from head to toe to help with his disability, and creates various objects to battle with that also appear to be made of Lego (including a T-rex, a bird, and a tank). Several other characters also have similar Lego-like pieces on their bodies, although far fewer than Brocken.
Summer Brickathon:
LEGO Summer Brickathon opened Memorial Day weekend 2012 at Broadway at the Beach in Myrtle Beach, South Carolina, and will return as a temporary attraction June 6 through July 17. Children and adults can build with Lego and have their picture taken. Other locations for the attraction have been Branson, Missouri; Lake Tahoe, California; and Traverse City, Michigan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SailFin**
SailFin:
SailFin was an open-source Java application server project led by Sun Microsystems. It implements the JCP SIP Servlet 1.1 (JSR 289) specification integrated with the open-source Java EE application server GlassFish.
SailFin effectively extends the GlassFish application server to meet the needs of communication and multimedia applications. By leveraging GlassFish as a basis, SailFin offers management, high-availability (HA) and clustering features, along with the performance and scale to support critical service deployments.
SailFin, based on code donated to open source by Sun, Oracle Corporation, and Ericsson, was launched at Java ONE in May 2007. Sun also offered commercial support for SailFin under the product name "Sun GlassFish Communications Server".
This project was archived and effectively discontinued. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cumomer**
Cumomer:
A cumomer is a cumulative isotopomer and is a concept that relates to metabolic flux analysis. The concept was developed in 1999.
Description:
A cumomer is a cumulative isotopomer. The concept relates to metabolic flux analysis.
History:
The concept was developed in 1999.
Metabolic flux analysis:
Given a molecule as a set of atoms—any of which could be (isotopically) labeled—the cumomers are sets of isotopomers with particular positions of 13C labels, grouped into different levels depending on the number of labeled atoms. At level 0, any position can be either 12C or 13C. At level 1, one position is fixed as 13C while the others may or may not be labeled, and so on. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
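As a rough illustration of the level grouping described above, the following Python sketch sums isotopomer fractions into cumomer fractions for a hypothetical two-carbon fragment; the fragment and its fractions are illustrative assumptions, not values from the text.

```python
# Hypothetical isotopomer fractions for a 2-carbon fragment.
# Keys are labeling patterns: 0 = 12C, 1 = 13C at that position.
isotopomers = {(0, 0): 0.55, (1, 0): 0.20, (0, 1): 0.15, (1, 1): 0.10}

def cumomer_fraction(labeled_positions):
    """Sum the fractions of all isotopomers that carry 13C at every position
    in `labeled_positions`, regardless of what the other positions carry."""
    return sum(frac for pattern, frac in isotopomers.items()
               if all(pattern[i] == 1 for i in labeled_positions))

print(cumomer_fraction(()))       # level 0: always 1.0 (sum of everything)
print(cumomer_fraction((0,)))     # level 1: position 0 labeled -> 0.30
print(cumomer_fraction((0, 1)))   # level 2: both positions labeled -> 0.10
```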
**Reconstructive ladder**
Reconstructive ladder:
The reconstructive ladder is the set of levels of increasingly complex management of wounds in reconstructive plastic surgery. The surgeon should start on the lowest rung and move up until a suitable technique is reached.
There are several small variations of the reconstructive ladder in the scientific literature, but the principles remain the same: healing by secondary intention, primary closure, delayed primary closure, split-thickness skin graft, full-thickness skin graft, tissue expansion, random flap, axial flap and free flap. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bucherer reaction**
Bucherer reaction:
The Bucherer reaction in organic chemistry is the reversible conversion of a naphthol to a naphthylamine in the presence of ammonia and sodium bisulfite. The reaction is widely used in the synthesis of dye precursors aminonaphthalenesulfonic acids.
Bucherer reaction:
C10H7-2-OH + NH3 ⇌ C10H7-2-NH2 + H2O. The French chemist Robert Lepetit was the first to discover the reaction, in 1898. The German chemist Hans Theodor Bucherer (1869–1949) discovered, independently of Lepetit, its reversibility and its potential, especially in industrial chemistry. Bucherer published his results in 1904, and his name is connected to this reaction. The organic reaction also goes by the name Bucherer–Lepetit reaction or (wrongly) the Bucherer–Le Petit reaction.
Bucherer reaction:
The reaction is used to convert 1,7-dihydroxynaphthalene into 7-amino-1-naphthol and 1-aminonaphthalene-4-sulfonic acid into 1-hydroxynaphthalene-4-sulfonic acid. It is also useful for transamination reactions of 2-aminonaphthalenes.
Mechanism:
In the first step of the reaction mechanism, a proton adds to a carbon atom with high electron density, preferentially C2 or C4 of the naphthol (1). This leads to the resonance-stabilized adducts 1a–1e.
Mechanism:
De-aromatization of the first ring of the naphthalene system occurs at the expense of 25 kcal/mol. In the next step a bisulfite anion adds to C3 via 1e, resulting in the formation of 3a, which tautomerizes to the more stable 3b, the sulfonic acid of the tetralone. A nucleophilic addition of the amine follows, with formation of 4a, and its tautomer 4b loses water to form the resonance-stabilized cation 5a. This compound is deprotonated to the imine 5b or the enamine 5c, with an equilibrium existing between both species. The enamine eliminates sodium bisulfite with formation of the naphthylamine 6.
Mechanism:
It is important to stress that this is a reversible reaction. The Bucherer carbazole synthesis is a related reaction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Healthsouk**
Healthsouk:
HealthSouk is the United States' first health plan without monthly fees. It is a real-time pricing model for health services where fees update every 60 seconds. It allows medical providers to list their prices for different procedures as well as available appointment times.
Business model:
Unlike discount plans or medical insurance, the patient does not pay any upfront cost to participate. In addition, the healthcare provider is never charged a monthly fee.
Company history:
HealthSouk was founded as The Smart Alternative to Dental Insurance and also trademarked as The Smart Dental Insurance Alternative. Both have been actively used by HealthSouk in commerce since its founding in 2011. In 2011, Dr. Neilesh Patel founded Healthsouk in Mountain View, California. He is credited as being the inventor of the real-time pricing model of health services. Before starting HealthSouk, Dr. Patel founded Healthcare Volunteer. The first market for HealthSouk was Las Vegas, followed by Los Angeles, Orange County, and the San Francisco Bay Area. Patel started HealthSouk after seeing Las Vegas patients struggle to access basic and cosmetic dental care during the aftermath of the U.S. housing crisis and stock market crash. The name HealthSouk was coined while Patel was visiting the souks (or marketplaces) of Morocco. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Type A videotape**
Type A videotape:
1-inch type A (designated Type A by SMPTE) is a reel-to-reel helical scan analog recording videotape format developed by Ampex in 1965, that was one of the first standardized reel-to-reel magnetic tape formats in the 1-inch (25 mm) width; most others of that size at that time were proprietary. It was capable of about 350 lines of resolution.
Usage:
Type A was developed as mainly an industrial and institutional format, where it saw the most success. It was not widely used for broadcast television, since it did not meet Federal Communications Commission (FCC) specifications for broadcast videotape formats; the only format passing the FCC's muster at the time was the then-industry-standard 2-inch quadruplex.
Usage:
The Type A format received broad use by the White House Communications Agency from 1966 to 1969. The WHCA, under U.S. President Lyndon B. Johnson, used the format to videotape television broadcasts off the air or from direct White House feeds. The WHCA recorded programs and events including television appearances by President Johnson, special news broadcasts and news interview programs. Beginning on April 1, 1968, the WHCA taping system was expanded to also include daily morning and evening news programs, both network and local. When U.S. President Richard M. Nixon succeeded Johnson in office in 1969, the WHCA's Type A recording system was continued until it was gradually phased out, later that year, in favor of a recording system using a 2-inch format.The format was also used by the Vanderbilt Television News Archive at Vanderbilt University in Nashville, Tennessee, upon the archive's founding in 1968. The archive would continue to use the Type A format to make black & white recordings of national television newscasts (received off-air, and recorded by the archive, from the local Nashville network-affiliated TV stations that aired them) until 1979, when the archive upgraded to full-color-capable U-Matic VCRs for recording.
Technical details:
Early VTRs were black-and-white (B/W) only; later VTRs supported color television with heterodyne color playback. Still later units had time-base-corrected playback, like the VPR-1, which could be used at television stations and post-production houses.
Technical details:
The VPR-1 had several problems: it did not record the vertical blanking interval (the format in general was not capable of doing so), which is why it was not compliant with FCC broadcast standards, and the video quality was not as good as that of other broadcast VTRs. Thus Sony and Ampex agreed to make a SMPTE-approved Type C format VTR (which was based on Type A). Hitachi also later made a C format VTR.
Some Ampex Type A models:
VP-4900 (1965) B/W Player only, no record option.
VR-5000 (1965) B/W Record-player, very popular, many made.
VR-5100 (1965) B/W 3 MHz with horizontal resolution of 300 lines, noise ratio of 42 dB.
VR-5200 (1965) B/W VR-5100 with TV tuner.
VPR-5200 VR-5200 with professional connectors.
VR-5800 Low and high band. Very popular, many made.
VPR-5800 VR-5800 with professional connectors.
XVR-5800 Medically certified 1" Type A VTR.
VR-5803 PAL VR-5800.
TVR VR-6000 Low band VTR, with stop motion mode added. Wood case.
VR-6003 PAL VR-6000.
VR-6050 Low band VTR, very basic, low cost.
VR-6275 (1966) Wood cabinet with two TV tuners (one to watch, one to record), loudspeakers.
VR-6300 (1966) VR-6275 without the TV tuner.
VR-7000 Microphone input added, playback RF modulator, low and high band and other improvements.
VR-7100 (1967) With roll-around cart, self-contained, with TV tuner, small monitor and B&W camera.
VR-7300 (1968) Color option with external color stabilizer. Heterodyne color processor.
VR-7003 PAL VR-7300.
VL-7404 A time-lapse VTR. Up to 38 hours with a 9-3/4" reel of 3000' 1" tape. $5,900.
VR-7450.
VR-7500 Rec/play B&W and color. 4.2 MHz video bandwidth. Very popular, many made.
XVR-7500 Higher record band, better color pictures. Professional connectors.
VR-7503 PAL VR-7500.
VR-7800 Editing added, color option. First with removable electronic cards for servicing. $9,500 in 1968.
VPR-7800 VR-7800 Professional.
VR-7803 PAL VR-7800.
VP-4500C A VR-7800 VTR, but a player only, no record option, heterodyne color processor.
VR-7900 A VPR-7800 with an extra modulation standard added that is very high, same quality as quad high band. 1975.
VPR-7900A (TBC option, the TBC-790, 1975).
VR-7903 PAL VR-7900.
VPR-7950A Console model of the VR-7900, with monitoring and a TBC (TBC-790 analog, TBC-800 digital).
VPR-1 (1976) Studio VTR, Digital TBC with SlowMo and still frame. Some later VPR-1s were converted to Type C format by Ampex. Quickly replaced with C format, VPR-2 in 1976.
VPR-10 (1976) Portable VPR1, discontinued before delivery, replaced with VPR-20, C format in 1977. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ABC (programming language)**
ABC (programming language):
ABC is an imperative general-purpose programming language and integrated development environment (IDE) developed at Centrum Wiskunde & Informatica (CWI), Netherlands by Leo Geurts, Lambert Meertens, and Steven Pemberton. It is interactive, structured, high-level, and intended to be used instead of BASIC, Pascal, or AWK. It is intended for teaching or prototyping, but not as a systems-programming language.
ABC had a major influence on the design of the language Python, developed by Guido van Rossum, who formerly worked for several years on the ABC system in the mid-1980s.
Features:
Its designers claim that ABC programs are typically around a quarter the size of the equivalent Pascal or C programs, and more readable. Key features include: only five basic data types; no required variable declarations; explicit support for top-down programming; statement nesting indicated by indentation, via the off-side rule; and infinite precision arithmetic, unlimited-sized lists and strings, and other features supporting orthogonality and ease of use by novices. ABC was originally a monolithic implementation, leading to an inability to adapt to new requirements, such as creating a graphical user interface (GUI). ABC could not directly access the underlying file system and operating system.
Features:
The full ABC system includes a programming environment with a structure editor (syntax-directed editor), suggestions, static variables (persistent), and multiple workspaces, and is available as an interpreter–compiler. As of 2020, the latest version is 1.05.02, and it is ported to Unix, DOS, Atari, and Apple MacOS.
Example:
An example function to collect the set of all words in a document:
HOW TO RETURN words document:
   PUT {} IN collection
   FOR line IN document:
      FOR word IN split line:
         IF word not.in collection:
            INSERT word IN collection
   RETURN collection
| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
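Since ABC strongly influenced Python, a rough Python transcription of the example above may help readers unfamiliar with ABC syntax; this translation is an illustrative sketch, not an official port.

```python
def words(document):
    """Collect the set of all words in a document (an iterable of lines),
    mirroring the ABC HOW TO RETURN example above."""
    collection = set()
    for line in document:
        for word in line.split():
            if word not in collection:
                collection.add(word)
    return collection

print(words(["to be or not", "to be"]))   # {'to', 'be', 'or', 'not'}
```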
**Bioinstrumentation**
Bioinstrumentation:
Bioinstrumentation or Biomedical Instrumentation is an application of biomedical engineering, which focuses on the development of devices and mechanics used to measure, evaluate, and treat biological systems. The goal of biomedical instrumentation focuses on the use of multiple sensors to monitor physiological characteristics of a human or animal for diagnostic and disease treatment purposes. Such instrumentation originated as a necessity to constantly monitor vital signs of astronauts during NASA's Mercury, Gemini, and Apollo missions. Bioinstrumentation is a new and growing field, concentrating on treating diseases and bridging together the engineering and medical worlds. The majority of innovations within the field have occurred in the past 15–20 years, as of 2022. Bioinstrumentation has revolutionized the medical field, and has made treating patients much easier. The instruments/sensors produced by the bioinstrumentation field can convert signals found within the body into electrical signals that can be processed into some form of output. There are many subfields within bioinstrumentation; they include biomedical optics, sensor development, genetic testing, and drug delivery. Fields of engineering such as electrical engineering, biomedical engineering, and computer science are the sciences most closely related to bioinstrumentation. Bioinstrumentation has since been incorporated into the everyday lives of many individuals, with sensor-augmented smartphones capable of measuring heart rate and oxygen saturation, and the widespread availability of fitness apps, with over 40,000 health tracking apps on iTunes alone. Wrist-worn fitness tracking devices have also gained popularity, with a suite of on-board sensors capable of measuring the user's biometrics and relaying them to an app that logs and tracks information for improvements.
Bioinstrumentation:
The model of a generalized instrumentation system requires only four parts: a measurand, a sensor, a signal processor, and an output display. More complicated instrumentation devices may also include functions for data storage and transmission, calibration, or control and feedback. However, at its core, an instrumentation system converts energy or information from a physical property not otherwise perceivable into an output display that users can easily interpret. Common examples include heart rate monitors, automated external defibrillators, blood oxygen monitors, electrocardiography, electroencephalography, pedometers, glucometers, and sphygmomanometers. The measurand can be classified as any physical property, quantity, or condition that a system might want to measure. There are many types of measurands, including biopotential, pressure, flow, impedance, temperature and chemical concentrations. In electrical circuitry, the measurand can be the potential difference across a resistor. In physics, a common measurand might be velocity. In the medical field, measurands vary from biopotentials and temperature to pressure and chemical concentrations. This is why instrumentation systems make up such a large portion of modern medical devices. They give physicians up-to-date, accurate information on various bodily processes.
Bioinstrumentation:
But the measurand is of no use without the correct sensor to detect that energy and convert it. The majority of measurements mentioned above are physical (forces, pressure, etc.), so the goal of a sensor is to take a physical input and create an electrical output. These sensors do not differ greatly in concept from sensors used to track the weather, atmospheric pressure, pH, etc. Normally, the signals collected by the sensor are too small or too muddled by noise to make sense of directly. Signal processing describes the overarching tools and methods used to amplify, filter, average, or convert that electrical signal into something meaningful.
Bioinstrumentation:
Lastly, the output display shows the results of the measurement process. The display must be legible to a human operator. Output displays can be visual, auditory, numerical, or graphical. They can take discrete measurements or continuously monitor the measurand over a period of time.
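A minimal Python sketch of the four-part model described above (measurand, sensor, signal processor, output display); the body-temperature example, the sensor sensitivity, and the gain are illustrative assumptions rather than specifications from the text.

```python
def sensor(temperature_celsius):
    # Transduce the measurand into a small voltage; assume 10 mV per degree C.
    return 0.010 * temperature_celsius

def signal_processor(raw_voltage, gain=100.0):
    # Amplify the tiny sensor output into a usable signal level.
    return gain * raw_voltage

def output_display(processed_voltage, gain=100.0, sensitivity=0.010):
    # Convert the processed signal back into human-readable units.
    print(f"Temperature: {processed_voltage / (gain * sensitivity):.1f} degC")

# Measurand -> sensor -> signal processor -> output display
output_display(signal_processor(sensor(37.2)))   # Temperature: 37.2 degC
```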
Bioinstrumentation:
Biomedical instrumentation, however, is not to be confused with medical devices. Medical devices are apparatuses used for the diagnosis, treatment, or prevention of disease and injury. Most of the time these devices affect the structure or function of the body. The easiest way to tell the difference is that biomedical instruments measure, sense, and output data, while medical devices do not.
Bioinstrumentation:
Examples of medical devices: IV tubing, catheters, prosthetics, oxygen masks, and bandages.
History:
Biomedical engineering and bioinstrumentation are new terms, but the practice behind them has existed for many generations. Since the beginning of mankind, humans have used what was available to them to treat the medical mishaps they encountered. Biomedical engineering developed most in the nineteenth century. In recent years, biomedical engineering has gained popularity and focused on creating solutions for issues in human physiology. Since then, inventions such as X-rays and stethoscopes have progressed and revolutionized the medical field. The concept of biomedical engineering was developed after World War II. The first artificial heart valve was successfully implanted in 1952, the first artificial kidney was created in the 1940s, and a heart-lung machine was successfully used in a human heart surgery in 1953. These advancements are major milestones within the medical field, as they provide life-changing procedures. The development of the positron emission tomography (PET) scan was a significant advancement within the biomedical field. The PET scan was invented by Edward Hoffman and Michael E. Phelps in 1974; the machine provides an effective imaging test for understanding the metabolic activity within the tissues and organs of the patient.
History:
Space flight Bioinstrumentation was first developed in earnest by NASA during its early space missions, to gain a better understanding of how humans were affected by space travel. These early bioinstrumentation sensor arrays built by NASA constantly monitored astronauts' ECG, respiration, and body temperature, and later measured blood pressure. This allowed physicians to monitor the astronauts' vital signs for potential problems. Data taken from Apollo 15 ECG bioinstrumentation showed periods of cardiac arrhythmia, which physicians and planners used to alter expected workload, diet, and the drugs in the on-board medical kits.
Classes:
Classes of biomedical instruments include: quantity sensed (pressure, flow, temperature); transduction principle (resistance, induction, capacitance); organ system (cardiovascular, pulmonary, digestive); and clinical specialty (pediatrics, radiology, oncology).
Components:
The fundamental parts of any biomedical instrument are as follows: Measurand: a physical quantity that the instrumentation system measures. The human body acts as the source of the measurand and generates bio-signals. Examples include signals at the body surface or blood pressure in the heart.
Components:
Sensor/Transducer: The transducer converts one form of energy into another, usually electrical energy. An example is a piezoelectric element that converts mechanical vibrations into an electrical signal. The transducer produces a usable output depending on the measurand. The source is interfaced with the human subject, and the sensor is used to sense the signal from the source.
Components:
Signal Conditioner: Signal conditioning circuits convert the output of the transducer into an electrical value. The instrument system sends the quantity to the display or the recording system. The signal conditioning process can include amplification, filtering, and analogue-to-digital and digital-to-analogue conversion.
Display: A visual representation of the measured parameter or quantity, such as on a chart recorder or cathode ray oscilloscope (CRO). Alarms can also be used to provide audio signals, such as those made by a Doppler ultrasound scanner.
Data Storage and Data Transmission: Data storage is meant to record data for future reference and use. An example of data transmission is in telemetric systems, where data can be transmitted from one place to another on demand through the Internet.
Circuits/creation of sensors:
Sensors are the best-known aspect of bioinstrumentation. They include thermometers, brain scans, and electrocardiograms. Sensors take in signals from the body and amplify them so engineers and doctors can study them. Signals from sensors are amplified using circuits, by taking in a voltage source and modifying it using circuit components such as resistors, capacitors, and inductors. They then output a certain amount of voltage, which is used for analysis based on some relationship between the voltage being output and the measurand of interest. The data collected using sensors is often displayed by computer programs. This field of bioinstrumentation is closely related to electrical engineering. Circuits used to measure biological signals such as electrical activity of the heart and brain generally incorporate op-amps as a means of amplifying the relatively minuscule signals for signal processing and data analysis. A commonly used amplifier is the instrumentation amplifier. Instrumentation amplifiers such as the integrated circuit (IC) AD620 amplifier are able to amplify the difference between two different voltage inputs while maintaining little offset voltage and a high CMRR, allowing them to amplify low frequency signals while rejecting noise. These circuits may also incorporate filters to better account for unwanted noise, as the small scale of biological signals requires a wide range of filtering to account for noise generated by factors such as DC offset, interference from other biological signals, or electrical noise from the equipment being used.
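As a hedged sketch of the amplification step described above, the snippet below models an ideal instrumentation amplifier using the gain relation commonly quoted for the AD620 (G = 1 + 49.4 kΩ / R_G); the gain resistor and signal values are illustrative assumptions, and a real circuit has finite CMRR and offset rather than the ideal behavior assumed here.

```python
def ad620_gain(r_gain_ohms):
    """Differential gain commonly quoted for the AD620: G = 1 + 49.4 kOhm / RG."""
    return 1.0 + 49_400.0 / r_gain_ohms

def amplify(v_plus, v_minus, r_gain_ohms):
    """Ideal instrumentation amplifier: amplify only the difference between the
    inputs and reject the common-mode component entirely (infinite CMRR)."""
    return ad620_gain(r_gain_ohms) * (v_plus - v_minus)

# A 1 mV biopotential riding on 50 mV of common-mode mains interference:
common_mode = 0.050
v_out = amplify(common_mode + 0.0005, common_mode - 0.0005, r_gain_ohms=499)
print(f"gain ~= {ad620_gain(499):.0f}, output ~= {v_out * 1000:.1f} mV")
```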
Current use:
Pacemakers A pacemaker is implanted to monitor the patient's heartbeat and send electrical pulses to regulate it when it is too slow. Electrodes send electrical pulses to the chambers of the heart which allow the heart to contract and pump blood. Pacemakers are for those who have damaged hearts or hearts that are not working properly. The normal electrical conduction of the heart allows impulses that are generated by the SA node to stimulate the cardiac muscle which then contracts. It is the ordered stimulation of the muscle that allows efficient contraction of the heart, pumping blood throughout our body. If the natural pacemaker malfunctions, abnormal heartbeats occur which can be very serious and even lead to death.
Current use:
Mechanical ventilators A mechanical ventilator is a form of life support. It helps the patient breathe or ventilate during surgery or when the patient cannot breathe on their own. The patient is connected to the ventilator through a hollow tube called an artificial airway that goes in their mouth and down their trachea. They remain on the ventilator until they can breathe on their own. Mechanical ventilators are used to decrease the work of breathing until the patient improves enough to no longer need them. The machine makes sure the patient receives enough oxygen and removes the carbon dioxide from the body. This is necessary for patients in surgery or with critical illnesses that prevent normal breathing. The benefits of mechanical ventilation are that the patient does not have to work hard to breathe, so the patient's respiratory muscles can rest. The patient has time to recover and regain normal breathing. It helps the patient get enough oxygen and clear carbon dioxide, and it preserves a stable airway, preventing injury from aspiration.
Current use:
Fitness trackers Bioinstrumentation in the commercial market has seen a large amount of growth in the field of wearables, with wrist-worn activity tracking devices surging from a market value of 0.75 billion U.S. dollars in 2012, to 5.8 billion U.S. dollars in 2018. Bioinstrumentation has also been added to smartphone designs, with smartphones now capable of measuring heart rate, blood-oxygen levels, number of steps taken, and more depending on the device.
Current use:
Biomedical optics Biomedical optics is the field of performing noninvasive operations and procedures on patients. This has been a growing field, as such procedures are easier and do not require the patient to be opened. Biomedical optics is made possible through imaging such as CAT (computerized axial tomography) scans. One example of biomedical optics is LASIK eye surgery, which is a laser microsurgery done on the eyes. It helps correct multiple eye problems and is a much easier option than other surgeries. Other important aspects of biomedical optics include microscopy and spectroscopy.
Current use:
Genetic testing Bioinstrumentation can be used for genetic testing. This is done with the help of chemistry and medical instruments. Professionals in the field have created tissue analysis instruments, which can compare the DNA of different people. Another example of genetic testing is gel electrophoresis. Gel electrophoresis uses DNA samples along with biosensors to compare the DNA sequences of individuals. Two other important instruments involved in genomic advances are microarray technology and DNA sequencing. Microarrays reveal the activated and repressed genes of an individual. DNA sequencing uses lasers with different wavelengths to determine the nucleotides present in different DNA strands. Bioinstrumentation has changed the world of genetic testing, and helps scientists understand DNA and the human genome better than ever before.
Current use:
Drug delivery Drug delivery and aiding machines have been improved greatly by bioinstrumentation. Pumps have been created to deliver drugs such as anesthesia and insulin. Previously, patients would have to visit doctors more regularly, but with these pumps they can treat themselves in a faster and cheaper way. Aiding machines include hearing aids and pacemakers. Both of these use sensors and circuits to amplify signals and alert the patient when there is an issue.
Current use:
Agriculture Bioinstruments are used immensely in the field of agriculture for monitoring and sampling the soil as well as measuring plant growth. Biotechnology in agriculture requires handling complex plant genomes, which is done using sophisticated instrumentation. Devices such as tensiometers are used to measure the moisture content of the soil, which helps to maintain the most favorable conditions for crop growth. Attaching an electrical transducer to a tensiometer allows the crop data to be monitored at regular intervals in terms of soil moisture and water profile.
Current use:
Botany In the field of botany, bioinstruments are widely utilized to gauge plant metabolism. The PTM-48A Photosynthesis Monitor is used to register a plant's physiological qualities such as carbon dioxide exchange, leaf wetness, net photosynthesis and stomatal conductance. The PTM-48A analyzes the CO2 exchange and the transpiration of the leaves through an automatic open system with four channels. The device's capabilities include measurement of the CO2 exchange of the leaves, CO2 concentration in the air, photosynthetically active radiation, air vapor deficit, etc. The package for the device includes PTM-48A SYSTEM CONSOLE, LC-4B LEAF CHAMBER (4 pcs.), RTH-48 METER, 12 VDC POWER ADAPTER, HOLDER FOR LEAF CHAMBER (4 pcs.), 4-m PVC TWIN HOSE (4 pcs.), STAINLESS STEEL TRIPOD, RS232 COMMUNICATION CABLE FOR PC, DOCUMENTATION and SOFTWARE SETUP CD, CO2 ABSORBER, SPARE AIR FILTER, and USER’S GUIDE.
Current use:
Imaging systems An imaging system is a system that creates images of various parts of the body, depending on what needs to be analyzed. The system is used to diagnose conditions before they become too serious. Some examples of imaging systems include x-rays, computed tomography (CT scan), magnetic resonance imaging (MRI), and ultrasound. An x-ray is a non-invasive procedure that analyzes the bones and tumors. A disadvantage of getting an x-ray is the exposure to radiation, which may lead to other conditions. A CT scan is a combination of various x-rays that provides a detailed image of organs and layers of tissue in the body. A disadvantage is the slightly increased risk of cancer, since this non-invasive procedure exposes the patient to radiation. Bioinstruments such as the ChemiDoc Touch framework provide an imaging system for electrophoresis and Western blot imaging, integrated with a touchscreen on a supercomputer. It utilizes application-specific trays for chemiluminescence and UV identification to offer high sensitivity and picture quality.
Current use:
Arterial blood pressure A blood pressure (BP) measurement system, specifically a wrist-bound BP monitor, works through applanation tonometry with a hemispheric plunger set on the radial artery. Devices such as ambulatory blood pressure monitors have improved the management of hypertension, but remain inconvenient and not widely used. Emerging innovators such as HealthSTATS International in Singapore created a wrist-bound BP measurement device (BPro) that measures BP using arterial tonometry. Prior to wrist blood pressure cuffs, blood pressures had to be measured invasively by inserting a catheter into an artery. The catheter is connected to a fluid bag and to a monitor, which picks up the arterial pressure over time. As this is a very invasive procedure, it had to be done inside a medical facility, whereas the newer technology of blood pressure cuffs allows monitoring of blood pressure from a person's home. In comparison to wrist blood pressure measurements, invasive blood pressure monitoring has been shown to result in a more accurate reading, although it does come with drawbacks such as risk of infection.
Current use:
Space The importance of astronaut health monitoring systems has been increasing as the duration of space missions has consistently grown. Building on the existing space suit bioinstrumentation system, the development of the next generation of bioinstrumentation systems made it possible to provide improved health monitoring during extra-vehicular activity. This would be especially useful in the most physically demanding phases of space flight.[1] The National Aeronautics and Space Administration (NASA) has developed telemetric sensors in order to monitor physiological changes in animal models in space in its Sensors 2000! program. These sensors measure physiological measurands, including temperature, biopotentials, pressure, flow and acceleration, and chemical levels, and transmit these signals from the animals to a receiver through a link connection.
Current use:
Surgery Biomedical instrumentation has long been used in surgery and continues to evolve to improve patient care. The continuous integration of imaging and assistive robotics has allowed surgeries to be more precise as well as less invasive. Imaging devices such as cameras, ultrasounds, X-rays, MRIs, PET and CT scans have been used to pinpoint disorders within the body. During surgery, ultrasounds and device-attached cameras may be used throughout to allow sight of the treatment area. Robotic assistive devices are medical instruments that allow doctors to complete a surgery through a minimal-size incision. The use of an assistive device can allow complicated surgeries to be completed in less time. The robot mimics the doctor's movements within the body precisely, which supports the safety of the procedure. Robotic assistive technology usually includes a camera, a mechanical arm, and a console of some sort to allow for control. When using assistive devices for minimally invasive procedures, many find that another result is shorter recovery times. Although assistive robotics is used in surgery and there are several advantages to its use, there are some major considerations. If there happens to be a major complication during surgery, the robotic system will be removed and previous methods will have to be used. In addition, robotic assistive technology is still rather expensive, so more research and improvements are constantly being made. Advancements in anesthetics have also occurred due to innovations in devices. During surgery an anesthesiologist must monitor and evaluate the patient's heart rate, breathing, pain, body temperature, fluid balance, blood pressure and many other vital signs. For this reason, an anesthesiologist's station is full of medical devices. One major device is the anesthesia machine, which focuses on the administration of vaporous anesthesia medication, oxygenation and ventilation.
Current use:
Research Bioinstrumentation in research has a variety of applications, from standard data collection to prototype testing. One unique example is the use of bioinstrumentation to characterize bone phenotypes of various animal models through strain gauging and tibial loading. Strain gauges translate deformation into a change in electrical resistance, and when paired with analytical software they can be utilized to determine a bone's response to mechanical load. Different animals or breeds can have different physical responses to mechanical load; thus, experiments involving loading normalize to strain rather than load. Strain gauges allow researchers to apply different loads across a variety of subjects to induce the same strain, which is directly correlated with new bone formation. Bioinstrumentation has many more applications in research, from the development of new bioinstruments to novel incorporation into new medical devices.
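A minimal sketch of how strain-gauge readings are commonly converted, assuming the standard gauge-factor relation (ΔR/R = GF × ε) and the small-strain approximation for a quarter Wheatstone bridge; the gauge factor, excitation voltage, and readings are illustrative assumptions, not values from the studies mentioned above.

```python
def strain_from_resistance(delta_r_ohms, r_nominal_ohms, gauge_factor=2.0):
    # Standard gauge-factor relation: delta_R / R = GF * strain.
    return (delta_r_ohms / r_nominal_ohms) / gauge_factor

def quarter_bridge_output(strain, v_excitation=5.0, gauge_factor=2.0):
    # Small-strain approximation for a quarter Wheatstone bridge:
    # V_out ~= V_excitation * GF * strain / 4.
    return v_excitation * gauge_factor * strain / 4.0

# A 0.24 ohm change on a 120 ohm gauge corresponds to 1000 microstrain:
eps = strain_from_resistance(0.24, 120.0)
print(f"strain = {eps * 1e6:.0f} microstrain, "
      f"bridge output ~= {quarter_bridge_output(eps) * 1000:.2f} mV")
```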
Current use:
Real-time measurement Bioinstrumentation has been incorporated into novel diagnostic tools that are utilized for a variety of patients. There is a significant challenge in implementing real-time measurement systems that are lightweight, comfortable and efficient, so there has been an increased drive for the development of more flexible and compact bioinstrumentation. The development of 3D-printed ion-selective field-effect transistors, or ISFETs, to sense and monitor ion levels in patients is a prime example. Another example of a real-time measurement system is the smart bioelectric pacifier, which was developed to monitor the electrolyte level in vulnerable newborns in hospital care. The pacifier functions through the intake of saliva through a microfluidic channel, which guides saliva to a reservoir filled with sensory nodes within the soft plastic pacifier. Small circuits integrated with ISFETs provide active measurements of any voltage change within the saliva, which can be directly correlated with the concentration of ions within the newborn's saliva and, due to known correlations between ion concentrations in saliva and blood, the bloodstream. Novel developments in bioinstrumentation continue to lend themselves to real-time measurement systems that provide flexibility, compactness, and efficiency to better monitor patients.
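As a hedged illustration of how a voltage change from an ion-selective sensor can be mapped back to a concentration, the sketch below assumes an ideal Nernstian response of about 59.2 mV per decade for a monovalent ion at 25 °C; the calibration point and the reading are made-up numbers, not data from the pacifier study.

```python
NERNST_SLOPE_V = 0.0592   # ideal slope for a monovalent ion at 25 degC (V per decade)

def concentration_from_voltage(v_measured, v_calibration, c_calibration):
    """Invert the ideal Nernstian response: every 59.2 mV shift corresponds to
    a tenfold change in ion activity relative to a known calibration point."""
    decades = (v_measured - v_calibration) / NERNST_SLOPE_V
    return c_calibration * 10.0 ** decades

# Calibrated at 0.200 V for 10 mmol/L; a reading of 0.2296 V maps to ~31.6 mmol/L.
print(concentration_from_voltage(0.2296, 0.200, 10.0))
```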
Training and certification:
Education A considerable amount of knowledge and training is required to work with bioinstruments. Biomedical engineering is the main stem of engineering; under it is a branch called biomedical instrumentation, in which training in equipment use, circuitry, and safety can be found. To work in this area, a considerable amount of knowledge is required in engineering principles as well as biology; in addition, typically a Bachelor's (B.Sc., B.S., B.Eng. or B.S.E.), Master's (M.S., M.Sc., M.S.E., or M.Eng.) or doctoral (Ph.D., or MD-PhD) degree in Biomedical Engineering is required.
Training and certification:
Licensure/certification As with most professions, there are certain requirements to become a licensed Professional Engineer (PE); however, in the United States a license is not required to be employed as an engineer in most situations, due to an exception known as the industrial exemption. The current model requires only practicing engineers offering services that impact the public welfare, safety, health, or property to be licensed, while engineers working in private industry without directly offering engineering services to the public or businesses need not be licensed. Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.
Constraints and future development:
Biomedical instrumentation development comes with constraints as well. Many measurands are currently inaccessible without damaging the system being measured. As a result, most have to be measured indirectly. No two physiological systems are the same, and because of these limitations, measured variation must be compared with "norms", which can vary too. Patient safety is also a key aspect and limitation of biomedical instrumentation. Determining the right amount of energy required to obtain data while avoiding damage to biological tissue (which can alter results) can be difficult, especially since no two persons are alike. As a result, equipment reliability and ease of operation are held to high standards. Even with these limitations, the fields of biomedical engineering and medicine are growing rapidly, and bioinstrumentation will continue to progress. Since the main focus of the field is to make the medical world faster and more efficient, and given ongoing improvements in technology and in how scientists understand the human body, the field will continue to grow. The main focuses for the future of the field include cellular scanning devices and robots.
Constraints and future development:
Cellular scanning devices Olympus introduced two new microscopes, the Fluoview FV1200 biological confocal laser scanning microscope and the Fluoview FV1200MPE multiphoton laser scanning microscope, aimed at life science research in universities and research institutions. These microscopes record high-contrast 3D images by scanning a specimen with a laser beam and detecting the fluorescence. They are easy to use and offer more rigidity, higher sensitivity, and lower noise. The FV1200MPE uses an IR laser that yields higher tissue transparency. This is especially useful for imaging thick cells and tissues, which would be difficult with the FV1200.
Constraints and future development:
Robots Technology has been rapidly becoming a part of people's daily lives, to the point that industrial robots for tasks such as assembly and conveyance have become a part of the work in manufacturing factories. Personal robots are expected to become popular in the future and would operate in joint work and community life with humans. Examples of humanoid robots in this line of work include the entertainment humanoid QRIO developed by Sony Corporation. The study of integrating emotions, behaviors, and personality into robots in a human-like manner is still being explored and researched. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chemicalize**
Chemicalize:
Chemicalize is an online platform for chemical calculations, search, and text processing.
It is developed and owned by ChemAxon and offers various cheminformatics tools in a freemium model: chemical property predictions, structure-based and text-based search, chemical text processing, and checking compounds with respect to national regulations of different countries.
Modules of Chemicalize:
Calculations: Chemical property predictions for any molecule structure. Available calculations include elemental analysis, names and identifiers (IUPAC name, SMILES, InChI), pKa, logP/logD, and solubility. Chemical Search: Structure-based and text-based search against the Chemicalize database to find web page sources and associated structures of the results. Compliance Checker: Checking compounds with respect to national regulations of several countries on narcotics, psychotropic drugs, explosives, hazardous materials, and toxic agents.
Short history:
January 2009, original service launched: The service was launched with the brand name chemicalize.org. The main purpose was to identify chemical names on websites, but other services were also provided, such as property predictions and chemical search. August 2010, ChemSpider integration: Predicted chemical properties provided by Chemicalize were integrated into ChemSpider. ChemSpider record pages contain links to access predicted properties on Chemicalize for the considered structure. September 2016, renewed version: The platform was renewed under the new brand name Chemicalize. The new version offers enriched functionality in a freemium model. May 2018, Chemicalize Professional released: Embeddable web components and hosted cheminformatics services, for web developers and integrators, based on the Chemicalize cloud infrastructure.
List of the predicted structure-based properties:
IUPAC name, InChI name, pKa, logP and logD, solubility, NMR spectroscopy, isoelectric point, charge, polarizability, topology analysis, geometry data, polar surface area, hydrogen bond donor-acceptor, refractivity, structural framework, and Lipinski's Rule of Five. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
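One of the listed predictions, Lipinski's Rule of Five, can be evaluated from a handful of molecular properties. The sketch below applies the commonly stated thresholds (molecular weight ≤ 500, logP ≤ 5, hydrogen-bond donors ≤ 5, hydrogen-bond acceptors ≤ 10) to illustrative values; it is not a description of Chemicalize's own implementation, and the example numbers are only roughly those often quoted for aspirin.

```python
def lipinski_violations(mol_weight, logp, h_donors, h_acceptors):
    # Count violations of the commonly stated Rule of Five thresholds.
    rules = [mol_weight <= 500, logp <= 5, h_donors <= 5, h_acceptors <= 10]
    return sum(not ok for ok in rules)

# Illustrative property values, roughly those often quoted for aspirin:
print(lipinski_violations(mol_weight=180.2, logp=1.2, h_donors=1, h_acceptors=4))
# 0 violations -> the molecule passes this drug-likeness filter
```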
**Spewer**
Spewer:
Spewer is a 2009 browser-based puzzle-platform game. It uses liquid physics through regurgitation as its core mechanic. Taking the role of a mysterious test subject, code named "Spewer", the player must vomit their way through over 60 levels while learning new abilities, changing forms and piecing together their purpose in the game. It is also a part of The Basement Collection.
Gameplay:
The player navigates through five chapters and a bonus chapter as Spewer, a small creature that navigates through single-screen levels by utilizing its own vomit as a platform. Spewer can also walk, jump, and swim. The amount of vomit available to the player is represented by a meter, which can be replenished by Spewer eating food or its own vomit. There are four types of vomit in addition to normal vomit, which are accessed by eating pills: white vomit that floats, allowing the player to swim in mid air; red vomit that pushes the player off objects at a faster speed with more power; black vomit that hardens to become platforms; and yellow vomit that melts and burns objects. A level editor is included, allowing players to create their own custom levels.
Development:
Spewer was developed by Edmund McMillen and Eli Piilonen and released on Newgrounds on May 4, 2009. A group of four people developed Spewer: Edmund McMillen created the art assets for the game, and Eli Piilonen was the programmer. These two also handled game design elements together. In addition, Jordan Fehr worked on the sound effects and Daniel Baranowsky provided the music.
Reception:
Macintosh gaming magazine MacLife's Arvind Srinivasan described the game as "incredibly addictive", stating that the game managed to hold his attention during the full 60 levels, Spewer was awarded the magazine's Editors' Choice award. Independent video game developer Derek Yu stated the game is the most mature title which McMillen has developed in terms of design. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Atomic (magazine)**
Atomic (magazine):
Atomic (or Atomic MPC) once was a monthly Australian magazine and online community that focused on computing and technology, with a great emphasis on gaming, modding and computer hardware. Atomic was marketed at technology enthusiasts and covered topics that were not normally found in mainstream PC publications, including video card and CPU overclocking, Windows registry tweaking, and programming. The magazine's strapline was 'Maximum Power Computing', reflecting the broad nature of its technology content.
Atomic (magazine):
In November 2012 publisher Haymarket Media Group announced that Atomic would close and be merged into sister monthly title PC & Tech Authority (beginning with the February 2013 issue of PCTA), although the Atomic online forums would continue to exist in their own right and under the Atomic brand. In 2018, nextmedia, the successor of Haymarket Australia, sold its computing assets to Future. PC & Tech Authority print content was absorbed into APC and online content was absorbed into TechRadar, but the Atomic forums remained available until 11 June 2020.
History:
With a small team of writers led by magazine founder and ex-editor Ben Mansill, who is also the founder of the magazine's only competitor, PC PowerPlay, the first issue of Atomic was published in February 2001. This team consisted of John Gillooly, Bennett Ring, Tim Dean and Daniel Rutter. Gillooly and Ring later left the magazine.
History:
Atomic was originally published by AJB Publishing, but in July 2004 AJB was acquired by UK publisher Haymarket Media. The magazine was edited in 2005 and 2006 by Ashton Mills, who in the past had contributed to PC Authority, Atomic's sister publication. In 2006, Logan Booker took over as editor. In April 2005, Atomic reached the milestone of 50 issues, and the January 2006 issue celebrated its fifth birthday. Logan Booker announced at the end of August 2007 that he would be stepping down, issue 81 being his last as editor. In October 2007, David Hollingworth became the new editor of the magazine. Ben Mansill announced in October 2007 that he would be leaving Haymarket Media to pursue other interests in the publishing industry. Atomic celebrated the release of its 100th issue on April 8, 2009. In late 2012 the magazine merged with PC & Tech Authority. In 2013, nextmedia acquired Haymarket Australia, which effectively made PC & Tech Authority a sister title to PC PowerPlay.
The Atomic site and forums:
Atomic's online forums were launched on the same day as the magazine. They had various PC gaming and technology sections, as well as a general chat area known as the "Green Room". As of January 2006, approximately 3,600,000 posts had been made across the forums' twenty-one sections. An active community section organises 'meets' and other events regularly.
Readers and subscribers to the magazine, as well as members of the online Atomic community, were colloquially referred to as Atomicans. In mid-2005, the site was revamped to include regular content, both unique to the site and taken from the magazine, including daily reviews and news.
Moderation was employed to ensure that illegal or distasteful content was not posted. The forums were finally shut down on 11 June 2020 and were taken completely offline on 24 June.
Events:
At the end of 2005 Atomic hosted "Atomic Live", a PC gaming and technology expo in Sydney, Australia. The event culminated in the evening with a presentation of industry awards and a celebration of the magazine's 5th birthday.
Although a subsequent Atomic Live was announced in early 2006, it was postponed due to key product launch delays in the PC and gaming industry. Between 2010 and 2011 Atomic MPC hosted events across Australia, including the 2010 Power to the PC Tour, Atomic Unlocked 2010, Revolver Sydney 2011 and Revolver Melbourne 2011.
On 12 November 2011, Atomic and Monash University’s Faculty of Information Technology presented AtomicCon 2011 at the university's Caulfield campus for technology and gaming enthusiasts. The event included presentations by Australian game publishers and suppliers of information technology equipment, and participants were able to play recently released games.
Atomic charity:
Since inception, Atomic and the community focussed on raising money for charity, usually the Multiple Sclerosis Society, chosen because of moderator and unofficial community organiser 'Gramyre' (Allison Reynolds) having the disease. This was achieved via auctions of various items, and a community member 'Nodnerb' (Brendon Walker) once sang I'm a Little Teapot while wearing a ballet tutu on national television for charity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HP-22**
HP-22:
The HP-22 was a finance-oriented pocket calculator produced by Hewlett-Packard between 1975 and 1978. It was designed as a replacement for the short-lived HP-70, and was one of a set of three calculators, the others being the HP-21 and HP-25, which were similarly built but aimed at different markets. As with most HP calculators then and now, the HP-22 used RPN entry logic, with a four-level stack. It also had ten user-accessible memory registers. As was normal at the time, memory was not preserved on power-down. Its principal functions were (1) time value of money (TVM) calculations, where the user could enter any three of the variables and the fourth would be calculated, and (2) statistics calculations, including linear regression. Basic logarithmic and exponential functions were also provided. For TVM calculations, a physical slider switch labelled "begin" and "end" could be used to specify whether payments would be applied at the beginning or end of periods. It had a 12-digit LED display. A shift key provided access to functions whose legends were printed on the faceplate above the corresponding keys.
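A minimal sketch of the kind of time-value-of-money relationship such a calculator solves, assuming the standard end-of-period annuity formula; this is not a description of the HP-22's internal algorithm, and the example figures are illustrative.

```python
def payment(present_value, rate_per_period, n_periods, begin_mode=False):
    """Standard annuity formula: PMT = PV * i / (1 - (1 + i) ** -n).
    With begin-of-period payments (the 'begin' switch), divide by (1 + i)."""
    i = rate_per_period
    pmt = present_value * i / (1.0 - (1.0 + i) ** -n_periods)
    return pmt / (1.0 + i) if begin_mode else pmt

# Example: 10,000 borrowed at 1% per month over 36 months.
print(f"monthly payment ~= {payment(10_000, 0.01, 36):.2f}")   # ~332.14
```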
HP-22:
Its HP development codename was Turnip, and it was a member of the Woodstock series. Its US price was $165 in 1975 and $125 in 1978. A version adapted to support an additional backward-facing display, manufactured by Educational Calculator Devices and named EduCALC 22 GD, existed as well. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Virtual reality sickness**
Virtual reality sickness:
Virtual reality sickness, or VR sickness, occurs when exposure to a virtual environment causes symptoms that are similar to motion sickness symptoms. The most common symptoms are general discomfort, eye strain, headache, stomach awareness, nausea, vomiting, pallor, sweating, fatigue, drowsiness, disorientation, and apathy. Other symptoms include postural instability and retching. Common causes are low frame rate, input lag, and the vergence-accommodation conflict. Virtual reality sickness is different from motion sickness in that it can be caused by the visually-induced perception of self-motion; real self-motion is not needed. It is also different from simulator sickness; non-virtual reality simulator sickness tends to be characterized by oculomotor disturbances, whereas virtual reality sickness tends to be characterized by disorientation.
Consequences:
Virtual reality sickness may have undesirable consequences beyond the sickness itself. For example, Crowley (1987) argued that flight simulator sickness could discourage pilots from using flight simulators, reduce the efficiency of training through distraction and the encouragement of adaptive behaviors that are unfavorable for performance, compromise ground safety or flight safety when sick and disoriented pilots leave the simulator. Similar consequences could be expected for virtual reality systems. Although the evidence for performance decrements due to virtual reality sickness is limited, research does suggest that virtual reality sickness is a major barrier to using virtual reality, indicating that virtual reality sickness may be a barrier to the effective use of training tools and rehabilitation tools in virtual reality. Estimates of the multi-study incidence and main symptoms of virtual reality sickness (also called cybersickness) have been made.
Causes:
Virtual reality sickness is closely related to simulator and motion sickness. Sensory conflict theory provides a framework for understanding motion sickness; however, it can be applied to virtual reality sickness to better understand how it can occur, and is commonly used for that purpose. Sensory conflict theory posits that sickness will occur when a user's perception of self-motion is based on incongruent sensory inputs from the visual system, vestibular system, and non-vestibular proprioceptors, and particularly so when these inputs are at odds with the user's expectation based on prior experience. Applying this theory to virtual reality, sickness can be minimized when the sensory inputs inducing self-motion are in agreement with one another. A major trigger of virtual reality sickness is a disparity in apparent motion between the visual and vestibular stimuli. This disparity occurs if there is a disagreement between what the stimuli from the eyes and inner ear are sending to the brain. This is a fundamental cause of both simulator and motion sickness as well. In virtual reality, the eyes transmit that the person is running and jumping through a virtual world; however, the ears transmit that no movement is occurring and that the body is sitting still. Because of this discord between the eyes and the ears, a form of motion sickness can occur.
Causes:
The images projected from typical virtual reality headsets have a major impact on sickness. The refresh rate of on-screen images is often not high enough when VR sickness occurs. Because the refresh rate is slower than what the brain processes, it causes a disconnect between the processing rate and the refresh rate, which causes the user to perceive glitches on the screen. When these two components do not match up, it can cause the user to experience the same feelings as simulator and motion sickness which is mentioned below.
Causes:
The resolution and quality of the animation can also cause users to experience this phenomenon. Poor animation creates another type of discord between what is expected and what is actually happening on the screen. When on-screen graphics do not keep pace with the user's head movements, they can trigger a form of motion sickness.
Causes:
Not all scientists agree with sensory conflict theory. A second theory of motion sickness, which has also been used to explain virtual reality sickness, is the theory of postural instability. This theory holds that motion sickness and related sicknesses occur because of poor postural adaptations in response to unusual coupling between visual stimuli and motor coordination. Characteristic markers of postural instability occur prior to the appearance of symptoms and predict their later development. This theory can explain some otherwise surprising situations in which motion sickness did not occur despite the presence of sensory conflict.
Technical aspects:
There are various technical aspects of virtual reality that can induce sickness, such as mismatched motion, field of view, motion parallax, and viewing angle. Additionally, the amount of time spent in virtual reality can increase the presence of symptoms.
Technical aspects:
Mismatched motion can be defined as a discrepancy between the motion of the simulation and the motion that the user expects. It is possible to induce motion sickness in virtual reality when the frequencies of mismatched motion are similar to those for motion sickness in reality, such as seasickness. These frequencies can be experimentally manipulated, but also have the propensity to arise from system errors.
Technical aspects:
Generally, increasing the field of view increases incidence of simulator sickness symptoms. This relationship has been shown to be curvilinear, with symptoms approaching an asymptote for fields of view above 140°.
Altering motion parallax distances to those less than the distance between the human eyes in large multiple-screen simulation setups can induce oculomotor distress, such as headaches, eyestrain, and blurred vision. There are fewer reports of oculomotor distress on smaller screens; however, most simulation setups with motion parallax effects can still induce eyestrain, fatigue, and general discomfort over time.
Technical aspects:
Viewing angle has been shown to increase a user's sickness symptoms, especially at extreme angles. One example of such an extreme angle would be when a user must look downwards a short distance in front of their virtual feet. As opposed to a forward viewing angle, an extreme downward angle such as this has been shown to markedly increase sickness in virtual environments.
Technical aspects:
Time spent immersed in a virtual environment contributes to sickness symptom presence due to the increasing effects of fatigue on the user. Oculomotor symptoms are the most common to occur due to immersion time, but the nature of the user's movements (e.g., whole-body vs. head-only) is suggested to be the primary cause of nausea or physical sickness.
Techniques for reducing VR sickness:
According to several studies, introducing a static frame of reference (an independent visual background) may reduce simulation sickness. A technique called Nasum Virtualis displays a virtual nose as a fixed frame of reference in VR headsets. Other techniques for reducing nausea involve simulating forms of locomotion that do not create, or that reduce, discrepancies between what is seen and how the body moves, such as reducing rotational motions during navigation, dynamically reducing the field of view, teleportation, and movement in zero gravity. In January 2020, the French start-up Boarding Ring, known for its glasses against motion sickness, released an add-on device against virtual reality sickness. Using two small screens in the user's peripheral field of view, the device displays visual information consistent with vestibular inputs, avoiding the sensory conflict.
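One of these mitigations, dynamically reducing the field of view during artificial locomotion, amounts to a simple mapping from movement speed to rendered FOV. The sketch below is illustrative only: the function name, speed threshold, and FOV limits are assumptions chosen for this example, not values from any published system.

```python
def restricted_fov(speed, max_speed=3.0, full_fov=110.0, min_fov=70.0):
    """Map the user's artificial locomotion speed (m/s) to a rendered FOV (degrees).

    Faster artificial motion -> narrower field of view, which studies suggest
    reduces vection and therefore VR sickness. All constants are illustrative.
    """
    # Normalize speed to [0, 1]; 0 = stationary, 1 = at or above max_speed.
    t = max(0.0, min(1.0, speed / max_speed))
    # Linearly interpolate between the full and the restricted field of view.
    return full_fov - t * (full_fov - min_fov)

# Example: the FOV narrows smoothly as artificial locomotion speeds up.
for s in (0.0, 1.0, 2.0, 3.0):
    print(f"speed={s:.1f} m/s -> fov={restricted_fov(s):.0f} deg")
```

In practice the interpolation is usually smoothed over a few frames so the vignette does not pop in abruptly, but the speed-to-FOV mapping above captures the core idea.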
Techniques for reducing VR sickness:
Galvanic vestibular stimulation, which creates the illusion of motion by electric stimulation of the vestibular system, is another technique being explored for its potential to mitigate or eliminate the visual-vestibular mismatch.
Newest technology:
With the integration of virtual reality into the commercial mainstream, issues have begun to arise in relation to VR sickness in head-mounted gaming devices. While research on head-mounted VR for gaming dates back to the early 1990s, the potential for mass usability has only recently been realized. Contemporary VR headsets appear to induce minimal to no VR sickness. While certain features are known to moderate VR sickness in head-mounted displays, such as playing from a seated position rather than standing, it has also been found that this merely delays the onset of sickness rather than preventing it entirely. This inherently presents an issue, in that this type of interactive VR often involves standing or walking for a fully immersive experience. Gaming VR specialists argue that this unique brand of VR sickness is only a minor issue, claiming that it disappears after multiple days of using head-mounted displays, likening it to "getting your sea legs". However, persuading users to tolerate sickness for multiple days on the promise of "probably getting over it" is a struggle for developers of head-mounted gaming hardware. Surveys have shown that a large percentage of people, women in particular, never develop their "VR legs". These same developers also argue that susceptibility has more to do with the individual game being played, and that certain gaming elements are more likely to create issues, such as changes in speed, walking up stairs, and jumping, all of which are fairly normal functions in predominant game genres.
Individual differences in susceptibility:
Individuals vary widely in their susceptibility to simulator and virtual reality sickness. Some of the factors involved in virtual reality sickness are listed below. Age: Susceptibility to motion sickness is highest between the ages of 2 and 12. It then decreases rapidly until about age 21 and continues to decrease more slowly thereafter. It has been suggested that virtual reality sickness might follow a similar pattern, but more recent research suggests that adults over the age of 50 are more susceptible to virtual reality sickness than younger adults.
Individual differences in susceptibility:
Postural stability: Postural instability has been found to increase susceptibility to visually-induced motion sickness. It is also associated with increased susceptibility to nausea and disorientation symptoms of virtual reality sickness.
Flicker fusion frequency threshold: Because flicker in the display has been associated with increased risk of virtual reality sickness, people with a low threshold for detecting flicker may be more susceptible to virtual reality sickness.
Individual differences in susceptibility:
Ethnicity: Asian people may be more susceptible to virtual reality sickness. Chinese women appear to be more susceptible to virtual reality sickness than European-American and African-American women; research suggests that they are more susceptible to vision-based motion sickness. Tibetans and Northeast Indians also appear to be more susceptible to motion sickness than Caucasian people, suggesting that they would also be more susceptible to virtual reality sickness, since susceptibility to motion sickness predicts susceptibility to a wide range of motion-sickness related disturbances.
Individual differences in susceptibility:
Experience with the system: Users seem to become less likely to develop virtual reality sickness as they develop familiarity with a virtual reality system. Adaptation may occur as quickly as the second exposure to the virtual reality system.
Individual differences in susceptibility:
Gender: Women are more susceptible than men to virtual reality sickness. This may be due to hormonal differences, to women having on average a wider field of view than men, or to gender differences in depth cue recognition. Women are most susceptible to virtual reality sickness during ovulation, and a wider field of view is also associated with increased virtual reality sickness. In more recent research, there is some disagreement as to whether gender or sex is a clear factor in susceptibility to virtual reality sickness.
Individual differences in susceptibility:
Health: Susceptibility to virtual reality sickness appears to increase in people who are not at their usual level of health, suggesting that virtual reality may not be appropriate for people who are in ill health. This includes people who are fatigued; have not had enough sleep; are nauseated; or have an upper respiratory illness, ear trouble, or influenza.
Mental rotation ability: Better mental rotation ability appears to reduce susceptibility to virtual reality sickness, suggesting that training users in mental rotation may reduce the incidence of virtual reality sickness.
Individual differences in susceptibility:
Field dependence/independence: Field dependence/independence is a measure of perceptual style. Those with strong field dependence exhibit a strong influence of surrounding environment on their perception of an object, whereas people with strong field independence show a smaller influence of surrounding environment on their perception of the object. While the relationship between field dependence/independence and virtual reality sickness is complex, it appears that, in general, people without a strong tendency towards one extreme or the other are most susceptible to virtual reality sickness.
Individual differences in susceptibility:
Motion sickness sensitivity: Those who are more sensitive to motion sickness in reality are also more sensitive to virtual reality sickness.
**Translation plane**
Translation plane:
In mathematics, a translation plane is a projective plane which admits a certain group of symmetries (described below). Along with the Hughes planes and the Figueroa planes, translation planes are among the most well-studied of the known non-Desarguesian planes, and the vast majority of known non-Desarguesian planes are either translation planes or can be obtained from a translation plane via successive iterations of dualization and/or derivation. In a projective plane, let P represent a point and l represent a line. A central collineation with center P and axis l is a collineation fixing every point on l and every line through P. It is called an elation if P is on l, otherwise it is called a homology. The central collineations with center P and axis l form a group. A line l in a projective plane Π is a translation line if the group of all elations with axis l acts transitively on the points of the affine plane obtained by removing l from Π, denoted Π^l (the affine derivative of Π). A projective plane with a translation line is called a translation plane.
Translation plane:
The affine plane obtained by removing the translation line is called an affine translation plane. While it is often easier to work with projective planes, in this context several authors use the term translation plane to mean affine translation plane.
Algebraic construction with coordinates:
Every projective plane can be coordinatized by at least one planar ternary ring. For translation planes, it is always possible to coordinatize with a quasifield. However, some quasifields satisfy additional algebraic properties, and the corresponding planar ternary rings coordinatize translation planes which admit additional symmetries. Some of these special classes are: Nearfield planes - coordinatized by nearfields.
Semifield planes - coordinatized by semifields; semifield planes have the property that their dual is also a translation plane.
Algebraic construction with coordinates:
Moufang planes - coordinatized by alternative division rings; Moufang planes are exactly those translation planes that have at least two translation lines. Every finite Moufang plane is Desarguesian and every Desarguesian plane is a Moufang plane, but there are infinite Moufang planes that are not Desarguesian (such as the Cayley plane). Given a quasifield with operations + (addition) and ⋅ (multiplication), one can define a planar ternary ring to create coordinates for a translation plane. However, it is more typical to create an affine plane directly from the quasifield by defining the points as pairs (a, b) where a and b are elements of the quasifield, and the lines as the sets of points (x, y) satisfying an equation of the form y = m⋅x + b as m and b vary over the elements of the quasifield, together with the sets of points (x, y) satisfying an equation of the form x = a as a varies over the elements of the quasifield.
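To make this construction concrete, the following sketch builds the points and lines of the affine plane coordinatized in this way, using the field GF(5) as a stand-in for the quasifield (every field is in particular a quasifield); the choice of order and the sanity checks are assumptions made purely for illustration.

```python
from itertools import combinations

# Affine plane coordinatized by a quasifield, here illustrated with GF(p).
p = 5
elements = range(p)

points = [(a, b) for a in elements for b in elements]

# Lines y = m*x + b for all slopes m and intercepts b ...
lines = [frozenset((x, (m * x + b) % p) for x in elements)
         for m in elements for b in elements]
# ... together with the vertical lines x = a.
lines += [frozenset((a, y) for y in elements) for a in elements]

# Sanity checks an affine plane of order p must satisfy.
assert len(points) == p * p
assert len(set(lines)) == p * p + p              # p^2 + p distinct lines
assert all(len(line) == p for line in lines)     # every line has p points

# Any two distinct points lie on exactly one common line.
for P, Q in combinations(points, 2):
    assert sum(1 for line in lines if P in line and Q in line) == 1
print("affine plane of order", p, "constructed")
```

For a genuine non-field quasifield the code has the same shape; only the multiplication m⋅x would be replaced by the quasifield's multiplication, and the resulting plane need not be Desarguesian.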
Geometric construction with spreads (Bruck/Bose):
Translation planes are related to spreads of odd-dimensional projective spaces by the Bruck-Bose construction. A spread of PG(2n+1, K), where n ≥ 1 is an integer and K a division ring, is a partition of the space into pairwise disjoint n-dimensional subspaces. In the finite case, a spread of PG(2n+1, q) is a set of q^(n+1) + 1 n-dimensional subspaces, no two of which intersect.
Geometric construction with spreads (Bruck/Bose):
Given a spread S of PG(2n+1, K), the Bruck-Bose construction produces a translation plane as follows: embed PG(2n+1, K) as a hyperplane Σ of PG(2n+2, K), and define an incidence structure A(S) whose "points" are the points of PG(2n+2, K) not on Σ and whose "lines" are the (n+1)-dimensional subspaces of PG(2n+2, K) meeting Σ in an element of S. Then A(S) is an affine translation plane. In the finite case, this procedure produces a translation plane of order q^(n+1).
Geometric construction with spreads (Bruck/Bose):
The converse of this statement is almost always true. Any translation plane which is coordinatized by a quasifield that is finite-dimensional over its kernel K (K is necessarily a division ring) can be generated from a spread of PG(2n+1, K) using the Bruck-Bose construction, where (n+1) is the dimension of the quasifield, considered as a module over its kernel. An immediate corollary of this result is that every finite translation plane can be obtained from this construction.
Algebraic construction with spreads (André):
André gave an earlier algebraic representation of (affine) translation planes that is fundamentally the same as Bruck/Bose. Let V be a 2n-dimensional vector space over a field F. A spread of V is a set S of n-dimensional subspaces of V that partition the non-zero vectors of V. The members of S are called the components of the spread and if Vi and Vj are distinct components then Vi ⊕ Vj = V. Let A be the incidence structure whose points are the vectors of V and whose lines are the cosets of components, that is, sets of the form v + U where v is a vector of V and U is a component of the spread S. Then: A is an affine plane and the group of translations x → x + w for w in V is an automorphism group acting regularly on the points of this plane.
Algebraic construction with spreads (André):
The finite case. Let F = GF(q) = F_q, the finite field of order q, and let V be the 2n-dimensional vector space over F represented as V = {(x, y) : x, y ∈ F^n}.
Let M_0, M_1, ..., M_(q^n − 1) be n × n matrices over F with the property that M_i − M_j is nonsingular whenever i ≠ j. For i = 0, 1, ..., q^n − 1 define V_i = {(x, xM_i) : x ∈ F^n}, usually referred to as the subspaces "y = xM_i". Also define V_(q^n) = {(0, y) : y ∈ F^n}, the subspace "x = 0".
The set {V_0, V_1, ..., V_(q^n)} is a spread of V. The set of matrices M_i used in this construction is called a spread set, and these matrices can be used directly in the projective space PG(2n−1, q) to create a spread in the geometric sense.
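As a minimal illustration of a spread set and the spread it produces, the sketch below takes n = 2 and q = 3 and uses the matrices of multiplication by the elements of GF(9), represented over GF(3) with θ² = 2; this particular representation is an assumption made for the example and yields the Desarguesian spread.

```python
import itertools

p = 3          # GF(3); 2 is a non-square mod 3, so x^2 = 2 has no root in GF(3)
t = 2          # model GF(9) as GF(3)[theta] with theta^2 = t

# Spread set: the 2x2 matrices over GF(3) representing multiplication by
# a + b*theta in GF(9), acting on row vectors (x0, x1) with x = x0 + x1*theta.
def mult_matrix(a, b):
    return ((a % p, b % p), ((t * b) % p, a % p))

spread_set = [mult_matrix(a, b) for a in range(p) for b in range(p)]

def det(M):
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p

# Defining property of a spread set: differences of distinct members are nonsingular.
for M, N in itertools.combinations(spread_set, 2):
    D = tuple(tuple((m - n) % p for m, n in zip(rm, rn)) for rm, rn in zip(M, N))
    assert det(D) != 0

# The components V_i = {(x, x M_i)} together with "x = 0" partition the
# nonzero vectors of V = F^4, as the Andre construction requires.
def row_times(x, M):
    return tuple(sum(x[k] * M[k][j] for k in range(2)) % p for j in range(2))

vectors_x = list(itertools.product(range(p), repeat=2))
components = [{(*x, *row_times(x, M)) for x in vectors_x} for M in spread_set]
components.append({(0, 0, *y) for y in vectors_x})      # the subspace x = 0

covered = set()
for comp in components:
    comp.discard((0, 0, 0, 0))
    assert covered.isdisjoint(comp)    # components meet only in the zero vector
    covered |= comp
assert len(covered) == p**4 - 1        # together they cover every nonzero vector
print(len(spread_set) + 1, "components form a spread of V = F^4")
```

Replacing the field-multiplication matrices by any other set of q^n matrices with nonsingular pairwise differences gives another spread, and hence (via the André/Bruck-Bose construction) another, possibly non-Desarguesian, translation plane.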
Reguli and regular spreads:
Let Σ be the projective space PG(2n+1, K) for n ≥ 1 an integer and K a division ring. A regulus R in Σ is a collection of pairwise disjoint n-dimensional subspaces with the following properties: (1) R contains at least 3 elements; (2) every line meeting three elements of R, called a transversal, meets every element of R; and (3) every point of a transversal to R lies on some element of R. Any three pairwise disjoint n-dimensional subspaces in Σ lie in a unique regulus. A spread S of Σ is regular if for any three distinct n-dimensional subspaces of S, all the members of the unique regulus determined by them are contained in S. For any division ring K with more than 2 elements, if a spread S of PG(2n+1, K) is regular, then the translation plane created by that spread via the André/Bruck-Bose construction is a Moufang plane. A slightly weaker converse holds: if a translation plane is Pappian, then it can be generated via the André/Bruck-Bose construction from a regular spread. In the finite case, K must be a field of order q > 2, and the classes of Moufang, Desarguesian and Pappian planes are all identical, so this theorem can be refined to state that a spread S of PG(2n+1, q) is regular if and only if the translation plane created by that spread via the André/Bruck-Bose construction is Desarguesian.
Reguli and regular spreads:
In the case where K is the field GF(2), all spreads of PG(2n+1, 2) are trivially regular, since a regulus contains only three elements. While the only translation plane of order 8 is Desarguesian, there are known to be non-Desarguesian translation planes of order 2^e for every integer e ≥ 4.
Families of non-Desarguesian translation planes:
Hall planes - constructed via Bruck/Bose from a regular spread of PG(3,q) where one regulus has been replaced by the set of transversal lines to that regulus (called the opposite regulus).
Subregular planes - constructed via Bruck/Bose from a regular spread of PG(3,q) where a set of pairwise disjoint reguli have been replaced by their opposite reguli.
Other families include André planes, nearfield planes, and semifield planes.
Finite translation planes of small order:
It is well known that the only projective planes of order 8 or less are Desarguesian, and there are no known non-Desarguesian planes of prime order. Finite translation planes must have prime power order. There are four projective planes of order 9, of which two are translation planes: the Desarguesian plane and the Hall plane. The following table details the current state of knowledge:
**Terse (file format)**
Terse (file format):
TERSE is an IBM archive file format that supports lossless compression. A TERSE file may contain a sequential data set, a partitioned data set (PDS), a partitioned data set extended (PDSE), or a large format dataset (DSNTYPE=LARGE). Any record format (RECFM) is allowed as long as the record length is less than 32 K (64 K for RECFM=VBS). Records may contain printer control characters. Terse files are compressed using a modification of the Ziv-Lempel compression algorithm developed by Victor S. Miller and Mark Wegman at the Thomas J. Watson Research Center in Yorktown Heights, New York. The Terse algorithm was proprietary to IBM; however, IBM has released an open-source Java decompressor under the Apache 2 license. The compression/decompression program (called terse and unterse), AMATERSE or TRSMAIN, is available from IBM for z/OS; the z/VM equivalents are the TERSE and DETERSE commands, for sequential datasets only. Versions for PC DOS, OS/2, AIX, Windows (2000, XP, 2003), Linux, and Mac OS X are available online.
AMATERSE:
The following JCL can be used to invoke AMATERSE on z/OS (TRSMAIN uses INFILE and OUTFILE instead of SYSUT1 and SYSUT2):
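A representative job might look like the sketch below, assembled from the documented ddnames (SYSUT1 for input, SYSUT2 for output, SYSPRINT for messages) and PARM values (PACK, SPACK, or UNPACK); the job name, dataset names, and space allocation are placeholders rather than IBM's verbatim sample.

```jcl
//TERSE    JOB (ACCT),'TERSE EXAMPLE',CLASS=A,MSGCLASS=X
//*
//* Compress MY.INPUT.DATASET into MY.INPUT.DATASET.TRS.
//* PARM=SPACK selects the newer compaction method; use PARM=PACK for
//* compatibility with older TRSMAIN output, or PARM=UNPACK to decompress.
//*
//PACK     EXEC PGM=AMATERSE,PARM=SPACK
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=MY.INPUT.DATASET
//SYSUT2   DD DSN=MY.INPUT.DATASET.TRS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE)
```

Decompression reverses the roles: the tersed file goes to SYSUT1, the restored dataset is written to SYSUT2, and the PARM is changed to UNPACK.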
Uses:
Terse can be used as a general-purpose compression/decompression tool. IBM also distributes downloadable program temporary fixes (PTFs) as tersed datasets. Terse is also used by IBM customers to package diagnostic information, such as z/OS dumps and traces, for transmission to IBM.