**Redescending M-estimator**
Redescending M-estimator:
In statistics, redescending M-estimators are Ψ-type M-estimators which have ψ functions that are non-decreasing near the origin, but decreasing toward 0 far from the origin. Their ψ functions can be chosen to redescend smoothly to zero, so that they usually satisfy ψ(x) = 0 for all x with |x| > r, where r is referred to as the minimum rejection point.
Due to these properties of the ψ function, these kinds of estimators are very efficient, have a high breakdown point and, unlike other outlier rejection techniques, they do not suffer from a masking effect. They are efficient because they completely reject gross outliers yet, unlike the median, do not completely ignore moderately large outliers.
Advantages:
Redescending M-estimators have high breakdown points (close to 0.5), and their Ψ function can be chosen to redescend smoothly to 0. This means that moderately large outliers are not ignored completely, and greatly improves the efficiency of the redescending M-estimator.
The redescending M-estimators are slightly more efficient than the Huber estimator for several symmetric, wider tailed distributions, but about 20% more efficient than the Huber estimator for the Cauchy distribution. This is because they completely reject gross outliers, while the Huber estimator effectively treats these the same as moderate outliers.
As other M-estimators, but unlike other outlier rejection techniques, they do not suffer from masking effects.
Disadvantages:
The M-estimating equation for a redescending estimator may not have a unique solution. Consequently, the initial point for an iterative solution must be chosen with care, e.g., by use of another estimator.
Choosing redescending Ψ functions:
When choosing a redescending ψ function, care must be taken that it does not descend too steeply, which can have a very bad influence on the denominator in the expression for the asymptotic variance, ∫ψ² dF / (∫ψ′ dF)², where F is the mixture model distribution.
This effect is particularly harmful when a large negative value of ψ′(x) combines with a large positive value of ψ2(x), and there is a cluster of outliers near x.
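As a rough numerical check of the variance expression above, the sketch below evaluates ∫ψ² dF / (∫ψ′ dF)² for one concrete choice; the biweight ψ with tuning constant k = 4.685 and the standard normal F are assumptions made for the example, not values from the text.

```python
import numpy as np

# Numerical sketch of the asymptotic variance V = ∫ψ²dF / (∫ψ′dF)².
# Assumptions for the example (not from the text): ψ is Tukey's biweight
# with the conventional tuning constant k = 4.685, and F is the standard
# normal distribution.
k = 4.685
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)           # standard normal density

u = (x / k) ** 2
inside = np.abs(x) <= k
psi = np.where(inside, x * (1 - u) ** 2, 0.0)        # ψ(x)
dpsi = np.where(inside, (1 - u) * (1 - 5 * u), 0.0)  # ψ′(x), analytic

V = np.sum(psi**2 * f) * dx / (np.sum(dpsi * f) * dx) ** 2
# For this k the estimator is about 95% efficient at the normal, so V
# should come out a little above 1 (the efficient bound at the normal).
print(round(V, 3))
```

A steeper-descending ψ would shrink ∫ψ′ dF in the denominator, inflating V, which is exactly the hazard the paragraph above warns about.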
Examples:
1. Hampel's three-part M-estimators have ψ functions which are odd functions, defined for any x by: ψ(x) = x for 0 ≤ |x| ≤ a (central segment); ψ(x) = a·sign(x) for a ≤ |x| ≤ b (high and low flat segments); ψ(x) = a·sign(x)·(r − |x|)/(r − b) for b ≤ |x| ≤ r (end slopes); and ψ(x) = 0 for |x| > r (left and right tails). Typical parameter values are a = 1.645, b = 3 and r = 6.5.
2. Tukey's biweight or bisquare M-estimators have ψ functions defined for any positive k by: ψ(x) = x(1 − (x/k)²)² for |x| ≤ k, and ψ(x) = 0 for |x| > k. A typical parameter value is k = 5.
3. Andrews' sine wave M-estimator has the following ψ function: ψ(x) = sin(x) for −π ≤ x ≤ π, and ψ(x) = 0 otherwise.
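The three example ψ functions can be sketched directly in code; a minimal NumPy implementation, using the parameter values quoted above:

```python
import numpy as np

# Sketch implementations of the three example redescending ψ functions,
# with the parameter values from the text (a = 1.645, b = 3, r = 6.5 for
# Hampel; k = 5 for Tukey).

def hampel_psi(x, a=1.645, b=3.0, r=6.5):
    """Hampel's three-part ψ: linear, flat, redescending, then zero."""
    ax = np.abs(x)
    s = np.sign(x)
    return np.select(
        [ax <= a, ax <= b, ax <= r],      # first matching condition wins
        [x, a * s, a * s * (r - ax) / (r - b)],
        default=0.0,
    )

def tukey_psi(x, k=5.0):
    """Tukey's biweight (bisquare) ψ."""
    return np.where(np.abs(x) <= k, x * (1 - (x / k) ** 2) ** 2, 0.0)

def andrews_psi(x):
    """Andrews' sine wave ψ."""
    return np.where(np.abs(x) <= np.pi, np.sin(x), 0.0)

# All three redescend to exactly zero beyond their rejection point,
# which is what lets these estimators reject gross outliers completely:
for psi in (hampel_psi, tukey_psi, andrews_psi):
    print(float(psi(np.array([100.0]))[0]))   # 0.0 in each case
```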
**Efference copy**
Efference copy:
In physiology, an efference copy or efferent copy is an internal copy of an outflowing (efferent), movement-producing signal generated by an organism's motor system. It can be collated with the (reafferent) sensory input that results from the agent's movement, enabling a comparison of actual movement with desired movement, and a shielding of perception from particular self-induced effects on the sensory input to achieve perceptual stability. Together with internal models, efference copies can serve to enable the brain to predict the effects of an action. An equivalent term with a different history is corollary discharge. Efference copies are important in enabling motor adaptation, such as enhancing gaze stability. They have a role in the perception of self and nonself electric fields in electric fish. They also underlie the phenomenon of tickling.
Motor control:
Motor signals A motor signal from the central nervous system (CNS) to the periphery is called an efference, and a copy of this signal is called an efference copy. Sensory information coming from sensory receptors in the peripheral nervous system to the central nervous system is called afference. On a similar basis, nerves into the nervous system are afferent nerves and ones out are termed efferent nerves.
When an efferent signal is produced and sent to the motor system, it has been suggested that a copy of the signal, known as an efference copy, is created so that exafference (sensory signals generated from external stimuli in the environment) can be distinguished from reafference (sensory signals resulting from an animal's own actions).
This efference copy, by providing the input to a forward internal model, is then used to generate the predicted sensory feedback that estimates the sensory consequences of a motor command. The actual sensory consequences of the motor command are then deployed to compare with the corollary discharge to inform the CNS about how well the expected action matched its actual external action.
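The forward-model comparison just described can be sketched as a toy computation. The forward model, its gain, and the signal values below are invented for illustration and are not a model from the literature.

```python
# Toy sketch of the efference-copy / forward-model comparison described
# above. The linear forward model, its gain, and the noise-free arithmetic
# are illustrative assumptions only.

def forward_model(efference_copy, gain=2.0):
    """Predict the reafferent (self-generated) sensory consequence of a
    motor command; the linear gain stands in for a real internal model."""
    return gain * efference_copy

def perceive(motor_command, actual_sensation):
    """Subtract the predicted reafference from the total sensory input;
    what remains is attributed to the outside world (exafference)."""
    efference_copy = motor_command            # copy of the outgoing signal
    predicted_reafference = forward_model(efference_copy)
    exafference = actual_sensation - predicted_reafference
    return exafference

# Moving with command 1.0 while the world contributes 0.5:
total_input = forward_model(1.0) + 0.5        # reafference + exafference
print(perceive(1.0, total_input))             # 0.5 — only the external part
```

When the prediction matches the reafference exactly, the self-generated component cancels and only externally caused sensation remains, which is the comparison the CNS is said to use to evaluate how well the expected action matched the actual one.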
Corollary discharge Corollary discharge is characterized as an efference copy of an action command used to inhibit any response to the self-generated sensory signal which would interfere with the execution of the motor task. The inhibitory commands originate at the same time as the motor command and target the sensory pathway that would report any reafference to higher levels of the CNS. This is distinct from the efference copy, since the corollary discharge is actually fed into the sensory pathway to cancel out the reafferent signals generated by the movement. Alternatively, corollary discharges briefly alter self-generated sensory responses to reduce self-induced desensitization or to help distinguish between self-generated and externally generated sensory information.
History:
Steinbuch "In 1811 Johann Georg Steinbuch (1770–1818) referred repeatedly to the problem of efference copy and reafference in his book "Beytrag zur Physiologie der Sinne" ("Contribution to the Physiology of Senses"). After studying medicine, Steinbuch worked for a number of years as lecturer at the University of Erlangen and thereafter as physician in Heidenheim, Ulm, and Herrenberg (Württemberg, South Germany). As a young university teacher, he was particularly interested in the brain mechanisms which enable the perception of space and objects, but in later years his attention shifted to the more practical problems of clinical medicine. Together with Justinus Kerner he gave a very precise description in 1817 of the clinical symptoms of botulism. In his book "Beytrag zur Physiologie der Sinne", Steinbuch presented a very careful analysis of the tactile recognition of objects by the grasping hand. Hereby, he developed the hypothesis that the cerebral mechanisms controlling the movement of the hands interact within the brain with the afferent signal flow evoked in the mechanoreceptors while the grasping hand is moving across the surface of the object. The cerebral signals controlling the movement were called "Bewegidee" (motion idea). According to Steinbuch's model, only by the interaction of the "Bewegidee" with the afferent signal flow did object recognition become possible. He illustrated his statements by a simple experiment: if an object passively activates the mechanoreceptors of the palm and fingers of a resting hand for sufficient sequences and time, object recognition is not achieved. When the hand, however, grasps actively, object recognition occurs within a few seconds."
von Helmholtz The first person to propose the existence of efferent copies was the German physician and physicist Hermann von Helmholtz in the middle of the 19th century.
He argued that the brain needed to create an efference copy for the motor commands that controlled eye muscles so as to aid the brain's determining the location of an object relative to the head. His argument used the experiment in which one gently presses on one's own eye. If this is done, one notices that the visual world seems to have "moved" as a result of this passive movement of the eyeball. In contrast, if the eyeball is actively moved by the eye muscles the world is perceived as still. The reasoning made is that with a passive movement of the eyeball, no efferent copies are made as with active movements that allow sensory changes to be anticipated and controlled for with the result in their absence the world appears to move.
Sherrington In 1900, Charles Sherrington, the founder of modern ideas about motor control, rejected von Helmholtz's ideas and argued that efference copies were not needed, as muscles had their own sense of the movements they made. "The view [of von Helmholtz and his followers] which dispenses with peripheral organs and afferent nerves for the muscular sense has had powerful adherents ... It supposes that during ... a willed movement the outgoing current of impulses from brain to muscle is accompanied by a 'sensation for innervation'," which, he said, "remains unproven". This resulted in the idea of efference copies being dropped for the next 75 years.
Von Holst In 1950, Erich von Holst and Horst Mittelstaedt investigated how species are able to distinguish between exafference and reafference given a seemingly identical percept of the two. To explore this question, they rotated the head of a fly 180 degrees, effectively reversing the right and left edges of the retina and reversing the subject's subsequent reafferent signals. In this state, self-initiated movements of the fly would result in a perception that the world was also moving, rather than standing still as they would in a normal fly. After rotation of the eyes, the animal showed a reinforcement of the optokinetic response in the same direction as the moving visual input. Von Holst and Mittelstaedt interpreted their findings as evidence that corollary discharge (i.e. neural inhibition with active movement) could not have accounted for this observed change as this would have been expected to inhibit the optokinetic reaction. They concluded that an "Efferenzkopie" of the motor command was responsible for this reaction due to the persistence of the reafferent signal and given the consequent discrepancy between expected and actual sensory signals which reinforced the response rather than preventing it.
Sperry The Nobel Prize winner, Roger Wolcott Sperry argued for the basis of corollary discharges following his research upon the optokinetic reflex. He is also regarded as the originator of the term "corollary discharge".
Motor adaptation:
The Coriolis effect Efference copy relates to the Coriolis effect in that it allows for learning and correction of errors arising from self-generated Coriolis forces. During trunk rotational movements, the CNS learns to anticipate Coriolis effects by generating an appropriate efference copy that can be compared with reafferent information.
Gaze stability It has been proposed that efference copy has an important role in maintaining gaze stability with active head movement by augmenting the vestibulo-ocular reflex (aVOR) during dynamic visual acuity testing.
Grip force Efference copy within an internal model allows grip force to be modulated in parallel with a given load. In other words, a subject can properly grip a load because the internal model predicts the object's load force without delay. Flanagan and Wing tested whether an internal model is used to predict movement-dependent loads by observing grip-force changes with known loads during arm movements. They found that even when subjects were given different known loads, grip force predicted the load force. Even when the load force changed suddenly, grip force never lagged in its phase relationship with the load force, supporting the existence of an internal model in the CNS that allows the proper prediction to occur. Kawato has suggested that for gripping, the CNS uses a combination of the inverse and forward models. With the use of the efference copy, the internal model can predict a future hand trajectory, thus allowing the grip to match the particular load of the known object.
Tickling:
Experiments have been conducted wherein subjects' feet are tickled both by themselves and with a robotic arm controlled by their own arm movements. These experiments have shown that people find a self-produced tickling motion of the foot to be much less "tickly" than a tickling motion produced by an outside source. The researchers postulated that this is because when a person sends a motor command to produce the tickling motion, the efference copy anticipates and cancels out the sensory outcome. This idea is further supported by evidence that a delay between the self-produced tickling motor command and the actual execution of this movement (mediated by a robotic arm) causes an increase in the perceived tickliness of the sensation. This shows that when the efference copy is incompatible with the afference, the sensory information is perceived as if it were exafference. It is therefore theorized that one cannot tickle oneself because when the predicted sensory feedback (efference copy) matches the actual sensory feedback, the actual feedback is attenuated. If the predicted sensory feedback does not match the actual sensory feedback, whether because of a delay (as in the mediation by the robotic arm) or because of external influences from the environment, the brain cannot predict the tickling motion on the body, and a more intense tickling sensation is perceived.
Speech:
It has been argued that motor efference copies play an important role in speech production. Tian and Poeppel propose that a motor efference copy is used to produce a forward model of somatosensory estimation, which entails an estimation of the articulatory movement and position of the articulators as a result of planned motor action. A second (subsequent) auditory efference copy entails the estimation of auditory information as produced by the articulatory system in a second forward model. Both of these forward models can produce respective predictions and corollary discharge, which can in turn be used in comparisons with somatosensory and auditory feedback. Moreover, this system is thought by some to be the basis for inner speech, especially in relation to auditory verbal hallucinations. In the case of inner speech, the efference signal is not sent or is inhibited before action takes place, leaving only the efference copy and leading to the perception of inner speech or inner hearing. In the case of auditory verbal hallucinations, it is thought that a breakdown along the efference copy and forward model route creates a mismatch between what is expected and what is observed, leading to the experience that speech is not produced by oneself. Recent studies suggest that an efference copy already occurs when an acoustic signal is generated by the press of a button. The differences in the ERP signal of the efference copy are so pronounced that machine learning algorithms can distinguish between schizophrenia patients and healthy control subjects, for example. Efference copies also occur not only with spoken words but with inner language, the quiet production of words.
Mormyrid electric fish:
The mormyrid electric fish provides an example of corollary discharge in lower vertebrates. Specifically, the knollenorgan sensor (KS) is involved with electro-communication, detecting the electric organ discharges (EOD) of other fish. Unless the reafference were somehow modulated, the KS would also detect self-generated EODs, which would interfere with the interpretation of the external EODs needed for communication between fish. However, these fish display corollary discharges that inhibit the ascending sensory pathway at the first CNS relay point. These corollary discharges are timed to arrive at the same time as the reafference from the KS, so as to minimize the interference of self-produced EODs with the perception of external EODs and to optimize the duration of inhibition.
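The timed gating described here can be sketched as a toy computation; the event times, blanking window, and function names below are invented for illustration, not measured values.

```python
# Toy sketch of the timed corollary-discharge gating described above:
# inhibition arrives at the first relay in step with the fish's own EOD,
# blanking the self-generated response while external EODs pass through.
# Times (ms) and the blanking window are invented for illustration.

def gate_spikes(sensed_times, self_eod_times, blank=2.0):
    """Drop sensory events within `blank` ms of a self-generated EOD."""
    return [t for t in sensed_times
            if all(abs(t - s) > blank for s in self_eod_times)]

sensed = [10.0, 50.0, 51.0, 90.0]   # KS responses; 50-51 ms is self-EOD reafference
own    = [50.0]                      # times of the fish's own EOD commands
print(gate_spikes(sensed, own))      # [10.0, 90.0] — external EODs survive
```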
**Nusinersen**
Nusinersen:
Nusinersen, marketed as Spinraza, is a medication used in treating spinal muscular atrophy (SMA), a rare neuromuscular disorder. In December 2016, it became the first approved drug used in treating this disorder.
Since the condition it treats is so rare, nusinersen has so-called "orphan drug" designation in the United States and the European Union.
Medical uses:
The drug is used to treat spinal muscular atrophy associated with a mutation in the SMN1 gene. It is administered directly to the central nervous system (CNS) by intrathecal injection. In clinical trials, the drug halted disease progression, and in around 60% of infants affected by type 1 spinal muscular atrophy it improved motor function.
Side effects:
People treated with nusinersen had an increased risk of upper and lower respiratory infections and congestion, ear infections, constipation, pulmonary aspiration, teething, and scoliosis. There is a risk that the growth of infants and children might be stunted. In older clinical trial subjects, the most common adverse events were headache, back pain, and other adverse effects of the spinal injection, such as post-dural-puncture headache. Although not observed in the trial patients, a reduction in platelets and a risk of kidney damage are theoretical risks for antisense drugs, so platelet counts and kidney function should be monitored during treatment. In 2018, several cases of communicating hydrocephalus in children and adults treated with nusinersen emerged; it remains unclear whether this was drug-related.
Pharmacology:
Spinal muscular atrophy is caused by loss-of-function mutations in the SMN1 gene, which codes for the survival motor neuron (SMN) protein. Affected people survive owing to the low amounts of SMN protein produced from the SMN2 gene. Nusinersen modulates alternative splicing of the SMN2 gene, functionally converting it into an SMN1-like gene, thus increasing the level of SMN protein in the CNS. The drug distributes to the CNS and peripheral tissues. The half-life is estimated to be 135 to 177 days in cerebrospinal fluid (CSF) and 63 to 87 days in blood plasma. The drug is metabolized via exonuclease (3′- and 5′-)mediated hydrolysis and does not interact with CYP450 enzymes. The primary route of elimination for nusinersen and its metabolites is likely urinary excretion.
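A quick worked example of what the reported half-lives imply; first-order (exponential) decay is an assumption here, standard for such estimates but not stated in the text.

```python
# What the reported half-lives imply, assuming first-order decay
# (a standard assumption for half-life estimates, not stated in the text).

def fraction_remaining(days, half_life):
    """Fraction of drug left after `days`, for a given half-life in days."""
    return 0.5 ** (days / half_life)

# After 135 days — the lower CSF half-life estimate — half the drug remains:
print(fraction_remaining(135, 135))            # 0.5
# Plasma clears much faster: after the same 135 days with a 63-day half-life,
# only about a quarter remains.
print(round(fraction_remaining(135, 63), 3))
```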
Chemistry:
Nusinersen is an antisense oligonucleotide in which the 2'-hydroxy groups of the ribofuranosyl rings are replaced with 2'-O-2-methoxyethyl groups and the phosphate linkages are replaced with phosphorothioate linkages.
History:
Nusinersen was developed in a collaboration between Adrian Krainer at Cold Spring Harbor Laboratory and Ionis Pharmaceuticals (formerly called Isis Pharmaceuticals). Initial target-discovery work for nusinersen was done by Dr. Ravindra Singh and co-workers at the University of Massachusetts Medical School, funded by Cure SMA. Starting in 2012, Ionis partnered with Biogen on development and, in 2015, Biogen acquired an exclusive license to the drug for a US$75 million license fee, milestone payments up to US$150 million, and tiered royalties thereafter; Biogen also paid the costs of development subsequent to taking the license. The license to Biogen included licenses to intellectual property that Ionis had acquired from Cold Spring Harbor Laboratory and the University of Massachusetts. In November 2016, the new drug application was accepted under the FDA's priority review process on the strength of the Phase III trial and the unmet need, and was also accepted for review at the European Medicines Agency (EMA) at that time. It was approved by the FDA in December 2016 and by the EMA in May 2017 as the first drug to treat SMA. Subsequently, nusinersen was approved to treat SMA in Canada (July 2017), Japan (July 2017), Brazil (August 2017), Switzerland (September 2017), and China (February 2019).
Society and culture:
Economics Nusinersen's list price in the USA is US$125,000 per injection, which puts the treatment cost at US$750,000 in the first year and US$375,000 annually after that. According to The New York Times, this places nusinersen "among the most expensive drugs in the world". In October 2017, the authorities in Denmark recommended nusinersen for use only in a small subset of people with SMA type 1 (young babies) and refused to offer it as a standard treatment for all other people with SMA, citing an "unreasonably high price" compared to the benefit. Norwegian authorities rejected the funding in October 2017 because the price of the medicine was "unethically high". In February 2018, funding was approved for people under 18 years old; in April 2023 it was expanded to include adults. In August 2018, the National Institute for Health and Care Excellence (NICE), which weighs the cost-effectiveness of therapies for the NHS in England and Wales, recommended against offering nusinersen to people with SMA. Children with SMA type 1 were treated in the UK under a Biogen-funded expanded access programme; after enrolling 80 children, the scheme closed to new people in November 2018. In May 2019, however, NICE reversed its stance and announced its decision to recommend nusinersen for use across a wide spectrum of SMA for a 5-year period. The Irish Health Service Executive decided in February 2019 that nusinersen was too expensive to fund, saying the cost would be about €600,000 per patient in the first year and around €380,000 a year thereafter, "with an estimated budget impact in excess of €20 million over a five-year period" for the 25 children with SMA living in Ireland.
Both the manufacturer and patient groups disputed the numbers and pointed out that actual pricing arrangements for Ireland are in line with the price negotiated by the BeneluxA initiative, of which Ireland has been a member since June 2018. As of May 2019, nusinersen was available through public healthcare in more than 40 countries. In December 2021, nusinersen was included in the extended insurance coverage of China, and the price was reduced from ¥697,000 per vial to around ¥33,000 (~US$5,100) per vial.
**Very short patch repair**
Very short patch repair:
Very short patch (VSP) repair is a DNA repair system that removes GT mismatches created by the deamination of 5-methylcytosine to thymine. This system exists because the glycosylases which normally target deaminated bases cannot target thymine (it being one of the regular four bases in DNA).
The components of the system are MutS, which binds to the GT mismatch, the VSR endonuclease, which cuts the DNA, and MutL, which recruits the UvrD helicase.
VSR (very short patch repair) endonucleases occur in a variety of bacteria. They work by cutting, or rather, making a nick in DNA if the base pair is mutated or damaged.
Function:
Mutations in the base pairs of DNA can be harmful to the organism. In particular, C-to-T mutations occur quite often owing to deamination of methylated cytosine. Hence, VSR endonucleases serve to protect the cell from damage caused by mutated DNA.
Mechanism:
VSR recognises a TG mismatched base pair, generated after spontaneous deamination of methylated cytosines, and creates a nick in a single strand by cleaving the phosphate backbone on the 5' side of the thymine. DNA polymerase I then removes the T and some nucleotides 3' of it and resynthesises the patch. If left unrepaired, GT mismatches lead to C-to-T transition mutations. VSR repairs the mismatches in favour of the G-containing strand. In Escherichia coli, this endonuclease nicks double-stranded DNA within the sequence CT(A/T)GN or NT(A/T)GG, next to the thymidine residue that is mismatched to 2'-deoxyguanosine. The incision is mismatch-dependent and strand-specific.
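The sequence contexts above can be expressed as a simple pattern scan. This is only a toy illustration on a single strand: real Vsr acts on a T·G mismatch in duplex DNA, which a string search cannot represent, and the function name is invented for the example.

```python
import re

# Illustrative sketch: scanning one DNA strand for the E. coli Vsr
# recognition contexts given in the text, CT(A/T)GN or NT(A/T)GG, where
# the T in the motif is the one mismatched with G on the opposite strand.
# A lookahead is used so overlapping candidate sites are not skipped.
VSR_SITE = re.compile(r"(?=(CT[AT]G.|.T[AT]GG))")

def vsr_sites(strand):
    """Return 0-based start positions of candidate Vsr contexts."""
    return [m.start() for m in VSR_SITE.finditer(strand)]

seq = "AACTAGGTTT"
print(vsr_sites(seq))   # the CTAGG context starting at position 2
```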
Structure:
The structure of VSR is similar to the core structure of restriction endonucleases, which have a 3-layer alpha/beta/alpha topology. VSR has three aromatic residues (Phe67, Trp68 and Trp86) which intercalate into the major groove, bending the DNA and separating the two strands. The N-terminal domain stabilizes the interaction between the protein and the cleaved product, thereby protecting the nick from DNA ligase until the arrival of DNA polymerase I.
**Flo pass**
Flo pass:
The Flo Pass (Norwegian: Flo-pasningen) is a tactic used in association football, associated with the Norwegian national team in the early to mid-1990s. In a 4–5–1 formation, the full back hits a very long cross-field pass forward to a player on the opposite flank (sometimes called a wide target man), who would head the ball to either one of the central midfielders or to the striker.
Origin:
In the original move, employed by the Norwegian national team, the pass would be started by Stig Inge Bjørnebye and finished by Jostein Flo (after whom the tactic derives its name). Flo, at 6 ft 4 in, was a natural centre forward with the physicality and height to match. When playing on the right flank, he could exploit his aerial ability against opposing full backs. Norwegian head coach Egil "Drillo" Olsen, who led the national team in 1990–98 and 2009–13, has been credited with the tactic and is strongly identified with it.
The Flo pass was successfully deployed for the first time in February 1993, in a 1–1 friendly draw with Portugal. On this occasion Pål Lydersen – not Bjørnebye – launched the ball to Flo, leading to Gøran Sørloth scoring Norway's goal. The purpose of the Flo pass is to exploit the fact that the two players with the best heading and aerial abilities in a back four usually play as centre backs. Jostein Flo was a threat in the air, and when he moved out towards the right wing he only had to face the left back, who was usually weaker in the air than the central defenders. This increased the possibility of winning the ball, and the side could build up an attack from there, preferably before the opposing team had the opportunity to re-organize their defence.
Another advantage of this kind of play is that a technically limited football nation such as Norway, with only about 5 million people and heavy winter snow limiting opportunities to practice, can play to its strengths rather than its weaknesses. The tactic moves the ball forward very quickly and can surprise the opponent on the counterattack. This kind of tactical play took Norway, traditionally a weak football nation, to the runner-up spot in the FIFA ranking in 1997, second only to FIFA World Cup winners Brazil.
**Responsibility-driven design**
Responsibility-driven design:
Responsibility-driven design is a design technique in object-oriented programming, which improves encapsulation by using the client–server model. It focuses on the contract by considering the actions that the object is responsible for and the information that the object shares. It was proposed by Rebecca Wirfs-Brock and Brian Wilkerson.
Responsibility-driven design is in direct contrast with data-driven design, which promotes defining the behavior of a class along with the data that it holds. Data-driven design is not the same as data-driven programming, which is concerned with using data to determine the control flow, not class design.
In the client–server model they refer to, both the client and the server are classes or instances of classes; at any particular time, a given object can act as either client or server. Both parties commit to a contract and exchange information by adhering to it. The client can only make the requests specified in the contract, and the server must answer them. Responsibility-driven design thus avoids dealing with details, such as the way in which a request is carried out, by specifying only the intent of the request. The benefit is increased encapsulation, since the exact way in which a request is carried out is private to the server.
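The contract idea can be sketched in a few lines of code; the `Logger` name and its methods below are invented for the example, not taken from Wirfs-Brock and Wilkerson.

```python
# Illustrative sketch of the client–server contract: the client codes
# against the contract only, and how the server fulfils it stays private.
# The Logger example and its methods are invented for illustration.
from abc import ABC, abstractmethod

class Logger(ABC):
    """The contract: the requests a server of this kind must answer."""
    @abstractmethod
    def log(self, message: str) -> None: ...

class MemoryLogger(Logger):
    """One possible server; its storage strategy is an encapsulated detail."""
    def __init__(self):
        self._lines = []                  # private: clients never see this
    def log(self, message: str) -> None:
        self._lines.append(message)
    def count(self) -> int:
        return len(self._lines)

def client_task(logger: Logger) -> None:
    # The client states intent ("log this") without knowing how it is done.
    logger.log("task started")
    logger.log("task finished")

server = MemoryLogger()
client_task(server)
print(server.count())   # 2
```

Because `client_task` depends only on the `Logger` contract, the server's representation can change (file, database, network) without touching any client, which is the encapsulation benefit described above.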
To further the encapsulation of the server, Wirfs-Brock and Wilkerson call for language features that limit outside influence over the behavior of a class. They argue that the visibility of members and functions should be finely grained, as in the Eiffel programming language. Even finer-grained control, over the visibility of classes themselves, is available in the Newspeak programming language.
Overview:
Responsibility-driven design focuses on objects as behavioral abstractions, characterized by their responsibilities. The CRC-card modelling technique is used to generate these behavioral abstractions. The rest of the object structure, including data attributes, is assigned later, as and when required. This makes the design follow a type hierarchy for inheritance, which improves encapsulation and makes it easier to identify abstract classes. It can also group classes together based on their clients, which is considered a unique ability.
A good object-oriented design involves an early focus on behaviors to realize the capabilities meeting the stated requirements and a late binding of implementation details to the requirements. This approach especially helps to decentralize control and distribute system behavior which can help manage the complexities of high-functionality large or distributed systems. Similarly, it can help to design and maintain explanation facilities for cognitive models, intelligent agents, and other knowledge-based systems.
Building blocks:
In their book Object Design: Roles, Responsibilities and Collaborations, the authors describe the following building blocks that make up responsibility-driven design.
Application: A software application is referred to as a set of interacting objects.
Candidates: Candidates or candidate objects are key concepts in the form of objects described on CRC cards. They serve as initial inventions in the process of object design.
Collaborations: A collaboration is defined as an interaction of objects or roles (or both).
CRC Cards: CRC stands for Candidates, Responsibilities, Collaborators. They are index cards used in early design for recording candidates. These cards are split up into an unlined and a lined side.
Content of lined side: On this side the candidate's name, its responsibilities and its collaborators are recorded.
Content of unlined side: On this side the candidate's name, its purpose in the application, stereotype roles and anything worthwhile such as the names of roles in patterns it participates in are recorded.
Hot Spots: Hot Spots are points in the application where variations occur. They are recorded using Hot Spot Cards.
Hot Spot Cards: Hot Spot Cards are used for recording variations with just enough detail to discriminate important differences. Like CRC cards, they are created from index cards. Each card consists of the hot spot's name, a general description of the variation, and at least two specific examples where the variation occurs.
Objects: Objects are described as things that have machine-like behaviors that can be plugged together to work in concert. These objects play well-defined roles and encapsulate scripted responses and information.
Object Neighborhoods: Another term for subsystem. It is a logical grouping of collaborators.
Responsibilities: A responsibility is an obligation to perform a task or know information. These are further categorized according to their usage scenario.
Public Responsibilities: Public responsibilities are the responsibilities an object offers as services to others and the information it provides to others.
Private Responsibilities: Private responsibilities are the actions an object takes in support of public responsibilities.
Subresponsibilities: Sometimes, a large or complicated responsibility is split up into smaller ones called subresponsibilities. They are further categorized based on what they do.
Subordinate Responsibilities: These include the major steps in each subresponsibility.
Sequencing Responsibilities: These refer to the sequencing of the execution of subordinate responsibilities.
Roles: An object role refers to an exterior view of the general service offered by the object. It is a set of related responsibilities. A role can be implemented as a class or an interface; an interface, however, is the preferred implementation, as it increases flexibility by hiding the concrete class that ultimately does the work.
Role Stereotypes: Role stereotypes are simplified roles that come with predefined responsibilities. There are several categories.
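The preference for implementing a role as an interface can be sketched as follows. The `InformationHolder` role and `InMemoryCatalog` class are hypothetical examples, and Python's `abc` module stands in for an interface here:

```python
from abc import ABC, abstractmethod

class InformationHolder(ABC):
    """A role expressed as an interface: clients depend on the role,
    not on any concrete class."""
    @abstractmethod
    def lookup(self, key): ...

class InMemoryCatalog(InformationHolder):
    """One concrete class playing the role; callers never need to know it."""
    def __init__(self, data):
        self._data = dict(data)

    def lookup(self, key):
        return self._data.get(key)

def describe(holder: InformationHolder, key):
    # Works with any object playing the role; the concrete class is hidden,
    # which is the flexibility the interface-based implementation buys.
    return holder.lookup(key)

catalog = InMemoryCatalog({"sku-1": "widget"})
```

Swapping `InMemoryCatalog` for, say, a database-backed implementation would not change `describe` at all, which is the point of hiding the concrete class behind the role.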
Controller: An object implementing this role makes decisions and closely directs the action of other objects.
Coordinator: This role reacts to events by delegating tasks to others.
Information Holder: An information holder knows and provides information.
Information Provider: A slight variation of the information holder is the information provider, which takes a more active role in managing and maintaining information. This distinction can be used if a designer needs to be more specific.
Interfacer: This role transforms information and requests between distinct parts of an application. It is further divided into more specific roles.
External Interfacer: External interfacer communicates with other applications rather than its own. It is mainly used for encapsulating non-object-oriented APIs and does not collaborate a lot.
Internal Interfacer: Also called an intersystem interfacer, it acts as a bridge between object neighborhoods.
User Interfacer: User interfacer communicates with users by responding to events generated in the UI and then passing them on to more appropriate objects.
Service Provider: This role performs work and offers computing services.
Structurer: This role maintains relationships between objects and information about those relationships.
Control style:
An important part of the responsibility-driven design process is the distribution of control responsibilities, which results in developing a control style. A control style is concerned with the flow of control between subsystems.
Concept of Control: The responsibilities and collaborations among the classes.
Control Centers: An important aspect of developing a control style is the invention of so-called control centers. These are the places where objects charged with controlling and coordinating reside.
Control Style Variations: A control style comes in three distinct variations. These are not precise definitions, though, since one control style can be said to be more centralized or delegated than another.
Centralized control style: This control style imposes a procedural paradigm on the structure of the application and places major decision-making responsibilities in only a few objects or a single object.
Types:
Call-return model: Control of the objects in the application is hierarchical; control starts at the root and moves downwards. It is used in a sequential model.
Manager model: Control of the objects in the application rests with a single object. It is generally implemented in concurrent models, though it can also be implemented in a sequential model using a case statement.
Advantages: Application logic is in one place.
Disadvantages: Control logic can get overly complex; controllers can become dependent on the information holders' contents; objects can become coupled indirectly through the actions of their controller; the only interesting work is done in the controller.
When to use: When decisions to be made are few, simple, and related to a single task.
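A minimal sketch of the centralized (manager-model) style, using a hypothetical order-processing example: one manager object holds all of the decision logic, while the surrounding objects are passive:

```python
class Inventory:
    """Information holder: stores stock levels, makes no decisions."""
    def __init__(self):
        self.stock = {"widget": 3}

class Shipping:
    """Service provider: acts only when told to."""
    def ship(self, item):
        return f"shipped {item}"

class OrderManager:
    """The single control center: all application logic lives here,
    as the centralized style dictates."""
    def __init__(self, inventory, shipping):
        self.inventory = inventory
        self.shipping = shipping

    def place_order(self, item):
        # The manager inspects the holder's contents and directs the action,
        # illustrating how it becomes coupled to what the holders contain.
        if self.inventory.stock.get(item, 0) > 0:
            self.inventory.stock[item] -= 1
            return self.shipping.ship(item)
        return "rejected"
```

Note how `OrderManager` reads `Inventory`'s internals directly, an example of the listed disadvantage that controllers become dependent on information holders' contents.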
Delegated control style: A delegated control style lies in between a centralized and a dispersed control style. It passes some of the decision making and much of the action to objects surrounding a control center, and each neighboring object has a significant role to play. It can also be called an event-driven model, in which control is delegated to the object requesting to process the event.
Types:
Broadcast model: An event is broadcast to all objects in the application; the object that can handle the event acquires control.
Interrupt-driven model: An interrupt handler processes the interrupt and passes it on to some object for processing.
Advantages: It is easy to understand.
Though there is an external coordinator, objects can be made smart enough to know what they are supposed to do, and can be reused in other applications.
Delegating coordinators tend to know about fewer objects than dominating controllers.
Dialogs are higher-level.
It is easy to change as changes typically affect fewer objects.
It is easier to divide design work among team members.
Disadvantages: Too much distribution of responsibility can lead to weak objects and weak collaborations.
When to use: When one wants to delegate work to objects that are more specialized.
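The event-driven delegation described above can be sketched like this: a coordinator reacts to events only by routing them to smarter, specialized objects. All class and event names here are illustrative:

```python
class PaymentHandler:
    """Specialized object: decides for itself how to handle its event."""
    def handle(self, event):
        return f"charged {event['amount']}"

class RefundHandler:
    """Another specialized, potentially reusable handler."""
    def handle(self, event):
        return f"refunded {event['amount']}"

class Coordinator:
    """Control center that delegates rather than deciding: it only knows
    which neighbor handles which event type."""
    def __init__(self):
        self.handlers = {
            "payment": PaymentHandler(),
            "refund": RefundHandler(),
        }

    def dispatch(self, event):
        # Delegate the whole decision to the specialized object
        return self.handlers[event["type"]].handle(event)
```

In contrast to the manager model, the coordinator here holds no application logic of its own; adding a new event type means adding a handler, not growing a central controller.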
Clustered control style: This control style is a variation of the centralized control style in which control is factored among a group of objects whose actions are coordinated. The main difference between a clustered and a delegated control style is that in a clustered control style the decision-making objects are located within a control center, whereas in a delegated control style they are mostly outside it.
Dispersed control style: A dispersed control style does not contain any control centers. The logic is spread across the entire population of objects, keeping each object small and building in as few dependencies among them as possible.
Advantages: None.
Disadvantages: To find out how something works, you must trace the sequence of requests for services across many objects; not very reusable, because no single object contributes much.
When to use: Never.
Preferred control style: Experimental results suggest that only designers with the necessary skills can make full use of a delegated control style, while a centralized control style tends to benefit less experienced programmers.
**Pit-house**
Pit-house:
A pit-house (or pit house, pithouse) is a house built in the ground and used for shelter. Besides providing shelter from the most extreme weather conditions, these structures may also be used to store food (much like a pantry, a larder, or a root cellar) and for cultural activities such as storytelling, dancing, singing and celebrations. General dictionaries also describe a pit-house as a dugout, and it has similarities to a half-dugout. In archaeology, a pit-house is frequently called a sunken-featured building and occasionally a (grub-)hut or grubhouse, after the German name Grubenhaus. They are found in numerous cultures around the world, including the people of the Southwestern United States, the ancestral Pueblo, the ancient Fremont and Mogollon cultures, the Cherokee, the Inuit, the people of the Plateau, and archaic residents of Wyoming (Smith 2003) in North America; archaic residents of the Lake Titicaca Basin (Craig 2005) in South America; Anglo-Saxons in Europe; and the Jōmon people in Japan. Anglo-Saxon pit-houses may in fact have served functions other than dwellings.
Usually, all that remains of an ancient pit-house is a dug-out hollow in the ground and any postholes used to support the roof. In the nineteenth century, it was believed that most prehistoric peoples lived in pit-houses, although it has since been shown that many of the features thought of as houses were in fact prehistoric food storage pits or served another purpose.
Mammoth bone dwellings:
The oldest pit dwellings were discovered in Mezhyrich, Central Ukraine. Dating back 15,000 years to the Upper Paleolithic age, the houses were made of mammoth bones. The base is circular or oval in shape, 12 to 14 feet (3.7 to 4.3 metres) in diameter, with limb bones used for walls and lighter, flat bones used for the roof. Presumably, animal hide was stretched around the exterior for insulation. Each dwelling had a hearth. Groups of houses were arranged around a base camp layout, occupied by families or relatives for weeks or months.
Early medieval Europe:
Pit-houses were built in many parts of northern Europe between the 5th and 12th centuries AD. In Germany they are known as Grubenhäuser, and in the United Kingdom, they are also known as grubhuts, grubhouses or sunken featured buildings.
Archaeological evidence indicates they were built in a shallow sub-rectangular pit and vary in depth (often relating to the preservation of the site). Some may measure as little as around 2m by 1.5m by 0.25m deep, whilst examples from excavations from the 1950s onwards at West Stow in the United Kingdom are 3.7m-4.44m long x 2.72m-3.5m wide x 0.58m-0.97m deep. Within this pit were placed two (but sometimes zero, four, or six) substantial wooden posts in postholes at either end of the long axis. Some archaeologists have suggested that a suspended wooden floor lay over the pit and that the cavity beneath was used for storage or to control dampness, although others have disputed this, suggesting that Grubenhäuser did not have suspended floors at all. A gabled roof supported by the timber posts covered the hut, which likely had no windows and had a single entrance at one end. Excavations at West Stow (UK) in the 1970s found preserved evidence of charred planks, suggestive of suspended floors. Hearths were also found, which sat partially over the edge of the sunken pits and appeared to have collapsed downwards when the structure supporting their overhanging sections (possibly a suspended floor) was removed. Grubenhäuser are often understood to have been domestic dwellings. However, their use may have varied, especially on a regional basis. In Western Europe, their small size and the fact that they can be found near other buildings and associated finds of loom weights have led to theories that they had a specialised purpose, such as weaving sheds. In the Slavonic regions of Eastern Europe, Grubenhäuser are larger and often have a fireplace. In most settlements, no traces of buildings at ground level have been found.
There are reconstructions of pit-houses in several open-air museums, e.g. in the Hitzacker Archaeological Centre, the Kalkriese Museum and Park, the Oerlinghausen Archaeological Open Air Museum, and the Hochdorf Chieftain's Grave.
In North America:
Throughout the inland Pacific Northwest, indigenous people were nomadic during the summer and gathered resources at different spots according to the season and tradition, but overwintered in permanent semi-subterranean pit houses at lower elevations. The winter was often the only time families saw others—even if they were from the same village and tribe—and congregated in any numbers before the arrival of trading posts. Often these houses were located along major rivers and tributaries like the Columbia and Fraser; they were typically round and fairly small, and were covered in layers of tule mats to keep out the weather and keep in the heat. There was a smoke hole in the center, and the interior, though warm in winter, was exceptionally smoky. In the northwestern Great Plains and the nearby Plateau region, climate changes and extreme temperature and weather conditions made it difficult to live year-round. Hot summers led to the building of simple tent-like structures that were portable and could be packed up to move. For cold winter months, pit-houses provided the warm, protected shelter necessary for survival.
Cross-cultural patterning:
A cross-cultural middle range model of pit-house architecture using George Murdock's 1967 Ethnographic Atlas found that 82 of the 862 societies in the sample occupy pit structures as either their primary or secondary dwellings. All but six of the 82 societies live above 32° north latitude, and four of the six cases in this sample that are below 32° north latitude are from "high mountain" regions in east Africa, Paraguay, and eastern Brazil. The last example is from the Yami, who occupied a small island south of Formosa.
Three conditions were always present among groups in the sample: 1) non-tropical climate during the season of pit structure habitation; 2) minimally a biseasonal settlement pattern; 3) reliance on stored food during the period of pit structure occupation. These conditions may be related to other factors of society and the presence of any or all of these three elements in society does not pre-condition occupation of pit structures. Nonetheless, these three conditions were present in all cases of pit structure occupation present in the Ethnographic Atlas. Other cultural patterns were common, but not universal across the sample. These commonalities include: cold season of occupation, low population estimates, and simple political and economic systems.
The ethnographic sample is based almost entirely on case studies from societies located in northern latitudes. The period of pit structure occupation is generally during the cold season, probably due to their thermal efficiency. Dug into the ground, pit structures take advantage of the insulating properties of soil, as well as having a low profile, which protects them from exposure to wind-induced heat loss. Since less heat is lost by transmission than in above-ground structures, less energy is required to maintain stable temperatures inside the structure. Out of the 82 ethnographic cases in the Ethnographic Atlas, 50 societies had population estimates. Of these, 64% had fewer than 100 people per settlement. In only 6% of cases were there more than 400 persons per settlement. The cases with the highest population densities were the Arikara and Hidatsa of the North American Great Plains and the Konso of Ethiopia. Gilman attributes the high population densities among the Arikara to the availability of buffalo.
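The sample proportions quoted in this section can be cross-checked with a few lines of arithmetic. The per-settlement counts below are derived from the stated percentages, not given directly in the source:

```python
# Share of societies in the Ethnographic Atlas sample using pit structures
pit_societies, total = 82, 862
share = pit_societies / total          # about 0.095, i.e. roughly 9.5%

# Of the 50 societies with population estimates:
with_estimates = 50
small = round(0.64 * with_estimates)   # fewer than 100 people per settlement
large = round(0.06 * with_estimates)   # more than 400 persons per settlement

print(f"{share:.1%} of the sample; {small} small, {large} large settlements")
```

So the 64% and 6% figures correspond to roughly 32 and 3 of the 50 societies with estimates, respectively.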
Pit structure occupations are generally associated with simple political and economic systems. For 86% of the sample, class stratification or social distinctions based on non-hereditary wealth were reported as absent. However, some pit-dwelling societies are characterized by chiefdom level complexity. In terms of economic organization, 77% of the societies who occupy pit structures had a hunting and gathering economy. This is a large fraction of the sample, but is not considered a universally consistent feature like biseasonal settlement and a reliance on stored foods during pit structure occupation.
During the part of the year when people are not living in pit structures, activities should be focused on acquiring foods to store. Based on the sample from the Ethnographic Atlas, this may be through either hunting and gathering or agricultural activity.
Many different prehistoric groups used pit houses. Although generally associated with the American southwest cultures, such as Fremont, Pueblo, Hohokam, and Mogollon, pit houses were used by a wide variety of people in a wide variety of places over the past 12,000 years. Large pit house formations have been excavated in British Columbia, Canada, such as at Keatley Creek Archaeological Site.
**Three wishes joke**
Three wishes joke:
The three wishes joke (or genie joke) is a joke format in which a character is given three wishes by a supernatural being, and fails to make the best use of them. Common scenarios include releasing a genie from a lamp, catching and agreeing to release a mermaid or magical fish, or crossing paths with the devil. The first two wishes go as expected, with the third wish being misinterpreted, or granted in an unexpected fashion that doesn't reflect the intent of the wish. Alternatively, the wishes are split between three people, with the last person's wish inadvertently or intentionally thwarting or undoing the wishes of the other characters. An example of the three wishes joke runs as follows: Three men are stranded on a desert island, when a bottle washes up on the shore. When they uncork the bottle, a genie appears and offers three wishes. The first wishes to be taken to Paris. The genie snaps his fingers, and the man suddenly finds himself standing in front of the Eiffel Tower. The second man wishes that he were in Hollywood, and with a snap of the genie's fingers, he finds himself on a Tinseltown movie set. The third man, now alone on the island, looks around and says, "I wish my friends were back."
Variations:
One variation on the theme has the protagonist turning the tables on the genie, who for some contrived reason has placed a condition on the wishes that would result in an opponent of the protagonist also benefiting from the wishes. An example of this joke was used in The Simpsons episode, "Homer Simpson, This Is Your Wife". There, a character tells Marge Simpson a joke in which a genie promises to grant a man whatever he wishes, with the caveat that the man's wife's lover gets double whatever the man gets. After first wishing for a house and a car, the man wishes to be beaten "half to death" — which Marge doesn't understand.
A very early version of the joke is found in an 1875 book of Scottish anecdotes. There, a Scottish highlander is asked what his three wishes would be. He first wishes for a lake full of whisky. His second wish is for a similar quantity of good food. When asked for his third wish, after a moment of indecision, he asks for a second lake full of whisky. A variation attributed to Denis Norden shows three people being granted three wishes, with two making very good choices, and the other making comically bad choices. Yet another variation is one in which the first wishes go wrong and, through the last one, the protagonists end up exactly as they were at the beginning. An example of this is the following, taken from Charles Perrault's story The Ridiculous Wishes: A poor starving peasant couple are granted three wishes and the woman, just taking the first thing that comes to her mind, wishes for one sausage, which she receives immediately. Her husband, pointing out that she could have wished for immense wealth or food to last them a lifetime, becomes angry with her for making such a stupid wish and, not thinking, wishes the sausage were stuck on her nose. Sure enough, the sausage is stuck in the middle of her face, and then they have to use the third wish to make it go away, upon which it disappears completely.
In fiction:
The format is not always used for humor. In "The Monkey's Paw", a horror short story by author W. W. Jacobs, the paw of a dead monkey is a talisman that grants its possessor three wishes, but the wishes come with an enormous price. In the story, the recipient of the monkey's paw wishes for £200, only to learn that his son has been killed in a terrible work accident, for which the employer makes a goodwill payment of £200. Later, the mother asks that the dead son be wished back to life. Upon hearing strange sounds and a knock at the door, the father realizes that the thing outside would be a horribly mutilated body, and wishes it away with the paw's final wish.
**New York sour**
New York sour:
The New York sour is an IBA official cocktail. Largely similar to the whiskey sour, the New York sour adds a float of dry red wine to the drink.
**Katsuobushi**
Katsuobushi:
Katsuobushi (Japanese: 鰹節) is simmered, smoked and fermented skipjack tuna (Katsuwonus pelamis, sometimes referred to as bonito). It is also known as bonito flakes or broadly as okaka (おかか).
Shaved katsuobushi and dried kelp—kombu—are the main ingredients of dashi, a broth that forms the basis of many soups (such as miso) and sauces (e.g., soba no tsukejiru) in Japanese cuisine.
Katsuobushi's distinct umami taste comes from its high inosinic acid content. Traditionally made katsuobushi, known as karebushi, is deliberately fermented with Aspergillus glaucus fungus in order to reduce moisture. Katsuobushi has also been shown to impart kokumi (a term translated as "heartiness").
Traditional production process:
The fish is beheaded, gutted, and filleted, with the fatty belly, which does not lend well to being preserved, trimmed off. The fillets are then arranged in a basket and simmered just below boiling for an hour to an hour and a half, depending on their size.
The rib bones are then removed and the fillets smoked for up to a month using oak, pasania, or castanopsis wood. They are smoked for five to six hours in one session, left to rest one day for the condensation to rise to the surface, then fired and smoked again the next day. This smoking and resting cycle is repeated 12–15 times in total. The built-up tar from the smoke is cleaned from the surface using a grinder. At this stage the fillets are called arabushi (荒節) and most commonly found in stores shaved and packaged for sale under the name katsuo-kezuri-bushi (鰹削り節) or hanakatsuo. They are not true katsuobushi without the last fermentation stage, but still valued as a good substitute.
The last stage of creating katsuobushi is to allow the fish to sun-dry using the assistance of mold. The fillets are sprayed with Aspergillus glaucus culture and left for two weeks in a closed cultivation room. The mold ferments the fillets and also draws out any residual moisture.
The mold is continually scraped off, with further sun-drying increasing hardness and dryness until the fillet resembles a piece of wood, at less than 20% of its original weight. By definition, only fillets that have been treated in this manner may be referred to as katsuobushi. After repeating this process of mold growth and sun-drying at least twice, the katsuobushi can also be called karebushi (枯節, "dried fillet"), and fillets repeating this process more than three times can be called honkarebushi (本枯節, "true dried fillet"). When tapped together lightly, they sound almost metallic, and unlike their dull beige outer appearance, when broken open they are a translucent deep ruby color inside. Rarely, very high-end honkarebushi repeat this drying process for over two years. In the Edo era, it was common for katsuobushi to go through an extra step, the so-called tebiyama style (手火山式, tebiyama-shiki) process. After the fillets are boiled and their rib bones removed, the fish are put in steaming baskets stacked atop one another for one to two hours a few meters above a burning wood fire. These are rotated to assure equal exposure to the smoke. The result is more flavorful and resistant to deterioration. Due to the extra cost and facilities required, only a few factories following tebiyama-shiki remain.
Shaving:
Traditionally, chunks of katsuobushi were shaved as needed with an instrument similar to a wood plane called a katsuobushi kezuriki.
Today katsuobushi is typically sold in bags of small pink-brown shavings, which vary by thickness: smaller, thinner shavings, called hanakatsuo (花鰹), are used as a flavoring and topping for many Japanese dishes, such as okonomiyaki, while the larger, thicker shavings, called kezurikatsuo (削り鰹), are favored for making the widely used dashi stock.
Uses:
In addition to making dashi, other popular uses of katsuobushi include: Okaka, finely chopped katsuobushi dressed with soy sauce.
As a stuffing for rice balls (onigiri).
As a topping for rice. Popular for bentō, often covered with strips of laver.
Dried okaka is used as an ingredient of furikake rice topping (called "okaka furikake").
As a seasoning for cold tofu (hiyayakko, 冷奴) along with grated ginger and Welsh onion (a type of spring onion).
Sprinkled with sesame seeds and chopped laver atop cold soba noodles (zarusoba).
As a topping on takoyaki and okonomiyaki.
As a seasoning on century egg along with sesame oil and soy sauce.
As a high-protein treat for cats sold at pet stores.
As a topping for ramen mixed with salt.
Health:
The mycotoxin beta-nitropropionic acid has been found on katsuobushi as well as in miso and in soy sauce, two other Japanese fungal fermented products. Certain strains of A. glaucus are reported to produce mycotoxins. Due to the smoking process, which involves tar and charcoal, amounts of benzopyrene exceeding EU standards, as much as 37μg per kilogram, have been detected in commercially sold katsuobushi. As a result, it was at one time banned for sale in the European Union.
**Bone morphogenetic protein 2**
Bone morphogenetic protein 2:
Bone morphogenetic protein 2 or BMP-2 belongs to the TGF-β superfamily of proteins.
Function:
BMP-2 like other bone morphogenetic proteins, plays an important role in the development of bone and cartilage. It is involved in the hedgehog pathway, TGF beta signaling pathway, and in cytokine-cytokine receptor interaction. It is also involved in cardiac cell differentiation and epithelial to mesenchymal transition.
Like many other proteins from the BMP family, BMP-2 has been demonstrated to potently induce osteoblast differentiation in a variety of cell types. BMP-2 may be involved in white adipogenesis and may have metabolic effects.
Interactions:
Bone morphogenetic protein 2 has been shown to interact with BMPR1A.
Clinical use and complications:
Bone morphogenetic protein 2 has been shown to stimulate the production of bone. Recombinant human protein (rhBMP-2) is currently available for orthopaedic usage in the United States. Implantation of BMP-2 is performed using a variety of biomaterial carriers ("metals, ceramics, polymers, and composites") and delivery systems ("hydrogel, microsphere, nanoparticles, and fibers"). While used primarily in orthopedic procedures such as spinal fusion, BMP-2 has also found its way into the field of dentistry. The use of dual tapered threaded fusion cages and recombinant human bone morphogenetic protein-2 on an absorbable collagen sponge obtained and maintained intervertebral spinal fusion, improved clinical outcomes, and reduced pain after anterior lumbar interbody arthrodesis in patients with degenerative lumbar disc disease. As an adjuvant to allograft bone or as a replacement for harvested autograft, bone morphogenetic proteins (BMPs) appear to improve fusion rates after spinal arthrodesis in both animal models and humans, while reducing the donor-site morbidity previously associated with such procedures. A study published in 2011 noted "reports of frequent and occasionally catastrophic complications associated with use of [BMP-2] in spinal fusion surgeries", with a level of risk far in excess of estimates reported in earlier studies. An additional review by Agrawal and Sinha of BMP-2 and its common delivery systems in early 2016 showed how "problems like ectopic growth, lesser protein delivery, [and] inactivation of the protein" reveal a further need "to modify the available carrier systems as well as explore other biomaterials with desired properties."
**Extrinsic pathway**
Extrinsic pathway:
In molecular biology, the term extrinsic pathway may refer to multiple cascades of protein interactions.
The extrinsic pathway of apoptosis refers to cell death induced by external factors that activate the death-inducing signaling complex.
The extrinsic pathway of blood coagulation, also known as the tissue factor pathway, is a cascade of enzymatic reactions resulting in blood clotting; it is initiated by tissue factor released from injured tissue cells.
A number of extracellular molecules are specialised to induce apoptosis. These extracellular signalling molecules bind to cell surface receptors (termed death receptors).
**Fire-safe polymers**
Fire-safe polymers:
Fire-safe polymers are polymers that are resistant to degradation at high temperatures. There is need for fire-resistant polymers in the construction of small, enclosed spaces such as skyscrapers, boats, and airplane cabins. In these tight spaces, the ability to escape in the event of a fire is compromised, increasing fire risk. In fact, some studies report that about 20% of victims of airplane crashes are killed not by the crash itself but by ensuing fires. Fire-safe polymers also find application as adhesives in aerospace materials, insulation for electronics, and in military materials such as canvas tenting. Some fire-safe polymers naturally exhibit an intrinsic resistance to decomposition, while others are synthesized by incorporating fire-resistant additives and fillers. Current research in developing fire-safe polymers is focused on modifying various properties of the polymers such as ease of ignition, rate of heat release, and the evolution of smoke and toxic gases. Standard methods for testing polymer flammability vary among countries; in the United States common fire tests include the UL 94 small-flame test, the ASTM E 84 Steiner Tunnel, and the ASTM E 622 National Institute of Standards and Technology (NIST) smoke chamber. Research on developing fire-safe polymers with more desirable properties is concentrated at the University of Massachusetts Amherst and at the Federal Aviation Administration, where a long-term research program on developing fire-safe polymers was begun in 1995. The Center for UMass/Industry Research on Polymers (CUMIRP) was established in 1980 in Amherst, MA as a concentrated cluster of scientists from both academia and industry for the purpose of polymer science and engineering research.
History:
Early history: Controlling the flammability of different materials has been a subject of interest since 450 B.C., when Egyptians attempted to reduce the flammability of wood by soaking it in potassium aluminum sulfate (alum). Between 450 B.C. and the early 20th century, other materials used to reduce the flammability of different materials included mixtures of alum and vinegar; clay and hair; clay and gypsum; alum, ferrous sulfate, and gypsum; and ammonium chloride, ammonium phosphate, borax, and various acids. These early attempts found application in reducing the flammability of wood for military materials, theater curtains, and other textiles, for example. Important milestones during this early work include the first patent for a mixture for controlling flammability, issued to Obadiah Wyld in 1735, and the first scientific exploration of controlling flammability, which was undertaken by Joseph Louis Gay-Lussac in 1821.
Developments since WWII: Research on fire-retardant polymers was bolstered by the need for new types of synthetic polymers in World War II. The combination of a halogenated paraffin and antimony oxide was found to be successful as a fire retardant for canvas tenting. Synthesis of polymers, such as polyesters, with fire-retardant monomers was also developed around this time. Incorporating flame-resistant additives into polymers became a common and relatively cheap way to reduce the flammability of polymers, while synthesizing intrinsically fire-resistant polymers has remained a more expensive alternative, although these polymers are usually more efficient at deterring combustion.
Polymer combustion:
General mechanistic scheme: Traditional polymers decompose under heat and produce combustible products; thus, they are able to start and easily propagate fire (as shown in Figure 1).
Polymer combustion:
The combustion process begins when heating a polymer yields volatile products. If these products are sufficiently concentrated, within the flammability limits, and at a temperature above the ignition temperature, then combustion proceeds. As long as the heat supplied to the polymer remains sufficient to sustain its thermal decomposition at a rate exceeding that required to feed the flame, combustion will continue.
Polymer combustion:
Purpose and methods of fire-retardant systems The purpose of a fire-retardant system is to keep the heat supplied to the polymer below the critical level needed to sustain combustion. To achieve this, one can create an endothermic environment, produce non-combustible products, or add chemicals that remove fire-propagating radicals (H and OH), to name a few approaches. These chemicals can be added into the polymer molecules permanently (see Intrinsically Fire-Resistant Polymers) or as additives and fillers (see Flame-Retardant Additives and Fillers).
Polymer combustion:
Role of oxygen Oxygen catalyzes the pyrolysis of polymers at low concentration and initiates oxidation at high concentration. The transition concentration differs among polymers (for polypropylene, for example, it lies between 5% and 15%). Additionally, polymers exhibit a structure-dependent relationship with oxygen: some structures are intrinsically more sensitive to decomposition upon reaction with oxygen. The amount of access that oxygen has to the surface of the polymer also plays a role in polymer combustion, as oxygen is better able to interact with the polymer before a flame has actually been ignited.
Polymer combustion:
Role of heating rate In most cases, results obtained at a typical heating rate (e.g., 10 °C/min in thermal degradation studies) do not differ significantly from those obtained at higher heating rates. The extent of reaction can, however, be influenced by the heating rate: for example, some reactions may not occur at a low heating rate because the products evaporate first.
Polymer combustion:
Role of pressure Volatile products are removed more efficiently under low pressure, which reduces the apparent thermal stability of the polymer. Decreased pressure also slows the decomposition of high-boiling products.
Intrinsically fire-resistant polymers:
The polymers that are most efficient at resisting combustion are those that are synthesized as intrinsically fire-resistant. However, these types of polymers can be difficult as well as costly to synthesize. Modifying different properties of the polymers can increase their intrinsic fire-resistance; increasing rigidity or stiffness, the use of polar monomers, and/or hydrogen bonding between the polymer chains can all enhance fire-resistance.
Intrinsically fire-resistant polymers:
Linear, single-stranded polymers with cyclic aromatic components Most intrinsically fire-resistant polymers are made by incorporation of aromatic cycles or heterocycles, which lend rigidity and stability to the polymers. Polyimides, polybenzoxazoles (PBOs), polybenzimidazoles, and polybenzthiazoles (PBTs) are examples of polymers made with aromatic heterocycles (Figure 2). Polymers made with aromatic monomers have a tendency to condense into chars upon combustion, decreasing the amount of flammable gas that is released. Syntheses of these types of polymers generally employ prepolymers which are further reacted to form the fire-resistant polymers.
Intrinsically fire-resistant polymers:
Ladder polymers Ladder polymers are a subclass of polymers made with aromatic cycles or heterocycles. Ladder polymers generally have one of two types of general structures, as shown in Figure 3. One type of ladder polymer links two polymer chains with periodic covalent bonds; in the other, the ladder polymer consists of a single, double-stranded chain. Both types exhibit good resistance to decomposition from heat because the chains do not necessarily fall apart if one covalent bond is broken. However, this also makes ladder polymers difficult to process, because they are not easily melted; these difficulties are compounded because ladder polymers are often highly insoluble.
Intrinsically fire-resistant polymers:
Inorganic and semiorganic polymers Inorganic and semiorganic polymers often employ silicon-nitrogen, boron-nitrogen, and phosphorus-nitrogen monomers. The non-burning character of the inorganic components of these polymers contributes to their controlled flammability. For example, instead of forming toxic, flammable gases in abundance, polymers prepared with incorporation of cyclotriphosphazene rings give a high char yield upon combustion. Polysialates (polymers containing frameworks of aluminum, oxygen, and silicon) are another type of inorganic polymer that can be thermally stable up to temperatures of 1300-1400 °C.
Flame-retardant additives and fillers:
Additives are divided into two basic types depending on the interaction of the additive and polymer. Reactive flame retardants are compounds that are chemically built into the polymer. They usually contain heteroatoms. Additive flame retardants, on the other hand, are compounds that are not covalently bound to the polymer; the flame retardant and the polymer are just physically mixed together.
Flame-retardant additives and fillers:
Only a few elements are widely used in this field: aluminum, phosphorus, nitrogen, antimony, chlorine, and bromine, and, in specific applications, magnesium, zinc, and carbon. One prominent advantage of the flame retardants (FRs) derived from these elements is that they are relatively easy to manufacture. They are used in significant quantities: in 2013, world consumption of FRs amounted to around 1.8 to 2.1 million tonnes, with sales of 4.9 to 5.2 billion USD. Market studies estimated FR demand to rise by 5 to 7% per annum, reaching 2.4 to 2.6 million tonnes by 2016-2018, with estimated sales of 6.1 to 7.1 billion USD. The most important flame-retardant systems act either in the gas phase, where they remove the high-energy radicals H and OH from the flame, or in the solid phase, where they shield the polymer by forming a charred layer and thus protect it from attack by oxygen and heat.
Flame-retardant additives and fillers:
Flame retardants based on bromine or chlorine, as well as a number of phosphorus compounds, act chemically in the gas phase and are very efficient. Others act only in the condensed phase, such as metal hydroxides (aluminum trihydrate, or ATH; magnesium hydroxide, or MDH; and boehmite), metal oxides and salts (zinc borate, zinc oxide, zinc hydroxystannate), as well as expandable graphite and some nanocomposites (see below). Phosphorus and nitrogen compounds are also effective in the condensed phase, and as they may also act in the gas phase, they are quite efficient flame retardants. Overviews of the main flame-retardant families, their modes of action and their applications are given in the literature. A good example of a very efficient phosphorus-based flame-retardant system acting in both the gas and condensed phases is aluminium diethyl phosphinate in conjunction with synergists such as melamine polyphosphate (MPP) and others. These phosphinates are mainly used to flame-retard polyamides (PA) and polybutylene terephthalate (PBT) for applications in electrical engineering/electronics (E&E).
Flame-retardant additives and fillers:
Natural fiber-containing composites Besides providing satisfactory mechanical properties and renewability, natural fibers are easier to obtain and much cheaper than man-made materials. Moreover, they are more environmentally friendly. Recent research focuses on application of different types of fire retardants during the manufacturing process as well as applications of fire retardants (especially intumescent coatings) at the finishing stage.
Flame-retardant additives and fillers:
Nanocomposites Nanocomposites have become a hotspot in fire-safe polymer research because of their relatively low cost and high flexibility for multifunctional properties. Gilman and colleagues did the pioneering work, demonstrating improved fire retardancy from nanodispersed montmorillonite clay in the polymer matrix. Later, organomodified clays, TiO2 nanoparticles, silica nanoparticles, layered double hydroxides, carbon nanotubes, and polyhedral silsesquioxanes were shown to work as well. Recent research has suggested that combining nanoparticles with traditional fire retardants (e.g., intumescents) or with surface treatment (e.g., plasma treatment) effectively decreases flammability.
Flame-retardant additives and fillers:
Problems with additives and fillers Although effective at reducing flammability, flame-retardant additives and fillers have disadvantages as well. Their poor compatibility, high volatility and other deleterious effects can change the properties of polymers. In addition, many fire retardants produce soot and carbon monoxide during combustion, and halogen-containing materials raise further concerns about environmental pollution.
**Geodemographic segmentation**
Geodemographic segmentation:
In marketing, geodemographic segmentation is a multivariate statistical classification technique for discovering whether the individuals of a population fall into different groups by making quantitative comparisons of multiple characteristics, under the assumption that the differences within any group should be smaller than the differences between groups.
Principles:
Geodemographic segmentation is based on two simple principles: (1) people who live in the same neighborhood are more likely to have similar characteristics than are two people chosen at random; and (2) neighborhoods can be categorized in terms of the characteristics of the population they contain, so that any two neighborhoods can be placed in the same category, i.e., they contain similar types of people, even though they are widely separated.
Clustering algorithms:
The use of different algorithms leads to different results, but there is no single best approach to choosing among them, just as no algorithm offers any theoretical guarantee of optimality. One of the most frequently used techniques in geodemographic segmentation is the widely known k-means clustering algorithm; in fact, most current commercial geodemographic systems are based on a k-means algorithm. Still, clustering techniques from artificial neural networks, genetic algorithms, or fuzzy logic are more efficient within large, multidimensional databases (Brimicombe 2007).
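As a sketch of how a k-means-based system groups neighborhoods into segments, the following minimal implementation clusters hypothetical neighborhood profiles. The two illustrative variables and all names here are invented for the example, not taken from any commercial geodemographic system:

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distance from every point to every centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical neighbourhood profiles: [median income (scaled), % renters]
rng = np.random.default_rng(1)
affluent = rng.normal([0.8, 0.2], 0.05, size=(50, 2))
urban = rng.normal([0.4, 0.7], 0.05, size=(50, 2))
data = np.vstack([affluent, urban])
labels, centroids = kmeans(data, k=2)
```

Because within-group differences are smaller than between-group differences, the two synthetic neighborhood types end up in separate clusters.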
Clustering algorithms:
Neural networks can handle non-linear relationships, are robust to noise and exhibit a high degree of automation. They do not assume any hypotheses regarding the nature or distribution of the data, and they provide valuable assistance in handling problems of a geographical nature that, to date, have been impossible to solve. One of the best known and most efficient neural network methods for achieving unsupervised clustering is the Self-Organizing Map (SOM). SOM has been proposed as an improvement over the k-means method, for it provides a more flexible approach to census data clustering. The SOM method has recently been used by Spielman and Thill (2008) to develop geodemographic clustering of a census dataset concerning New York City.
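A minimal Self-Organizing Map can be sketched as follows. The grid size, learning-rate schedule and training data are illustrative assumptions, not the configuration used by Spielman and Thill:

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal SOM: for each sample, find the best-matching unit (BMU)
    on the grid, then pull the BMU and its grid neighbours toward the
    sample, with learning rate and neighbourhood shrinking over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    # grid coordinates of every unit, for the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit: closest weight vector to the sample
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 0.1
        # Gaussian neighbourhood on the grid around the BMU
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2)
                   / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

# Hypothetical 2-D data in [0, 1): e.g. two scaled census variables
data = np.random.default_rng(3).random((200, 2))
weights = train_som(data, grid=(4, 4), iters=300)
```

After training, each grid unit's weight vector represents a prototype profile, and nearby units on the grid hold similar prototypes, which is what makes the map useful for visualising census clusters.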
Clustering algorithms:
Another way of characterizing an individual polygon's similarity to all the regions is based on fuzzy logic. The basic concept of fuzzy clustering is that an object may belong to more than one cluster. In binary logic, membership is limited by a yes/no definition: an object either belongs to a cluster or it does not. Fuzzy clustering instead allows a spatial unit to belong to more than one cluster, with varying membership values. Most studies combining geodemographic analysis and fuzzy logic employ the Fuzzy C-Means algorithm and the Gustafson-Kessel algorithm (Feng and Flowerdew 1999).
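A minimal sketch of the Fuzzy C-Means update loop, in which each spatial unit receives a membership degree in every cluster rather than a hard assignment. The data and parameter choices here are illustrative assumptions:

```python
import numpy as np

def fuzzy_c_means(points, c, m=2.0, iters=150, seed=0):
    """Fuzzy C-Means: alternate between computing cluster centers from
    membership-weighted means and updating memberships from distances.
    Each row of the membership matrix u sums to 1."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), c))
    u /= u.sum(axis=1, keepdims=True)      # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m                        # m is the fuzzifier
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)              # avoid division by zero
        # standard FCM membership update: proportional to d^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers

# Hypothetical spatial units described by two scaled census variables
rng = np.random.default_rng(2)
data = np.vstack([rng.normal([0.2, 0.2], 0.05, (40, 2)),
                  rng.normal([0.8, 0.8], 0.05, (40, 2))])
u, centers = fuzzy_c_means(data, c=2)
```

A spatial unit lying between two cluster centers ends up with intermediate membership values in both, which is exactly the behaviour binary (hard) clustering cannot express.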
Systems:
Famous geodemographic segmentation systems include Claritas Prizm (US), CanaCode Lifestyles (Canada), PSYTE HD (Canada), Tapestry (US), CAMEO (UK), ACORN (UK) and MOSAIC (UK). New systems targeting subgroups of the population are also emerging; for example, Segmentos examines the geodemographic lifestyles of Hispanics in the United States. Both MOSAIC and ACORN use onomastics to infer ethnicity from residents' names.
Systems:
CanaCode Lifestyle Clusters CanaCode Lifestyle Clusters is developed by Manifold Data Mining and classifies Canadian postal codes into 18 distinct major lifestyle groups and 110 niche lifestyles. It uses current-year statistics on over 10,000 variables ranging from demographics to socioeconomic factors to expenditures to lifestyle traits (e.g. consumer behaviors) including product usage, media usage, and psychographics.
Systems:
PSYTE HD PSYTE HD Canada is a geodemographic market segmentation system that classifies Canadian postal codes and Dissemination Areas into 57 unique, mutually exclusive lifestyle groups and neighborhood types. PSYTE HD Canada is built on the Canadian Census demographic and socioeconomic base, in addition to various other third-party data inputs, combined in a state-of-the-art cluster-build environment. The resultant clusters represent the most accurate snapshots of Canadian neighborhoods available. PSYTE HD Canada is an effective tool for analyzing customer data and potential markets, gaining market intelligence and insight, and interpreting consumer behavior across the diverse Canadian marketplace.
Systems:
CAMEO system The CAMEO Classifications are a set of consumer classifications that are used internationally by organisations as part of their sales, marketing and network planning strategies.
CAMEO UK has been built at postcode, household and individual level and classifies over 50 million British consumers. It has been built to accurately segment the British market into 68 distinct neighbourhood types and 10 key marketing segments.
Internationally Global CAMEO is the largest consumer segmentation system in the world, covering 40 nations. There is also single global classification CAMEO International which segments across borders.
CAMEO was developed and is maintained by Callcredit Information Group.
Systems:
Acorn system A Classification Of Residential Neighbourhoods (Acorn) is developed by CACI in London. It is the only geodemographic tool currently available that is built using current-year data rather than 2011 Census information. Acorn helps to analyse and understand consumers in order to increase engagement with customers and service users and to deliver strategies across all channels. Acorn segments all 1.9 million UK postcodes into 6 categories, 18 groups and 62 types.
Systems:
MOSAIC system Mosaic UK is Experian's people classification system, originally created by Prof Richard Webber (visiting Professor of Geography at King's College London) in association with Experian. The latest version of Mosaic was released in 2009. It classifies the UK population into 15 main socio-economic groups and, within these, 66 different types.
Mosaic UK is part of a family of Mosaic classifications that covers 29 countries including most of Western Europe, the United States, Australia and the Far East.
Systems:
Mosaic Global is Experian's global consumer classification tool. It is based on the simple proposition that the world's cities share common patterns of residential segregation. Mosaic Global is a consistent segmentation system that covers over 400 million of the world's households using local data from 29 countries. It has identified 10 types of residential neighbourhood that can be found in each of the countries.
Systems:
geoSmart system In Australia, geoSmart is a geodemographic segmentation system based on the principle that people with similar demographic profiles and lifestyles tend to live near each other. It is developed by an Australian supplier of geodemographic solutions, RDA Research.
geoSmart geodemographic segments are produced from the Australian Census (Australian Bureau of Statistics) demographic measures and modeled characteristics, and the system is updated for recent household growth. The clustering creates a single segment code that is represented by a descriptive statement or a thumbnail sketch.
In Australia, geoSmart is mainly used for database segmentation, customer acquisition, trade area profiling and letterbox targeting, although it can be used in a broad range of other applications.
The Output Area Classification The Output Area Classification (OAC) is the UK Office for National Statistics' (ONS) free and open geodemographic segmentation based upon the UK Census of Population 2011. It uses 41 census variables to build a three-tier classification of 7, 21, and 52 groups.
The perceived advantages of OAC over other commercial classifications stem from the fact that the methodology is open and documented, and that the data is open and freely available to both the public and commercial organizations, subject to licensing conditions.
OAC has a wide variety of potential applications, from geographic analysis to social marketing and consumer profiling. The UK public sector is one of the main users of OAC.
ESRI Community Tapestry This method classifies US neighborhoods into 67 market segments, based on socioeconomic and demographic factors, then consolidates these segments into 14 types of LifeModes with names such as "High Society", "Senior Styles", and "Factories and Farms". The smallest spatial granularity of data is produced at the level of the U.S. Census Block Group.
See also Market_segmentation#Companies_(proprietary_segmentation_databases)
**Virulent Newcastle disease**
Virulent Newcastle disease:
Virulent Newcastle disease (VND), formerly exotic Newcastle disease, is a contagious viral avian disease affecting many domestic and wild bird species; it is transmissible to humans. Though it can infect humans, most human cases are asymptomatic; rarely, it can cause mild fever, influenza-like symptoms, and/or conjunctivitis. Its effects are most notable in domestic poultry due to their high susceptibility and the potential for severe impacts of an epizootic on the poultry industries. It is endemic to many countries. No treatment for VND is known, but the use of prophylactic vaccines and sanitary measures reduces the likelihood of outbreaks.
Virulent Newcastle disease:
The disease is caused by Newcastle disease virus (NDV), an avulavirus. Strains of Newcastle disease virus have been used to treat cancer in humans, since the virus appears to preferentially infect and kill cancerous cells. Strains of Newcastle disease virus have also been used to create viral vector vaccine candidates against Ebola and COVID-19.
History:
Newcastle disease was first identified in Java, Indonesia, in 1926, and in Newcastle-upon-Tyne, England, in 1927. However, it may have been prevalent as early as 1898, when a disease wiped out all the domestic fowl in northwest Scotland. The policy of slaughter ceased in England and Wales on 31 March 1963, except for the peracute form of Newcastle disease and for fowl plague. In Scotland the slaughter policy continued for all types of fowl pest. Interest in the use of NDV as an anticancer agent has arisen from the ability of NDV to selectively kill human tumour cells with limited toxicity to normal cells. Since May 2018, California Department of Food and Agriculture staff and the United States Department of Agriculture have been working on eliminating VND in Southern California, and more than 400 birds have been confirmed to have VND. On February 27, 2019, the California state veterinarian, Annette Jones, expanded the quarantine area in Southern California, and cases of VND were subsequently confirmed in Northern California on March 15, 2019, and in Arizona on April 5, 2019.
Causal agent:
The causal agent, Newcastle disease virus (NDV), is a variant of avian orthoavulavirus 1, a negative-sense, single-stranded RNA virus. NDV belongs to the subfamily Avulavirinae, which infect birds. Transmission occurs by exposure to faecal and other excretions from infected birds, and through contact with contaminated food, water, equipment, and clothing.
Strains NDV strains can be categorised as velogenic (highly virulent), mesogenic (intermediate virulence), or lentogenic (nonvirulent). Velogenic strains produce severe nervous and respiratory signs, spread rapidly, and cause up to 90% mortality. Mesogenic strains cause coughing, affect egg quality and production, and result in up to 10% mortality. Lentogenic strains produce mild signs with negligible mortality.
Transmission:
NDV is spread primarily through direct contact between healthy birds and the bodily discharges of infected birds. The disease is transmitted through infected birds' droppings and secretions from the nose, mouth, and eyes. NDV spreads rapidly among birds kept in confinement, such as commercially raised chickens.
High concentrations of the NDV are found in birds' bodily discharges; therefore, the disease can be spread easily by mechanical means. Virus-bearing material can be picked up on shoes and clothing and carried from an infected flock to a healthy one.
NDV can survive for several weeks in a warm and humid environment on birds' feathers, manure, and other materials. It can survive indefinitely in frozen material. However, the virus is destroyed rapidly by dehydration and by the ultraviolet rays in sunlight.
Smuggled pet birds, especially Amazon parrots from Latin America, pose a great risk of introducing NDV into the US. Amazon parrots are carriers of the disease, but do not show symptoms, and are capable of shedding NDV for more than 400 days.
Clinical findings:
Clinical signs Signs of infection with NDV vary greatly depending on factors such as the strain of virus and the health, age and species of the host.
Clinical findings:
The incubation period for the disease ranges from 4 to 6 days. An infected bird may exhibit several signs, including respiratory signs (gasping, coughing), nervous signs (depression, inappetence, muscular tremors, drooping wings, twisting of head and neck, circling, complete paralysis), swelling of the tissues around the eyes and neck, greenish, watery diarrhoea, misshapen, rough- or thin-shelled eggs and reduced egg production.
Clinical findings:
In acute cases, death is very sudden, and, at the beginning of the outbreak, the remaining birds do not seem to be sick. In flocks with good immunity, however, the signs (respiratory and digestive) are mild and progressive and are followed after 7 days by nervous symptoms, especially twisted heads.
Postmortem lesions Petechiae in the proventriculus and on the submucosae of the gizzard are typical; also, severe enteritis of the duodenum occurs. The lesions are scarce in hyperacute cases (first day of outbreak).
Diagnosis:
Immunological tests Enzyme-linked immunosorbent assay, polymerase chain reaction, and sequence technology tests have been developed.
Virus isolation:
Samples For routine isolation of NDV from chickens, turkeys, and other birds, samples are obtained by swabbing the trachea and the cloaca. Cotton swabs can be used. The virus can also be isolated from the lungs, brain, spleen, liver, and kidneys.
Handling Prior to shipping, samples should be stored at 4 °C (refrigerator). Samples must be shipped in a padded envelope or box. Samples may be sent by regular mail, but overnight is recommended.
Prevention:
Any animals showing symptoms of Newcastle disease should be isolated immediately. New birds should also be vaccinated before being introduced to a flock. An inactivated viral vaccine is available, as well as various combination vaccines. A thermotolerant vaccine is available for controlling Newcastle disease in underdeveloped countries. Schiappacasse et al. (2020) demonstrated successful, complete inactivation of the virus using a nonthermal plasma generator.
History of NDV in cancer research:
Though the oncolytic effect of NDV was documented as early as the 1950s, later advances in research into using viruses in cancer therapy came with the advent of reverse-genetics technologies. Csatary and colleagues later documented anti-cancer effects in patients with brain gliomas. One of the main issues in using NDV is the host/patient immune response against the virus itself, which, before the advent of reverse-genetics technology, decreased the potential applicability of NDV as a cancer treatment. As of 2018 there had been several clinical studies into the use of NDV for cancer treatment, but the research quality was low and the outcomes inconclusive.
**QuarkXPress**
QuarkXPress:
QuarkXPress is desktop publishing software for creating and editing complex page layouts in a WYSIWYG (What You See Is What You Get) environment. It runs on macOS and Windows. It was first released by Quark, Inc. in 1987 and is still owned and published by them.
QuarkXPress:
The most recent version, QuarkXPress 2022 (internal version number 18.0.0), allows publishing in English ("International and U.S.") and 36 other languages, including Arabic, Chinese, Japanese, Portuguese, German, Korean, Russian, French and Spanish. QuarkXPress is used by individual designers, large publishing houses and corporations to produce a variety of layouts, from single-page flyers and collateral to the multi-media projects required for magazines, newspapers, catalogs and the like. More recent versions have added support for ebooks, Web and mobile apps.
History:
Quark was founded by Tim Gill in 1981 with a $2,000 loan from his parents; Fred Ebrahimi joined as CEO in 1986.
History:
The first version of QuarkXPress was released in 1987 for the Macintosh. Five years passed before a Microsoft Windows version (3.1) followed in 1992. In the 1990s, QuarkXPress became widely used by professional page designers, the typesetting industry and printers. In particular, the Mac version of 3.3 (released in 1996) was seen as stable and trouble-free, working seamlessly with Adobe's PostScript fonts as well as with Apple's TrueType fonts. Quark's AppleScript support was a significant factor in both Quark's and AppleScript's success. In 1989, QuarkXPress incorporated an application programming interface called XTensions, which allows third-party developers to create custom add-on features for the desktop application. XTensions, along with Adobe's Photoshop plugins, was one of the first examples of a developer allowing others to create software add-ons for their application.
History:
Although competitors like PageMaker existed, QuarkXPress was so dominant that it had an estimated 95% market share during the 1990s. After QuarkXPress 3.3, QuarkXPress was seen as needing significant improvements and users criticized it for its overly long innovation cycles.
Gill sold his 50% stake in the company in 1999 for a reported $500 million.
History:
The release of QuarkXPress version 5 in 2002 led to disappointment from Apple's user base, as QuarkXPress did not support Mac OS X, while Adobe InDesign 2.0, launched in the same week, did. QuarkXPress also lost market share due to an increasing price gap between it and InDesign: InDesign CS cost $699, while QuarkXPress 6 cost $945. The later Adobe Creative Suite (2003), which users purchased for access to Photoshop and Illustrator, included InDesign. In response to a shrinking user base, Quark started to lower its pricing levels in 2004. In December 2006, Quark licensed the Windows version of QuarkXPress 5 to be distributed for free on the cover of a UK computer magazine, Computer Shopper, with the idea of enticing consumers to upgrade to later versions.
History:
Having arrived late with a Mac OS X version, Quark took a different approach to porting to Intel-native (Universal Binary) applications on the Mac, and released its Universal Binary version (QuarkXPress 7) months before Adobe ported InDesign. QuarkXPress 9 won Product of the Year in 2011 (MacWorld Awards 2011: Grand Prix Winner).
Since 2015, QuarkXPress has been updated on an annual cycle, with major version releases from 2015 to present.
Use and features:
The package provides the basic functionality of font, alignment, spacing, and color, but it also provides its users with professional typesetting options such as kerning, curving text along a line, and ligatures.
A QuarkXPress document contains text and graphics boxes. The boxes can be reshaped, layered, and given varying levels of transparency and text alignment (runaround). Both box positioning and the positioning of graphics or text within a box are allowed with an accuracy of one-thousandth of an inch.
Color control allows the full-use of printing-press standard Pantone or Hexachrome inks, along with a variety of other color-space options. Draft output can be printed on conventional desktop printers. Process color (CMYK) separation films can be produced for printing-presses. QuarkXPress also offers the ability for composite work-flows, both with PostScript and PDF output.
QuarkXPress offers layout synchronization, multiple undo/redo functionality, XML and web page (HTML) features, and support for direct PDF import and output. Documents can be verified (pre-flight) before printing. This high-level print preview automatically identifies conflicts and other printing problems. Adobe has a similar feature in InDesign.
The Composition Zones feature makes QuarkXPress the only desktop application with multi-user capabilities, allowing multiple users to edit different zones on the same page. Composition Zones pushes collaboration a step further than simultaneous text/picture editing (possible with Quark CopyDesk since 1991), as it allows layout and graphic elements to be edited outside the layout application.
User-defined rules, output specs, and layout specs can be used for intelligent templates and enable resource sharing (for example, server-based style sheet definitions).
Version 6.5, released at the end of 2004, added enhanced support for the Photoshop format (PSD). The PSD integration and picture manipulation features led to QuarkXPress receiving a number of awards, such as the Macworld Editor's Choice for 2004.
Version 7 added support for OpenType, Unicode, JDF, and also PDF/X-export. QuarkXPress 7 also added unique features, such as native transparency at the color level.
Use and features:
QuarkXPress 8 introduced a completely new user interface, support for drag and drop, PDF 1.7 import, AI import and a global file format. Design grids can be assigned to pages and boxes to allow unlimited baseline grids. Hanging characters can be applied and customized by character and amount to hang outside the box. This is the first version to include built-in Adobe Flash authoring: designers can create Flash content including sound, video, animation and interactivity without programming. In October 2008, QuarkXPress 8 won the MacUser Award for Print Publishing Software of the Year. With version 9, QuarkXPress extended its crossmedia publishing approach and can now also export to eBooks (ePub3 and Blio) and native apps (for the iPad). With App Studio, which is shipped with QuarkXPress, designers can even create and design their own apps. Additionally, QuarkXPress 9 offers cascading styles (stylesheets based on text content), callouts (anchored objects that flow with the text based on position rules), a wizard for creating complex and editable Bézier paths (ShapeMaker), bullets and numbers (with import and export from/to Microsoft Word) and more.
Use and features:
The Mac version of QuarkXPress 9 is for Intel processors only, making QuarkXPress 8.5.1 the last choice for PPC-based Macs.
Use and features:
QuarkXPress 10 was described by Quark as a major rewrite of the software, on the Mac platform in particular, to move it from the older Carbon API to Cocoa. It also included a new, modern graphics engine, Xenon. During the lifecycle of version 10, new features included Retina Display support, PDF pass-through transparency, notes, redlining, increased zoom (8000%) and the ability to create HTML5 animations for inclusion in App Studio tablet and smartphone apps.
QuarkXPress 2015 was the first version to use a different naming scheme. It was completely 64-bit and added fixed-layout ePub and Kindle export, as well as exporting layouts as PDF/X-4. Quark claimed to have added the top 10 user-requested features. QuarkXPress 2016 included the ability to import, and to copy and paste, content from other applications and file formats as native QuarkXPress objects. The release also included revamped digital capabilities, including the ability to create HTML5 Publications. Top user-requested features included multi-gradient blends and a color picker tool.
QuarkXPress 2017 continued the new naming scheme and established an annual release cycle. The headline features include non-destructive image editing, various typography enhancements such as text stroking and text shading, responsive HTML5, and unlimited iOS apps for no additional cost (outside of the Apple Developer fees). Other user-requested features included adaptive layout conversion for print, smart quotes, and proportional leading.
On March 1, 2018, Quark announced QuarkXPress 2018, stating it would be available on May 16, 2018, continuing its now familiar annual release cycle. The headline features in version 2018 include new OpenType controls, hyphenation strictness, support for color fonts, IDML import (to convert Adobe InDesign documents to QuarkXPress) and the ability to create unlimited Android apps for no additional cost (outside of the Google Play fees).
Server version:
In the beginning of 2003, Quark released a server version of QuarkXPress, originally called QuarkDDS. Renamed in 2006 to "QuarkXPress Server", the product is now primarily sold with Quark Publishing Platform – the central hub of the company's content automation solutions. QuarkXPress Server is a Java application that takes content components (text, images, video, data, charts, etc.) and automatically assembles them into different formats, from PDFs to responsive HTML and web apps. As the content is assembled into templates using granular content components, the output can be highly customized for different audiences in terms of content and branding. The system relies on XML.
Extensions and tools:
Quark Interactive Designer is an extension for creating Adobe Flash content from QuarkXPress documents. It enables the export of QuarkXPress projects in the SWF (Flash) file format, allowing documents created for print or web production to also be output as Flash advertisements. No knowledge of timelines or ActionScript is necessary for this purpose. Since QuarkXPress is natively capable of creating HTML projects, this allows web designers to design and build their HTML and Flash elements and combine them all in a single application. Resulting files can be exported as SWF Flash files or as standalone Projector applications for macOS or Windows. Quark Interactive Designer makes use of palette-based actions, similar to those found in PowerPoint, to animate text and graphics. It also allows some use of button-triggered behaviors and the embedding of QuickTime video, Flash Video, and audio files.
Version history:
QuarkXPress 1 (1987) – Mac OS only.
QuarkXPress 2 (1989) – First non-English versions (e.g. French, German).
QuarkXPress 2.1 (1989) – Enhanced typographic control, such as user-definable kerning tables.
QuarkXPress 3 (1990) – First version with measurement palette and support for libraries.
QuarkXPress 3.1 (1992) – First version to also support Windows.
QuarkXPress 3.2 (1993) – First version to support AppleScript and color management.
QuarkXPress 3.3 (1996) – First version to support PPC natively. First Passport Version (optional).
QuarkXPress 3.32 (1996) – Support for QuarkImmedia. This is the last version which works on Windows 3.x (requires Win32s to be installed).
QuarkXPress 4 (1997) – First version with bézier curves. Notable interface improvements include pop-up tools and tabbed dialog boxes.
QuarkXPress 4.1 (1999) – First version to also support PDF and XML.
QuarkXPress 5 (2002) – First version to offer tables and to export HTML.
QuarkXPress Server (QuarkDDS) released.
QuarkXPress 6 (2003) – First version to support Mac OS X.
QuarkXPress 6.1 (2004) – First version with Excel Import filter.
QuarkXPress 6.5 (2004) – First version to also support the Document Object Model and features for picture retouching.
QuarkXPress 6.52 (2006) – Bug fixes, released after Quark 7.
QuarkXPress 7 (2006) – First version to support OpenType, Unicode, PDF/X, Shadows/Transparencies, Job Definition Format and Composition Zones.
QuarkXPress 7.01 (8 August 2006) – First native version for Intel Macs (Universal binary), plus PPML support.
QuarkXPress 7.02 (2006) – Additional language support in Passport.
QuarkXPress 7.1 (2007) – Performance update.
QuarkXPress 7.2 (2007) – First version to support Windows Vista, additional languages.
QuarkXPress 7.3 (2007) – Increased UI localization and PDF support, improved performance and stability.
QuarkXPress 7.31 (2007) – Certification on Windows Vista, support for Mac OS X 10.5 ("Leopard"), enhancements to spell checking.
QuarkXPress 7.4 (2008) – Non-public release, only for QPS customers.
QuarkXPress 7.5 (2008) – Bug-fix release, released after release of Quark 8.
QuarkXPress 8 (2008) – New UI, drag-and-drop support, direct image manipulation, customizable optical margin alignment, multiple baseline grids, East Asian support, built-in Flash authoring.
QuarkXPress 8.01 (2008) – Spellchecker enhancements.
QuarkXPress 8.02 (2009) – Five new languages and new Pantone libraries.
QuarkXPress 8.1 (2009) – Numerical scale, native transparency and layers in PDF, improved spell checker and other feature improvements. Supports Snow Leopard and Windows 7.
QuarkXPress 8.12 (2009) – Bug-fix release.
QuarkXPress 8.15 (2010) (Mac OS X only) – Fixes activation issues on certain Apple hardware.
QuarkXPress 8.1.6 (2010) – Speed optimizations.
QuarkXPress 8.1.6.2 (2010) – Bug-fix release.
QuarkXPress 8.5 (2010) – Bug fixes, auto updater, DOCX import.
QuarkXPress 8.5.1 (2011) – Bug fixes, last Universal Binary version.
QuarkXPress 9 (2011) – Nested Styles, callouts (anchored elements outside text boxes), bullets and numbers, shape wizard, multi-image import, ePUB Export.
QuarkXPress 9.0.1 (2011) – Bug-fix release.
QuarkXPress 9.1 (2011) – Addition of "App Studio", which allows exporting multimedia apps for the iPad out of QuarkXPress. First version to officially support Mac OS X Lion.
QuarkXPress 9.2 (2012) – Export to ePUB 3.0, plus the ability to create ePUB files from scratch. Improvements to App Studio, including iOS 5 support.
QuarkXPress 9.2.1 (2012) (Mac OS X only) – Fixes the "missing icons" bug caused by Lion 10.7.3.
QuarkXPress 9.2.1.1 (2012) – Added support for exporting to the Retina iPad.
QuarkXPress 9.3 (2012) – Export eBooks directly to Amazon Kindle format, plus other minor fixes including EPS and PDF color management.
QuarkXPress 9.3.1 (2012) – Compatibility with the OS X Mountain Lion (10.8) Gatekeeper feature.
QuarkXPress 9.3.1.1 (2012) – Fixes a spellchecker crash.
QuarkXPress 9.5 (2012) – Allows the creation of 100% HTML5-based content on native apps and platforms such as Android.
QuarkXPress 9.5.1 (2013) – Adds page stacks, bug fixes.
QuarkXPress 9.5.1.1 (2013) – Bug fixes.
QuarkXPress 9.5.2 (2013) – Download manager, bug fixes.
QuarkXPress 9.5.3 (2013) – Fixes known issues with PDF export.
QuarkXPress 9.5.4 (2013) – Support for OS X Mavericks.
QuarkXPress 10 (September 2013).
QuarkXPress 10.0.1 (2013) – Support for OS X Mavericks and Windows 8.1.
QuarkXPress 10.1 (2014) – 8000% zoom, smart guides, HTML5-based animations, image export, new book function.
QuarkXPress 10.2 (2014) – Speed improvements, notes, redlining.
QuarkXPress 10.2.1 (2014) – Bug fixes.
QuarkXPress 10.5 (2014) – Support for OS X Yosemite.
QuarkXPress 2015 Release 11.0 (April 2015) – 64-bit version only, over 5 meters maximum page size, fixed-layout interactive eBooks (FXL ePUB), footnotes and endnotes, text variables, custom paper sizes, user-definable shortcut keys (Mac only), table styles, PDF/X-4.
May 2015 Release (11.0.0.1) – Bug fixes.
July 2015 Release (11.0.1) – Faster launch speed.
Sep 2015 Release (11.1) – Support for Windows 10.
Oct 2015 Release (11.2) – Support for OS X El Capitan.
QuarkXPress 2016 Release 12.0 (May 2016) – Convert AI/EPS/PDF to editable objects; copy Illustrator, InDesign, MS Office content as editable objects; create HTML5 Publications; multi-color gradients; OpenType Stylistic Sets; Eyedropper.
QuarkXPress 2017 Release 13.0 (May 2017) – Non-destructive image editing, transparency blend modes, text shading and text framing, stroking of live text, merge/split columns, responsive HTML5 Publications, create iOS apps (for free, no monthly fees).
QuarkXPress 2017 Release 13.0.1 (June 2017).
QuarkXPress 2017 Release 13.0.2 (July 2017).
QuarkXPress 2017 Release 13.1 (October 2017) – Support for macOS High Sierra.
QuarkXPress 2017 Release 13.1.1 (December 2017) – Fix for the PSD filter.
QuarkXPress 2017 Release 13.2 (January 2018) – Beta support for opening Adobe InDesign Markup Language (IDML) files.
QuarkXPress 2017 Release 13.2.1 (January 2018) – Fix for PDF output.
QuarkXPress 2017 Release 13.2.4 (June 2018).
QuarkXPress 2018 Release 14.0 (May 2018) – OpenType enhancements, color fonts support, hyphenation strictness, InDesign IDML import, tagged/accessible PDF, built-in JavaScript v8 support, create Android apps, digital preview improvements, HTML5 export optimizations, unified Windows/Mac interface.
QuarkXPress 2018 Release 14.0.1 (July 2018).
QuarkXPress 2018 Release 14.1.2 (October 2018) – Now available in the Mac App Store. Dark theme for Mojave.
QuarkXPress 2018 Release 14.2 (December 2018) – Adds typography for Indian languages such as Hindi.
QuarkXPress 2018 Release 14.2.1 (January 2019).
QuarkXPress 2019 Release 15.0 (July 2019).
QuarkXPress 2020 Release 16.0 (2020).
QuarkXPress 2021 Release 17.0.01 (October 2021).
QuarkXPress 2022 Release 18.0 (February 2022) – Subscription offering in addition to the perpetual license option; access to a royalty-free stock image library.
QuarkXPress 2023 Release 19.0 (April 2023).
**ARHGAP9**
ARHGAP9:
Rho GTPase-activating protein 9 is an enzyme that in humans is encoded by the ARHGAP9 gene.
Function:
This gene encodes a member of the Rho-GAP family of GTPase activating proteins. The protein has substantial GAP activity towards several Rho-family GTPases in vitro, converting them to an inactive GDP-bound state. It is implicated in regulating adhesion of hematopoietic cells to the extracellular matrix. Multiple transcript variants encoding different isoforms have been found for this gene.
**Immunodeficiency with hyperimmunoglobulin M**
Immunodeficiency with hyperimmunoglobulin M:
Immunodeficiency with hyperimmunoglobulin M is a rare disorder characterized by recurrent infections, low or absent IgG, IgE, and IgA levels, and normal or elevated levels of IgM and IgD.
**Tales of VS.**
Tales of VS.:
Tales of VS. (テイルズ オブ バーサス, Teirusu Obu Bāsasu) (pronounced as "Tales of Versus") is a crossover fighting game featuring various characters across the Tales video game series. It was developed by Matrix Software and published by Namco Bandai Games for the PlayStation Portable on August 6, 2009 in Japan. It was not localized for release in any other regions.
Gameplay:
The game takes the basic fighting engine from the main series of Tales video games, called the "Linear Motion Battle System", and uses it in a crossover fighting video game in the vein of Dissidia Final Fantasy and Super Smash Bros. Characters are able to carry out normal attacks as well as "Artes", special techniques which have been utilized in almost every Tales game. Items appear on the playing field that also affect gameplay; food such as sushi may offer a temporary stat boost, while others, like a boomerang-blade, may offer a temporary means of attack. Battles can feature up to four characters at a time, with an emphasis on two-on-two battles, although free-for-all and one-vs.-two battles do occur as well. Stages are interactive, with different stages having different effects, such as rays of light or rolling barrels that cause damage, or disappearing and reappearing platforms for characters to jump on. Customization also plays a large role in the game. Upon winning battles, characters gain experience in the form of "grade points", which are used to upgrade statistics. Grade points may be distributed and redistributed between battles to statistics such as health points, attack power, or defense. Additionally, grade points can be used to equip skills and abilities, such as a "dash" move that increases general speed, or a move that increases jumping height. Like games in the main series, characters are also able to customize their equipment, although it doesn't change the character's physical appearance.
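The grade-point system described above is essentially a reallocatable stat budget. The following is a minimal sketch of that idea; the class, stat names, and point values are illustrative assumptions, not data taken from the game:

```python
class CharacterBuild:
    """A pool of grade points that can be freely distributed and
    redistributed across stats between battles (illustrative model)."""

    STATS = ("hp", "attack", "defense")

    def __init__(self, grade_points: int):
        self.pool = grade_points
        self.allocated = {s: 0 for s in self.STATS}

    def allocate(self, stat: str, points: int) -> None:
        """Spend points from the pool on one stat."""
        if stat not in self.allocated:
            raise KeyError(f"unknown stat: {stat}")
        if points > self.pool:
            raise ValueError("not enough grade points")
        self.pool -= points
        self.allocated[stat] += points

    def reset(self) -> None:
        """Return every allocated point to the pool, modelling the
        redistribution allowed between battles."""
        self.pool += sum(self.allocated.values())
        self.allocated = {s: 0 for s in self.STATS}
```

Because `reset` returns the full budget, a player can try one allocation for a battle and a completely different one for the next, which is the behaviour the game description implies.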
Story mode:
The game features a number of different game modes. The main part of the game, the "Story Mode", is where the bulk of the gameplay occurs and where the game's overall plot unfolds. In game, it is referred to as the "Yggdrasill Mode", named after the "World Tree" that the game's story is centered around. This mode focuses entirely on two-on-two battles, where the player chooses a preset duo, some from the same game, like Lloyd Irving and Colette Brunel of Tales of Symphonia, and some being random pairings, such as Farah Oersted of Tales of Eternia with Yuri Lowell of Tales of Vesperia. The player directs the characters around a world map with preset paths and destinations, not allowing for exploration beyond the straight line. Different events occur when the character stops on different icons along the paths, typically leading to story sequences, required battles, or optional side-quests. Finishing the game with certain character sets, or playing through the course of the game and making certain choices, unlocks further sets of characters to play through the game.
Other modes:
There are multiple other aspects of the game beyond the "Story Mode". The game's "Arcade Mode" simplifies things down to continuous fights against computer-controlled opponents in a preset order, whereas the "Survival Mode" plays similarly but entails advancing for as long as possible against increasingly stronger opponents. The game also has a "Special Battle Mode", where special challenges, scenarios, or restrictions are set up for the player to meet in order to win the battle. For example, a winning condition for a battle may be using certain characters, being the first to attack, or being restricted from using certain moves. A general "Training Mode" also exists, where the player can practice moves against a dummy opponent.
A wireless "Multiplayer Mode" is also available, for up to four players to battle amongst each other. "Grade Points" earned from performance in multiplayer battles can be used in the "Story Mode" as well. In addition to the specific "Multiplayer Mode", a number of the other modes can be played with a second player in a cooperative manner. Beyond the various fighting modes, the game also has other areas, such as a specific "Customization Mode" that is just for setting up characters for battle, and an "Item Library", where unlocked content, such as music, movies, or collectable cards, can be viewed.
Tales of Wallbreaker:
Tales of VS. also contains a detailed mini-game, Tales of Wallbreaker, separate from the typical fighting that takes place in the rest of the main game. This part of the game opts for a completely different graphical style than the rest of the game, modeled after traditional 2D sprite-based graphics similar to the first few Tales games. The gameplay still revolves around fighting on a 2D plane, but the goal is no longer based on draining the other character's health. Instead, there are two walls, one behind each character, and the object is to knock the other character into the wall enough times to make it shatter. The mini-game contains twenty-one characters, including thirteen exclusive characters that cannot be played as in the main game. While largely a stand-alone game, characters for Tales of Wallbreaker can be unlocked through actions that take place in the "Story Mode" of the main game.
Story:
Unlike many crossover scenarios, where characters from one world are transplanted into another, in Tales of VS. all of the characters from different games come together in a new, original world called Dailantia. The world is largely drained of resources, with only four countries left, all needing the remaining resources. The four countries are the Holy Kingdom of Hazel, the Knight States of Fleswelg, the New Imperial Nation Niddshogg, and the Free States Alliance of Dyne. The "World Tree", the source of the world's energy (called "mana" in-game), only releases a "Great Seed" of energy every couple of years, which leads to the nations fighting amongst themselves for ownership of it. Instead of falling back on war, the nations come up with a diplomatic competition to decide who gets the resources: each country sends representatives to travel the land and collect special flags, and the country that collects all of the flags is given the rights to the "Great Seed". The "representatives" are characters from past games in the Tales series, and the fights between the representatives to obtain the flags make up the game's battles.
Characters:
Tales of VS. features a total of 35 characters from 13 previous Tales games. Character interactions and relationships are handled in a similar way to the Dissidia Final Fantasy and Dissidia 012 crossover fighting video games: characters share the same general characteristics and relationships from their prior games, but technically have slightly different backstories. For instance, Lloyd is still Colette's guardian in Tales of VS., as he was in his original game Tales of Symphonia, but their overall goal and the country they live in differ from their hometown and the "World Regeneration Project" featured in their original game. When more than one of the same character is used in the same battle, the characters wear the same costumes but in different colors.
Characters:
The character roster is split between the main game and the Tales of Wallbreaker minigame; additionally, there are some characters that are only playable in Tales of Wallbreaker.
Development and release:
Tales of VS. was released on August 6, 2009 in Japan. While the game's name was trademarked in North America, and trailers with some English voiceovers and English text existed, it was never announced or released for any region outside Japan. Shortly before and after the game's Japanese release, a series of videos of the game, titled "Director's Corner" videos, were released, showcasing aspects of the game by the developers. The theme song accompanying the opening scene is "Be Your Wings", sung by Girl Next Door, and was released on August 5, 2009. The game came packaged with a code for free DLC for the PlayStation 3 version of Tales of Vesperia, which unlocked new skits, and pre-ordering the game resulted in further DLC that unlocked new costumes in Tales of Vesperia based on Tales of the Abyss. Additionally, Tales of Vesperia contained a DLC code for Tales of VS. that unlocks a special fight granting special, otherwise unobtainable equipment.
Development and release:
A radically different game under the title of Tales of VS. was also released for mobile phones. Players would be able to use a login ID and password to send and receive data from the PSP version of the game. Through this players can view status, equipment or even get bonus items. In the mobile version, players are also able to make their own characters, and change their Guild, Job, Title and Accessories. Contrary to the action-based fighting in the PSP release, the mobile version utilizes a turn-based system known as "Command Battle" in which each character has four commands: Attack, Defend, Use Skill or Counter. Each battle lasts 10 turns.
Reception:
Reception for the game has been mixed. Japanese gaming magazine Famitsu gave the game a generally positive review in its August 2009 issue, scoring it 8/8/8/8 for a total of 32 out of 40. Famitsu praised the gameplay and controls, stating "The controls are simple...but the gameplay system is remarkably deep. It's pretty basic as a multiplayer game, but I get the impression that the charms of the series are well-represented." Siliconera was also positive about the game, praising the crossover character interactions and different game modes, stating that it was "generally fun to play". Excessive load times between battles and occasional odd camera views were noted as faults of the game. PlayStation LifeStyle was less enthusiastic, giving it a 4/10 and stating, "Tales fans might have some reason to import this interesting spinoff, and the ones who can understand some Japanese would get a few smiles out of the story mode, but without the appeal of fanservice, we're left with a fighting game that gets boring quickly." Initial sales of the game in Japan were high, with 133,000 copies sold in its first week, and just short of another 35,000 in its second week.
**Hughes procedure**
Hughes procedure:
The Hughes procedure is an oculoplastic procedure which is performed to reconstruct a lower eyelid defect. It is usually performed as a two-stage procedure. The most common use for the Hughes procedure is reconstruction after the removal of a lower eyelid skin cancer. The result aims to recreate the normal appearance and function of the lid.
**Vimentin**
Vimentin:
Vimentin is a structural protein that in humans is encoded by the VIM gene. Its name comes from the Latin vimentum which refers to an array of flexible rods.
Vimentin is a type III intermediate filament (IF) protein that is expressed in mesenchymal cells. IF proteins are found in all animal cells as well as bacteria. Intermediate filaments, along with tubulin-based microtubules and actin-based microfilaments, comprise the cytoskeleton. All IF proteins are expressed in a highly developmentally regulated fashion; vimentin is the major cytoskeletal component of mesenchymal cells. Because of this, vimentin is often used as a marker of mesenchymally derived cells or of cells undergoing an epithelial-to-mesenchymal transition (EMT) during both normal development and metastatic progression.
Structure:
A vimentin monomer, like all other intermediate filaments, has a central α-helical domain, capped on each end by non-helical amino (head) and carboxyl (tail) domains. Two monomers are likely co-translationally expressed in a way that facilitates their formation of a coiled-coil dimer, which is the basic subunit of vimentin assembly. The α-helical sequences contain a pattern of hydrophobic amino acids that contributes to forming a "hydrophobic seal" on the surface of the helix. In addition, there is a periodic distribution of acidic and basic amino acids that seems to play an important role in stabilizing coiled-coil dimers. The spacing of the charged residues is optimal for ionic salt bridges, which allows for the stabilization of the α-helix structure. While this type of stabilization is intuitive for intrachain interactions, rather than interchain interactions, scientists have proposed that a switch from intrachain salt bridges formed by acidic and basic residues to interchain ionic associations contributes to the assembly of the filament.
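The periodic hydrophobic pattern behind the "hydrophobic seal" is the classic heptad repeat of coiled coils, in which the first and fourth positions (a and d) of each seven-residue window line the dimer interface. The logic can be illustrated with a short script that scores how often the a and d positions of a candidate register are hydrophobic; the residue set and scoring rule are deliberate simplifications for illustration, not a published coiled-coil predictor:

```python
HYDROPHOBIC = set("AILMFVWY")  # simplified hydrophobic residue set (assumption)

def heptad_score(seq: str, offset: int = 0) -> float:
    """Fraction of 'a' and 'd' heptad positions occupied by hydrophobic
    residues, for a given register offset (0-6). In a coiled coil the
    a/d positions form the dimer interface, so a high score suggests
    coiled-coil propensity."""
    hits = total = 0
    for i, res in enumerate(seq):
        if (i + offset) % 7 in (0, 3):  # heptad positions a and d
            total += 1
            hits += res in HYDROPHOBIC
    return hits / total if total else 0.0

def best_register(seq: str) -> tuple[int, float]:
    """Try all seven possible registers and return the best-scoring one."""
    return max(((off, heptad_score(seq, off)) for off in range(7)),
               key=lambda t: t[1])
```

For an idealised repeat such as `"LSSLSSS"` (leucine at a and d, serine elsewhere), register 0 scores 1.0 and the other registers score lower, which is the signature a real predictor would look for over a sliding window.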
Function:
Vimentin plays a significant role in supporting and anchoring the position of the organelles in the cytosol. Vimentin is attached to the nucleus, endoplasmic reticulum, and mitochondria, either laterally or terminally. The dynamic nature of vimentin is important in offering flexibility to the cell. Scientists found that vimentin provided cells with a resilience absent from the microtubule or actin filament networks when under mechanical stress in vivo. Therefore, it is generally accepted that vimentin is the cytoskeletal component responsible for maintaining cell integrity. (Cells without vimentin were found to be extremely delicate when disturbed with a micropuncture.) Transgenic mice that lack vimentin appeared normal and did not show functional differences. It is possible that the microtubule network compensated for the absence of the intermediate filament network. This result supports an intimate interaction between microtubules and vimentin. Moreover, when microtubule depolymerizers were present, vimentin reorganization occurred, once again implying a relationship between the two systems. On the other hand, wounded mice that lack the vimentin gene heal more slowly than their wild-type counterparts. In essence, vimentin is responsible for maintaining cell shape and the integrity of the cytoplasm, and for stabilizing cytoskeletal interactions. Vimentin has been shown to eliminate toxic proteins in JUNQ and IPOD inclusion bodies in asymmetric division of mammalian cell lines. Also, vimentin is found to control the transport of low-density lipoprotein (LDL)-derived cholesterol from a lysosome to the site of esterification. When transport of LDL-derived cholesterol inside the cell is blocked, cells store a much lower percentage of the lipoprotein than normal cells with vimentin. This dependence seems to be the first process of a biochemical function in any cell that depends on a cellular intermediate filament network.
This type of dependence has ramifications for adrenal cells, which rely on cholesteryl esters derived from LDL. Vimentin also plays a role in aggresome formation, where it forms a cage surrounding a core of aggregated protein.
Clinical significance:
It has been used as a sarcoma tumor marker to identify mesenchyme, though its specificity as a biomarker has been disputed by Jerad Gardner. Methylation of the vimentin gene has been established as a biomarker of colon cancer, and this is being utilized in the development of fecal tests for colon cancer. Statistically significant levels of vimentin gene methylation have also been observed in certain upper gastrointestinal pathologies such as Barrett's esophagus, esophageal adenocarcinoma, and intestinal-type gastric cancer. High levels of DNA methylation in the promoter region have also been associated with markedly decreased survival in hormone-positive breast cancers.
Downregulation of vimentin was identified in cystic variant of papillary thyroid carcinoma using a proteomic approach.
See also:
Anti-citrullinated protein antibody, for its use in the diagnosis of rheumatoid arthritis.
Vimentin was discovered to be an attachment factor for SARS-CoV-2 by Nader Rahimi and colleagues.
Interactions:
Vimentin has been shown to interact with DSP, MEN1, MYST2, PKN1, PRKCI, PLEC, SPTAN1, UPP1, and YWHAZ. The 3' UTR of vimentin mRNA has been found to bind a 46 kDa protein.
**Stop signal**
Stop signal:
In telecommunication, a stop signal is a signal that marks the end of part of a transmission, for example: In asynchronous serial communication, a signal at the end of a character that prepares the receiving device for the reception of a subsequent character. A stop signal is usually limited to one signal element having any duration equal to or greater than a specified minimum value.
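The start/stop framing described above can be sketched in code. The following is an illustrative model of common 8-N-1 asynchronous framing (one low start bit, eight data bits sent least-significant-bit first, one high stop bit); it is a teaching sketch, not the implementation of any particular UART:

```python
def frame_byte(data: int) -> list[int]:
    """Frame one byte for 8-N-1 asynchronous serial transmission.

    The line idles high; a low start bit marks the beginning of a
    character, eight data bits follow (LSB first), and a high stop bit
    returns the line to idle, preparing the receiver for the next
    character."""
    bits = [0]                                   # start bit (line pulled low)
    bits += [(data >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit (line back to idle)
    return bits

def unframe(bits: list[int]) -> int:
    """Recover the data byte from a framed character, checking the
    start and stop bits (a bad stop bit is a framing error)."""
    if bits[0] != 0 or bits[-1] != 1:
        raise ValueError("framing error: bad start or stop bit")
    return sum(b << i for i, b in enumerate(bits[1:9]))
```

Because the stop bit only has a specified minimum duration, a real receiver accepts any idle-high period of at least one bit time before the next start bit; the model above fixes it at exactly one bit for simplicity.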
A signal to a receiving mechanism to wait for the next signal.
**Cohesin domain**
Cohesin domain:
In molecular biology, the cohesin domain is a protein domain. It interacts with a complementary domain, termed the dockerin domain. The cohesin-dockerin interaction is the crucial interaction for complex formation in the cellulosome. The scaffolding component of the cellulolytic bacterium Clostridium thermocellum is a non-hydrolytic protein which organises the hydrolytic enzymes into a large complex, called the cellulosome. Scaffoldin comprises a series of functional domains, amongst which are a single cellulose-binding domain and nine cohesin domains, which are responsible for integrating the individual enzymatic subunits into the complex.
**Safelight**
Safelight:
A safelight is a light source suitable for use in a photographic darkroom. It provides illumination only from parts of the visible spectrum to which the photographic material in use is nearly, or completely insensitive.
Design:
A safelight usually consists of an ordinary light bulb in a housing closed off by a coloured filter, but sometimes a special light bulb or fluorescent tube with suitable filter material or phosphor (in fluorescent tubes) coated directly on the glass is used in an ordinary fixture.
Differently sensitised materials require different safelights. In traditional black-and-white photographic printing, photographic papers normally are handled under an amber or red safelight, as such papers typically are sensitive only to blue and green light. Orthochromatic papers and films are also sensitive to yellow light and must be used only with a deep red safelight, not an amber one. Panchromatic films and papers, nominally sensitive to the entire spectrum, sometimes have a region of minimum sensitivity that allows the careful use of a safelight confined to that part of the spectrum. For example, Kodak Panalure panchromatic paper is tolerant of limited exposure to light filtered through a Kodak 13 Safelight Filter. Other panchromatic materials must be handled only in total darkness.
Many photosensitive materials used in technical and industrial applications, such as photoresist, are sensitive only to blue, violet, and ultraviolet light and may be handled under a brighter yellow safelight. Low-pressure sodium vapour lamps sometimes are used in larger industrial darkrooms. They emit nearly monochromatic light at 589 nm (yellow), to which the materials are insensitive; as a result they can be extremely bright while still "safe". The word "safe" in "safelight" is relative, as in most cases a sensitised material eventually will be affected by its safelight if exposed to it for an extended length of time.
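The matching of lamp to material described above can be modeled crudely as a band-overlap check. This sketch treats both the lamp's emission and the material's sensitivity as hard-edged wavelength bands, a deliberate simplification (real emission spectra and sensitivity curves are continuous, which is also why no safelight is absolutely "safe"):

```python
def is_safe(lamp_nm: tuple[float, float],
            sensitivity_nm: tuple[float, float]) -> bool:
    """True if the lamp's emission band does not overlap the material's
    sensitivity band. Bands are (low, high) wavelengths in nanometres;
    the hard-edged bands are an illustrative simplification."""
    lamp_lo, lamp_hi = lamp_nm
    sens_lo, sens_hi = sensitivity_nm
    return lamp_hi < sens_lo or lamp_lo > sens_hi

# A low-pressure sodium lamp is nearly monochromatic at 589 nm, so it
# clears a material sensitive only to blue/violet/UV (up to ~500 nm):
sodium_ok = is_safe((588, 590), (300, 500))

# An amber band around 590-620 nm overlaps a green-sensitive material:
amber_ok = is_safe((590, 620), (480, 600))
```

In this model `sodium_ok` is true and `amber_ok` is false, matching the article's examples of sodium lamps for blue-sensitive industrial materials and deep red (not amber) safelights for orthochromatic stock.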
**Interruption science**
Interruption science:
Interruption science is the interdisciplinary scientific study concerned with how interruptions affect human performance, and the development of interventions to ameliorate the disruption caused by interruptions. Interruption science is a branch of human factors psychology and emerged from human–computer interaction and cognitive psychology.
Being ubiquitous in life and an intuitive concept, there are few formal definitions of interruption. A commonly agreed-upon definition proposed by Boehm-Davis and Remington specifies that an interruption is "the suspension of one stream of work prior to completion, with the intent of returning to and completing the original stream of work". Interruptions are considered to be on the spectrum of multitasking and are in this context referred to as sequential multitasking. The distinguishing feature of an interruption (see Task switching (psychology); compare concurrent multitasking) is the presence of a primary task which must be returned to upon completing a secondary interrupting task. For instance, talking on the phone while driving is generally considered an instance of concurrent multitasking; stopping a data entry task to check emails is generally considered an instance of an interruption.
Interruption science:
Interruptions, in almost all instances, are disruptive to performance and induce errors. Therefore, interruption science typically examines the effects of interruptions in high-risk workplace environments such as aviation, medicine, and vehicle operation in which human error can have serious, potentially disastrous consequences. Interruptions are also explored in less safety-critical workplaces, such as offices, where interruptions can induce stress, anxiety, and poorer performance.
History:
The first formal investigation into interruptions was conducted by Zeigarnik and Ovsiankina as part of the Vygotsky Circle in the 1920s. Their seminal research demonstrated the Zeigarnik effect: people remember uncompleted or interrupted tasks better than completed tasks. In the 1940s, Fitts and Jones reported that interruptions were a cause of pilot errors and flying accidents, and made recommendations on reducing these disruptive effects.
Knowledge workers:
Office workers face a number of interruptions due to information technologies such as e-mail, text messages, and phone calls. One line of research in interruption science examines the disruptive effects of these technologies and how to improve the usability and design of such devices. According to Gloria Mark, "the average knowledge worker switches tasks every three minutes, and, once distracted, a worker can take nearly a half-hour to resume the original task". Mark conducted a study on office workers, which revealed that "each employee spent only 11 minutes on any given project before being interrupted". Kelemen et al. showed that a team of programmers is interrupted through a technical Skype support chat up to 150 times a day, but these interruptions can be reduced by introducing a dispatcher role and a knowledge base.
Knowledge workers:
Notifications: One of the major challenges associated with increased reliance on information technologies is that they send users notifications without considering current task demands. Answering notifications impedes task performance and the ability to resume the original task at hand. In addition, even just knowing that one has received a notification can negatively impact sustained attention. Several solutions have been proposed to this problem. One study suggested entirely disabling email notifications; the downside was that this may pressure users into a constant need to check their email accounts.: 27  In fact, entirely removing notifications may lead people to spend more time checking their email.: 29  The absence of e-mail notifications is often seen as counterproductive because of the required "catch-up" periods after a long time between email checks.: 30  Alternatively, there have been several attempts to design software applications that deliver notifications when there is an identified break from work, or that categorize notifications based on their relative importance (e.g. Oasis).
Knowledge workers:
Research has also investigated the effects of relevant interruptions, finding that notifications relevant to the current task are less disruptive than unrelated ones.: 99  Overall task performance is most impacted when an instant message is received during fast, stimulus-driven tasks such as typing, pressing buttons, or examining search results.: 263, 265, 268  Bounded deferral is a restricted notification method in which users wait a prescribed amount of time before they access a notification, reducing the amount of interruption and the decline in productivity. This technique was used with the aim of providing calmer and less disruptive workspaces.: 1  If users are busy, alerts and notifications are put aside and delivered only when users are in a position to receive notifications without harming their work. The bounded deferral method has proven useful and has the potential to become even more effective on a wider scale, as it has shown how an effective notification system can operate.
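The bounded deferral policy described above can be sketched in code. This is a minimal illustration, not any published system's implementation; the class name, the deferral bound, and the busy/break signal are all assumptions introduced for the example.

```python
import time
from collections import deque


class BoundedDeferralQueue:
    """Toy sketch of bounded deferral: notifications are held while the
    user is busy and released at the next break, but never deferred
    longer than max_deferral_s seconds."""

    def __init__(self, max_deferral_s=300.0, clock=time.monotonic):
        self.max_deferral_s = max_deferral_s
        self.clock = clock                 # injectable for testing
        self._pending = deque()            # (arrival_time, payload)

    def push(self, payload):
        """Record an incoming notification instead of showing it at once."""
        self._pending.append((self.clock(), payload))

    def release(self, user_busy):
        """Return notifications to deliver now: all of them if the user is
        on a break, otherwise only those whose deferral bound expired."""
        now = self.clock()
        ready, still_pending = [], deque()
        for arrived, payload in self._pending:
            if not user_busy or (now - arrived) >= self.max_deferral_s:
                ready.append(payload)
            else:
                still_pending.append((arrived, payload))
        self._pending = still_pending
        return ready
```

A caller would `push()` each alert as it arrives and poll `release(user_busy=...)` from the task monitor; busy-state detection itself (e.g. from keyboard activity) is outside this sketch.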
Medicine:
In nursing, a study has been conducted on the impact of interruptions on nurses in a trauma center. Another study has examined the interruption rates of nurses and doctors. Interruption caused by smartphone use in health-care settings can be deadly. Hence, it may be worthwhile for health care organizations to craft effective cellphone usage policies to maximize technological benefits and minimize unnecessary distraction associated with smartphone use.
**Subinvolution**
Subinvolution:
Subinvolution is a medical condition in which, after childbirth, the uterus does not return to its normal size.
Presentation:
Symptoms: The condition may be asymptomatic. The predominant symptoms are:
- Abnormal lochial discharge, either excessive or prolonged
- Irregular or at times excessive uterine bleeding
- Irregular cramp-like pain in cases of retained products, or rise of temperature in sepsis

Signs: The uterine height is greater than normal for the particular day of the puerperium. A normal puerperal uterus may be displaced by a full bladder or a loaded rectum. The uterus feels boggy and softer upon palpation.
Presentation:
Presence of features responsible for subinvolution may be evident.
Causes:
Predisposing factors:
- Grand multiparity
- Overdistension of the uterus, as in twins and hydramnios
- Ill maternal health
- Caesarean section
- Uterine prolapse
- Retroversion after the uterus becomes a pelvic organ
- Uterine fibroid

Aggravating factors:
- Retained products of conception
- Uterine sepsis, endometritis
Factors:
- Persistent lochia/fresh bleeding
- Long labor
- Anesthesia
- Full bladder
- Difficult delivery
- Retained placenta
- Maternal infection
Diagnosis:
Definition: When involution is impaired or retarded, it is called subinvolution. The uterus is the organ most commonly affected by subinvolution. As it is the most accessible organ to measure per abdomen, uterine involution is used clinically as an index to assess subinvolution.
Management:
- Antibiotics in endometritis
- Exploration of the uterus in retained products
- Pessary in prolapse or retroversion
Ergometrine, so often prescribed to enhance the involution process by reducing the blood flow of the uterus, is of no value in prophylaxis.
**Widowmaker (forestry)**
Widowmaker (forestry):
In forestry, a widowmaker or fool killer is a detached or broken limb or tree top. The name indicates that such objects can kill forest workers by falling on them, thus "making widows" of their wives. The U.S. Occupational Safety and Health Administration describes widowmakers as "broken off limbs that are hanging freely in the tree to be felled or in the trees close by."
Causes:
Widowmakers are often caused by fungal growth over a sustained period. Other causes include damage from other falling trees or stress on a branch.
Hazards:
Widowmakers may pose a risk to equipment or personnel working under or around the tree. They can become dislodged by wind or during tree felling, and are responsible for 11% of all fatal chainsaw accidents. The U.S. National Institute for Occupational Safety and Health (NIOSH) offers ways to eliminate risks by avoiding working beneath widowmakers, knocking them down, or pulling them down with a machine.
**CHST7**
CHST7:
Carbohydrate sulfotransferase 7 is an enzyme that in humans is encoded by the CHST7 gene.
Function:
This gene belongs to the sulfotransferase gene family. Sulfotransferases generate sulfated glycosaminoglycan (GAG) moieties during chondroitin sulfate biosynthesis. They create considerable structural diversity among chondroitin sulfates by transferring sulfate with remarkable specificity for the underlying oligosaccharide substrate. This gene product mainly transfers sulfate to N-acetylgalactosamine. The regulated expression of each member of this gene family may be an important determinant of sulfated GAGs expression and the associated function of chondroitin sulfates as regulators of many biologic processes. This gene is part of a gene cluster on chromosome Xp11.23.
**Vomilenine reductase**
Vomilenine reductase:
In enzymology, a vomilenine reductase (EC 1.5.1.32) is an enzyme that catalyzes the chemical reaction:

1,2-dihydrovomilenine + NADP+ ⇌ vomilenine + NADPH + H+

Thus, the two substrates of this enzyme are 1,2-dihydrovomilenine and NADP+, whereas its three products are vomilenine, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-NH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 1,2-dihydrovomilenine:NADP+ oxidoreductase. This enzyme participates in indole and ipecac alkaloid biosynthesis.
**Germanium telluride**
Germanium telluride:
Germanium telluride (GeTe) is a chemical compound of germanium and tellurium and is a component of chalcogenide glasses. It shows semimetallic conduction and ferroelectric behaviour. Germanium telluride exists in three major crystalline forms: room-temperature α (rhombohedral) and γ (orthorhombic) structures and a high-temperature β (cubic, rocksalt-type) phase, the α phase being the most stable for pure GeTe below the ferroelectric Curie temperature of approximately 670 K. Doped germanium telluride is a low-temperature superconductor.
Phase Transition:
Solid GeTe can transform between amorphous and crystalline states. The crystalline state has a low resistivity (semiconducting at room temperature) and the amorphous state has a high resistivity. The difference in resistivity can be up to six orders of magnitude depending on the film quality, GeTe composition, and nucleation site formation. The drastic changes in the properties of the material have been exploited in data storage applications. The phase transitions of GeTe can be fast, reversible, and repeatable, with drastic property changes, making GeTe a promising candidate in applications like radio frequency (RF) switching and direct current (DC) switching. Research on the mechanisms that relate the phase transition to RF switching is underway, with a promising future in optimization for telecommunication applications.
Phase Transition:
Although both solid states can exist at room temperature, the transition requires a specific heating and cooling process known as the thermal actuation method. To achieve the amorphous state, the solid is heated beyond the melting temperature with a high current pulse in a short amount of time and rapidly quenched. Crystallization happens when the GeTe is heated to a crystallization temperature lower than the melting temperature with a relatively longer and lower current pulse, followed by a slow quenching process with the current gradually reduced. Both direct and indirect heating can induce phase changes. The Joule heating approach is the common direct heating method, and indirect heating can be accomplished by a separate layer of dielectric material added to the RF switch. The crystal structure of GeTe at room temperature is a rhombohedrally distorted rock-salt-type structure that forms a face-centered cubic (FCC) sublattice.
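The pulse rules above (short hot pulse with rapid quench → amorphous; longer, cooler pulse with slow quench → crystalline) can be expressed as a small classifier. This is only a qualitative sketch: the temperature thresholds, pulse-length cutoff, and quench-rate cutoff below are illustrative assumptions, not measured GeTe device parameters.

```python
def resulting_phase(peak_temp_c, pulse_ms, cooling_rate_c_per_us,
                    t_melt_c=725.0, t_cryst_c=180.0):
    """Classify the GeTe phase produced by a thermal actuation pulse.

    Follows the qualitative rules in the text: melt-quench yields the
    amorphous state; a longer sub-melting anneal with slow cooling
    yields the crystalline state. All numeric thresholds are assumed
    for illustration only.
    """
    # Melt-quench: heat past melting with a short pulse, cool rapidly.
    if (peak_temp_c >= t_melt_c and pulse_ms < 0.1
            and cooling_rate_c_per_us > 1.0):
        return "amorphous"
    # Anneal: reach crystallization temperature with a longer, lower
    # pulse and reduce the current (and temperature) gradually.
    if (peak_temp_c >= t_cryst_c and pulse_ms >= 0.1
            and cooling_rate_c_per_us <= 1.0):
        return "crystalline"
    # Pulse too weak, too short, or wrongly quenched to switch the film.
    return "unchanged"
```

For example, a 0.05 ms pulse reaching 800 °C with fast quenching would be classified as amorphizing, while a 1 ms pulse to 300 °C with slow cooling would be classified as crystallizing.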
Synthesis:
Single-crystalline GeTe nanowires and nanohelices: Semiconducting GeTe nanowires (NWs) and nanohelices (NHs) are synthesized via a vapor transport method with metal nanoparticle catalysts. GeTe is evaporated and carried by Ar gas at an optimum temperature, pressure, time, and gas flow rate to the downstream collecting/growth site (a SiO2 surface coated with colloidal gold nanoparticles). Temperatures over 500 °C produce thicker nanowires and crystalline chunks. Au is essential to the growth of NWs and NHs and is suggested to be the metal catalyst of the reaction. This method gives rise to NWs and NHs with a 1:1 ratio of Ge and Te. NWs produced by this method average about 65 nm in diameter and up to 50 μm in length. NHs average about 135 nm in helix diameter.
Synthesis:
Nanocrystal (quantum size effect): The synthesis described above has not reached the sizes required to exhibit the quantum size effect. Nanostructures that reach the quantum regime exhibit a different set of phenomena unseen at larger scales, for example, spontaneous polar ordering and the splitting of diffraction spots. The synthesis of GeTe nanocrystals of average sizes of 8, 17, and 100 nm involves a divalent Ge(II) chloride – 1,4-dioxane complex, bis[bis(trimethylsilyl)amino]Ge(II), and trioctylphosphine-tellurium in a solvent such as 1,2-dichlorobenzene or phenyl ether. Ge(II) reduction kinetics is thought to determine GeTe formation. A larger Ge(II) reduction rate may lead to an increase in particle nucleation rate, resulting in a reduction of particle diameter.
Applications:
Memory storage: GeTe has been heavily used in non-volatile optical data storage such as CDs, DVDs, and Blu-ray discs and may replace dynamic and flash random access memories. In 1987, Yamada et al. explored the phase-changing properties of GeTe and Sb2Te3 for optical storage. The short crystallization time, cyclability, and high optical contrast made these materials better options than Te81Ge15Sb2S2, which has a slow transition time.
Applications:
RF switching: The high contrast in resistivity between the amorphous and crystalline states and the ability to reverse the transition repeatedly make GeTe a good candidate for RF switching. RF switching requires a thin layer of GeTe film to be deposited on the surface of the substrate. Seed layer structure, precursor composition, deposition temperature, pressure, gas flow rates, precursor bubbling temperatures, and the substrate all play a role in the film properties.
**PAX1**
PAX1:
Paired box protein Pax-1 is a protein that in humans is encoded by the PAX1 gene.
Function:
This gene is a member of the paired box (PAX) family of transcription factors, which are essential during fetal development. It is required for the development of the ventral vertebral column. Its expression is limited to the pharyngeal pouches and the cells that surround the developing vertebrae near the top of the spine, where it helps give rise to the neck and the beginnings of the shoulders and arm buds. Cancers, such as ovarian and cervical cancers, add a methyl (CH3) group that silences, or disables, the gene, which may otherwise suppress tumors by regulating when cells divide and increase. A substitution or deletion of this gene in mice can produce variants of the mutant undulated, which is characterized by segmentation abnormalities along the inner spine. Mutations in the human gene may contribute to Klippel–Feil syndrome, the failure of the vertebrae to segment near the top of the spine and possibly further down, with symptoms including a short, immovable neck and a low hairline on the back of the head.
Interactions:
PAX1 has been shown to interact with MEOX1 and MEOX2.
**Selenourea**
Selenourea:
Selenourea is the organoselenium compound with the formula SeC(NH2)2. It is a white solid. This compound features a rare example of a stable, unhindered carbon-selenium double bond. The compound is used in the synthesis of selenium heterocycles. Compared to urea, the oxo-analog of selenourea, few studies have been done on the compound due to the instability and toxicity of selenium compounds. Selenourea is toxic if inhaled or consumed.
Synthesis:
The compound was first synthesized in 1884 by Auguste Verneuil by the reaction of hydrogen selenide and cyanamide:

H2Se + NCNH2 → SeC(NH2)2

While this reaction has even found use in the industrial synthesis of selenourea, more modern methods concern themselves with the synthesis of substituted selenoureas. These can be synthesized using organic isoselenocyanates and secondary amines:

RN=C=Se + HNR′R″ → Se=C(NRH)(NR′R″)

Alternatively, a substituted carbodiimide can be used, treated first with HCl and then with LiAlHSeH:

RN=C=NR′ → Se=C(NRH)(NR′H)
Properties:
X-ray crystallographic measurements on crystals at −100 °C give average C=Se bond lengths of 1.86 Å, and 1.37 Å for C−N. Both the Se−C−N and N−C−N angles were measured at 120°, as expected for an sp2-hybridized carbon. Through these same studies, the existence of Se−H hydrogen bonding in the crystal lattice—suggested by the O−H and S−H hydrogen bonding found in crystals of urea and thiourea—was confirmed. Both the shortened length of the N−C bond and the longer Se=C bond suggest a delocalization of the lone pair on the amines; the Se=C π-bonding electrons are drawn towards the selenium atom, while the nitrogen's lone pair is drawn towards the carbonyl carbon. A similar effect is observed in urea and thiourea. In going from urea to thiourea to selenourea, the double bond becomes more delocalized and longer, while the C−N σ bond becomes stronger and shorter. In terms of resonance structures, the selenol form (structures II, III) is more prevalent compared to the urea and thiourea analogs; however, the lone pair on the nitrogen of selenourea delocalizes only slightly more than the lone pair on thiourea (in contrast to a much greater delocalization in going from urea to thiourea). These minor differences suggest that the properties emergent from the delocalized nitrogen lone pair and the destabilization of the C=S and C=Se π bonds in thiourea and selenourea will also be similar.
Properties:
Unlike urea and thiourea, which have both been researched extensively, relatively few studies quantitatively characterize selenourea. While the selone tautomer (I) has been shown to be the more stable form, mainly qualitative and comparative information on selenourea's tautomerization is available.
Properties:
In a comparable manner to ketones, selones also tautomerize. Since the greater delocalization of the lone pair electrons correlates with the selone product, selenourea likely has an equilibrium position comparable to thiourea's (which lies further to the right than urea's). Thiourea has been shown to exist predominantly in its thione form at 42 °C in dilute methanol, with the thionol tautomer almost nonexistent at neutral pH.
Reactivity:
An important class of reactions of selenourea is the formation of heterocycles. Some selenium-containing heterocycles exhibit anti-inflammatory and antitumor activity, among other medicinal uses. Using selenourea as a precursor is considered to be the most efficient means of selenium-containing heterocycle synthesis. Another class of reactions is the complexation of selenourea with transition metals and metalloids. Its ability to act as an effective ligand is attributed to the electron-donating effect of the amino groups and the consequent stabilization of the selenium–metal π bond. In selenourea complexes only selenium–metal bonding has been observed, unlike in the urea and thiourea counterparts, which also bond through the nitrogen atom.
**Crosstime Traffic**
Crosstime Traffic:
Crosstime Traffic is a series of books by Harry Turtledove. The central premise of the stories is an Earth that has discovered access to alternate universes where history proceeded differently. "Crosstime Traffic" is the name of the company with a global monopoly on the technology.
Background:
The background strongly resembles that of H. Beam Piper's Paratime series and Keith Laumer's Imperium series. One tribute paid to Piper's series is the names of the inventors of temporal transposition: Ghaldron and Hesthor in Piper, Galbraith and Hester in Turtledove. In all of the series, the "home timeline" was running low on resources and has used its knowledge of time travel to covertly import supplies from other Earths and save its civilization from collapse. The most important difference is the nature of the home timeline. Piper's world was inhabited by a culture which had been technologically advanced for thousands of years and was even more distantly related to our own. Laumer's series had a civilization that was less advanced than our own in almost every way except for its travel technology. Turtledove's world, although set in the 2090s, resembles the 2010s of the real world, with modest general advances in technology including the crosstime capability, as well as inflation resulting in US$100 (nicknamed 'franklins') having the same buying power as US$1 in the 2010s.
Background:
The books are young adult novels with teenage protagonists, who frequently become stranded in dangerous alternate worlds and must adapt to survive. Their adventures give them an increased appreciation for the benefits of living in a civilized, high-tech society. Invariably, each book has two viewpoint characters, a boy and a girl – different ones in each book; in most books one of them is from the home timeline and the other from a visited alternate. Except for "Gunpowder Empire", where the protagonists are siblings, love interest developing between the protagonists is invariably part of the plot. In two books it ends with successful consummation, the protagonist from an alternate timeline getting exceptional permission to come to the home timeline; in one book, the lovers must say goodbye with a tearful heartbreak; and circumstances in one make it end with boy and girl becoming staunch foes, despite their mutual attraction. While there is considerable violence, the language and plots are restricted by the intended audience. For instance, In High Places includes the prospect of an enslaved girl being sexually abused, but does not use the word "rape" (although the word is later used in The Valley-Westside War). This shows considerable restraint by the author, who is famous for writing scenes of unfettered sexuality, violence, and profanity in adult novels such as the Worldwar, Southern Victory, and The War That Came Early series.
Novels:
Gunpowder Empire (2003): The first book in the series, it involves a pair of siblings stranded during a siege of an outpost of a Roman Empire that never collapsed.
Curious Notions (2004): The second book in the series is about a teenager and his father who are running an electronics store in San Francisco in a world where Imperial Germany reigns supreme following its victories in all three World Wars during the first half of the twentieth century.
In High Places (2005): Takes place in a world where the Black Death killed four-fifths of Europe's population, and the Moors still occupy Spain and southern France as well as Italy, and the Industrial Revolution never happened.
Novels:
The Disunited States of America (2006): This book concerns a pair of teenagers, one from the Crosstime civilization, one a native, who meet in a Virginia where the United States fell apart in the early 1800s due to the Constitutional Convention failing, in a North America torn by war between numerous independent states. The working title for this book was The Untied States of America.
Novels:
The Gladiator (2007): This novel is set in a world dominated by Communism after the Soviet Union won the Cold War in the late 20th century. In Italy, two teenagers chafe under the deadening rule of communism — until they discover the existence of Crosstime Traffic through a strategy gaming shop which is not as it seems.
Novels:
The Valley-Westside War (2008): The sixth book in the series, set in Los Angeles in a world in which nuclear warfare took place in 1967. Los Angeles and the rest of the United States are split into numerous tiny republics, kingdoms, city-states, and the like; the book tells the story of the Kingdom of the Valley's invasion of the Democracy of Westside.
**Red Peas Soup**
Red Peas Soup:
Red peas soup is a soup eaten in Jamaica. It is made of kidney beans (known locally as red peas) and seasonings such as scotch bonnet pepper and pimento seeds. Traditionally, the broth includes a pigtail. Red peas soup is usually eaten with yam and Jamaican dumplings.
**Solar gain**
Solar gain:
Solar gain (also known as solar heat gain or passive solar gain) is the increase in thermal energy of a space, object or structure as it absorbs incident solar radiation. The amount of solar gain a space experiences is a function of the total incident solar irradiance and of the ability of any intervening material to transmit or resist the radiation.
Solar gain:
Objects struck by sunlight absorb its visible and short-wave infrared components, increase in temperature, and then re-radiate that heat at longer infrared wavelengths. Though transparent building materials such as glass allow visible light to pass through almost unimpeded, once that light is converted to long-wave infrared radiation by materials indoors, it is unable to escape back through the window since glass is opaque to those longer wavelengths. The trapped heat thus causes solar gain via a phenomenon known as the greenhouse effect. In buildings, excessive solar gain can lead to overheating within a space, but it can also be used as a passive heating strategy when heat is desired.
Window solar gain properties:
Solar gain is most frequently addressed in the design and selection of windows and doors. Because of this, the most common metrics for quantifying solar gain are used as a standard way of reporting the thermal properties of window assemblies. In the United States, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and the National Fenestration Rating Council (NFRC) maintain standards for the calculation and measurement of these values.
Window solar gain properties:
Shading coefficient: The shading coefficient (SC) is a measure of the radiative thermal performance of a glass unit (panel or window) in a building. It is defined as the ratio of solar radiation at a given wavelength and angle of incidence passing through a glass unit to the radiation that would pass through a reference window of frameless 3 millimetres (0.12 in) clear float glass. Since the quantities compared are functions of both wavelength and angle of incidence, the shading coefficient for a window assembly is typically reported for a single wavelength typical of solar radiation entering normal to the plane of glass. This quantity includes both energy that is transmitted directly through the glass and energy that is absorbed by the glass and frame and re-radiated into the space, and is given by the following equation:

F(λ,θ) = T(λ,θ) + N∗A(λ,θ)

Here, λ is the wavelength of radiation and θ is the angle of incidence. T is the transmissivity of the glass, A is its absorptivity, and N is the fraction of absorbed energy that is re-emitted into the space. The overall shading coefficient is thus given by the ratio:

S.C. = F(λ,θ)₁ / F(λ,θ)ₒ

where subscript 1 denotes the glazing in question and subscript o the reference glass. The shading coefficient depends on the radiation properties of the window assembly. These properties are the transmissivity T, absorptivity A, emissivity (which is equal to the absorptivity for any given wavelength), and reflectivity; transmissivity, absorptivity, and reflectivity are dimensionless quantities that together sum to 1. Factors such as color, tint, and reflective coatings affect these properties, which is what prompted the development of the shading coefficient as a correction factor to account for them. ASHRAE's table of solar heat gain factors provides the expected solar heat gain for ⅛" clear float glass at different latitudes, orientations, and times, which can be multiplied by the shading coefficient to correct for differences in radiation properties.
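The two equations above can be combined into a short numerical sketch. The function names and all of the property values below (transmissivity, absorptivity, inward-flowing fraction N) are illustrative assumptions, not manufacturer or ASHRAE data.

```python
def solar_gain_factor(transmissivity, absorptivity, inward_fraction):
    """F = T + N*A: directly transmitted energy plus the inward-flowing
    fraction N of the energy absorbed by the glazing."""
    return transmissivity + inward_fraction * absorptivity


def shading_coefficient(f_glazing, f_reference):
    """SC = F_1 / F_o, with F_o evaluated for the 3 mm clear float
    glass reference pane."""
    return f_glazing / f_reference


# Illustrative property values (assumed, not measured):
f_ref = solar_gain_factor(0.84, 0.08, 0.25)    # clear float reference
f_tint = solar_gain_factor(0.45, 0.40, 0.25)   # hypothetical tinted pane
sc = shading_coefficient(f_tint, f_ref)        # < 1: better shading
```

With these assumed numbers the tinted pane gets SC ≈ 0.64, i.e. it admits about 64% of the solar gain of the reference glass.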
Window solar gain properties:
The value of the shading coefficient ranges from 0 to 1. The lower the rating, the less solar heat is transmitted through the glass, and the greater its shading ability.
Window solar gain properties:
In addition to glass properties, shading devices integrated into the window assembly are also included in the SC calculation. Such devices can reduce the shading coefficient by blocking portions of the glazing with opaque or translucent material, thus reducing the overall transmissivity. Window design methods have moved away from the shading coefficient and towards the solar heat gain coefficient (SHGC), which is defined as the fraction of incident solar radiation that actually enters a building through the entire window assembly as heat gain (not just the glass portion). The standard method for calculating the SHGC also uses a more realistic wavelength-by-wavelength method, rather than just providing a coefficient for a single wavelength as the shading coefficient does. Though the shading coefficient is still mentioned in manufacturer product literature and some industry computer software, it is no longer mentioned as an option in industry-specific texts or model building codes. Aside from its inherent inaccuracies, another shortcoming of the SC is its counter-intuitive name, which suggests that high values mean high shading when in reality the opposite is true. Industry technical experts recognized the limitations of SC and pushed towards SHGC in the United States (and the analogous g-value in Europe) before the early 1990s. A conversion from SC to SHGC is not necessarily straightforward, as they each take into account different heat transfer mechanisms and paths (window assembly vs. glass-only). To perform an approximate conversion from SC to SHGC, multiply the SC value by 0.87.
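The approximate conversion stated above is a one-line calculation; the helper name is invented for illustration, and the result should be treated as the rough estimate the text describes, not an exact equivalence.

```python
def sc_to_shgc(sc):
    """Approximate SC -> SHGC conversion from the text: SHGC ~= 0.87 * SC.

    Only a rule of thumb: SC and SHGC account for different heat
    transfer paths (glass-only vs. whole window assembly)."""
    return 0.87 * sc
```

For example, a glazing with SC = 0.80 would be estimated at SHGC ≈ 0.70.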
Window solar gain properties:
g-value The g-value (sometimes also called a Solar Factor or Total Solar Energy Transmittance) is the coefficient commonly used in Europe to measure the solar energy transmittance of windows. Despite having minor differences in modeling standards compared to the SHGC, the two values are effectively the same. A g-value of 1.0 represents full transmittance of all solar radiation while 0.0 represents a window with no solar energy transmittance. In practice though, most g-values will range between 0.2 and 0.7, with solar control glazing having a g-value of less than 0.5.
Window solar gain properties:
Solar heat gain coefficient (SHGC): SHGC is the successor to the shading coefficient used in the United States, and it is the ratio of transmitted solar radiation to incident solar radiation of an entire window assembly. It ranges from 0 to 1 and refers to the solar energy transmittance of a window or door as a whole, factoring in the glass, frame material, sash (if present), divided lite bars (if present) and screens (if present). The transmittance of each component is calculated in a similar manner to the shading coefficient. However, in contrast to the shading coefficient, the total solar gain is calculated on a wavelength-by-wavelength basis, where the directly transmitted portion of the solar heat gain coefficient is given by:

∫ T(λ)E(λ) dλ / ∫ E(λ) dλ, with both integrals taken from 350 nm to 3500 nm

Here T(λ) is the spectral transmittance at a given wavelength in nanometers and E(λ) is the incident solar spectral irradiance. Integrated over the wavelengths of solar short-wave radiation, this yields the total fraction of transmitted solar energy across all solar wavelengths. The product N∗A(λ,θ) is thus the portion of absorbed and re-emitted energy across all assembly components beyond just the glass. It is important to note that the standard SHGC is calculated only for an angle of incidence normal to the window. However, this tends to provide a good estimate over a wide range of angles, up to 30 degrees from normal in most cases. SHGC can either be estimated through simulation models or measured by recording the total heat flow through a window with a calorimeter chamber. In both cases, NFRC standards outline the test procedure and the calculation of the SHGC. For dynamic fenestration or operable shading, each possible state can be described by a different SHGC.
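The wavelength-by-wavelength integral above can be evaluated numerically from sampled spectra, for example with the trapezoidal rule. The coarse five-point T(λ) and E(λ) samples below are made-up illustrative values, not a real glazing spectrum or solar irradiance standard.

```python
def directly_transmitted_fraction(wavelengths_nm, T, E):
    """Irradiance-weighted transmitted fraction over the solar
    short-wave band: integral of T(lambda)*E(lambda) divided by the
    integral of E(lambda), both via the trapezoidal rule."""
    def trapz(y, x):
        # Trapezoidal rule over unevenly spaced samples.
        return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
                   for i in range(len(x) - 1))
    weighted = [t * e for t, e in zip(T, E)]
    return trapz(weighted, wavelengths_nm) / trapz(E, wavelengths_nm)


# Coarse illustrative spectrum spanning 350-3500 nm:
# high visible transmittance, falling off in the infrared.
lam = [350, 700, 1500, 2500, 3500]           # wavelength, nm
T = [0.80, 0.85, 0.40, 0.20, 0.10]           # spectral transmittance
E = [1.0, 1.5, 0.8, 0.3, 0.1]                # relative irradiance
frac = directly_transmitted_fraction(lam, T, E)
```

In practice a much finer wavelength grid and a standard solar spectrum would be used; the structure of the calculation is the same.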
Window solar gain properties:
Though the SHGC is more realistic than the SC, both are only rough approximations when they include complex elements such as shading devices, which offer more precise control over when fenestration is shaded from solar gain than glass treatments.
Solar gain in opaque building components:
Apart from windows, walls and roofs also serve as pathways for solar gain. In these components heat transfer is entirely due to absorptance, conduction, and re-radiation since all transmittance is blocked in opaque materials. The primary metric in opaque components is the Solar Reflectance Index which accounts for both solar reflectance (albedo) and emittance of a surface. Materials with high SRI will reflect and emit a majority of heat energy, keeping them cooler than other exterior finishes. This is quite significant in the design of roofs since dark roofing materials can often be as much as 50 °C hotter than the surrounding air temperature, leading to large thermal stresses as well as heat transfer to interior space.
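The SRI described above interpolates a surface's steady-state temperature between standard black (SRI = 0) and white (SRI = 100) reference surfaces, in the style of ASTM E1980. A minimal sketch, where the temperature values are hypothetical inputs rather than measured data:

```python
def solar_reflectance_index(t_surface, t_black, t_white):
    """SRI places a surface's steady-state temperature on a 0-100 scale
    between a standard black reference (SRI = 0, hottest) and a standard
    white reference (SRI = 100, coolest)."""
    return 100.0 * (t_black - t_surface) / (t_black - t_white)

# Hypothetical steady-state temperatures (deg C) under standard conditions:
t_black, t_white = 82.0, 45.0
print(solar_reflectance_index(60.0, t_black, t_white))  # cooler than black -> SRI > 0
```

A dark roof running much hotter than the references can even yield an SRI below zero, which is why the metric matters for roofing material choice.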
Solar gain and building design:
Solar gain can have both positive or negative effects depending on the climate. In the context of passive solar building design, the aim of the designer is normally to maximize solar gain within the building in the winter (to reduce space heating demand), and to control it in summer (to minimize cooling requirements). Thermal mass may be used to even out the fluctuations during the day, and to some extent between days.
Solar gain and building design:
Control of solar gain Uncontrolled solar gain is undesirable in hot climates due to its potential for overheating a space. To minimize this and reduce cooling loads, several technologies exist for solar gain reduction. SHGC is influenced by the color or tint of glass and its degree of reflectivity. Reflectivity can be modified through the application of reflective metal oxides to the surface of the glass. Low-emissivity coating is another more recently developed option that offers greater specificity in the wavelengths reflected and re-emitted. This allows glass to block mainly short-wave infrared radiation without greatly reducing visible transmittance. In climate-responsive design for cold and mixed climates, windows are typically sized and positioned in order to provide solar heat gains during the heating season. To that end, glazing with a relatively high solar heat gain coefficient is often used so as not to block solar heat gains, especially on the sunny side of the house. SHGC also decreases with the number of glass panes used in a window. For example, in triple-glazed windows, SHGC tends to be in the range of 0.33 to 0.47, while for double-glazed windows it is more often in the range of 0.42 to 0.55.
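To see what these SHGC ranges mean in practice, instantaneous solar gain through a window is commonly estimated as SHGC × area × incident irradiance. The window size and irradiance below are hypothetical example values, comparing the typical double- and triple-glazed SHGC ranges quoted above:

```python
def solar_gain_watts(shgc, area_m2, irradiance_w_m2):
    # Instantaneous solar heat gain (W) through a window assembly.
    return shgc * area_m2 * irradiance_w_m2

# Hypothetical 2 m^2 window under 600 W/m^2 incident solar radiation:
double_glazed = solar_gain_watts(0.50, 2.0, 600.0)   # mid-range double glazing
triple_glazed = solar_gain_watts(0.40, 2.0, 600.0)   # mid-range triple glazing
print(double_glazed, triple_glazed)
```

Under these assumptions the extra pane trims the gain from 600 W to 480 W, illustrating why glazing count matters for cooling loads.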
Solar gain and building design:
Different types of glass can be used to increase or to decrease solar heat gain through fenestration, but can also be more finely tuned by the proper orientation of windows and by the addition of shading devices such as overhangs, louvers, fins, porches, and other architectural shading elements.
Solar gain and building design:
Passive solar heating Passive solar heating is a design strategy that attempts to maximize the amount of solar gain in a building when additional heating is desired. It differs from active solar heating, which uses exterior water tanks with pumps to absorb solar energy, because passive solar systems do not require energy for pumping and store heat directly in the structures and finishes of occupied space. In direct solar gain systems, the composition and coating of the building glazing can also be manipulated to increase the greenhouse effect by optimizing their radiation properties, while their size, position, and shading can be used to optimize solar gain. Solar gain can also be transferred to the building by indirect or isolated solar gain systems.
Solar gain and building design:
Passive solar designs typically employ large south-facing windows with a high SHGC and overhangs that block sunlight in summer months and permit it to enter the window in the winter. When placed in the path of admitted sunlight, high thermal mass features such as concrete slabs or Trombe walls store large amounts of solar radiation during the day and release it slowly into the space throughout the night. When designed properly, this can modulate temperature fluctuations. Some of the current research into this subject area is addressing the tradeoff between opaque thermal mass for storage and transparent glazing for collection through the use of transparent phase change materials that both admit light and store energy without the need for excessive weight. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**N2pc**
N2pc:
N2pc refers to an ERP component linked to selective attention. The N2pc appears over visual cortex contralateral to the location in space to which subjects are attending; if subjects pay attention to the left side of the visual field, the N2pc appears in the right hemisphere of the brain, and vice versa. This characteristic makes it a useful tool for directly measuring the general direction of a person's attention (either left or right) with fine-grained temporal resolution.
History:
Luck and Hillyard (1990) first observed the N2pc while seeking to document electrophysiological correlates of focused attention during visual search using ERPs. Subjects viewed arrays containing 4-12 items, one of which was a target on 50% of trials. Compared to the waveform over cortex ipsilateral to the target, experimenters observed a consistently greater negative deflection of the ERP waveform at approximately 200 ms after the stimulus at posterior sites (i.e., over visual cortex) contralateral to the side of the screen subjects attended.
History:
The N2pc first received its name from Luck and Hillyard (1994), who named the component after its characteristic features. The "N" denotes a negative polarity; "2" describes its latency in the waveform (i.e., the second negative deflection, typically around 200 ms); and "pc" stands for "posterior-contralateral," as the component appears over posterior electrode sites contralateral to the direction of attention. The experimenters explored what factors would modulate the N2pc using a visual search paradigm in which subjects had to report the presence of a target object in a display (e.g., a green box or a horizontal bar). They confirmed that the N2pc appeared contralateral to attended stimuli, and furthermore found that it did not appear when subjects saw only one object at a time or had to spread their attention over all the items in the display. These data led the experimenters to believe the N2pc corresponds to a filtering process that occurs whenever people focus attention on one object while ignoring others.
Component characteristics:
The component's name, N2pc, abbreviates its characteristics. The component belongs to the family of N2 ERP components, a negative deflection in the ERP waveform at a latency of approximately 200-300 ms following a stimulus. The "pc" stands for "posterior-contralateral", describing the topographic distribution of the component. It appears as a greater negativity at posterior electrode sites contralateral to the attended side of the visual field relative to ipsilateral electrode sites. For example, when a person pays attention to something in the left side of the visual field, an N2pc appears as a greater negativity over the right posterior areas of the brain than the left. MEG has been used to localize the N2pc primarily to lateral extrastriate cortex and inferotemporal visual areas, such as V4.
Classic paradigms and findings:
The N2pc can be used flexibly in nearly any task in which one would like to study the direction and time course of selective attention. However, researchers have primarily used the N2pc in visual search paradigms to study the deployment of attention over time and test hypotheses of parallel and serial models of visual search. The first experiments to investigate specifically the N2pc used a visual search paradigm in which subjects reported the presence or absence of a pre-defined target (e.g., a green rectangle) in a display containing one "oddball" stimulus that differed on a single feature from a uniform background of items (e.g., a green square among blue squares). The oddball stimuli would "pop out" and attract attention, but were not necessarily targets. As a result, experimenters knew where subjects directed attention, but could simultaneously manipulate factors orthogonal to the location of attention, such as low-level features or probability of the target appearing. The pop-out oddball would generate an N2pc, as it received focused attention, while stimulus characteristics modulated the amplitude and latency of the component.
Classic paradigms and findings:
Subsequent investigation into the N2pc manipulated the number of items in the array and found that a display with as few as two objects elicits the component. Because an object cannot "pop out" and attract attention in a two-item display, experimenters concluded that the N2pc must reflect top-down, controlled processes of directing attention. The same study also demonstrated that the N2pc occurs not only when attending to visual features, but to semantic features as well. In one experiment, subjects had to respond to the words "left" and "right" while ignoring the color words "white" and "brown." Even in this case of semantically defined targets, subjects demonstrated an N2pc contralateral to the target word. Together, these results have provided strong evidence that the N2pc reflects the location of covert, consciously directed attention. The prototypical visual search paradigm for eliciting an N2pc component has subjects attend and respond to a target stimulus to the left or right of fixation. Unlike regular visual search experiments, however, two major criteria must hold when attempting to measure the N2pc response. First, the stimuli should be identical in all conditions, and the experimenter should only manipulate instructions for directing attention across conditions; this precludes the possibility that stimulus features, rather than focused attention, drive ERP effects. Second, the target should be easy to find, usually via "pop-out." The goal is to minimize the variability in search times and N2pc latency, resulting in a much clearer signal when the waveforms are averaged together over multiple trials. An example experiment for eliciting the N2pc that follows the critical principles above might proceed as follows: Subjects see an array of upright and inverted T's. One T is red, and one T is green, but the rest are black (thus fulfilling the second criterion of easy-to-find targets).
Subjects are told to attend to either the red T's or the green T's at the beginning of the experiment and report whether that letter is upright or inverted (thus fulfilling the first criterion that attention, not the stimuli themselves, varies across conditions). We should expect to see an N2pc contralateral to the side of the screen on which the attended letter appeared.
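In analyses of paradigms like the one above, the N2pc is typically quantified as the contralateral-minus-ipsilateral difference wave at posterior electrodes, averaged over roughly 200-300 ms. A minimal sketch with synthetic data; the sampling rate, electrode pair, and waveform shapes are assumptions for illustration, not a standard pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                   # Hz, hypothetical sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)           # seconds relative to stimulus onset

# Synthetic averaged ERPs at a posterior electrode pair (e.g. PO7/PO8):
# the contralateral waveform carries an extra negativity around 250 ms.
ipsi = rng.normal(0, 0.2, t.size)
contra = ipsi - 1.5 * np.exp(-((t - 0.25) / 0.04) ** 2)

# N2pc difference wave and its mean amplitude in the 200-300 ms window.
diff = contra - ipsi
window = (t >= 0.2) & (t <= 0.3)
n2pc_amplitude = diff[window].mean()       # negative value -> N2pc present
print(round(n2pc_amplitude, 2))
```

Taking the contralateral-minus-ipsilateral difference cancels activity common to both hemispheres, which is what makes the component a lateralized index of attention.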
Functional sensitivity:
Amplitude The N2pc is primarily sensitive to the directional focus of attention over time. However, research has found a variety of factors that modulate N2pc response. N2pc amplitude is sensitive to factors related to increasing demands on focused attention.
Functional sensitivity:
When non-target stimuli closely resemble the target (e.g., when targets are defined by size, but the size difference between targets and distractors is very small), they elicit an N2pc of lower amplitude than a target stimulus. When targets are defined by a conjunction of features (e.g., a blue, horizontal bar) rather than a single feature (e.g., a blue bar), they elicit a larger N2pc, which may reflect a greater demand on attention to identify the target.
Functional sensitivity:
In a dual-task situation where subjects focus on a demanding primary task while performing target detection as a secondary task, the N2pc only appears in response to detecting targets defined by conjunctions of features. Again, the greater attentional demands of conjunction-based targets relative to feature-based targets may be responsible.
N2pc amplitude increases as distractors appear closer to the target, which increases the need to focus on the target while filtering the distractors (but also see Mazza et al., 2009, who found conflicting results).
Functional sensitivity:
When subjects have to indicate where a target is located, they exhibit a larger N2pc than when they simply have to report whether or not a target is present. Elimination Certain experimental conditions can eliminate the N2pc entirely. These results have been used to argue for a spatial filtering hypothesis, which proposes that the N2pc reflects the process of ignoring task-irrelevant (i.e., non-target) stimuli.
Functional sensitivity:
Early investigations of the N2pc critically found that the component was sensitive to the presence of distractors, appearing only when distractors accompanied a target stimulus. Furthermore, N2pc amplitude increases with the number of distractors in the display. The N2pc also disappears when targets in the visual search task are defined as "any oddball object" rather than by one or more specific features. Luck and Hillyard (1994) have argued that in this case, determining whether a given object is a target requires distributing attention over multiple objects in the array (and determining the common features) rather than filtering them. Consequently, the spatial filtering process is discouraged, and the N2pc therefore does not appear.
Functional significance:
The N2pc literature agrees on a few functional characteristics of the N2pc. First, the N2pc appears whenever a person focuses attention on an object. Second, it serves as a direct measure for the direction of focused attention, either to the left or right. Finally, the N2pc is generally believed to be tied to a spatial filtering hypothesis (see above: "Eliminating the N2pc"). The last point regarding the functional significance of the N2pc, however, has been challenged. Some have contested the spatial filtering hypothesis, arguing that the N2pc reflects an enhancement of task-relevant stimulus processing rather than a suppression of irrelevant stimuli. Other work has explored further cognitive processes that could be linked to the N2pc. For instance, the classic visual search paradigm that elicits the N2pc could be broken down further into processes of shifting attention, and spatially based processing of non-target locations. When combining the visual search task with visual cues that drew attention to spatial locations in the display, experimenters found that while the N2pc may not reflect shifts of attention, it may still reflect processing of a location in space that may or may not contain a target.
**Atomic mirror**
Atomic mirror:
In physics, an atomic mirror is a device which reflects neutral atoms in a way similar to the way a conventional mirror reflects visible light. Atomic mirrors can be made of electric or magnetic fields, electromagnetic waves, or just a silicon wafer; in the last case, atoms are reflected by the attracting tails of the van der Waals attraction (see quantum reflection). Such reflection is efficient when the normal component of the wavenumber of the atoms is small or comparable to the effective depth of the attraction potential (roughly, the distance at which the potential becomes comparable to the kinetic energy of the atom). To reduce the normal component, most atomic mirrors are operated at grazing incidence. At grazing incidence, the efficiency of the quantum reflection can be enhanced by a surface covered with ridges (ridged mirror). The set of narrow ridges reduces the van der Waals attraction of atoms to the surface and enhances the reflection. Each ridge blocks part of the wavefront, causing Fresnel diffraction. Such a mirror can be interpreted in terms of the Zeno effect.
Atomic mirror:
We may assume that the atom is "absorbed" or "measured" at the ridges. Frequent measuring (narrowly spaced ridges) suppresses the transition of the particle to the half-space with absorbers, causing specular reflection. At large separation L between thin ridges, the reflectivity of the ridged mirror is determined by the dimensionless momentum p = KLθ (where K is the wavenumber of the atom and θ the grazing angle), and does not depend on the origin of the wave; therefore, it is suitable for reflection of atoms.
**Cranial nerve disease**
Cranial nerve disease:
Cranial nerve disease is an impaired functioning of one of the twelve cranial nerves. Although it could theoretically be considered a mononeuropathy, it is not considered as such under MeSH.
Cranial nerve disease:
It is possible for a disorder of more than one cranial nerve to occur at the same time, if a trauma occurs at a location where many cranial nerves run together, such as the jugular fossa. A brainstem lesion could also cause impaired functioning of multiple cranial nerves, but this condition would likely also be accompanied by distal motor impairment.
Cranial nerve disease:
A neurological examination can test the functioning of individual cranial nerves, and detect specific impairments.
Facial nerve palsy:
The facial nerve is the seventh of 12 cranial nerves. This cranial nerve controls the muscles in the face. Facial nerve palsy is more abundant in older adults than in children and is said to affect 15-40 out of 100,000 people per year. This disease comes in many forms which include congenital, infectious, traumatic, neoplastic, or idiopathic. The most common cause of this cranial nerve damage is Bell's palsy (idiopathic facial palsy), a paralysis of the facial nerve. Although Bell's palsy is more prominent in adults, it seems to be found in those younger than 20 or older than 60 years of age. Bell's palsy is thought to occur by an infection of the herpes virus which may cause demyelination and has been found in patients with facial nerve palsy. Symptoms include flattening of the forehead, sagging of the eyebrow, and difficulty closing the eye and the mouth on the side of the face that is affected. The inability to close the mouth causes problems in feeding and speech. It also causes lack of taste, lacrimation, and sialorrhea. The use of steroids can help in the treatment of Bell's palsy. If in the early stages, steroids can increase the likelihood of a full recovery. This treatment is used mainly in adults. The use of steroids in children has not been proven to work because they seem to recover completely with or without them. Children also tend to have better recovery rates than older adults. Recovery rate also depends on the cause of the facial nerve palsy (e.g. infections, perinatal injury, congenital dysplastic). If the palsy is more severe, patients should seek steroids or surgical procedures. Facial nerve palsy may be the indication of a severe condition, and when diagnosed a full clinical history and examination are recommended. Although rare, facial nerve palsy has also been found in patients with HIV seroconversion. Symptoms found include headaches (bitemporal or occipital), the inability to close the eyes or mouth, and may cause the reduction of taste.
Few cases of bilateral facial nerve palsy have been reported, and it is said to affect only 1 in every 5 million per year.
Others:
Eyes: Oculomotor nerve palsy - Oculomotor nerve (III); Fourth nerve palsy - Trochlear nerve (IV); Sixth nerve palsy - Abducens nerve (VI). Other: Trigeminal neuralgia - Trigeminal nerve (V); Facial nerve paralysis, Bell's palsy, Melkersson–Rosenthal syndrome, Central seven - Facial nerve (VII); Accessory nerve disorder - Accessory nerve (XI).
**Solar power in Alabama**
Solar power in Alabama:
Solar power in Alabama on rooftops could theoretically provide 29.8% of all electricity used in Alabama, with 20,400 MW of solar panels potentially installed on rooftops. Alabama was ranked 50th among US states for solar power in 2020, and 35th in Q1 of 2021, with 0.027% of the state's power generated by solar.
Net metering:
Offering net metering is required by federal law, but Alabama is one of only four states to not have adopted a statewide policy on net metering, which means it needs to be negotiated with the utility. IREC best practices, based on experience, recommend no limits to net metering, individual or aggregate, and perpetual rollover of kWh credits. Alabama Power has installed four types of solar panels in Birmingham that can be monitored on the Internet. The company will pay up to 4.81¢/kWh during the summer and 3.93¢/kWh in the winter for excess generation from up to 100 kW systems. Peak power rates are weekdays, 1 to 7 pm in summer and 5 to 9 am in winter. Customers choosing the Time Advantage Energy rate pay 7¢/kWh during winter peak periods and 25¢/kWh during summer peak periods. Off-peak usage is charged 5¢/kWh. Using the time advantage rate requires a time-of-use meter, and the base charge is increased by $10.50 each month.
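A quick illustration of how the Time Advantage rates quoted above add up over a summer month; the household usage split and the helper name are hypothetical examples, not utility data:

```python
# Rates from the Time Advantage Energy description above.
SUMMER_PEAK = 0.25        # $/kWh, weekdays 1-7 pm in summer
OFF_PEAK = 0.05           # $/kWh
EXTRA_BASE_CHARGE = 10.50 # added monthly charge for the time-of-use meter

def summer_month_energy_cost(peak_kwh, off_peak_kwh):
    # Energy portion of a summer monthly bill under the time-of-use rate.
    return peak_kwh * SUMMER_PEAK + off_peak_kwh * OFF_PEAK + EXTRA_BASE_CHARGE

# A hypothetical household using 900 kWh, 150 kWh of it in the peak window:
print(summer_month_energy_cost(150, 750))   # 37.5 + 37.5 + 10.5 = 85.5
```

Shifting usage out of the 1-7 pm window is what makes the rate attractive: every kWh moved off-peak saves 20¢ under these numbers.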
Solar power projects:
In 2010, one of Alabama's largest solar arrays was the 25 kW system installed at the Coastal Response Center, in Coden, Alabama. A $250,000 economic stimulus grant was used to install 156 solar panels on Anniston's Museum of Natural History, which was completed on August 24, 2011. The output of this 25.2 kW system can also be monitored online. River Bend Solar, completed in 2016, contributes 75 MW capacity to the TVA power grid, and reduces carbon emissions by 100,000 tons annually. LaFayette Solar Farm in LaFayette, completed in 2019, supplies 79.2 MW to Walmart. In 2021, Covington Electric Cooperative, which is constructing a 100 kW solar array, is the only rural electric cooperative in Alabama with a community solar program.
Solar panel manufacturing:
In 2019, LG Electronics opened a solar panel manufacturing plant in Huntsville.
**Cardiac electrophysiology**
Cardiac electrophysiology:
Cardiac electrophysiology is a branch of cardiology and basic science focusing on the electrical activities of the heart. The term is usually used in clinical context, to describe studies of such phenomena by invasive (intracardiac) catheter recording of spontaneous activity as well as of cardiac responses to programmed electrical stimulation - clinical cardiac electrophysiology. However, cardiac electrophysiology also encompasses basic research and translational research components. Specialists studying cardiac electrophysiology, either clinically or solely through research, are known as cardiac electrophysiologists.
Description:
Electrophysiological (EP) studies are performed to assess complex arrhythmias, elucidate symptoms, evaluate abnormal electrocardiograms, assess risk of developing arrhythmias in the future, and design treatment. These procedures include therapeutic methods (typically radiofrequency ablation, or cryoablation) in addition to diagnostic and prognostic procedures. Other therapeutic modalities used in this field include antiarrhythmic drug therapy and implantation of pacemakers, implantable cardioverter-defibrillators and cardiac resynchronisation therapy devices.
Electrophysiological study:
The cardiac electrophysiology (EP) study typically measures the response of myocardium to programmed electrical stimulation (PES) on specific pharmacological regimens in order to assess the likelihood that the regimen will successfully prevent potentially fatal sustained ventricular tachycardia (VT) or ventricular fibrillation (VF) in the future. Sometimes a series of EP study drug trials must be conducted to enable the cardiologist to select the one regimen for long-term treatment that best prevents or slows the development of VT or VF following PES. Such studies may also be conducted in the presence of a newly implanted or newly replaced cardiac pacemaker or ICD.
Physician specialists:
A specialist in cardiac electrophysiology is known as an electrophysiologist, or "heart electrician" in layman's terms. Cardiac electrophysiology is a subspecialty of cardiology in most countries and usually requires two or more years of EP fellowship training after a general cardiology residency. In early 2011, the Centers for Medicare and Medicaid Services promoted cardiac electrophysiology to its own specialty category in the United States. Cardiac electrophysiologists are trained to perform interventional cardiac electrophysiology studies and cardiac rhythm management device implantations.
Physician specialists:
Research cardiac electrophysiologist Cardiac electrophysiologists specialize in a sub-area of electrophysiology, which in turn is a sub-area of physiology. This specialization usually requires education at the doctoral (PhD, DSc, or MD/DO) level to become a principal investigator for research projects. The area of research is often multi-disciplinary involving chemistry, bioelectrics, biology, and biomedical engineering. The flagship tools used by cardiac electrophysiologists overlap with the toolbox of the neuroscientist including patch clamp and optical mapping.
Allied professionals:
Mapping specialists (EP techs, EP physiologists) are typically educated up to the Bachelor's or Master's level and are employed by either a cardiac electrophysiology company or department. Often international certification such as Certified Electrophysiology Specialist (CEPS) by the International Board of Heart Rhythm Examiners (IBHRE) or EHRA Certified Electrophysiology Specialist (ECES) or equivalent is required.
Subdiscipline:
Cardiac electrophysiology is a relatively young subdiscipline of cardiology and internal medicine. It was developed during the mid-1970s by Hein J. J. Wellens, professor of medicine at the University of Maastricht in the Netherlands and attending cardiologist at the Academic Hospital in Maastricht. In 1980 the first microprocessor based stimulator was developed there.
Textbook:
The definitive textbook in the field was written by the late Mark E. Josephson, former Robinette Professor of Medicine and chief of cardiology at the University of Pennsylvania School of Medicine in Philadelphia, Pennsylvania, professor of medicine at Harvard Medical School, and attending cardiologist at Beth Israel Deaconess Medical Center in Boston, Massachusetts. The most recent published edition of Clinical Cardiac Electrophysiology: Techniques and Interpretations is the 6th edition in 2020.
Professional societies:
The Heart Rhythm Society, founded in 1979, promotes education and advocacy for cardiac arrhythmia professionals (including cardiac electrophysiologists) and patients. European Heart Rhythm Association, a part of European Society of Cardiology, is active in Europe.
Certification:
Founded in 1985 as NASPExAM, the International Board of Heart Rhythm Examiners (IBHRE) offers knowledge based board exams for physicians and allied health professionals working in the field of cardiac electrophysiology and cardiac rhythm device management. European Heart Rhythm Association (EHRA) provides knowledge and practical competency based certification to physicians and allied health professionals as well as accreditation of cardiac electrophysiology training centres in Europe and neighbouring countries.
Mapping systems:
Electroanatomic mapping uses electric and magnetic fields to create three dimensional models of heart structures using specialized catheters.
Notable cardiac electrophysiologists:
Kenneth Ellenbogen, American; Richard N. Fogoros, American; Michel Haïssaguerre (born 1955), French; Mark Josephson (1943–2017), American; George Klein, Canadian; Bruce Lerman, American; John Alexander MacWilliam (1857–1937), British/Scottish; Michel Mirowski (1924–1990), Polish-Israeli-American; Eric Prystowsky, American; Amiran Revishvili (born 1956), Russian; Hein Wellens (1935–2020), Dutch
**Radical 54**
Radical 54:
Radical 54 or radical long stride (廴部) meaning "long stride" is one of the 31 Kangxi radicals (214 radicals in total) composed of three strokes.
In the Kangxi Dictionary, there are nine characters (out of 49,030) to be found under this radical.
廴 is also the 26th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China. While this radical is composed of three strokes in Traditional Chinese, it is treated as a two-stroke component in Simplified Chinese, with the two turning strokes becoming one continuous stroke.
Literature:
Fazzioli, Edoardo (1987). Chinese calligraphy : from pictograph to ideogram : the history of 214 essential Chinese/Japanese characters. calligraphy by Rebecca Hon Ko. New York: Abbeville Press. ISBN 0-89659-774-1.
Lunde, Ken (Jan 5, 2009). "Appendix J: Japanese Character Sets" (PDF). CJKV Information Processing: Chinese, Japanese, Korean & Vietnamese Computing (Second ed.). Sebastopol, Calif.: O'Reilly Media. ISBN 978-0-596-51447-1.
**Club (organization)**
Club (organization):
A club is an association of people united by a common interest or goal. A service club, for example, exists for voluntary or charitable activities. There are clubs devoted to hobbies and sports, social activities clubs, political and religious clubs, and so forth.
History:
Historically, clubs occurred in all ancient states of which detailed knowledge exists. Once people started living together in larger groups, there was need for people with a common interest to be able to associate despite having no ties of kinship. Organizations of this sort have existed for many years, as evidenced by the clubs of Ancient Greece and the associations (collegia) of Ancient Rome.
History:
Origins of the word and concept It is uncertain whether the use of the word "club" originated in its meaning of a knot of people, or from the fact that the members "clubbed" together to pay the expenses of their gatherings. The oldest English clubs were merely informal periodic gatherings of friends for the purpose of dining or drinking with one another. Thomas Occleve (in the time of Henry IV) mentions such a club called La Court de Bonne Compagnie (the Court of Good Company), of which he was a member. In 1659 John Aubrey wrote, "We now use the word clubbe for a sodality [a society, association, or fraternity of any kind] in a tavern." Of the early clubs of Shakespeare's day, the most famous was the Bread Street or Friday Street Club that met at the Mermaid Tavern on the first Friday of each month. John Selden, John Donne, John Fletcher and Francis Beaumont were among the members (although it is often asserted that William Shakespeare and Sir Walter Raleigh were members of this club, there is no documented evidence to support this claim). Another such club, founded by Ben Jonson, met at the Devil Tavern near Temple Bar, also in London.
History:
Coffee houses The word “club,” in the sense of an association to promote good-fellowship and social intercourse, became common in England at the time of Tatler and The Spectator (1709–1712). With the introduction of coffee-drinking in the middle of the 17th century, clubs entered on a more permanent phase. The coffee houses of the later Stuart period are the real originals of the modern clubhouse. The clubs of the late 17th and early 18th century type resembled their Tudor forerunners in being oftenest associations solely for conviviality or literary coteries. But many were confessedly political, e.g. The Rota, or Coffee Club (1659), a debating society for the spread of republican ideas, broken up at the Restoration in 1660, the Calves Head Club (c.1693) and the Green Ribbon Club (1675). The characteristics of all these clubs were: No permanent financial bond between the members, each man's liability ending for the time being when he had paid his “score” after the meal.
No permanent clubhouse, though each clique tended to make some particular coffee house or tavern their headquarters. These coffee-house clubs soon became hotbeds of political scandal-mongering and intriguing, and in 1675 King Charles II issued a proclamation which ran: “His Majesty hath thought fit and necessary that coffee houses be (for the future) put down and suppressed,” because “in such houses divers false, malitious and scandalous reports are devised and spread abroad to the Defamation of his Majesty’s Government and to the Disturbance of Peace and Quiet of the Realm.” So unpopular was this proclamation that it was almost instantly found necessary to withdraw it, and by Anne’s reign the coffee-house club was a feature of England’s social life. See English coffeehouses in the 17th and 18th centuries.
18th and 19th century:
The idea of the club developed in two directions. One was of a permanent institution with a fixed clubhouse. The London coffeehouse clubs, in increasing their members, absorbed the whole accommodation of the coffeehouse or tavern where they held their meetings, and this became the clubhouse, often retaining the name of the original innkeeper, e.g. White's, Brooks's, Arthur's, and Boodle's. These still exist today as the famous gentlemen's clubs.
The peripatetic lifestyle of the 18th and 19th century middle classes also drove the development of more residential clubs, which had bedrooms and other facilities. Military and naval officers, lawyers, judges, members of Parliament and government officials tended to have an irregular presence in the major cities of the Empire, particularly London, spending perhaps a few months there before moving on for a prolonged period and then returning. Especially when this presence did not coincide with the Season, a permanent establishment in the city (i.e., a house owned or rented, with the requisite staff), or the opening of a townhouse (generally shuttered outside the Season) was inconvenient or uneconomic, while hotels were rare and socially déclassé. Clubbing with a number of like-minded friends to secure a large shared house with a manager was therefore a convenient solution.
The other sort of club meets occasionally or periodically and often has no clubhouse, but exists primarily for some specific object. Such are the many purely athletic, sports and pastimes clubs, the Alpine, chess, yacht and motor clubs. Also there are literary clubs (see writing circle and book club), musical and art clubs, publishing clubs. The name of “club” has been annexed by a large group of associations which fall between the club proper and friendly societies, of a purely periodic and temporary nature, such as slate, goose and Christmas clubs, which do not need to be registered under the Friendly Societies Act.
Worldwide:
The institution of the gentleman's club has spread all over the English-speaking world. Many of those who energised the Scottish Enlightenment were members of the Poker Club in Edinburgh. In the United States clubs were first established after the War of Independence. One of the first was the Hoboken Turtle Club (1797), which still survived as of 1911. In former British Empire colonies like India and Pakistan they are known as Gymkhana.
The earliest clubs on the European continent were of a political nature. These in 1848 were repressed in Austria and Germany, and later clubs of Berlin and Vienna were mere replicas of their English prototypes. In France, where the term cercle is most usual, the Club de l'Entresol (1724-1731) was followed by the Club Politique (1782), and during the French Revolution such associations proved important political forces (see Jacobins, Feuillants, Cordeliers). Of the purely social clubs in Paris the most notable were the Jockey-Club de Paris (1833), the Cercle de l'Union, the Traveller's and the Cercle Interallié.
Types of clubs:
Buying club:
Buyer's clubs or buying clubs are clubs organized to pool members' collective buying power, enabling them to make purchases at lower prices than are generally available, or to purchase goods that might otherwise be difficult to obtain. There are many legitimate buying clubs – for example, food buying clubs – but many are unauthorized credit card billing scams, in which a customer is induced to enroll in a free trial of a buyer's club membership, and then unexpectedly billed when the trial ends.
Country or sports club:
There are two types of athletic and sports clubs: those organized for sporting participants (which include athletic clubs and country clubs), and those primarily for spectator fans of a team.
Athletic and country clubs offer one or more recreational sports facilities to their members. Such clubs may also offer social activities and facilities, and some members may join primarily to take advantage of the social opportunities. Country clubs offer a variety of recreational sports facilities to their members and are usually located in suburban or rural areas. Most country clubs have golf facilities. Swimming pools, tennis courts, polo grounds and exercise facilities are also common. Country clubs usually provide dining facilities to their members and guests, and frequently host special events like weddings. Similar clubs in urban areas are often called "athletic clubs". These clubs often feature indoor sports, such as indoor tennis, squash, futsal, basketball, volleyball, boxing, and exercise facilities.
Members of sports clubs that support a team can be sports amateurs—groups who meet to practice a sport, as for example in most cycling clubs—or professionals; football clubs consist of well-paid team members and thousands of supporters. A sports club can thus comprise participants (not necessarily competitors) or spectator fans, or both.
Some organizations exist with a mismatch between name and function. The Jockey Club is not a club for jockeys, but rather exists to regulate the sport of horseracing; the Marylebone Cricket Club was until recently the regulatory body of cricket; and so on. Sports clubs should not be confused with gyms and health clubs, which can also be for members only.
Fraternities and sororities:
Fraternities and sororities are social clubs of secondary or higher education students. Membership in these organizations is generally by invitation only.
Hobby club:
Hobbies are practiced for interest and enjoyment, rather than financial reward. Examples include science fiction clubs, ham radio, model railroading, collecting, creative and artistic pursuits, making, tinkering, sports, and adult education. Engaging in a hobby can lead to acquiring substantial skill, knowledge, and experience. However, personal fulfillment is the aim.
Personal club:
Personal clubs are similar to hobby clubs. These clubs are run by a few close friends or family members who do things they like to do together. They might even make a personal website for their club.
Professional societies:
These organizations are partly social, partly professional in nature and provide professionals with opportunities for advanced education, presentations on current research, business contacts, public advocacy for the profession and other advantages. Examples of these groups include medical associations, scientific societies, autograph clubs and bar associations. Professional societies frequently have layers of organization, with regional, national and international levels. The local chapters generally meet more often and often include advanced students unable to attend national meetings.
School club:
These are activities performed by students that fall outside the realm of classes. Such clubs may fall outside the normal curriculum of school or university education or, as in the case of subject matter clubs (e.g. student chapters of professional societies), may supplement the curriculum through informal meetings and professional mentoring.
Secret club:
A secret society is a club or an organization whose activities, events, inner functioning, or membership are concealed. The society may or may not attempt to conceal its existence. The term usually excludes covert groups, such as intelligence agencies or guerrilla warfare insurgencies, that hide their activities and memberships but maintain a public presence.
Service club:
A service club is a type of voluntary organization where members meet regularly for social outings and to perform charitable works, either by direct hands-on efforts or by raising money for other organizations.
Social activities club:
Social activities clubs are a modern combination of several other types of clubs and reflect today's more eclectic and varied society. These clubs are centered around the activities available to the club members in the city or area in which the club is located. Because the purpose of these clubs is split between general social interaction and taking part in the events themselves, clubs tend to have more single members than married ones; some clubs restrict their membership to one or the other, and some are for gay and lesbian patrons.
Membership can be limited or open to the general public, as can the events. Most clubs have a limited membership based upon specific criteria, and limit the events to members to increase the security of the members, thus creating an increased sense of camaraderie and belonging. Social activities clubs can be for profit or not for profit, and some are a mix of the two (a for-profit club with a non-profit charitable arm, for instance). The Inter-Varsity Club (IVC) is the biggest British non-profit club.
Social club:
Some social clubs are organized around competitive games, such as chess and bridge. Other clubs are designed to encourage membership of certain social classes. In the 1940s, 1950s and 1960s, social clubs were the precursor name of gangs like the infamous Hamburgs of Chicago. Latino immigrant adult and youth groups organized themselves as social clubs such as the Black Eagles, Flaming Arrows, Paragons and Young Lords. Those made up of the elite are best known as gentlemen's clubs (not to be confused with strip clubs) and country clubs (though these also have an athletic function, see above). Membership in gentlemen's clubs requires the ability to pay large fees as well as an invitation by existing members, who seek new recruits who meet personal criteria such as lifestyle, moral base, etc. Less elitist, but still in some cases exclusive, are working men's clubs. Clubs restricted to either officers or enlisted men exist on military bases.
The modern gentlemen's club is occasionally proprietary, i.e. owned by an individual or private syndicate and run on a for-profit basis, but is more frequently owned by the members, who delegate the management of its affairs to a committee. The institution first reached its highest development in London, where the district of St. James's has long been known as "Clubland".
Current London proprietary clubs include Soho House, which commenced business in 1995, and Soho's Groucho Club, which opened in 1985 as "the antidote to the traditional club." In this spirit, the club was named for Groucho Marx because of his famous remark that he would not wish to join any club that would have him as a member.
**ZFS (IBM file system project)**
zFS was an IBM research project to develop a distributed, decentralized file system. It was a follow-on to the IBM DSF (Data Sharing Facility) project to build a serverless file system.
**Pharmacokinetics simulation**
Pharmacokinetics simulation is a simulation method used in determining the safety levels of a drug during its development.
Purpose:
Pharmacokinetics simulation gives insight into drug efficacy and safety before individuals are exposed to the new drug, which can help improve the design of a clinical trial. Pharmacokinetic simulations also help in therapy planning, keeping drug levels within the therapeutic range under various physiological and pathophysiological conditions, e.g., chronic kidney disease.
Simulators:
Simcyp Simulator and GastroPlus (from Simulations Plus) are simulators that take account of individual variability.
PharmaCalc v02 and PharmaCalcCL allow users to simulate individual plasma concentration–time curves based on (published) pharmacokinetic parameters such as half-life, volume of distribution, etc.
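As a sketch of the kind of calculation such tools perform from published parameters, the one-compartment model below generates a plasma concentration–time curve from a half-life and volume of distribution. The dose and parameter values are illustrative assumptions, not data for any real drug, and real simulators additionally model absorption, clearance, and individual variability.

```python
import math

def plasma_concentration(dose_mg, vd_l, half_life_h, times_h):
    """One-compartment IV bolus model: C(t) = (dose / Vd) * exp(-k * t)."""
    k = math.log(2) / half_life_h      # elimination rate constant (1/h)
    c0 = dose_mg / vd_l                # initial concentration (mg/L)
    return [c0 * math.exp(-k * t) for t in times_h]

# Hypothetical drug: 500 mg IV dose, Vd = 40 L, half-life = 6 h
curve = plasma_concentration(500, 40, 6, [0, 6, 12, 24])
# Concentration halves every half-life: 12.5, 6.25, 3.125, ... mg/L
```

Plotting such a curve against the therapeutic range is what supports the therapy-planning use described above.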
**Osmotic concentration**
Osmotic concentration, formerly known as osmolarity, is the measure of solute concentration, defined as the number of osmoles (Osm) of solute per litre (L) of solution (osmol/L or Osm/L). The osmolarity of a solution is usually expressed as Osm/L (pronounced "osmolar"), in the same way that the molarity of a solution is expressed as "M" (pronounced "molar"). Whereas molarity measures the number of moles of solute per unit volume of solution, osmolarity measures the number of osmoles of solute particles per unit volume of solution. This value allows the measurement of the osmotic pressure of a solution and the determination of how the solvent will diffuse across a semipermeable membrane (osmosis) separating two solutions of different osmotic concentration.
Unit:
The unit of osmotic concentration is the osmole. This is a non-SI unit of measurement that defines the number of moles of solute that contribute to the osmotic pressure of a solution. A milliosmole (mOsm) is 1/1,000 of an osmole. A microosmole (μOsm) (also spelled micro-osmole) is 1/1,000,000 of an osmole.
Types of solutes:
Osmolarity is distinct from molarity because it measures osmoles of solute particles rather than moles of solute. The distinction arises because some compounds can dissociate in solution, whereas others cannot. Ionic compounds, such as salts, can dissociate in solution into their constituent ions, so there is not a one-to-one relationship between the molarity and the osmolarity of a solution. For example, sodium chloride (NaCl) dissociates into Na+ and Cl− ions. Thus, for every 1 mole of NaCl in solution, there are 2 osmoles of solute particles (i.e., a 1 mol/L NaCl solution is a 2 osmol/L NaCl solution). Both sodium and chloride ions affect the osmotic pressure of the solution. Another example is magnesium chloride (MgCl2), which dissociates into Mg2+ and 2 Cl− ions. For every 1 mole of MgCl2 in solution, there are 3 osmoles of solute particles.
Nonionic compounds do not dissociate, and form only 1 osmole of solute per 1 mole of solute. For example, a 1 mol/L solution of glucose is 1 osmol/L. Multiple compounds may contribute to the osmolarity of a solution. For example, a 3 Osm solution might consist of: 3 moles glucose, or 1.5 moles NaCl, or 1 mole glucose + 1 mole NaCl, or 2 moles glucose + 0.5 mole NaCl, or any other such combination.
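The combinations above are easy to check mechanically. A minimal sketch, assuming ideal, complete dissociation:

```python
# Osmoles contributed per mole: glucose stays whole (n = 1),
# NaCl dissociates into Na+ and Cl- (n = 2). Ideal dissociation assumed.
def total_osmoles(moles_glucose, moles_nacl):
    return moles_glucose * 1 + moles_nacl * 2

# Every combination listed in the text totals 3 osmoles:
combos = [(3, 0), (0, 1.5), (1, 1), (2, 0.5)]
print(all(total_osmoles(g, n) == 3 for g, n in combos))  # True
```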
Definition:
The osmolarity of a solution, given in osmoles per liter (osmol/L), is calculated from the following expression:

osmol/L = Σi φi ni Ci

where φ is the osmotic coefficient, which accounts for the degree of non-ideality of the solution. In the simplest case it is the degree of dissociation of the solute. Then φ is between 0 and 1, where 1 indicates 100% dissociation. However, φ can also be larger than 1 (e.g. for sucrose). For salts, electrostatic effects cause φ to be smaller than 1 even if 100% dissociation occurs (see Debye–Hückel equation); n is the number of particles (e.g. ions) into which a molecule dissociates (for example, glucose has n of 1, while NaCl has n of 2); C is the molar concentration of the solute; and the index i represents the identity of a particular solute. Osmolarity can be measured using an osmometer, which measures colligative properties such as freezing-point depression, vapor pressure, or boiling-point elevation.
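The φ·n·C sum can be sketched directly in code; the saline concentration below is illustrative, and a constant osmotic coefficient is an idealization (real coefficients depend on concentration):

```python
def osmolarity(solutes):
    """Osmotic concentration in osmol/L as the sum of phi * n * C.

    solutes: iterable of (phi, n, C) tuples, where phi is the osmotic
    coefficient, n the particles per molecule, and C the molarity (mol/L).
    """
    return sum(phi * n * c for phi, n, c in solutes)

# Idealized saline, 0.154 mol/L NaCl with phi = 1 and n = 2:
print(osmolarity([(1.0, 2, 0.154)]))  # 0.308 (osmol/L)
```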
Osmolarity vs. tonicity:
Osmolarity and tonicity are related but distinct concepts. Thus, the terms ending in -osmotic (isosmotic, hyperosmotic, hyposmotic) are not synonymous with the terms ending in -tonic (isotonic, hypertonic, hypotonic). The terms are related in that they both compare the solute concentrations of two solutions separated by a membrane. The terms are different because osmolarity takes into account the total concentration of penetrating solutes and non-penetrating solutes, whereas tonicity takes into account the total concentration of non-freely penetrating solutes only. Penetrating solutes can diffuse through the cell membrane, causing momentary changes in cell volume as the solutes "pull" water molecules with them. Non-penetrating solutes cannot cross the cell membrane; therefore, the movement of water across the cell membrane (i.e., osmosis) must occur for the solutions to reach equilibrium.
A solution can be both hyperosmotic and isotonic. For example, the intracellular and extracellular fluids can be hyperosmotic, but isotonic – if the total concentration of solutes in one compartment is different from that of the other, but one of the ions can cross the membrane (in other words, a penetrating solute), drawing water with it, thus causing no net change in solution volume.
Plasma osmolarity vs. osmolality:
Plasma osmolarity can be calculated from plasma osmolality by the following equation:

osmolarity = osmolality × (ρsol − ca)

where: ρsol is the density of the solution in g/ml, which is 1.025 g/ml for blood plasma.
ca is the (anhydrous) solute concentration in g/ml – not to be confused with the density of dried plasma. According to IUPAC, osmolality is the quotient of the negative natural logarithm of the rational activity of water and the molar mass of water, whereas osmolarity is the product of the osmolality and the mass density of water (also known as osmotic concentration). In simpler terms, osmolality is an expression of solute osmotic concentration per mass of solvent, whereas osmolarity is per volume of solution (thus the conversion by multiplying with the mass density of solvent in solution, in kg of solvent per litre of solution).
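The osmolality-to-osmolarity conversion can be sketched as follows, taking the relation osmolarity = osmolality × (ρsol − ca) with the plasma density of 1.025 g/ml given above; the osmolality and solute-concentration values are illustrative assumptions, not reference values:

```python
def plasma_osmolarity(osmolality, rho_sol=1.025, c_a=0.07):
    """Convert plasma osmolality (osmol/kg of water) to osmolarity (osmol/L).

    rho_sol: solution density in g/ml (1.025 for blood plasma).
    c_a:     anhydrous solute concentration in g/ml; 0.07 here is an
             illustrative assumption, not a reference value.
    """
    return osmolality * (rho_sol - c_a)

# An osmolality of 0.285 osmol/kg maps to a slightly lower osmolarity:
print(round(plasma_osmolarity(0.285), 3))  # 0.272 (osmol/L)
```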
Osmolality itself is given by the analogous sum over molalities:

osmolality = Σi φi ni mi

where mi is the molality of component i.
Plasma osmolarity/osmolality is important for keeping proper electrolytic balance in the blood stream. Improper balance can lead to dehydration, alkalosis, acidosis or other life-threatening changes. Antidiuretic hormone (vasopressin) is partly responsible for this process by controlling the amount of water the body retains from the kidney when filtering the blood stream.
**Scope mount**
Scope mounts are used to attach telescopic sights or other types of sights to firearms. The scope sight itself is usually made for only one of two main types of mounts, which can be classified as scopes for ring mounts, for example a 30 mm tube, or scopes for rail mounts, such as the Zeiss rail. Words such as mounts and bases are used somewhat loosely, and can refer to several different parts which are either used together or in place of each other as ways to mount optical sights to firearms.
When it comes to the interface on the firearm itself, the Picatinny rail is one of the most widespread standards for new firearms as of 2020. While most scopes are made to be mounted either with a ring mount or a rail mount, some sights have an integral mounting mechanism allowing them to be attached directly to the firearm, for example an integrated Picatinny mount. There are many proprietary and brand-specific types of mounts that can either be used with Picatinny rails or as alternatives to Picatinny (see the section on Link between scope and firearm). Scope mounts may be offered by firearm and scope manufacturers, or on the aftermarket.
Scopes for rail mounts:
Zeiss rail:
Among scopes for rail mounts, the 22.5 degree V-shaped Zeiss rail is the most prevalent standard. It was introduced in 1990. After the patent expired in 2008, compatible scopes have been offered by manufacturers such as Blaser, Leica, Minox, Meopta, Nikon, Noblex (formerly Docter), Schmidt & Bender and Steiner. It has therefore, in some sense, become the de facto industry standard for scope mounting rails. The system has so far seen most use on the European high end market.
Swarovski SR rail:
The Swarovski SR rail (patented in 2002, introduced in 2005) has a flat rail with many "teeth" acting as recoil lugs, and is only offered on scopes from Swarovski and its subsidiary Kahles. It differs from the Zeiss rail in that it is neither stepless nor self-centering.
S&B Convex rail:
A former competing standard was the half-circle shaped Schmidt & Bender Convex rail, also introduced in 2005. After a few years, Schmidt & Bender changed to the Zeiss rail standard. In contrast to the Zeiss and Swarovski systems, the S&B Convex rail made it possible to add a cant to the scope when mounting, so that the reticle does not sit horizontal to the ground.
70 degree prism rail:
There is an older European system with an upside-down V-shape (70 degrees). This system has little widespread use today. Its advantage was that it was at one time offered by most European scope manufacturers; its disadvantage was that the rail had to be drilled for a screw each time the eye relief was to be adjusted. All newer standards for rail mounts have addressed this issue.
Scopes for ring mounts:
Ring mounts usually consist of a base attached to the firearm and rings (usually two) attached to the sight. The rings are usually made of steel or aluminum. Common diameters of ring mounts are 25.4 mm (1 inch), 26 mm, 30 mm and 34 mm. There are big differences in the strength and sustained precision of different assemblies. With weak cartridges such as .22 LR in light-use scenarios, a pair of skinny aluminium rings may work well, while firearms with very powerful recoil, often combined with a heavy sight, may require steel rings or thicker aluminum rings with recoil lugs.
Sizes:
Scopes for ring mounts are available in many different sizes. The most common ones are:
1 inch (25.4 mm)
30 mm
34 mm
Some less common standards are:
3⁄4 inch (19 mm)
7⁄8 inch (22 mm)
26 mm – some older European scopes
35 mm – some IOR, Vortex and Leupold models
36 mm – some Zeiss and Hensoldt models
40 mm – some IOR models and Swarovski dS
Lapping:
In order for a ring assembly to grip evenly, it is important that the scope rings are circular and coaxial with the scope tube. On ring mounts that grip unevenly, the rings can be lapped to prevent uneven pressure when mounting. On scopes made for ring mounts, it is not uncommon to get ring marks when mounting the rings.
Ring inserts:
There are insert rings on the market which allow mounting a scope inside a ring mount of a larger diameter. An example could be mounting a scope with a 1-inch (25.4 mm) tube in a 30 mm mount using a plastic insert.
There are also special ring mounts on the market with circularly shaped ring inserts made to provide stress-free mounting without lapping, with Burris Signature Rings and Sako Optilock Rings as two well-known examples. Burris Signature was introduced in 1995; a patent was applied for in 1994 and granted in 1995. Sako Optilock has been sold since some time in the early 2000s. The trade name Optilock was registered in the US in December 1997, and has been marketed in the US since December 2001. In 2000, Sako was sold to Beretta Holding. In 2002, Burris was also sold to Beretta Holding, and thus Burris and Sako came under the same ownership. Burris' original patent for the rings with the circular inserts was considered to have expired in 2014, and as of 2020 is listed as "definitely expired". In 2015, XTR Signature Rings were launched as a further development of the Burris Signature series. The XTR variant differs in that it has two circular cavities per ring assembly instead of one. A patent for the XTR Signature Rings was applied for in 2016, and was granted to Burris in 2019.
Mounts for compact sights:
Many reflex sights (e.g. red dot sights) and holographic sights have proprietary mounts.
Aimpoint Acro rail: A dovetail rail for attaching a sight via a clamping mechanism, and with a 4 mm wide straight recoil lug groove. The dovetail is approximately 16.5 mm wide, and is radiused so as not to have any sharp edges. The mount is compact enough to be used on pistols, as well as rifles and shotguns. Launched in 2019 together with the sights Aimpoint Acro P-1 and C-1. Also used on Aimpoint Acro C-2 and P-2, as well as Steiner MPS.
Aimpoint Micro standard: First introduced in 2007 on the small tube sight variants of Aimpoint, but today used by other manufacturers as well. Popular on rifles and shotguns, but not on handguns due to its size. The mounting standard uses four screws and one cross slot acting as a recoil lug. Used on red dot sights such as Aimpoint Micro, Vortex Crossfire, Sig Sauer Romeo 4 & 5, and some Holosun Paralow variants.
Aimpoint CompM4 mount: Launched in 2007 with the Aimpoint CompM4 sight. The sight is attached to the mount via two M5 screws from the underside, and the mount has a transverse groove acting as a recoil lug. The Aimpoint Comp line was launched in 1993. The predecessor of the CompM4, CompM2, had a 30 mm ring mount and was introduced in the American military in 2000. Some manufacturers have copied the M4 mount system, but it has mainly been used by Aimpoint.
C-More standard: A mounting standard introduced by C-More Sights. Uses two screws and two circular notches acting as recoil lugs. Used on red dot sights such as Delta Optical MiniDot, Kahles Helia, Vortex Razor and Sig Sauer Romeo3.
Docter/Noblex standard: The mounting pattern which through the 2010s was used by the largest number of manufacturers, perhaps due to the wide range of aftermarket mounts available. The mounting standard uses two screws and four circular notches acting as recoil lugs. Used on red dot sights such as Docter/Noblex sights, Burris Fastfire, Vortex Viper, Leica Tempus, etc.
Shield standard: A proprietary standard used by Shield Sights. Similar in shape to the Noblex/Docter footprint, but with other dimensions. In addition to the Shield red dot sights, it is also used on the Leupold Delta Point Pro.
Trijicon RMR/SRO-standard: Has two screw holes, and two shallow circular notches acting as recoil lugs. Mainly used on the Trijicon RMR and SRO red dot sights, as well as on some Holosun sights.
Other: Some notable red dot sights which have unique footprints not compatible with any of the above are the Sig Sauer Romeo 1, Holosun Paralow 403A, Holosun 509T and Swampfox Kraken MRDS. There also exist reflex sights for ring mounts (e.g. the Aimpoint CompM2 with a 30 mm tube) or with an integrated Picatinny base.
Link between scope and firearm:
Bases:
A base usually means an interconnecting part between the scope and the firearm. For example, a base may have a Picatinny attachment on the underside, while the upper side may have either a ring (e.g. 30 mm) or a rail mounting (e.g. Zeiss rail). On some assemblies, the upper and lower parts of the base are separate parts that must be screwed together and fastened to a specified torque. A base can thus sometimes constitute a complete scope mount assembly, but the term most often refers to the lower part of a two-part scope mount assembly.
The firearm interface which sits on the firearm and to which the scope mount is attached is often called the base or rail.
Some types of bases are:
Standard mounts:
Picatinny rail: Standardized slot distances.
Weaver rail: Varying width between the slots.
Proprietary and brand-specific mounts:
Claw mount. Several types, for example Suhl Claw Mounts, Ziegler ZP mount, and others.
Pivot mount. Several types, for example EAW, MAKlick, Steyr Luxus, and others.
Aimpoint Micro, also used by other red dot manufacturers. (Not compatible with Aimpoint Comp or the Aimpoint ACRO mounting standards. See Red dot sight#Mounting types for more red dot mounting standards).
Blaser saddle mount
Contessa 12 mm "Euro rail" mount
Browning X-Lock
Double dovetail, which is rotated and tapped into place. Several types, for example the Leupold Dual Dovetail.
Mauser M03 Double Square Mount
Picatinny-against-Picatinny (Burris Eliminator)
Pulsar type rail mount. Has some visual similarities with the Zeiss rail, but is incompatible due to a wider base and steeper angle.
Redfield type with windage-adjustable mount, also known as Redfield Standard Junior. Similar concepts are made by other manufacturers, e.g. "Leupold standard", "Burris TU/SU". Also manufactured by Weaver. Specifications can vary between manufacturers.
Ruger integral type (used on the Ruger No. 1, M77, Gunsite Scout, the Ranch series of the Mini-14 and Mini-30, Deerfield Carbine, Model 96 (.44 Magnum only) and PC Carbine)
Sako Optilock, either with rings separate from the bases, or with rings as part of the bases. Bases come in various variants to fit either the Sako tapered dovetail rail (available for three different action lengths), the Tikka straight dovetail (11 mm or 17 mm), Weaver or Picatinny.
Sako tapered dovetail rail (used on the Sako models 75, 85, L461, L579, S491, M591, L61R, L691, M995 and TRG-S)
Sauer ISI mount (Sauer 303, and a very few editions of the Sauer 202)
Sauer SUM mount (Sauer 404)
Schultz & Larsen integral
Slide & Lock type
"STANAG" Claw Mount, used on the FN FAL, HK G3, HK33, G3SG/1 and MP5. Most STANAG bases must be used with corresponding STANAG rings, but there are also STANAG bases for scopes with rails.
Dovetail rail (for example 11 mm, 17 mm or 19 mm). The flank angle varies, and dovetail rail mounts may therefore be regarded as non-standardized, even for a given width.
Trijicon ACOG/VCOG rail
Screw pattern on bases:
On receivers without an integrated attachment for mounting a scope (for example an integrated Picatinny rail), the base is usually screwed on as a separate part. Such mounts are often model-specific to the firearm, and depend on factors such as the radius of the receiver bridge, the type of screw and the distance between the screw holes. The fastening screws are often metric M3.5×0.6 mm, or US #6-48 (⌀ 3.5 mm, 0.53 mm pitch) or #8-40 (⌀ 4.2 mm, 0.64 mm pitch).
Many European assemblies use M3.5 screws, such as Sako Optilock, Recknagel and original CZ rings. Since #6-48 and M3.5×0.6 have near-identical diameters and almost equal pitch, there is a potential for confusion: if mixed up, the wrong screw will enter the threads, but will gradually become tighter to turn until the thread is destroyed. In case of damage, the hole must often be drilled and re-threaded, and M4×0.7 or #8-40 may then be relevant alternatives.
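The near-collision of the two thread standards is clear from the nominal numbers (simple unit-conversion arithmetic only; always verify against the actual hardware):

```python
# #6-48: 0.138 in nominal major diameter, 48 threads per inch.
MM_PER_INCH = 25.4
d_6_48 = 0.138 * MM_PER_INCH        # major diameter in mm
pitch_6_48 = MM_PER_INCH / 48       # pitch in mm
d_m35, pitch_m35 = 3.5, 0.6         # M3.5x0.6 nominal dimensions

print(round(d_6_48, 2), round(pitch_6_48, 2))   # 3.51 0.53
print(round(abs(pitch_m35 - pitch_6_48), 3))    # 0.071 mm pitch difference
```

A 0.01 mm diameter difference and a 0.07 mm pitch difference is why the wrong screw starts easily and only binds after a few turns.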
Remington 700 pattern:
The Remington 700 Short Action (SA) scope base attachment pattern is particularly widespread, and is for example used on models such as:
Remington Model 722, 40x, 78, 740, 742, 760, 710, 721, 722 and 725
Mauser M1996 straight pull and Roesser Titan 16
Mauser SR-97
Sauer 100, Sauer 101, Mauser M18 (not the M12)
Bergara B14 LA
Haenel Jäger 10
Sabatti Rover LA
The Remington 700 Long Action (LA) naturally has a longer distance between the front and rear screw holes, and therefore continuous (one-piece) scope mount assemblies for the 700 LA do not fit on the 700 SA or the above-mentioned firearms. However, two-piece scope mounts generally interchange among the mentioned models.
List of common screw patterns:
Bases with a rounded bottom, for mounting on round receiver bridges, should ideally have a slightly smaller radius than the receiver in order to provide two points of contact and a stable attachment. Conversely, a slightly too large radius on the mount will result in just one point of contact and a less stable attachment.
In the table below, the radius refers to the curvature of the mounting surface on the receiver bridge. The base is often attached with two screws on the front receiver bridge and two screws on the rear receiver bridge, but sometimes with several more screws. The hole distances are measured center-to-center. Some common hole distances are 12.7, 15.37 and 21.84 mm (0.500, 0.605 and 0.860 in), respectively. The two front screws are referred to in the list below as screws 1 and 2, and the front hole spacing is thus referred to as «distance 1-2». In the same way, the rear hole distance is called «distance 3-4». The distance between these pairs is largely determined by the receiver length, and is stated here as «distance 2-3».
Other features:
Quick release:
Quick release (QR) can refer to several different variants of scope mounts which can be mounted and removed quickly without tools.
Tilt:
In some cases, it may be relevant to add extra inclination to the scope in order to shoot at longer (or shorter) distances. For example, this is popular in long range shooting, where it is common to use a tilt of 6 mrad (approximately 20 MOA). Extra tilt can be achieved in several ways, for example with a tilted Picatinny rail (e.g. 6 mrad tilt), with tilted bases or rings (e.g. 6 mrad tilt), or with special insert rings (e.g. Burris Pos-Align).
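As a sanity check on these figures, the conversion between milliradians and MOA, and the extra elevation a tilted base provides, can be sketched as follows (the function names are illustrative):

```python
import math

def mrad_to_moa(mrad: float) -> float:
    """Convert milliradians to minutes of angle (MOA)."""
    return math.degrees(mrad / 1000.0) * 60.0

def extra_elevation_cm(tilt_mrad: float, distance_m: float) -> float:
    """Elevation gained at the target from a tilted base.
    By definition, 1 mrad subtends 10 cm per 100 m."""
    return tilt_mrad * distance_m / 10.0

print(f"6 mrad = {mrad_to_moa(6):.1f} MOA")                        # ≈ 20.6 MOA
print(f"6 mrad at 1000 m = {extra_elevation_cm(6, 1000):.0f} cm")  # 600 cm
```

This is why a 6 mrad base is loosely labelled "20 MOA": the exact value is about 20.6 MOA.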
Scope height:
The height of the scope sight can be important for cheek support (often called cheek weld) in order to achieve correct eye placement, as well as for calculating ballistics (e.g. a ballistic table). The latter is particularly relevant at very close ranges (e.g. 15 meters [49 feet]), while at longer distances, such as in long range shooting, the scope height has less impact on the ballistic calculations.
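The close-range effect of sight height can be illustrated with a simplified straight-line model. The sketch below ignores bullet drop, which is a reasonable approximation only at short range, and the numbers are purely illustrative:

```python
def line_of_sight_offset_cm(sight_height_cm: float,
                            zero_m: float,
                            range_m: float) -> float:
    """Approximate bullet offset relative to the sightline, ignoring drop.
    At the muzzle the bore sits sight_height_cm below the sightline and
    converges linearly toward the zero distance."""
    return -sight_height_cm * (1.0 - range_m / zero_m) + 0.0  # + 0.0 avoids -0.0

# 5 cm sight height, zeroed at 100 m: at 15 m the bullet still strikes
# roughly 4 cm low, while at the zero distance the offset vanishes.
for r in (15, 50, 100):
    print(f"{r:4d} m: {line_of_sight_offset_cm(5.0, 100.0, r):+.1f} cm")
```

This is why the text notes that sight height dominates at very close range: near the muzzle the offset is nearly the full sight height, while far beyond the zero the drop of the bullet itself dwarfs it.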
The height of a scope sight can be measured in several ways. For ballistic calculations, it is generally measured from the center of the bore axis to the center of the scope sight (the sightline). For cheek support, several methods are used: on firearms with a Picatinny rail, the height is measured from the top of the Picatinny rail; on most other types of bases it is common to measure from the top radius of the receiver bridge.
When the bottom measuring point is determined, the height is then measured up to either the optical center or, on scopes for ring mounts, the bottom of the scope tube. The difference between these two measuring methods is the distance from the optical center to the bottom of the scope tube, which usually corresponds to half of the tube diameter (e.g. 15 mm on a scope with a 30 mm tube). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
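The half-diameter rule above converts directly between the two measuring conventions (a trivial sketch; the function name is illustrative):

```python
def height_to_optical_center_mm(height_to_tube_bottom_mm: float,
                                tube_diameter_mm: float) -> float:
    """The optical center sits half a tube diameter above the tube bottom."""
    return height_to_tube_bottom_mm + tube_diameter_mm / 2.0

# A 30 mm tube whose bottom sits 20 mm above the measuring point:
print(height_to_optical_center_mm(20.0, 30.0))  # 35.0
```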
**Soyasapogenol glucuronosyltransferase**
Soyasapogenol glucuronosyltransferase:
Soyasapogenol glucuronosyltransferase (EC 2.4.1.262, UGASGT) is an enzyme with systematic name UDP-D-glucuronate:soyasapogenol 3-O-D-glucuronosyltransferase. This enzyme catalyses the following chemical reaction:
UDP-glucuronate + soyasapogenol B ⇌ UDP + soyasapogenol B 3-O-D-glucuronide
This enzyme requires a divalent ion: Mg2+, Mn2+ or Ca2+.
**Electronic cash**
Electronic cash:
Electronic cash was, until 2007, the debit card system of the German Banking Industry Committee, the association that represents the top German financial interest groups. Usually paired with a transaction account or current account, cards with an Electronic Cash logo were only handed out by proper credit institutions. An electronic card payment was generally made by the card owner entering their PIN (Personal Identification Number) at a so-called EFT-POS-terminal (Electronic-Funds-Transfer-Terminal). The name "EC" originally comes from the unified European checking system Eurocheque. Comparable debit card systems are Maestro and Visa Electron. Banks and credit institutions who issued these cards often paired EC debit cards with Maestro functionality. These combined cards, recognizable by an additional Maestro logo, were referred to as "EC/Maestro cards".
Providers:
All of Germany's providers registered with the Central Credit Committee are connected in the working group Arbeitskreis der electronic cash-Netzbetreiber. According to the Federal Cartel Office of Germany, the following providers have considerable market shares:
Ingenico Payment Services GmbH (before 2014 – easycash), Ratingen, with a market share of 40% (as recorded in 2007)
TeleCash GmbH & Co. KG, Stuttgart, with a market share of more than 20%
B+S Card Service GmbH, Frankfurt am Main, with a market share of 10 to 15%
WEAT Electronic Datenservice GmbH, Düsseldorf, with a market share of less than 10%
montrada GmbH, Bad Vilbel, with a market share of less than 10% (as recorded in 2006); in 2010 they claimed to be Germany's third most important provider
InterCard AG, Taufkirchen b. München, with a market share of less than 10%
In 2006, the following companies had market shares of less than 3% each: DVB Processing, CardProcess, Tyco/ADT, Bank-Verlag, CardTech, CCV Allcash, EKS, Alphyra, Experian, Paycom, Lavego, Telekurs.
In 2010, only CardTech and Lavego remain from the 2006 list (as well as the six top dogs), with AGES, BCB Processing, CardProcess, Deutsche Bahn, Deutsche BP, Douglas Informatik & Service, Elavon, ESSO Deutschland, ICP International Cash Processing GmbH, Postbank, and Shell also offering services now.
Acceptance marks:
Currently there are two valid acceptance marks for electronic cash: the electronic cash PIN-Pad and girocard pictograms. The Technical attachment to the eligibility requirements for participation in the electronic cash system of the German credit services sector (retailing requirements) includes the retailer's obligation to accept both of these acceptance marks at newly set up points of sale for the time being. Furthermore, the acceptance marks are printed on the debit cards of German financial institutions.
The trademarks on these two acceptance marks are held for the Central Credit Committee by the EURO Kartensysteme GmbH.
For a transitional period another pictogram, the ec electronic cash pictogram, is still to be found as an acceptance mark on debit cards issued by the German credit services sector and on POS terminals. This mark was used during the transition from Eurocheque (payment via certified cheque) to payment via ec-card (card based payment with PIN). After the abolition of the Eurocheque, the allocation of ec-cards by the German credit services sector was suspended and the trademarks for Eurocheque were sold to MasterCard.
The German banking sector no longer uses the ec electronic cash sign as an official acceptance mark for electronic cash. Instead, newly issued debit cards show the two current acceptance marks described above. However, the old ec electronic cash sign can still be found on some debit cards in circulation. These cards, which were issued before the new pictograms were introduced, remain valid, but will gradually be replaced by the new cards as they expire. Newly installed electronic cash POS terminals also bear the new pictograms.
Hardware and software:
A card terminal, also called EFT-POS terminal, consists of hardware and software components. The main hardware components are the security module, the PIN pad, the printer, the display, the magnetic card reader, the chip-card reader, the communication module and the power supply.
The software mainly consists of the operating system, the communication software, the software of the security module and various software modules for OPT (Online-Personalization of Terminals), EMV as well as additional applications such as prepayment, customer loyalty systems and remote administration. The most important element is the so-called security module, without which the terminal can only be used for electronic direct debit (EDD) transactions.
All card terminals working with the electronic cash system have to be certified by the ZKA (the German Central Credit Committee) in order to take part in cashless payment transactions. Terminals working exclusively with EDD do not require a ZKA certificate. Operating a card terminal requires a provider contract with the network operator. The data collected by the terminal is processed by the provider. While the terminal is in use, the user (for example, the retailer) can contact the service provider via a hotline, is guaranteed on-site technical support by a technician, and has a contact person who helps with questions about the account, transaction control, managing the contract, etc.
Chip card vs magnetic stripe card:
Most ec-cards are equipped with a magnetic stripe. This magnetic stripe is read-only and thus only contains static information. In addition, since the year 2000, more and more banks have started to add the EMV chip to newly issued cards. By 2008, 70% of the cards issued had that chip. The chip is capable of processing data like a small computer and can respond to requests without the entire contents being read. In contrast to magnetic stripes, the chips cannot be copied easily. To maintain backward compatibility, especially with the Maestro card, which is most often integrated, most cards are still equipped with magnetic stripes. However, the chip, as the more secure option, is usually chosen wherever both means of communication are technically possible.
The magnetic stripe on a card has three tracks. Until 30 September 2009, track 3 of the magnetic stripe was read for payments in Germany. Since then, the international standard track 2 is read.
Payment authorization:
Electronic cash with a magnetic stripe card:
Paying at a POS terminal (point of sale) works as follows: online authorization validates the card against the list of blocked account numbers and checks the given PIN. Next, it verifies whether the amount due is covered by the account balance (balance plus overdraft facility minus pending debits). Payment is rejected if any of these criteria are not met. The authorization, as well as the validation of sufficient funds and the daily limit, is carried out by the headquarters of the institution that issued the card.
Electronic cash with chip (offline):
The general procedure for electronic cash payment using a chip is as follows: The amount is entered.
The card is requested, and is read with the help of the chip reader.
The security module is activated, and requests the PIN.
The accuracy of the PIN is checked in the chip. If the PIN is entered correctly, the wrong-entry counter is reset to zero. If the PIN is incorrect, the wrong-entry counter increases by one, and after three incorrect entries the bank can block the card. The bank can unblock the card with the help of special bank terminals (BSFT).
The request for payment is sent to the card chip. If there is enough money and/or credit on the card, the amount will be deducted and the credit limit updated on the chip. Go to step 11.
The communications module establishes the connection to the provider and logs the data exchange.
Data exchanges are carried out via the communications link and plausibility checks.
Via the online connection the bank verifies that the card is not on the blacklist, and that the amount requested is not more than the available amount.
If one of these criteria is not met, payment will be rejected.
A payment approval (authorization) is transmitted to the chip and stored there.
The following information may, for example, be saved: "Further payments to the total of 500 euros before the end of the month are allowed." The communication module logs off at the provider and terminates the connection.
The printer creates a record of payment or rejection, which is also shown on the screen. The confirmation of payment guarantees the retailer payment (if submitted on time). Steps 3 to 6 are not applicable if the credit limit has not been reached, thus resulting in no transaction costs. Additionally, the payment process is often accelerated because no online connection needs to be established. The bank thereby grants the customer additional credit.
Example:
You make a first withdrawal of 30 euros. The terminal sends a request to the bank and subsequently saves the payment authorisation. Further withdrawals up to a total amount of 500 euros are possible until the end of the month.
In a nearby shop, you pay another 70 euros using electronic cash. Another request to the bank is unnecessary as the payment permission is already stored on the chip. A credit line of 430 euros is now left on the chip.
The next day of the same month you want to pay 419 euros using electronic cash. Again, a request to the bank is unnecessary since the payment permission is already on the chip. A credit line of 11 euros is now left on the card.
The last day of this month you want to make a payment of another 12 euros in another shop. The available credit on the chip is now too low. A connection to the bank is established. The bank states that 12 euros are immediately available and that the credit line is being raised by another 500 euros until the end of the next month.
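The offline credit logic in this walkthrough can be modelled as a toy state machine. The sketch below is an illustration of the behaviour described in the text, not the actual ZKA chip protocol; the class and attribute names and the fixed 500-euro allowance are assumptions for the example.

```python
# Toy model of the offline chip payment flow described above.
class ChipCard:
    def __init__(self) -> None:
        self.offline_credit = 0.0  # credit line currently stored on the chip
        self.pin_errors = 0
        self.blocked = False

    def enter_pin(self, correct: bool) -> bool:
        """Mimics the wrong-entry counter: reset on success, block after 3."""
        if self.blocked:
            return False
        if correct:
            self.pin_errors = 0
            return True
        self.pin_errors += 1
        if self.pin_errors >= 3:
            self.blocked = True  # only a special bank terminal (BSFT) unblocks
        return False

    def pay(self, amount: float) -> str:
        """Pay offline against the stored credit line if possible,
        otherwise go online and let the bank renew the allowance."""
        if amount <= self.offline_credit:
            self.offline_credit -= amount  # no bank connection needed
            return "offline"
        self.offline_credit = 500.0  # bank authorises and renews the limit
        return "online"

# Replaying the walkthrough from the text:
card = ChipCard()
print(card.pay(30))   # online  (chip credit set to 500)
print(card.pay(70))   # offline (430 left on the chip)
print(card.pay(419))  # offline (11 left on the chip)
print(card.pay(12))   # online  (credit line renewed)
```

Running the four payments prints online, offline, offline, online, matching the example: only the first and last transactions require a bank connection.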
Costs:
The charge for an electronic cash transaction depends on the amount of the payment: 0.3% of the amount, with a minimum of 8 cents. In the oil industry the basic charge is 0.2% of the amount, with a minimum of 4 cents. Depending on the provider, further charges, e.g. for technical deployment, may be incurred.
According to retailers' terms and conditions, shops have to accept electronic cash payments on the same conditions and at the same prices as with cash. Thus, they have to pay the charges and are not allowed to set a minimum sales amount.
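The fee schedule can be expressed directly (the function name is illustrative and the figures are those quoted in the text):

```python
def electronic_cash_fee(amount_eur: float, oil_industry: bool = False) -> float:
    """Merchant fee per electronic cash transaction: 0.3% with a minimum
    of 8 cents, or 0.2% with a minimum of 4 cents in the oil industry."""
    rate, minimum = (0.002, 0.04) if oil_industry else (0.003, 0.08)
    return round(max(amount_eur * rate, minimum), 2)  # rounded to cents

print(electronic_cash_fee(10.00))                      # 0.08 (minimum applies)
print(electronic_cash_fee(100.00))                     # 0.3
print(electronic_cash_fee(100.00, oil_industry=True))  # 0.2
```

Note that the minimum dominates for small amounts: below about 26.67 euros (8 cents / 0.3%) the retailer always pays the 8-cent floor.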
Modes of payment with electronic cash debit cards:
Many retailers provide the option of paying by card or electronic cash, as both payment systems include a guarantee of payment. The electronic direct debit (EDD) system offers no such guarantee and thus exposes the retailer to a default risk.
In 2005, 13.1% of all payments in Germany were made using electronic cash (payments included the entering of the PIN). In 2009, the percentage of payments using electronic cash went up to 19.4%; payments amounted to 71 billion euros.
The electronic purse card or Geldkarte can also be used for payments. With an annual turnover of 0.1 billion euros its market share amounts to less than 0.04%.
ELV (Elektronisches Lastschriftverfahren, electronic debit advice procedure) online or offline. 12% of 2005 turnover in commerce was processed using this method. The market share in 2009 was 12.2%, or 45 billion euros. The technology was introduced in 1984. When using ELV online (also called OLV) every online payment is checked against a credit rating score and a nationwide blacklist. When ELV takes place offline, there is no telephone line and no checking. It is the most inexpensive method for retailers. All procedures read only the account number, the bank code and the card number from the magnetic stripe or the chip. In contrast to the electronic cash method the customer authorises a direct withdrawal with his signature.
POZ (Point of Sale ohne Zahlungsgarantie, point of sale without payment guarantee). Unlike OLV and ELV, which are procedures used in retail, POZ was a procedure used by the ZKA (Zentraler Kreditausschuss, the German Central Credit Committee) from its introduction in 1994 up to its abolition on December 31, 2006.
**Web content development**
Web content development:
Web content development is the process of researching, writing, gathering, organizing, and editing information for publication on websites. Website content may consist of prose, graphics, pictures, recordings, movies, or other digital assets that could be distributed by a hypertext transfer protocol server, and viewed by a web browser.
Content developers and web developers:
When the World Wide Web began, web developers either developed online content themselves, or modified existing documents and coded them into hypertext markup language (HTML). In time, the field of website development came to encompass many technologies, so it became difficult for website developers to maintain so many different skills. Content developers are specialized website developers who have content generation skills such as graphic design, multimedia development, professional writing, and documentation. They can integrate content into new or existing websites without using information technology skills such as script language programming and database programming. Content developers or technical content developers can also be technical writers who produce technical documentation that helps people understand and use a product or service. This documentation includes online help, manuals, white papers, design specifications, developer guides, deployment guides, release notes, etc.
Search engine optimization:
Content developers may also be search engine optimization specialists or internet marketing professionals. High quality, unique content is what search engines are looking for, so content development specialists have an important role to play in the search engine optimization process. One issue currently plaguing the world of web content development is keyword-stuffed content, prepared solely to manipulate search engine rankings; the effect is content written to appeal to search engine algorithms rather than human readers. Search engine optimization specialists commonly submit content to article directories to build their website's authority on a given topic. Most article directories allow visitors to republish submitted content on the condition that all links are maintained. This has become a method of search engine optimization for many websites today. If written according to SEO copywriting rules, the submitted content benefits the publisher (free SEO-friendly content for a webpage) as well as the author (a hyperlink pointing to his/her website, placed on an SEO-friendly webpage).
New content types:
Web content is no longer restricted to text. Search engines now index audio/visual media, including video, images, PDFs, and other elements of a web page. Website owners sometimes use content protection networks to scan for plagiarized content.
**Systemantics**
Systemantics:
General Systemantics (retitled to Systemantics in its second edition and The Systems Bible in its third) is a systems engineering treatise by John Gall in which he offers practical principles of systems design based on experience and anecdotes.
It is offered from the perspective of how not to design systems, based on system engineering failures. The primary precept of the treatise is that large complex systems are extremely difficult to design correctly despite best intentions, so care must be taken to design smaller, less-complex systems and to do so with incremental functionality based on close and continual touch with user needs and measures of effectiveness.
Title origin:
The term systemantics is a commentary on prior work by Alfred Korzybski called General Semantics, which conjectured that all system failures could be attributed to a single root cause – a failure to communicate. Dr. Gall observes that, instead, system failure is an intrinsic feature of systems. He thereby derives the term 'General Systemantics' in deference to the notion of a sweeping theory of system failure, but attributes failure to an intrinsic feature based on laws of system behavior. He notes as a side-note that the term also playfully captures the concept that systems naturally "act up."
Contents of the book:
Background:
Premise: Systems in general work poorly or not at all. This is more a universal observation than a law. The origin of this observation is traced back via:
Murphy's Law that "if anything can go wrong, it will",
Alfred Korzybski's General Semantics notion of failure's root cause being a communication problem,
Humorist Stephen Potter's One-upmanship on ways to "game" the system for personal benefit,
Historian C. Northcote Parkinson's principle called Parkinson's Law – "Work expands so as to fill the time available for its completion",
Educator Lawrence J. Peter's widely cited Peter Principle – "In a hierarchy every employee tends to rise to his level of incompetence ... in time every post tends to be occupied by an employee who is incompetent to carry out its duties ... Work is accomplished by those employees who have not yet reached their level of incompetence."
Scope: By "systems", the author refers to those that "...involve human beings, particularly those very large systems such as national governments, nations themselves, religions, the railway system, the post office..." though the intention is that the principles are general to any system.
Additionally, the author observes.
Everything is a system.
Everything is part of a larger system.
The universe is infinitely systematized, both upward (larger systems) and downward (smaller systems).
All systems are infinitely complex.
First principles:
New systems mean new problems. Once a system is set up to solve some problem, the system itself engenders new problems relating to its development, operations and maintenance. The author points out that the additional energy required to support the system can consume the energy it was meant to save. This leads to the next principle.
The total amount of anergy in the universe is fixed. The author defined anergy as the effort required to bring about a change. This was meant as a tongue-in-cheek analog of the law of conservation of energy.
Systems tend to expand to fill the known universe. One of the problems that a system creates is that it becomes an entity unto itself that not only persists but expands and encroaches on areas beyond the original system's purview.
Why systems behave poorly:
Complicated systems produce unexpected outcomes [Generalized Uncertainty Principle]. The author cites a number of spectacular unexpected behaviors, including: the Aswan Dam diverting the Nile River's fertilizing sediment to Lake Nasser (where it is useless), requiring the dam to operate at full electrical generating capacity to run the artificial fertilizer plants needed to replace the diverted sediment.
The Vehicle Assembly Building at Kennedy Space Center, designed to protect space vehicles from weather, is so large that it produces its own weather.
Feedback:
Not only do systems expand well beyond their original goals, but as they evolve they tend to oppose even their own original goals. This is seen as a systems-theory analog of Le Chatelier's principle, which suggests chemical and physical processes tend to counteract changed conditions that upset equilibrium until a new equilibrium is established. The same counteracting force can be seen in systems behavior. For example, incentive reward systems set up in business can have the effect of institutionalizing mediocrity.
This leads to the following principle.
Systems tend to oppose their own proper function.
What's in a name:
People performing roles in systems often do not perform the role suggested by the name the system gives that person, nor does the system itself perform the role that its name suggests.
People in systems do not actually do what the system says they are doing [Functionary's Falsity].
The system itself does not actually do what it says it is doing [The Operational Fallacy].
Inside systems:
The real world is what is reported to the system [The Fundamental Law of Administrative Workings (F.L.A.W.)]. In other words, the system has a severely censored and distorted view of reality from biased and filtering sensory organs, which displaces understanding of the actual real world, which pales and tends to disappear. This displacement creates a type of sensory deprivation and a kind of hallucinogenic effect on those inside the system, causing them to lose common sense. In addition to negatively affecting those inside the system, the system attracts to it people who are optimized for the pathological environment the system creates. Thus: systems attract systems-people.
Elementary systems functions:
A complex system cannot be "made" to work. It either works or it doesn't.
A simple system, designed from scratch, sometimes works.
Some complex systems actually work.
A complex system that works is invariably found to have evolved from a simple system that works.
A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.
Advanced systems functions:
The Functional Indeterminacy Theorem (F.I.T.): In complex systems, malfunction and even total non-function may not be detectable for long periods, if ever.
The Newtonian Law of Systems Inertia: A system that performs a certain way will continue to operate in that way regardless of the need or of changed conditions.
Systems develop goals of their own the instant they come into being.
Intrasystem goals come first.
System failure:
The Fundamental Failure-Mode Theorem (F.F.T.): Complex systems usually operate in a failure mode.
A complex system can fail in an infinite number of ways. (If anything can go wrong, it will; see Murphy's law.) The mode of failure of a complex system cannot ordinarily be predicted from its structure.
The crucial variables are discovered by accident.
The larger the system, the greater the probability of unexpected failure.
"Success" or "Function" in any system may be failure in the larger or smaller systems to which the system is connected.
The Fail-Safe Theorem: When a Fail-Safe system fails, it fails by failing to fail safe.
Practical systems design:
The Vector Theory of Systems: Systems run better when designed to run downhill.
Loose systems last longer and work better. (Efficient systems are dangerous to themselves and to others.)
Management and other myths:
Complex systems tend to produce complex responses (not solutions) to problems.
Great advances are not produced by systems designed to produce great advances.
Other laws of systemantics:
As systems grow in size, they tend to lose basic functions.
The larger the system, the less the variety in the product.
Control of a system is exercised by the element with the greatest variety of behavioral responses.
Colossal systems foster colossal errors.
Choose your systems with care.
Sources:
Gall, John. The Systems Bible: The Beginner's Guide to Systems Large and Small (Third Edition of SYSTEMANTICS), General Systemantics Press/Liberty, 2003. ISBN 0-9618251-7-0.
Gall, John. SYSTEMANTICS: The Underground Text of Systems Lore. How Systems Really Work and How They Fail (Second Edition), General Systemantics Press, 1986. ISBN 0-9618251-0-3.
Gall, John. SYSTEMANTICS: How Systems Really Work and How They Fail (First Edition), Pocket, 1978. ISBN 0-671-81910-0.
**Steve Krug**
Steve Krug:
Steve Krug is a UX (User Experience) professional based in Chestnut Hill, Massachusetts. He is best known for his book Don't Make Me Think about human-computer interaction and web usability, which is in its third edition with over 600,000 copies in print. He also heads a one-man consulting firm called Advanced Common Sense. Krug offers in-house workshops where he teaches do-it-yourself usability testing and provides targeted advice to clients on web usability strategies. Krug published Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems in 2009.
**Neuroscience Research Program**
Neuroscience Research Program:
The Neuroscience Research Program (NRP) is an inter-university and international organisation founded in 1962 by Francis Otto Schmitt and others; its creation marked a key moment in the foundation of neuroscience as a discipline.
A primary activity of the NRP was making links between the neural and behavioural sciences. The program's three core areas of interest were molecular biology, the nervous system, and psychology. Funded by federal grants from the government of the United States of America, and additionally sponsored by the Massachusetts Institute of Technology, the program was headquartered at the American Academy of Arts and Sciences, based in Boston House. It held twice-weekly meetings with guest speakers talking on key issues pertaining to neuroscience, and published its findings through the Neuroscience Research Program Bulletin, distributed to libraries and other individuals working in the field. Frank Schmitt had earlier organised a seminar series at M.I.T. during 1960 and 1961 for those interested in developing cross-disciplinary understanding spanning physics, chemistry, and the structural examination of the brain, together with new psychological, psychiatric and behavioural findings. In February 1962, Schmitt invited a select number of highly esteemed scientists to a meeting in New York City, at which they all agreed to form a new organisation, named at Schmitt's behest and to be located at Brookline, Massachusetts. The program held six work-sessions each year (conferences which gave rise to published reports), intensive study programs (ISPs) every three years, and special conferences held for specific projects, where scientists suggested ways in which the most progress in neuroscience might be made; these were referred to generally by the term Whither, and were held both within the United States of America and at other international locations. Katheryn Cusick was executive secretary from 1964.
**Reproduction and vocalization in midshipman fish**
Reproduction and vocalization in midshipman fish:
Reproduction and vocalization in midshipman fish are closely interlinked. Mating in midshipman fish depends on auditory communication, the production and reception of sound signals. Males produce several different vocalizations, while females only make grunts in non-breeding situations.
Calling:
Typical Type I male calls are divided into short grunts that last for milliseconds or are produced in a series of grunts called a "grunt train", mid-duration growls, and long duration advertisement hums that can last up to an hour. These calls can be recorded naturally. They can also be produced in a laboratory, a procedure known as "fictive calling". In nature, two muscles contracting on the swim bladder produce these sounds. In the laboratory, sounds are produced by a stimulating electrode placed on the periaqueductal gray (PAG) and a recording electrode placed on the occipital nerve that leads to the sonic muscles of the fish.
Steroid mediation:
The vocalizations of male midshipman fish are mediated by androgen and estradiol steroids. Blood levels of these hormones are high during the transition from non-calling to calling before the midshipman breeding season, suggesting that higher hormone levels are needed for making advertisement calls. Feeding 11-ketotestosterone-coated scallops to toadfish increases their calling behavior, which identifies 11-ketotestosterone, an androgen, as a mediator of midshipman fish vocalization. There are also high levels of aromatase, an estrogen-generating enzyme, in the hindbrain vocal motor region. Estradiol steroids and their receptors are present in the same areas already implicated in male midshipman calling.

There are three sexes of midshipman fish: females, type I males, and type II males. Type I and type II males have different reproductive strategies and can be distinguished from each other by physical characteristics. Type I males are eight times larger in body mass and have much larger vocal organs. Type II males' reproductive organs are seven times larger than those of type I males. Females and type II males can be distinguished from each other by the female's slightly larger size and the type II male's large reproductive organs.

The three sexes have different steroid-mediated reproductive behaviors. Type I territorial males use vocalizations, produced via paired muscles on the swim bladder, to attract females, while type II males invest in larger reproductive organs. Type II males then "sneak" into nests, aided by their resemblance to females, and fertilize laid eggs; this behavior is referred to as cuckoldry or satellite-spawning. Type II males and females are incapable of long-duration calls. 11-Ketotestosterone is the major steroid in type I males' vocal systems, while type II males' and females' vocalizations are primarily mediated by testosterone.
The specific mechanisms by which these steroids act are still unknown. The sounds produced by male midshipman fish cause reproductive females to develop a hormone-mediated selective sensitivity to this sound, and they respond by laying eggs in the rock nest of a singing male. This selective sensitivity to higher frequencies correlates with increased levels of testosterone and estradiol.
Neuron connectivity:
The neuronal pathway for midshipman vocalization starts at the ventral medullary nucleus and continues to a hindbrain vocal pattern generator, which contains both pre-pacemaker and pacemaker nuclei. Each action potential fired in the vocal pattern generator drives exactly one sonic motor neuron discharge and produces exactly one sound pulse. The two motor nuclei fire in phase in toadfish, leading to the paired contraction of the sonic muscles. The duration of calls is controlled by the pre-pacemaker neurons in the hindbrain and is encoded by a long depolarization of these neurons. Exposing pacemaker neurons to different levels of the anesthetic lidocaine alters the duration of the calls, but not the frequency. Pacemaker neurons code for the frequency of signals using "ultrafast" rhythmic oscillations in membrane potential; as midbrain stimulation increases, the oscillations increase in amplitude.
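The one-to-one mapping described above (one pacemaker cycle, one motor neuron spike, one sound pulse) can be sketched in a few lines. This is an illustrative model only, assuming a hypothetical pacemaker frequency and depolarization duration; the function name and the 100 Hz / 2 s values are not measurements from the literature.

```python
def pulse_train(pacemaker_hz, depolarization_s):
    """Return (pulse_count, pulse_times) for one fictive call.

    One pacemaker cycle -> one sonic motor neuron spike -> one sound
    pulse, so pulse rate equals the pacemaker oscillation frequency and
    call duration equals the pre-pacemaker depolarization duration.
    """
    period = 1.0 / pacemaker_hz
    count = int(pacemaker_hz * depolarization_s)
    times = [i * period for i in range(count)]
    return count, times

# Hypothetical example: a 100 Hz pacemaker driven for a 2-second call
count, times = pulse_train(100, 2.0)
print(count)  # 200 sound pulses
```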
Implications for humans:
Although midshipman fish have been known to awaken houseboat owners, research on their vocalizations could benefit humans. Midshipman fish are model organisms for studying both human speech and hearing. Recently, it was found that midshipman fish can decrease their own hearing sensitivity by stiffening their inner-ear hair cells while they are vibrating their calling muscles. This behavior is also found in bats, and may lead to an understanding of a similar mechanism humans use to turn down their ear sensitivity and so retain their hearing longer. There are conserved patterns of vocal, auditory, and neuroendocrine mechanisms between teleosts and tetrapods, the groups that include midshipman fish and humans, respectively. This model organism's simple system could lead to a deeper understanding of human speech and auditory pathways. The evolutionary connection could be important in modern medicine because these fish have brain structures homologous to those of humans; one example is patients with brain lesions who become mute after a stroke.

On August 9, 1974, composer Charlie Morrow performed a concert for fish using what he understood to be decoded toadfish language, similar to his decoded field peeper language. Morrow observed that choruses of multiple toadfish shift leadership based on call and response by strong individuals. He notated the patterns for human performance, in one version numbering each individual for identification and spatial location. The night before, Richard Nixon had resigned as U.S. president. Major media covered the concert; New York Times music critic John Rockwell wrote a review with the headline "Fish Silent".
Brain-behavior relationship:
Midshipman fish have two forms of males: the nest-building Type I and the "sneaker male" or "satellite male" Type II. Type I males attract females to their nests with their humming, coax them to lay eggs, and guard the eggs. In contrast, Type II males do not build nests or attract females on their own. Instead, they sneak into Type I males' nests and fertilize the eggs laid there. These behavioral differences are reflected in differences in the structure and function of the nervous system.
Brain-behavior relationship:
Morph-specific vocal behavior: neurons and muscles. Type I males on the one hand, and Type II males and females on the other, follow different growth paths in the neurons and muscles that determine morph-specific vocal behavior. For Type I males, sexual maturation is preceded by growth of the mate-calling circuit and sonic muscle. Specifically, before the transformation from juvenile to Type I male, the size of the motoneurons and the volume of the sonic motor nucleus increase twofold, and the number of sonic muscle fibers increases fourfold. At the start of sexual maturation, the motoneurons increase in size again, although not as much as before; the pacemaker neurons also increase in size at this time, but less than the motoneurons. The sonic muscle fibers also increase fivefold in size. In contrast, little dramatic change is seen before the transformation from juvenile to Type II male or adult female: the vocal neurons and muscles change little or not at all.
Brain-behavior relationship:
Hormones. Differences between the reproductive strategies of Type I and Type II males are also reflected in hormonal differences during sexual maturation. The three morphs – Type I males, Type II males, and females – produce different levels of various hormones. Type II males produce the highest levels of testosterone, followed by females and then Type I males. Females alone have estrogen, in the form of 17β-estradiol, but at much lower levels than testosterone. Type I males also have five times more 11-ketotestosterone, a form of testosterone common to teleosts, than Type II males and females. 11-Ketotestosterone is likely to be more potent than testosterone in supporting courtship behaviors such as humming.
**ATSC-M/H**
ATSC-M/H:
ATSC-M/H (Advanced Television Systems Committee - Mobile/Handheld) is a U.S. standard for mobile digital TV that allows TV broadcasts to be received by mobile devices. ATSC-M/H is a mobile TV extension to the preexisting terrestrial TV broadcasting standard ATSC A/53. It corresponds to the European DVB-H and Japanese 1seg extensions of the DVB-T and ISDB-T terrestrial digital TV standards, respectively. ATSC is optimized for fixed reception in the typical North American environment and uses 8VSB modulation. The ATSC transmission method is not robust enough against Doppler shift and multipath radio interference in mobile environments, and is designed for highly directional fixed antennas. To overcome these issues, additional channel coding mechanisms are introduced in ATSC-M/H to protect the signal. As of 2021, ATSC-M/H is considered to have been a commercial failure.
Evolution of mobile TV standard:
Requirements. Several requirements of the new standard were fixed right from the beginning:

- completely backward compatible with ATSC (A/53);
- broadcasters can use their available license without additional restrictions;
- available legacy ATSC receivers can receive the ATSC (A/53) standard without any modification.
Evolution of mobile TV standard:
Proposals. Ten systems from different companies were proposed, and the two remaining systems were presented with transmitter and receiver prototypes:

- MPH (an acronym for mobile/pedestrian/handheld, suggesting miles per hour), developed by LG Electronics and Harris Broadcast. (Zenith, a subsidiary of LG, developed much of the original ATSC system.)
- A-VSB (Advanced-VSB), developed by Samsung and Rohde & Schwarz.

To find the best solution, the Advanced Television Systems Committee assigned the Open Mobile Video Coalition (OMVC) to test both systems. The test report was presented on May 15, 2008. As a result of this detailed work by the OMVC, a final standard draft was designed by the Advanced Television Systems Committee specialist group S-4. ATSC-M/H is a hybrid; essentially the following components of the proposed systems are used:

- the RF layer from the MPH proposal;
- the deterministic frame structure from A-VSB;
- service signaling designed on the basis of established mobile standards.

Standard milestones. On December 1, 2008, the Advanced Television Systems Committee elevated its specification for Mobile Digital Television to Candidate Standard status. In the following six months, the industry tested the standard, and additional improvements were proposed before it became official. ATSC members approved the ballot on October 15, 2009, making A/153 an official standard.
Evolution of mobile TV standard:
At the Consumer Electronics Show in January 2010, ATSC introduced the name and logo "MDTV" for ATSC A/153.
Structure of mobile DTV standard:
The ATSC Mobile DTV standard ATSC-M/H (A/153) is modular in concept, with the specifications for each of the modules contained in separate Parts. The individual Parts of A/153 are as follows: Part 1 “ATSC Mobile DTV System” describes the overall ATSC Mobile DTV system and explains the organization of the standard. It also describes the explicit signaling requirements that are implemented by data structures throughout the other Parts.
Structure of mobile DTV standard:
Part 2 “RF/Transmission System Characteristics” describes how the data is processed and placed into the VSB frame. Major elements include the Reed Solomon (RS) Frame, a Transmission Parameter Channel (TPC), and a Fast Information Channel (FIC).
Structure of mobile DTV standard:
Part 3 “Service Multiplex and Transport Subsystem Characteristics” covers the service multiplex and transport subsystem, which comprises several layers in the stack. Major elements include Internet Protocol (IPv4), User Datagram Protocol (UDP), Signaling Channel Service, FLUTE over Asynchronous Layered Coding (ALC) / Layered Coding Transport (LCT), Network Time Protocol (NTP) time service, and Real-time Transport Protocol (RTP) / RTP Control Protocol (RTCP).
Structure of mobile DTV standard:
Part 4 “Announcement” covers Announcement, whereby services can optionally be announced using a Service Guide. The guide specified in Part 4 is based on the Open Mobile Alliance (OMA) broadcast (BCAST) Electronic Service Guide, with constraints and extensions.
Part 5 “Application Framework” defines the Application framework, which enables the broadcaster of the audio-visual service to author and insert supplemental content to define and control various additional elements of the Rich Media Environment (RME).
Part 6 “Service Protection” covers Service Protection, which refers to the protection of content, either files or streams, during delivery to a receiver. Major elements include the Right Issue Object and Short-Term Key Message (STKM).
Part 7 “AVC and SVC Video System Characteristics” defines the Advanced Video Coding (AVC) and Scalable Video Coding (SVC) video system in the ATSC Mobile DTV system. Additional elements covered in this Part include closed captioning (CEA-708) and Active Format Description (AFD).
Part 8 “HE AAC Audio System Characteristics” defines the High-Efficiency Advanced Audio Coding (HE-AAC v2) audio system in the ATSC Mobile DTV system.
Part 9 “Scalable Full Channel Mobile Mode”
Principle:
ATSC-M/H is a service for mobile TV receivers and uses part of the 19.39 Mbit/s ATSC 8VSB stream. The mobile data is carried under an unreferenced packet ID (PID), so legacy receivers ignore it.
Technology:
ATSC-M/H consumes fixed chunks of 917 kbit/s out of the total ATSC bandwidth. Each such chunk is called an M/H Group. A data pipe called a Parade is a collection of one to eight M/H Groups. A Parade conveys one or two Ensembles, which are logical pipes of IP datagrams. Those datagrams in turn carry TV services, system signaling tables, OMA DRM key streams and the Electronic Service Guide.
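The pipe hierarchy above lends itself to a quick back-of-the-envelope calculation. The sketch below assumes only the figures quoted in the text (917 kbit/s per Group, 1 to 8 Groups per Parade, 19.39 Mbit/s total 8VSB payload); the function names are illustrative, not from A/153.

```python
ATSC_TOTAL_KBPS = 19_390   # full 8VSB payload, 19.39 Mbit/s
GROUP_KBPS = 917           # fixed bandwidth chunk per M/H Group

def parade_kbps(groups):
    """Data rate of a Parade carrying the given number of M/H Groups."""
    if not 1 <= groups <= 8:
        raise ValueError("a Parade carries 1 to 8 M/H Groups")
    return groups * GROUP_KBPS

def share_of_channel(groups):
    """Fraction of the 19.39 Mbit/s channel consumed by the Parade."""
    return parade_kbps(groups) / ATSC_TOTAL_KBPS

print(parade_kbps(8))  # 7336 kbit/s for a maximum-size Parade
```

A full eight-Group Parade thus takes under 40% of the channel, leaving the rest for the legacy A/53 main service.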
Technology:
ATSC-M/H has an improved design based on detailed analyses of experiences with other mobile DTV standards.
Protocol stack. The ATSC-M/H protocol stack is mainly an umbrella protocol that uses OMA ESG, OMA DRM and MPEG-4, in addition to many IETF RFCs.
Technology:
Transport stream data structure. The ATSC-M/H standard defines a fixed transport stream structure, based on M/H Frames, which establishes the location of M/H content within the VSB Frames and allows for easier processing by an M/H receiver. This contrasts with the legacy ATSC transport stream, defined in A/53, in which there is no fixed structure establishing the phase of the data relative to VSB Frames.
Technology:
One M/H Frame is equivalent in size to 20 VSB Frames and has an offset of 37 transport stream (TS) packets relative to the beginning of the VSB Frame. Each M/H Frame, which has a fixed duration of 968 ms, is divided into five M/H sub-frames, and each sub-frame is further subdivided into sixteen M/H Slots. Each Slot is the equivalent amount of time needed to transmit 156 TS packets. A Slot may either carry all main ATSC data (A/53), or 118 packets of M/H data and 38 packets of main data. The collection of 118 M/H packets transmitted within a Slot is called an M/H Group. Each of the 118 M/H packets within an M/H Group is encapsulated inside a special TS packet, known as an MHE packet.
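The frame arithmetic above can be checked mechanically. All constants below are the figures quoted in the text; the variable names are illustrative.

```python
MH_FRAME_MS = 968            # duration of one M/H Frame
SUBFRAMES = 5                # sub-frames per M/H Frame
SLOTS_PER_SUBFRAME = 16
TS_PACKETS_PER_SLOT = 156
MH_PACKETS_PER_GROUP = 118   # M/H payload packets in a Group-carrying Slot
MAIN_PACKETS_PER_MH_SLOT = 38

slots_per_frame = SUBFRAMES * SLOTS_PER_SUBFRAME   # 80 Slots per Frame
slot_ms = MH_FRAME_MS / slots_per_frame            # 12.1 ms per Slot

# An M/H-carrying Slot still holds exactly 156 TS packets in total
assert MH_PACKETS_PER_GROUP + MAIN_PACKETS_PER_MH_SLOT == TS_PACKETS_PER_SLOT

print(slots_per_frame, slot_ms)  # 80 12.1
```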
Technology:
An M/H Parade is a collection of M/H Groups and can carry one or two M/H Ensembles. These Ensembles are logical pipes for IP datagrams, which in turn carry TV services and the signaling of mobile content. The M/H Groups of a single Parade are placed within M/H Slots according to an algorithm defined in A/153 Part 2. The number of Groups per M/H sub-frame (NoG) for an M/H Parade ranges from 1 to 8, and therefore the number of Groups per M/H Frame for a Parade ranges from 5 to 40 in steps of 5. The data of a Parade are channel coded and distributed by an interleaver over an M/H Frame.
Technology:
Mobile data are protected by additional forward error correction (FEC), namely interleaving and convolutional codes. To improve reception, training sequences are introduced into the ATSC-M/H signal to allow channel estimation on the receiver side.
Time slicing is a technique used by ATSC-M/H to provide power savings on receivers. It is based on the time-multiplexed transmission of different services.
Error protection. ATSC-M/H combines multiple error protection mechanisms for added robustness. One is an outer Reed–Solomon error correction code, which corrects defective bytes after the convolutional code has been decoded in the receiver. The correction is improved by an additional CRC checksum, since bytes can then be marked as defective before they are decoded (erasure decoding).
The number of RS parity symbols can be 24, 36 or 48. The parity symbols and the additional checksum form the outer elements of a data matrix that is filled with the payload of the M/H Ensemble. The number of rows is fixed, while the number of columns varies according to how many Slots per sub-frame are occupied.
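The relative overhead of the outer RS code for the three allowed parity sizes can be estimated as follows. The 187-byte payload column length is an assumption (it matches the 187 payload bytes of a 188-byte TS packet); only the 24/36/48 parity figures come from the text.

```python
PAYLOAD_ROWS = 187  # assumed data bytes per RS column (not from the text)

def rs_code_rate(parity):
    """Fraction of each RS column that carries payload rather than parity."""
    if parity not in (24, 36, 48):
        raise ValueError("A/153 allows 24, 36 or 48 RS parity symbols")
    return PAYLOAD_ROWS / (PAYLOAD_ROWS + parity)

for p in (24, 36, 48):
    # More parity -> lower code rate but stronger error correction
    print(p, round(rs_code_rate(p), 3))
```

As expected, the strongest setting (48 parity symbols) trades roughly a fifth of the capacity for robustness.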
Technology:
The RS Frame is then partitioned into several segments of different sizes and assigned to specified regions. The M/H data in these regions are protected by an SCCC (Serial Concatenated Convolutional Code) with a code rate of 1/2 or 1/4, specific to each region in a Group. A 1/4-rate PCCC (Parallel Concatenated Convolutional Code) is also employed as an inner code for the M/H signaling channel, which includes the FIC (Fast Information Channel) and TPC (Transmission Parameter Channel). The TPC carries the various FEC modes and M/H Frame information. Once the TPC is extracted, the receiver knows the code rates being employed and can decode each region at its specified rate.
Technology:
A modified trellis encoder is also employed for backwards compatibility with legacy A/53 receivers.
The time interleaving of ATSC-M/H is 1 second.
Signaling. ATSC-M/H signaling and announcement defines three different layers of signaling. The layers are organized hierarchically and optimized to the characteristics of the transmission layer.
Technology:
The Transmission Signaling System is the lowest layer and uses the Transmission Parameter Channel (TPC); it provides the information the receiver needs to decode the signal. The Transport Signaling System is the second layer and uses the Fast Information Channel (FIC) in combination with the Service Signaling Channel (SSC). The main purpose of the FIC is to deliver essential information that allows rapid service acquisition by the receiver. The Service Signaling Channel (SSC) consists of several different signaling tables; the information carried within these tables is comparable to the PSIP information of ATSC. The SSC mainly provides the basic information, the logical structure of the transmitted services, and the decoding parameters for video and audio.
Technology:
Announcement / Electronic Service Guide (ESG) is the highest layer of signaling. It uses the Open Mobile Alliance (OMA) Broadcast Service Enabler Suite (OMA BCAST) Electronic Service Guide (ESG). An ESG is delivered as a file data session; File Delivery over Unidirectional Transport (FLUTE) is used as the delivery protocol. The ESG consists of several XML sections. With this structure, a program guide and interactive services can be realized.
Technology:
Signaling of video and audio coding. Each video or audio decoder needs information about the coding parameters used, for instance resolution, frame rate and IDR (random access point) repetition rate. In MPEG-4/AVC mobile TV systems, the receiver uses information from the Session Description Protocol (SDP) file, a format that describes streaming media initialization parameters. In ATSC-M/H, the SDP file is transmitted within the SMT table. Most of the information is coded in binary, but some is coded in the original ASCII text format. The SMT table combines information that is typically spread across different tables, reducing complexity for the network and the receivers. In the case of signaling via the ESG, the complete SDP file is transmitted.
Technology:
Single-frequency network (SFN). In an SFN, two or more transmitters with overlapping coverage send the same program content simultaneously on the same frequency. The 8VSB modulation used by ATSC allows SFN transmission. To allow regular channel estimation, ATSC-M/H provides additional training sequences. ATSC A/110 defines a method to synchronize the ATSC modulator as part of the transmitter; the A/110 standard sets up the trellis coder in a pre-calculated way on all transmitters of the SFN. In such an SFN, the ATSC-M/H multiplexer and the ATSC-M/H transmitters are synchronized by a GPS reference. The ATSC-M/H multiplexer operates as a network adapter and inserts time stamps into the MPEG transport stream. Each transmitter analyzes the time stamps and delays the transport stream accordingly before it is modulated and transmitted. As a result, all SFN transmitters generate a synchronized signal.
Other mobile standards:
Until its shutdown, MediaFLO had been available in parts of the United States as a premium service that required a subscription. ATSC-M/H, by contrast, is free to air, as are regular broadcast signals. Both standards were designed without sufficient consideration of the continued growth of the internet and mobile platforms, which today provide excellent multimedia capabilities using web-centric codecs and protocols rather than repurposing existing standards suited to legacy broadcasting.
**PharmedOut**
PharmedOut:
PharmedOut (PhO) is a Georgetown University Medical Center project founded in 2006 and directed by Adriane Fugh-Berman. The stated mission of the organization is to advance evidence-based prescribing and to educate healthcare professionals about pharmaceutical marketing practices. The project's three stated goals are to: 1. document and disseminate information about how pharmaceutical companies influence prescribing; 2. foster access to unbiased information about drugs; 3. encourage physicians to choose pharma-free CME (continuing medical education). The organization provides healthcare professionals with pharma-free continuing medical education (CME) and access to unbiased drug information. PharmedOut was founded with funds from the Attorney General Consumer and Prescriber Education grant program. Since 2008, PharmedOut has been financially supported by individual donations and largely staffed by a volunteer team of physicians, pharmacists, nurses, scientists, lawyers, students, artists and writers.

PharmedOut criticizes some medical research and practices, including overprescription of opioids, industry construction of and influence on perceptions of diseases and symptoms, and misleading information about the benefits and harms of testosterone, menopausal hormone therapy, flibanserin, and EpiPens. Articles in peer-reviewed publications include an article showing that Medicare prescribers who accept industry gifts prescribe more medications (and more expensive medications), articles on how industry uses social psychology to manipulate physicians, on pharmacist–industry relationships, and on medical device salespeople and surgeons, an analysis of pharmaceutical marketing to people with hemophilia, an analysis of how "key opinion leaders" are used to market drugs off-label, an explanation of drug rep tactics, an article on basic scientists and industry, and a study documenting the effect of Why Lunch Matters, a presentation that is the first to document a significant change in
physicians' perceptions about their own individual vulnerability to pharmaceutical marketing.
PharmedOut:
PharmedOut has also criticized industry support of continuing medical education and industry support of patient advocacy groups, and has compiled a list of pharma-free patient advocacy groups. In its first 10 years, PharmedOut published the first studies on "Relationships between surgeons and medical device representatives", "Pharmacists' beliefs regarding pharmaceutical companies", "How drug company representatives influence physicians", "Promotional tone in industry-influenced articles", "How companies market drugs off-label", "How ghostwriting sold menopausal hormone therapy", "Reverse-engineering marketing messages in industry-funded CME", "The way pharma targets individuals with hemophilia and other expensive diseases", "The first national survey of family medicine resident interactions with pharmaceutical companies", and "The effects of our first educational module about industry tactics on physicians' perceptions of their own vulnerability to marketing".
**Betacam**
Betacam:
Betacam is a family of half-inch professional videocassette products developed by Sony in 1982. In colloquial use, "Betacam" on its own can refer to a Betacam camcorder, a Betacam tape, a Betacam video recorder, or the format itself.
Betacam:
All Betacam variants from (plain) analog recording Betacam to Betacam SP and digital recording Digital Betacam (and additionally, HDCAM and HDCAM SR), use the same shape videocassettes, meaning vaults and other storage facilities do not have to be changed when upgrading to a new format. The cassettes are available in two sizes: S (short or small) and L (long or large). The Betacam camcorder can only load S magnetic tapes, while television studio sized video tape recorders (VTR) designed for video editing can play both S and L tapes.
Betacam:
The cassette shell and case for each Betacam cassette is colored differently depending on the format, allowing for easy visual identification. There is also a mechanical key that allows a video tape recorder to identify which format has been inserted.
The format supplanted the three-quarter-inch U-Matic format, which Sony had introduced in 1971. In addition to improvements in video quality, the Betacam configuration of an integrated professional video camera/recorder led to its rapid adoption by electronic news gathering (ENG) organizations.
DigiBeta, the common name for Digital Betacam, went on to become the single most successful professional broadcast digital videotape recording format in history. Although Betacam remains popular in the field and for archiving, new tapeless digital products have led to a phasing out of Betacam products in television studio environments since 2006.
Variants:
Betacam and Betacam SP

Original Betacam format. The original Betacam format was launched on August 7, 1982. It is an analog component video format, storing the luminance (Y) on one track and the chrominance on another, as alternating segments of the R–Y and B–Y components using Compressed Time Division Multiplexing (CTDM). This splitting of channels allows true broadcast-quality recording, with 300 lines of horizontal luminance resolution and 120 lines of chrominance resolution, versus the 0.4 MHz chroma bandwidth (roughly 30 lines of horizontal resolution) of domestic Betamax and the professional U-matic formats, on a relatively inexpensive cassette-based format.
Variants:
The original Betacam cassettes, loaded with ferric-oxide tape, were identical in overall design and size (15.1 × 9.5 × 2.5 cm) to consumer-grade Betamax, introduced by Sony in 1975. Betacam cassettes could be used in a Betamax VCR; likewise, a blank Betamax tape would work on a Betacam deck. However, in later years Sony discouraged this practice, suggesting that the internal tape transport of Betamax cassette was not well suited to the faster tape transport of Betacam. In particular, the guide rollers tend to be noisy.
Variants:
Although there is a superficial similarity between Betamax and Betacam in that they use the same tape cassette, they are really quite different formats. Betamax records relatively low-resolution video using a heterodyne color recording system and only two recording heads, while Betacam uses four heads to record in component format at a much higher linear tape speed of 10.15 cm/s (about 4.0 in/s) compared with Betamax's 1.87 cm/s (about 0.74 in/s), resulting in much higher video and audio quality. A typical L-750 Betamax cassette that yielded about 3 hours of recording time on a Betamax VCR at its B-II speed (NTSC), or in PAL, provided only 30 minutes of recording time on a Betacam VCR or camcorder. Another common point between Betamax and Betacam is the placement of the stereo linear audio tracks; some Betacam and Betamax portables also share the same batteries.
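The recording-time figures above follow directly from the two tape speeds. A minimal sanity check, assuming the quoted speeds and the nominal 3-hour Betamax B-II time (the "about 3 hours" is itself approximate, hence the ~33-minute result rather than exactly 30):

```python
BETACAM_CM_PER_S = 10.15   # Betacam linear tape speed
BETAMAX_CM_PER_S = 1.87    # Betamax B-II linear tape speed

def betacam_minutes(betamax_hours):
    """Runtime of the same physical tape in a Betacam transport."""
    tape_cm = betamax_hours * 3600 * BETAMAX_CM_PER_S
    return tape_cm / BETACAM_CM_PER_S / 60

print(round(BETACAM_CM_PER_S / BETAMAX_CM_PER_S, 2))  # 5.43x faster
print(round(betacam_minutes(3)))                      # ~33 minutes
```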
Variants:
(Matsushita's rival "M" and "MII" formats took a similar approach in combining the cassette from a non-professional system- in this case, VHS- with a much higher-quality recording format. However, neither enjoyed Betacam's level of success).Betacam was initially introduced as a camera line along with a video cassette player. The first cameras were the BVP-3, which utilized three saticon tubes, and the BVP1, which used a single tri-stripe Trinicon tube. Both these cameras could be operated standalone, or with their docking companion VTR, the BVV-1 (quickly superseded by the BVV-1A), to form the BVW-1 (BVW-1A) integrated camcorder. Those decks were record-only. The only transport controls on the deck were Eject and Rewind. The docked camera's VTR button started and paused the tape recorder. Later the Betacam SP docking decks had full transport controls (except a Record button) but tapes could not be played back except in the camera's viewfinder in black-and-white only. Sony then came out with the Play Adapter, a separate portable unit that connected via a multi-pin cable and had a composite video out jack for color playback. At first color playback required the studio source deck, the BVW-10, which could not record, only play back. It was primarily designed as a feeder deck for A/B roll edit systems, usually for editing to a one-inch Type C or three-quarter-inch U-matic cassette edit master tape. There was also the BVW-20 field playback deck, which was a portable unit with DC power and a handle, that was used to verify color playback of tapes in the field. Unlike the BVW-10, it did not have a built in Time Base Corrector, or TBC.
Variants:
With the popular success of the Betacam system as a news acquisition format, the line was soon extended to include the BVW-15 studio player, and the BVW-40 Studio Edit Recorder. The BVW-15 added Dynamic Tracking, which enabled clear still frame and jog playback, something the BVW-10 could not deliver. The BVW-40 enabled for the first time editing to a Betacam master, and if set up and wired correctly, true component video editing. It was also possible to do machine to machine editing between a BVW-10/15 and BVW-40 without an edit controller—a single serial cable between the units was all that was required to control the player from the recorder in performing simple assemble and insert editing. Additionally there were two field models introduced, the field recorder BVW-25, and the BVW-21 play only portable field deck.
Variants:
At its introduction, many insisted that Betacam remained inferior to the bulkier one-inch Type C and Type B formats, the standard broadcast production formats of the late 1970s to mid-1980s. Additionally, the maximum recording time for both the cameras and the studio recorders was only half an hour, a severe limitation in television production. There was also the limitation that high-quality recording was only possible if the original component signals were available, as they were in a Betacam camcorder. If the recording started as composite video, converting it to components for recording and then eventually back to composite for broadcast caused a drop in quality compared to recording component video directly.
Variants:
Betacam SP. In 1986, Betacam SP (commonly referred to as Beta SP) was developed, increasing horizontal luminance resolution to 340 lines. While the quality improvement of the format itself was minor, the improvement to the VTRs was enormous, particularly in quality and features. In addition to the existing cassette, a larger cassette (25.3 × 14.4 × 2.5 cm) was introduced with 90 minutes of recording time. Betacam SP (for "Superior Performance") became the industry standard for most TV stations and high-end production houses until the late 1990s. Despite the format's age and its discontinuation in 2001, Betacam SP remained a common standard for standard-definition video post-production into the 2010s. The recording times are the same as for Betacam: 30 and 90 minutes for S and L cassettes, respectively. Tape speed is slightly slower in machines working in the 625/50 format, increasing tape duration by one minute for every five minutes of run time, so a 90-minute tape will record 108 minutes of video in PAL.
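The PAL run-time extension quoted above is a simple 6/5 factor (one extra minute per five minutes of nominal run time), which can be verified directly:

```python
def pal_duration_min(nominal_min):
    """Runtime of a Betacam SP tape in a 625/50 (PAL) machine.

    The slower PAL tape speed stretches each 5 minutes of nominal
    runtime into 6 minutes.
    """
    return nominal_min * 6 / 5

print(pal_duration_min(90))  # 108.0, matching the figure in the text
```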
Betacam SP achieves its namesake "Superior Performance" over Betacam by using metal-formulated tape as opposed to Betacam's ferric oxide tape. Sony designed Betacam SP to be partially forward compatible with standard Betacam: tapes recorded on Betacam SP decks can be played, but only played, in oxide-era Betacam VTRs (such as the BVW-15 and BVW-40 mentioned earlier). Unlike oxide Betacam tapes, Betacam SP-branded tapes cannot be used for recording in consumer Betamax VCRs, because Betacam SP's metal-formulated tape causes premature wear of the video heads in a Betamax deck, which are made of a softer material than the heads in a standard Betacam deck. However, Betacam SP tapes can be used without a problem in ED Beta VCRs, since the ED Beta format uses metal-formulated tape as well.
The new Betacam SP studio decks were the players, the BVW-60 and BVW-65 (the BVW-65 featuring Dynamic Tracking), and the edit recorders, the BVW-70 and the Dynamic Tracking model, the BVW-75. The BVV-5 was the Betacam SP dockable camera back, which could play back in color if its companion playback adapter was used. A new SP field recorder, the BVW-35, possessed the added benefit of a standard RS-422 serial control port that enabled it to be used as an edit feeder deck. Though the four new studio decks could utilize the full 90-minute Betacam SP cassettes, the BVW-35 remained limited to the original small 30-minute Betacam cassette shells. Answering a need for a basic office player, Sony also introduced the BVW-22, a much less expensive desktop model that could be used for viewing and logging 90-minute cassettes of both Betacam SP and oxide types, but could not be configured into an edit system and offered only composite video output.
Sony followed up the SP field recorder with the BVW-50, which could record and play the large-size 90-minute cassettes. After this, the deck line remained largely unchanged, and incredibly popular, for a decade, aside from some specialty models that could record digital audio. Some Betacam SP VCRs were sold by Broadcast Television Systems Inc. (BTS).
Until the introduction of the BVW-200, though, the camera and recorder configuration was a docking system. The BVW-200 was an integrated camera-recorder that sacrificed the flexibility of a docking camera in order to shed a substantial amount of weight. Non-docking camcorders became the most popular design by the mid-1990s.
The final Betacam SP camcorder was the BVW-600, which paired a digital professional video camera front section, very similar to the one on the DigiBeta DVW-700, with an integrated Betacam SP recorder. Like every other Betacam camera system, and unlike the DigiBeta DVW-700, the camera could not play back in color without the use of an outboard adapter.
In 1991, the less-expensive "Professional" PVW line of Betacam SP decks was introduced. The PVW line consisted of only four models: the full-sized PVW-2600 (VTP), PVW-2650 (VTP with Dynamic Tracking allowing up to 3x forward playback, whereas the BVW line only offered 2x DT playback) and PVW-2800 (VTR) editing decks, and the PVV-3 camera-dockable VTR. These high-quality machines were similar to the original BVW series machines, but lacked the third and fourth audio channels. In 1993, the far less expensive UVW series debuted. These machines were considerably simpler and of somewhat lower quality, designed primarily to be used as companions to computer systems, for industrial video, and other low-cost uses. The UVW decks possessed very limited front panel controls, no jog and shuttle (except by use of a DSRM-10 cable remote control), and Time Base Corrector (TBC) control available only with an optional remote TBC controller. They were represented by the UVW-1800, a very popular editing VTR (and its companion UVW-1600 edit VTP), and the non-editing UVW-1400 VTR and UVW-1200 VTP. The UVW-100 (and later 100B) one-piece camcorder rounded out the UVW series.
Third-party support:
Betacam and Betacam SP tape cassette shells varied in color depending on the manufacturer. Many companies sold Betacam tapes, sometimes of their own manufacture, sometimes re-branded. Fuji, Maxell, Ampex/Quantegy, BASF/EMTEC and 3M were just some of the major brands to do so.
Ampex, Thomson SA, Bosch and Philips each sold OEM versions of some of the Sony VTRs and camcorders at various times in the 1980s and 1990s. Other than nameplates, these models were identical to the Sony models. Internal components still bore the Sony name.
Digital Betacam:
Digital Betacam (commonly referred to as DigiBeta, D-Beta, DBC or simply Digi) was launched at the 18th International Television Symposium in Montreux on June 10, 1993. It supersedes both Betacam and Betacam SP, while costing significantly less than the earlier, fully uncompressed D1 format. S tapes are available with up to 40 minutes running time, and L tapes with up to 124 minutes.
The Digital Betacam format records a 2.34:1 DCT-compressed digital component video signal at 10-bit YUV 4:2:2 sampling in NTSC (720×486) or PAL (720×576) resolutions at a bitrate of 90 Mbit/s, plus four channels of uncompressed 48 kHz / 20-bit PCM-encoded digital audio. A fifth analog audio track is available for cueing, and a linear timecode track is also used on the tape. It was a popular digital videocassette format for broadcast television use. It uses a head drum that rotates at 5400 RPM for NTSC video. The video heads in the drum read helical tracks 24 microns wide; audio is also recorded on the helical tracks. The compression algorithm used by Digital Betacam is proprietary. Another key element which aided adoption was Sony's implementation of the SDI coaxial digital connection on Digital Betacam decks. Facilities could begin using digital signals on their existing coaxial wiring without having to commit to an expensive re-installation.
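The quoted compression figure can be sanity-checked against the active-picture data rate implied by the sampling parameters; a back-of-the-envelope sketch that considers active video only and ignores audio, blanking and overhead (assumed simplifications):

```python
def active_video_rate_mbps(width, height, bits, fps):
    # 4:2:2 sampling carries 2 samples per pixel (1 luma + 1 chroma)
    samples_per_pixel = 2
    return width * height * samples_per_pixel * bits * fps / 1e6

ntsc_rate = active_video_rate_mbps(720, 486, 10, 29.97)  # ~210 Mbit/s
ratio = ntsc_rate / 90                                   # ~2.33:1
```

The same arithmetic with PAL parameters (720×576 at 25 fps) lands in the same neighborhood.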
Betacam SX:
Betacam SX is a digital version of Betacam SP introduced in 1996, positioned as a cheaper alternative to Digital Betacam. It stores video using MPEG-2 4:2:2 Profile@ML compression, along with four channels of 48 kHz 16-bit PCM audio. All Betacam SX equipment is compatible with Betacam SP tapes. S tapes have a recording time up to 62 minutes, and L tapes up to 194 minutes.
The Betacam SX system was very successful with newsgathering operations, which had a legacy of Betacam and Betacam SP tapes. Some Betacam SX decks, such as the DNW-A75 or DNW-A50, can natively play and work from the analog tapes interchangeably, because they contain both analog and digital playback heads.
Betacam SX uses MPEG-2 4:2:2P@ML compression, compliant with CCIR 601, in contrast to similar systems that use 4:1:1 or 4:2:0 chroma subsampling. This gives better chroma resolution and allows certain post-production processes such as chroma keying.
This format compresses the video signal from approximately 180 Mbit/s to only 18 Mbit/s, a compression ratio of around 10:1, achieved by the use of mild temporal compression in which alternate frames are stored as MPEG I-frames and B-frames, giving rise to an IBIB sequence on tape. Due to the low bitrate, this format was not standardized by any standards body. Together with Betacam SX, Sony introduced a generation of hybrid recorders, allowing use of both tape and disk recording on the same deck and high-speed dubbing from one to the other. This was intended to save wear on the video heads in television studio applications, as well as to speed up online editing.
Betacam SX also features a "good shot mark" function (a way for qualitative decisions made in the camcorder to be utilized during the editing process) that allows each scene to be marked for fast scanning of the tape, reading the recorded marks on each cassette and showing the markers to the operator.
The cameras themselves are generally considered by most sound recordists to be quite noisy in operation, possibly because the amount of computer processing power, and the heat it generates, requires cooling fans to keep the camera at a reasonable temperature.
Betacam SX tape shells are bright yellow, but SX recordings may also be found recorded on analog Betacam SP cassettes. If such a Betacam SP tape with an SX recording is inserted into a Betacam SP player, no picture or sound will appear.
The helical scan head drum is 81 mm in diameter. The video tracks read by the video heads in the drum are 32 microns wide, the video heads have a 15.25-degree azimuth, and the drum rotates at 5400 RPM for NTSC video. Although Betacam SX machines went out of production in 2008, the format is still used by many newsgathering operations, including Canada's CTV, Atlanta's WSB-TV, San Diego's KFMB-TV and NBC's operations in the San Francisco Bay Area at KNTV and KSTS. Many news archives still contain SX tapes. In August 2011, Betacam SX tapes were found in Muammar Gaddafi's underground studio in Tripoli; CNN reporter Sara Sidner commented on-air that CNN still used the same type of tapes.
MPEG IMX:
MPEG IMX is a 2000 development of the Digital Betacam format. Digital video compression uses H.262/MPEG-2 Part 2 encoding at a higher bitrate than Betacam SX: 30 Mbit/s (6:1 compression), 40 Mbit/s (4:1 compression) or 50 Mbit/s (3.3:1 compression). Unlike most other MPEG-2 implementations, IMX uses intraframe compression. Additionally, IMX ensures that each frame has the same exact size in bytes to simplify recording onto videotape. Video recorded in the IMX format is compliant with the CCIR 601 specification, with eight channels of audio and a timecode track. Unlike Digital Betacam, it lacks an analog audio (cue) track, but will read one as channel 7 during playback. This format has been standardized in SMPTE 365M and SMPTE 356M as "MPEG D10 Streaming". With its IMX VTRs, Sony introduced some new technologies including SDTI and e-VTR. SDTI allows audio, video, timecode, and remote control functions to be transported over a single coaxial cable, while e-VTR technology extends this by allowing the same data to be transported over IP by way of an Ethernet interface on the VTR itself.
All IMX VTRs can natively play back Betacam SX tapes, and some, such as the MSW-M2000P/1, are capable of playing back Digital Betacam cassettes as well as analog Betacam and Betacam SP cassettes, but they can only record to their native IMX cassettes. S tapes are available with up to 60 minutes capacity, and L tapes hold up to 184 minutes. These values are for 525/60 decks; runtimes extend in 625/50, so a 184-minute tape will record for 220 minutes, as the label itself specifies.
IMX machines feature the same good shot mark function as Betacam SX.
MPEG IMX cassettes are a muted green.
This format uses a helical scan head drum 80 mm in diameter. The video tracks read by the video heads in the drum are 22 microns wide, and the video heads have a 15.25-degree azimuth. 4:2:2 chroma subsampling is used, and the drum rotates at 5400 RPM for NTSC video. Due to the use of an MPEG format, video is recorded with 8-bit samples (8-bit color). The XDCAM format, unveiled in 2003, allows recording of MPEG IMX video in an MXF container onto Professional Disc.
HDCAM/HDCAM SR:
HDCAM, introduced in 1997, was the first HD format available in the Betacam form factor, using 8-bit DCT-compressed 3:1:1 recording at a 1080i-compatible downsampled resolution of 1440×1080, with 24p and 23.976 PsF modes added to later models. The HDCAM codec uses non-square pixels, and as such the recorded 1440×1080 content is upsampled to 1920×1080 on playback. The recorded video bitrate is 144 Mbit/s. There are four channels of AES/EBU 20-bit/48 kHz digital audio.
It was used for some of Sony's cinema-targeted CineAlta range of products (other CineAlta devices use flash storage).
HDCAM SR, introduced in 2003, uses a higher particle density tape and is capable of recording in 10 bits 4:2:2 or 4:4:4 RGB with a bitrate of 440 Mbit/s. The "SR" stands for "Superior Resolution". The increased bitrate (over HDCAM) allows HDCAM SR to capture much more of the full bandwidth of the HD-SDI signal (1920×1080). Some HDCAM SR VTRs can also use a 2× mode with an even higher bitrate of 880 Mbit/s, allowing for a 4:4:4 RGB stream at a lower compression. HDCAM SR uses the new MPEG-4 Part 2 Studio Profile for compression, and expands the number of audio channels up to 12 at 48 kHz/24 bit.
HDCAM SR was used commonly for HDTV television production.
Some HDCAM VTRs play back older Betacam variants; for example, the Sony SRW-5500 HDCAM SR recorder plays back and records HDCAM and HDCAM SR tapes and, with optional hardware, also plays and upconverts Digital Betacam tapes to HD format. Tape lengths are the same as for Digital Betacam, up to 40 minutes for S and 124 minutes for L tapes. In 24p mode the runtime increases to 50 and 155 minutes, respectively.
Sony branded HDCAM cassettes are black with an orange lid, and HDCAM SR cassettes black with a cyan lid.
The 440 Mbit/s mode is known as SQ, and the 880 Mbit/s mode is known as HQ; HQ mode, previously available only in portable models, has more recently become available in studio models (e.g. the SRW-5800) as well.
**Elk farming**
Elk farming is an agricultural industry for the production of elk as livestock or for the sport of hunting. Elk have a variety of uses. The velvet antler, or antler in the premature stages of growth, is believed by some to have medicinal purposes. Elk are also raised for venison, their meat. All of these markets are rising in popularity, causing an increase in the breeding industry. Other species of similar type, such as deer, moose, and red stag, are farmed in the same way.
Livestock:
The elk farming market is relatively new to the United States, and because the industry is in its early stages, breeding stock has become highly valuable. The same standard pertains to the production of elk as to cattle: the bigger the better. In 1990 the North American Elk Breeders Association (NAEBA) was founded.[1] NAEBA has set up rules and regulations for breed purity and strength, ownership, and marketing, and helps the industry to continually increase the production and quality of the animals.
Elk do not need the close care that it takes to raise cattle, because of the strong, hardy nature of the animal. They will eat just about anything they can find, including grass, shrubs, weeds, and even tree bark. The most common feeds are alfalfa and grain.
In an area suitable to hold one beef cow, two to three elk may be kept comfortably. Elk may eat 2 to 3 percent of their body weight daily. On average a cow elk, the female, has a live weight of 450 to 650 pounds; bulls are much larger, weighing from 800 to 1,000 pounds. Elk need increased nutrients at certain times so that they can produce better products: before and during breeding, while the antler is growing (so that it will produce a larger amount of velvet), and after calving.
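The stocking and feed figures above imply a simple daily intake range; a purely illustrative sketch:

```python
def daily_feed_lbs(body_weight_lbs, intake_fraction):
    """Daily feed intake for an elk eating a given fraction
    (2-3 percent) of its body weight."""
    return body_weight_lbs * intake_fraction

# A 450-650 lb cow elk eating 2-3% of her body weight daily:
low  = daily_feed_lbs(450, 0.02)   # 9.0 lb/day
high = daily_feed_lbs(650, 0.03)   # 19.5 lb/day
```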
The facilities that hold elk are very different from those for cattle. The fence is made of high-tensile wire, which provides strength and durability, and should be at least 8 feet high. The area should provide a large grazing area along with a fresh water supply and shelter. It is recommended that a strand of barbed wire be stretched at ground level to keep predators out and calves in; electrified wire placed slightly above ground level is another option.
Breeding:
Elk breed from early September through November, a period called the rut. A cow will give birth after a 250-day gestation, so the calves are carried throughout the winter; it is therefore necessary that cows are well fed and receive the needed nutrients during this period. If they are well taken care of, the elk will have up to a 95% pregnancy rate. Calves are born from May through July. Cow elk can begin to breed after 18 months, but bulls should be allowed to mature for two to three years. A cow elk can breed effectively for more than 15 years. The estrus cycle is about 21 days, and a bull may breed as many as 20 cows in a season.
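The calving window follows from the rut dates plus the 250-day gestation; a quick check with Python's datetime (the year chosen is arbitrary):

```python
from datetime import date, timedelta

gestation = timedelta(days=250)

# A cow bred at the start of the rut (early September)
# calves in early May of the following year.
conceived = date(2023, 9, 1)
calved = conceived + gestation  # 2024-05-08, early May
```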
It has become a very common practice amongst elk breeders to use artificial insemination, a method of ensuring desirable male genetics; for example, a bull with large antlers will pass that trait on to his offspring. For this purpose the semen is bought and the cow is bred artificially, with the hope that the young will receive that genetic trait. Through artificial insemination and semen preservation, a sire can continue to produce offspring even after he is dead or his health has declined.
Products:
Velvet antler, the antler in the premature growing stages, is the main product derived from mature bull elk. In the second year of a bull elk's life the antler begins to grow, and it continues to do so every year after that. The velvet is harvested in the late stages of growth, just before it starts to turn into antler, which is when it calcifies and becomes hard like bone. A mature bull will produce 20 pounds or more of velvet annually; the current North American record is about 50 pounds.
Velvet antler is living, rapidly growing tissue that can grow up to half a pound a day. Because it is living, the velvet must be removed surgically. As in any operation, precautionary measures are taken to ensure the humane care and safety of the animal. Once cut, the velvet is frozen and shipped to the manufacturer, where it is made into a consumable substance. Recent studies have shown that velvet contains large amounts of minerals with natural anti-inflammatory agents.
Venison:
There is a growing demand for elk meat around the world. Elk meat is famous for its taste as well as its health benefits, as it is high in protein but low in fat, cholesterol, and calories.
Sources:
Forrest, R. (2004, November). Grande Premium Meats. Retrieved February 24, 2008, from https://elkusa.com/elk_farming/
Westendorf, M. L. (2000). Deer and elk farming. Rutgers Cooperative Extension. 6.
Thorleifson, I., T. Pearse, & B. Friedel (2000). Elk farming handbook. Canada:
**Family room**
A family room is an informal, all-purpose room in a house. The family room is designed to be a place where family and guests gather for group recreation like talking, reading, watching TV, and other family activities. Often, the family room is located adjacent to the kitchen, and at times, flows into it with no visual breaks. A family room often has doors leading to the back yard and specific outdoor living areas such as a deck, garden, or terrace.
The term family room was defined in the 1945 book Tomorrow's House by George Nelson and Henry Wright. Chapter 7, entitled "The Room Without a Name", spoke of the need in modern life for a new "biggest room in the house" that would serve the social and recreational needs of the entire family, allowing activities that would not be permitted in the living room.
This "big room" would have furnishings and materials that were "tough", for hard use, and it should be easy to clean. In contrast with the existing "rumpus rooms" of the time, it would occasionally serve for slightly more formal entertainment, so it should be a handsome room and should have cupboards where toys, tools, etc. could be kept out of sight. The distinction between a family room, living room, and recreation room is fluid, but can be drawn according to three characteristics: location, function and design. During the 1970s, football games on large color televisions helped popularize family rooms large enough for both parents and children. In homes with more than one such room, the family room is less formal, both in function and furnishings, and is located away from the main entrance, while the living room is usually the more formal room, reserved for guests, special occasions and the display of items such as antiques or artwork; it is typically located in the central part of the house towards the front. The recreation room is typically in the basement and used for games and playtime.
In homes with only one, the terms are generally used interchangeably. In floorplans, a "great room" is where the living room and family room are combined into one high-ceilinged room adjacent to the kitchen.
**Car condo**
A car condo is a type of property that allows the owner to have a dedicated space for their vehicle. Unlike a regular garage, a car condo is a separate unit that can be bought or rented by the owner. [1] The car condo building also has shared facilities, such as a lounge, a workshop, or a wash bay, that are co-owned by all the tenants. The owner of a car condo pays a monthly fee to cover the expenses of these common areas.
Car condo developers market their projects to the following demographics:
A classic car owner who wishes to store his or her vehicle in an optimum environment.
A person who has a secondary residence in a popular vacation destination (e.g., South Florida, Las Vegas, Scottsdale) and wishes to keep a car year-round at that destination.
A resident of an urban area (e.g., New York City) where parking a car is extremely expensive and where the car owner wishes a property interest in return for the large monthly parking outlay.
Car condos range in price based on location, size, features and services. Some people simply store their vehicle in a regular storage facility, while purpose-built units range from 800 to 10,000 square feet (930 m2).
High-end luxury car condos are based more on a country club approach and serve their members. Some are centered on racing, R&D, motorsports and philanthropic pursuits. The idea of mixing high-end and vintage automobiles with fundraising is reaching new heights, as evidenced by the US$100,000 raised in one night for the Santa Clara Valley Medical Center. As with any country club, the members require a club house, which can serve as a venue for charitable, political, and corporate functions. These facilities offer their owners amenities such as round-the-clock security, automobile-related businesses and concierge services.
**Petrenko-Kritschenko piperidone synthesis**
The Petrenko-Kritschenko reaction is a classic multicomponent name reaction that is closely related to the Robinson–Schöpf tropinone synthesis, but was published 12 years earlier.
Classic reaction:
In the original publication diethyl α-ketoglutarate, a derivative of acetonedicarboxylic acid, is used in combination with ammonia and benzaldehyde. The relative stereochemistry was not elucidated in the original publication, as structural analysis using X-rays or NMR was not available in those days. In the absence of ammonia or ammonium salts, a 4-oxotetrahydropyran is formed.
In contrast to the Robinson synthesis, it does not employ dialdehydes like succinaldehyde or glutaraldehyde but simpler aldehydes like benzaldehyde. Therefore, the product of the reaction is not a bicyclic structure (see tropinone and pseudopelletierine) but a 4-piperidone. The synthesis of tropinone can be seen as a variation of the Petrenko-Kritschenko reaction in which the two aldehyde functions are covalently linked in a single molecule. Apart from the Hantzsch synthesis the Petrenko-Kritschenko reaction is one of the few examples in which a symmetric pyridine precursor can be obtained in a multicomponent ring-condensation reaction followed by an oxidation. The oxidation by chromium trioxide in acetic acid leads to a symmetrically substituted 4-pyridone, decarboxylation yields the 3,5-unsubstituted derivative.
Modern variants:
Acetoacetate can be used instead of diethyl α-ketoglutarate in the presence of indium salts. The use of aniline was also reported in the original publication. The product of this reaction shows a transoid configuration of the phenyl groups at C-2 and C-6.
Natural product synthesis:
The reaction has been used to prepare precoccinellin, an alkaloid found in certain ladybugs.
Applications to coordination chemistry:
When benzaldehyde is substituted with 2-pyridinecarboxaldehyde, the reaction can be used to prepare precursors for bispidone ligands. Essentially this method is based on two subsequent Petrenko-Kritschenko reactions. These ligands can be used to prepare compounds containing high-valent iron that are able to oxidize cyclohexane in the presence of hydrogen peroxide.
**C10orf53**
C10orf53 is a protein that in humans is encoded by the C10orf53 gene. The gene is located on the positive strand of the DNA, is 30,611 nucleotides in length, and has 3 exons; the protein is 157 amino acids long. C10orf53 orthologs are found in mammals, birds, reptiles, amphibians, fish, and invertebrates. It is primarily expressed in the testes and at very low levels in the cerebellum, liver, placenta, and trachea.
Gene:
Chromosome 10 open reading frame 53 (C10orf53), also known as uncharacterized protein family 0728 (UPF0728), is located in humans on chromosome 10 (10q11.23), spanning 30,611 nucleotides. The gene lies on the positive strand and has 3 identified exons.
Transcript:
The table outlines the two identified isoforms of C10orf53. The most common isoform has 93 amino acids, a shorter sequence resulting from an alternative 3' terminal exon and a distinct C-terminus. Isoform A, being the longer isoform, is analyzed more frequently, and the analysis in this article focuses on isoform A.
Structure:
The molecular weight predicted was 17.6 kDa, and the isoelectric point for C10orf53 is estimated to be 6.36 pI.
Secondary:
The estimated secondary structure for isoform A of C10orf53 is shown below, with each row of the amino acid sequence paired with its per-residue annotation: C indicates the amino acid is located within a coil, S within a strand, and H within a helix.
MPKNAVVILRYGPYSAAGLPVEHHTFRLQGLQAVLAIDGHEVILEKIEDWNVVELMVNEEVIFHCNI 67
CCCCCSSSSSSCCCHHCCSSSSSCHHHHHHHHHHHHHCCCSSSSSSSCCCCSSSSSSCCCSSSSSCC 67
KDLEFGKLTPSSDKRTTSSSRLTFHQLSSPCRMKVSPLQQFPQKTQDLTCTVLAQIGSCIHFQTNLC 134
CCCCCCCCCHHHHHHHHHHHHHHHHCCCCHHHHCCCHHHHCCCCCCCSSSSSHHHHCCSSSSSCCCC 134
DLGWPGLDHMLISGLEKRGTQPY 157
CCCCCCCHHHHHHHHHHCCCCCC 157
The strand and helix structures can then be translated into the predicted tertiary structure produced through I-TASSER.
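The annotation can be checked programmatically; a small sketch that concatenates the three annotation rows and counts contiguous secondary-structure elements (one run of identical states per element):

```python
from itertools import groupby

# Per-residue states for isoform A (C = coil, S = strand, H = helix),
# concatenated from the three 67/67/23-residue rows above.
ss = (
    "CCCCCSSSSSSCCCHHCCSSSSSCHHHHHHHHHHHHHCCCSSSSSSSCCCCSSSSSSCCCSSSSSCC"
    "CCCCCCCCCHHHHHHHHHHHHHHHHCCCCHHHHCCCHHHHCCCCCCCSSSSSHHHHCCSSSSSCCCC"
    "CCCCCCCHHHHHHHHHHCCCCCC"
)
assert len(ss) == 157  # matches the 157-residue isoform A

# One run of consecutive identical states = one predicted element.
elements = {}
for state, run in groupby(ss):
    elements[state] = elements.get(state, 0) + 1
```

Counting the runs of H recovers the seven predicted helices.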
Tertiary:
The predicted tertiary structure of C10orf53 is shown, containing the seven helix and eight strand structures predicted through I-TASSER.
Regulation:
Gene:
C10orf53 is primarily expressed in the testes, but also has very low levels of expression in the cerebellum, liver, placenta, and trachea. It is considered tissue-specific to the testes because of the low expression in other tissues. The C10orf53 protein is found mainly in the cytoplasm of cells, with moderate levels in the nucleus and mitochondria.
Protein:
C10orf53 was found to have two phosphorylation sites, two SUMOylation sites, and one lysine acetylation site. All of these regions are shown in the conceptual translation of C10orf53.
Evolution:
C10orf53 is predicted to evolve more slowly than the gene fibrinogen alpha, but more quickly than cytochrome c. Fibrinogen alpha is considered a gene that evolved rather quickly: when two organisms diverged into different taxa, the gene underwent alterations that made it significantly different between organisms. In contrast, cytochrome c is relatively conserved throughout different organisms, indicating a slow rate of evolution. The rate of evolution of C10orf53 falls between the two.
Homology:
A group of distantly and closely related orthologs were chosen and categorized by their date of divergence from humans. The percent similarity and percent identity relative to the human protein indicate the predicted conservation of C10orf53 across these orthologs.
Interacting Proteins:
The table contains all proteins predicted to interact with C10orf53, including each protein's acronym and name, the means by which the interaction was identified, and the function of each protein. The related functions of some of these proteins provide evidence that the interactions are consistent with the protein's predicted localization.
Clinical Significance:
One study compared relative spermatogenesis in humans with the relative expression of RNAs correlated to teratozoospermia. In non-afflicted humans, there is relatively high expression of C10orf53 across the RNAs tested; in humans with teratozoospermia, those levels dropped to almost zero. Another study examined the expression of C10orf53 during spermatogenesis and testis development in mice: C10orf53 is only highly expressed from day 30 to day 56 of development, with expression decreasing slightly over each five-day period until day 56 is reached. A final study, looking at research done within the past five years, correlated African American prostate cancer patients with the presence of C10orf53. When the exosome found in Caucasian populations associated with prostate cancer (PCC) was compared against the African American exosome (PAA), C10orf53 was unique to PAA.
**Strafing (video games)**
Strafing is the act of moving sideways relative to the direction the player is facing. Strafing allows a player to keep the camera focused on a target, such as an enemy, while moving in a different direction.
Techniques:
Circle strafing:
Circle strafing is the technique of moving around an opponent in a circle while facing them. Circle strafing allows a player to fire continuously at an opponent while evading their attacks. It is most useful in close-quarters combat, where the apparent motion of the circle-strafing player is much greater than that of their stationary enemy: the chance of making the enemy lose track of their target is higher, and the enemy is required to lead the target when firing. The effectiveness of circle strafing is mitigated when the opponent's weapon fires projectiles that travel instantaneously (also referred to as a hitscan weapon), or fires at a high rate, e.g. a machine gun. Circle strafing is especially effective when lag negatively affects the players' ability to hit their target. When latency is high and the game doesn't have client-side hit detection, this can lead to two players circling each other, both missing all their attacks.
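The close-quarters advantage can be illustrated with basic circular-motion geometry; a hypothetical sketch (names and numbers are illustrative, not from any game) of the angular rate a stationary defender must track:

```python
import math

def tracking_rate_deg(orbit_speed, distance):
    """Angular speed (degrees/s) of an attacker circle-strafing
    at orbit_speed (units/s) at the given distance (units)."""
    return math.degrees(orbit_speed / distance)

# The same movement speed is far harder to track up close:
close = tracking_rate_deg(5.0, 2.0)    # ~143 deg/s
far   = tracking_rate_deg(5.0, 10.0)   # ~29 deg/s
```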
Many shooters will allow players to aim down the sights of a gun or use a scope, usually exchanging movement speed and field of vision for greater accuracy. This can make a player more vulnerable to circle strafing, as targets will pass through their field of vision more quickly, they are less capable of keeping up with a target, and their slow movement makes dodging more difficult.
Strafing in melee combat:
Circle strafing has also spread to some 3D action and adventure video games that involve melee combat. Circle strafing in melee combat can be made easier with a lock-on system that snaps the camera's (and the player character's) focus on one particular target, guaranteeing that most of the player character's attacks will land a direct hit on the target. It enables the player character to concentrate on moving around the enemy to dodge their attacks while staying automatically focused on the enemy. This can be a crucial strategy against bosses and powerful enemies, and is notably employed in many The Legend of Zelda titles, starting with Ocarina of Time.
Strafe-running:
Particularly in early first-person shooters, strafe-running (known as speed-strafing among players of GoldenEye 007 and Perfect Dark, and as trichording among players of the Descent series) is a technique that allows a player to run or fly faster through levels by moving forwards and sideways at the same time. The game combines these actions, and the player achieves roughly 1.4 (the square root of 2) times the speed they would have moving in a single direction. The method used by the game can be demonstrated using vector addition. Pathways into Darkness was one of the first games to allow strafe-running.
The games in which strafe-running can be employed treat forward motion independently of sideways (strafing) motion. If, for each update of the player's location, the game moves the player forward one unit and then moves the player to the side by one unit, the overall distance moved is √2 units. Thus, in games with such behavior, moving sideways while simultaneously moving forward gives an overall higher speed than just moving forward, although the player will move in a direction diagonal to the direction being faced. The effect is even greater when moving along three axes (e.g. forward + left + up), providing √3 (roughly 1.73) times greater speed, in games such as Descent.
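The vector addition above can be sketched in a few lines of Python. This is an illustrative model of the naive per-axis movement described here, not code from any actual engine:

```python
import math

def resulting_speed(axes_moved):
    # Naive movement code applies one full unit of velocity per axis,
    # per update, without normalizing the combined movement vector.
    components = [1.0] * axes_moved
    return math.sqrt(sum(c * c for c in components))

print(resulting_speed(1))  # forward only: 1.0
print(resulting_speed(2))  # forward + strafe: sqrt(2), about 1.414
print(resulting_speed(3))  # forward + strafe + up: sqrt(3), about 1.732
```

A modern engine avoids this by normalizing the combined input direction (or clamping the overall speed) before scaling it by the movement speed, so diagonal movement is no faster than straight movement.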
This technique is not possible in all games; most games, especially modern ones, clamp the player's speed and acceleration to a uniform maximum when moving in any direction.
Strafe-jumping:
Strafe-jumping is a technique used to increase a player's movement speed in computer games based on the Quake engine and its successors, most of which are first-person shooters.
History:
Strafe-jumping was a result of a bug in the code base of the 1996 first-person shooter video game Quake. It was deliberately kept intact in Quake's sequels, as it had become a standard technique used by players. The exploit relies on an oversight in the acceleration and maximum speed calculation: when a movement key is pressed, the game adds an acceleration vector in that direction to the player's current velocity. When the player has reached a maximum speed value, further acceleration is prevented. However, the speed limit is only applied to the component of the velocity along the acceleration vector's direction, not to the overall velocity, so precisely manipulating the angle between the overall velocity and this acceleration vector lets the player break the intended speed cap.
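The oversight can be sketched as follows. This is a simplified, engine-agnostic model of the projection-based cap check; the names, constants, and the exaggerated 45-degree strafe angle are illustrative, not the actual id Software source:

```python
import math

MAX_SPEED = 320.0  # intended speed cap, Quake-like units per second
ACCEL = 10.0       # acceleration constant
DT = 0.01          # simulation timestep in seconds

def accelerate(vel, wish_dir):
    """Add acceleration along wish_dir, capping only the component of
    velocity that already points along wish_dir, not the overall speed."""
    current = vel[0] * wish_dir[0] + vel[1] * wish_dir[1]  # projection
    add = MAX_SPEED - current
    if add <= 0:
        return vel  # 'at the cap', as far as this check can tell
    gain = min(ACCEL * MAX_SPEED * DT, add)
    return (vel[0] + gain * wish_dir[0], vel[1] + gain * wish_dir[1])

# Start at the cap, then keep requesting acceleration at an angle to the
# current velocity, as a strafe-jumper does (angle exaggerated here).
vel = (MAX_SPEED, 0.0)
for _ in range(500):
    heading = math.atan2(vel[1], vel[0])
    wish = heading + math.radians(45)
    vel = accelerate(vel, (math.cos(wish), math.sin(wish)))

print(math.hypot(*vel) > MAX_SPEED)  # True: overall speed exceeds the cap
```

Because the check only sees the projection |v|·cos(θ), speed can keep growing until |v|·cos(θ) reaches the cap, i.e. up to MAX_SPEED / cos(θ) overall; actual strafe-jumping uses much smaller, carefully timed angles.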
Method:
Strafe-jumping requires a precise combination of mouse and keyboard inputs. The exact technique involved depends on the game in question. In several games, there are entire maps devoted to this, much like obstacle courses.
The controls are typically as follows: The player holds the move forward key, accelerating to the maximum walking speed.
The player jumps and simultaneously starts holding either the strafe left or the strafe right key.
While airborne, the player moves the mouse slowly in the direction they're strafing. This turns the character and directs the acceleration to an angle that lets the player break the speed cap.
To prevent speed loss from ground friction, the player immediately jumps again on landing.
Strafe-jumping this way will slowly curve the player's trajectory, so to compensate the player can switch the direction of strafing and mouse movement to the opposite side. Done correctly and continuously, this will gradually increase the player's speed. Mastering this technique requires much practice. Sustained strafe-jumping is mainly a matter of muscle memory, as both the required range and precision of mouse movements increase as the player builds up speed.
In Quake III Arena and some games based on its engine, such as Call of Duty and Wolfenstein: Enemy Territory, slight increases in jump height can be achieved by playing the game at specific frame rates.
Pre-strafe:
The pre-strafe is an action performed by the player at the start of strafe-jumping, giving an initial burst of speed. It uses the same mechanics as strafe-jumping, but on the ground before the first jump, and requires faster mouse movement.
The controls are as follows: The player stands facing 90-135 degrees away from the direction they desire to eventually move in.
The player starts holding both the move forward key and the strafe key towards the desired direction, and also moves the mouse in the same direction. This turns and rapidly accelerates the player.
When the player is facing the desired movement direction, they jump to preserve the gained speed.
The player can now start strafe-jumping and continue accelerating.
Bunny hopping:
Bunny hopping is an advanced movement method used in some first-person shooter games which relies on exploiting movement mechanics by combining strafing and jumping. For instance, in games using the Quake or GoldSrc game engines or their derivatives, bunny hopping is a technique that leverages strafe-jumping, allowing a player to accelerate beyond the intended maximum movement speed and quickly change direction while in mid-air. Similarly, jumping on sloped surfaces while strafing into them to gain speed can also be called bunny hopping in games such as The Elder Scrolls Online, Portal 2 and a few other first-person shooter games. Overall, bunny hopping is a technical exploit allowing the player to move faster or more nimbly than normal. The earliest (and most advanced) method of bunny hopping that utilized strafing controls exists in Quake, the Quake III Arena mod Challenge ProMode Arena, and their derivatives such as Warsow and Xonotic; Half-Life (version 1.1.0.8, released in 2001, introduced a speed cap limiting the effectiveness of bunny hopping) and many of its mods and sibling games such as Team Fortress Classic, Team Fortress 2, Dystopia, and the Counter-Strike series; Painkiller, Dark Messiah of Might and Magic, Kingpin: Life of Crime, Titanfall 2, and Apex Legends.
**Hydrofluoric acid burn**
Hydrofluoric acid burn:
A hydrofluoric acid burn is a chemical burn from hydrofluoric acid. Where it contacts the skin it results in significant pain, swelling, redness, and skin breakdown. If the fumes are breathed in, swelling of the upper airway and bleeding may occur. Complications can include electrolyte, heart, lung, kidney, and neurological problems. Most exposures occur at work. With concentrations less than 7%, onset of symptoms may not occur for hours, while with concentrations greater than 15% onset of symptoms is nearly immediate. Diagnosis should include blood tests for calcium, potassium, and magnesium along with an electrocardiogram. Initial treatment of exposure involves removing contaminated clothing and washing with large amounts of water over at least 30 minutes. Other measures include applying calcium gluconate cream. It is estimated that about a thousand cases occur each year. Most people affected are adult males.
Signs and symptoms:
Symptoms of HF exposure include irritation of the eyes, skin, nose, and throat, eye and skin burns, and bone damage. Complications may occur due to fluoride toxicity. Once absorbed into blood through the skin, it reacts with blood calcium and may cause cardiac arrest. Burns with areas larger than 160 cm2 (25 square inches) have the potential to cause serious systemic toxicity from interference with blood and tissue calcium levels. In some cases, exposures can lead to hypocalcemia.
Breathing in the HF fumes can result in fevers, pulmonary edema (fluid buildup in the lungs), bleeding, and low blood oxygen.
Cause:
Hydrogen fluoride is used in a number of industries including glass etching and electronics manufacturing. It is generated upon combustion of many fluorine-containing compounds such as products containing Viton and polytetrafluoroethylene (Teflon) parts. Hydrofluorocarbons in automatic fire suppression systems can release hydrogen fluoride at high temperatures, and this has led to deaths from acute respiratory failure in military personnel when a rocket-propelled grenade hit the fire suppression system in their vehicle. Hydrofluoric acid can be released from volcanoes, sea salt aerosol, and from welding or manufacturing processes.
Pathophysiology:
In the body, hydrofluoric acid reacts with the ubiquitous biologically important ions Ca2+ and Mg2+. Formation of insoluble calcium fluoride is proposed as the cause of both the precipitous fall in serum calcium and the severe pain associated with tissue toxicity.
Diagnosis:
Diagnosis should include blood tests for calcium, potassium, and magnesium along with an electrocardiogram (ECG). ECG changes may include QRS widening and a prolonged QT interval.
Treatment:
Initial treatment of exposure involves removing contaminated clothing and washing the affected area with large amounts of water over at least 30 minutes. Calcium gluconate cream is then usually applied. If pain continues, calcium gluconate can be injected into the affected area or given by injection into a vein or artery. Surgical removal of the affected tissue may be required. The calcium gluconate is a source of Ca2+ that sequesters the fluoride ions. Other special rinsing solutions may also be used. Hexafluorine solution has proven useful for avoiding the adverse effects of chemical burns and counteracting calcium precipitation, and it is recommended that it be kept in laboratories along with first aid supplies and emergency showers.
Inhaled HF may require oxygen therapy and tracheal intubation. In this situation nebulized calcium gluconate may be used. In all cases, advanced medical care should follow first aid.
**N-acetyllactosaminide beta-1,6-N-acetylglucosaminyl-transferase**
N-acetyllactosaminide beta-1,6-N-acetylglucosaminyl-transferase:
In enzymology, an N-acetyllactosaminide beta-1,6-N-acetylglucosaminyl-transferase (EC 2.4.1.150) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine + beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R ⇌ UDP + N-acetyl-beta-D-glucosaminyl-1,6-beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R. Thus, the two substrates of this enzyme are UDP-N-acetyl-D-glucosamine and beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R, whereas its two products are UDP and N-acetyl-beta-D-glucosaminyl-1,6-beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:beta-D-galactosyl-1,4-N-acetyl-D-glucosaminide beta-1,6-N-acetyl-D-glucosaminyltransferase. Other names in common use include N-acetylglucosaminyltransferase, uridine diphosphoacetylglucosamine-acetyllactosaminide, beta1->6-acetylglucosaminyltransferase, Galbeta1->4GlcNAc-R beta1->6 N-acetylglucosaminyltransferase, and UDP-GlcNAc:Gal-R, beta-D-6-N-acetylglucosaminyltransferase. This enzyme participates in glycosphingolipid biosynthesis - neo-lactoseries and glycan structures - biosynthesis 2.
**Treasure trove**
Treasure trove:
A treasure trove is an amount of money or coin, gold, silver, plate, or bullion found hidden underground or in places such as cellars or attics, where the treasure seems old enough for it to be presumed that the true owner is dead and the heirs undiscoverable. An archaeological find of treasure trove is known as a hoard. The legal definition of what constitutes treasure trove and its treatment under law vary considerably from country to country, and from era to era.
The term is also often used metaphorically. Collections of articles published as a book are often titled Treasure Trove, as in A Treasure Trove of Science. This was especially fashionable for titles of children's books in the early- and mid-20th century.
Terminology:
Treasure trove, sometimes rendered treasure-trove, literally means "treasure that has been found". The English term treasure trove was derived from tresor trové, the Anglo-French equivalent of the Latin legal term thesaurus inventus. In 15th-century English the Anglo-French term was translated as "treasure found", but from the 16th century it began appearing in its modern form with the French word trové anglicized as trovey, trouve or trove. The term wealth deposit has been proposed as a more accurate alternative. The term treasure trove is often used metaphorically to mean a "valuable find", and hence a source of treasure, or a reserve or repository of valuable things. Trove is often used alone to refer to the concept, the word having been reanalysed as a noun via folk etymology from an original Anglo-French adjective trové (cognate to the French past participle trouvé, literally "found"). Treasure trove is therefore akin to similar Anglo-French or Anglo-French-derived legal terms whereby a post-positive adjective in a noun phrase (contrary to standard English syntax) has been reanalysed as a compound noun phrase, as in court martial, force majeure, and Princess Royal. Phrases of this form are often used either with the etymologically correct plural form (for example, "Courts-martial deal with serious offences ...") or as fully rederived plural forms (such as "... ordering court-martials ..."). In the case of treasure trove, the typical plural form is almost always treasure troves, with treasures trove found mostly in historical or literary works.
History:
Roman law:
In Roman law, treasure trove was called thesaurus ("treasure" in Latin), and defined by the Roman jurist Paulus as "vetus quædam depositio pecuniæ, cujus non extat memoria, ut jam dominum non habeat" (an ancient deposit of money, of which no memory exists, so that it has no present owner). R. W. Lee, in his book The Elements of Roman Law (4th ed., 1956), commented that this definition was "not quite satisfactory" as treasure was not confined to money, nor was there any abandonment of ownership. Under the emperors, if treasure was found on a person's own land or on sacred or religious land, the finder was entitled to keep it. However, if the treasure was found fortuitously, and not by deliberate search, on another person's land, half went to the finder and half to the owner of the land, who might be the emperor, the fiscus (public treasury), the city, or some other proprietor. According to Dutch jurist Hugo Grotius (1583–1645), as the feudal system spread over Europe and the prince was looked on as the ultimate owner of all lands, his right to the treasure trove became jus commune et quasi gentium (a common and quasi-international right) in England, Germany, France, Spain and Denmark. An interpretation of Roman law regarding treasure trove makes an appearance in the 13th chapter of the Gospel of Matthew. The Parable of the Hidden Treasure is told by Jesus of Nazareth to the crowds surrounding him and his disciples. In the parable, the treasure trove is hidden in a field, which is open country and anyone could conceivably discover something hidden in that location. It is also assumed that the present owner has no knowledge or memory of the treasure. The finder of the treasure concealed the discovery until he could raise capital to purchase the land. Selling all he had, the finder purchased the land and then unearthed the treasure, to which he was entitled as both finder and landowner.
Jesus compared the kingdom of Heaven to the treasure, being of greater value than all a person's earthly wealth and a wise investment that not everyone understands at first.
England and Wales common law:
It has been said that the concept of treasure trove in English law dates back to the time of Edward the Confessor (c. 1003/1004–1066). Under the common law, treasure trove was defined as gold or silver in any form, whether coin, plate (gold or silver vessels or utensils) or bullion (a lump of gold or silver), which had been hidden and rediscovered, and which no person could prove he or she owned. If the person who had hidden the treasure was known or discovered later, it belonged to him or her or persons claiming through him or her such as descendants. To be treasure trove, an object had to be substantially – that is, more than 50% – gold or silver.
Treasure trove had to be hidden with animus revocandi, that is, an intention to recover it later. If an object was simply lost or abandoned (for instance, scattered on the surface of the earth or in the sea), it belonged either to the first person who found it or to the landowner according to the law of finders, that is, legal principles concerning the finding of objects. For this reason, the objects found in 1939 at Sutton Hoo were determined not to be treasure trove; as the objects were part of a ship burial, there had been no intention to recover the buried objects later. The Crown had a prerogative right to treasure trove, and if the circumstances under which an object was found raised a prima facie presumption that it had been hidden, it belonged to the Crown unless someone else could show a better title to it. The Crown could grant its right to treasure trove to any person in the form of a franchise. It was the duty of the finder, and indeed of anyone who had acquired knowledge of the matter, to report the finding of a potential treasure trove to the coroner of the district. Concealing a find was a misdemeanour punishable with fine and imprisonment. The coroner was required to hold an inquest with a jury to determine who were the finders or the persons suspected to be the finders, "and that may be well perceived where one liveth riotously and have done so of long time". Where there had been an apparent concealment of treasure trove the coroner's jury could investigate the title of the treasure to discover if it had been concealed from the supposed owner, but any such finding was not conclusive as the coroner generally had no jurisdiction to enquire into questions of title to the treasure between the Crown and any other claimant.
If a person wished to assert title to the treasure, he or she had to bring separate court proceedings. In the early 20th century, it became the practice of the Lords Commissioners of the Treasury to pay those finders who fully and promptly reported discoveries of treasure troves and handed them over to the proper authorities, the full antiquarian value of objects which were retained for national or other institutions such as museums. Objects not retained were returned to the finders. The law regarding treasure trove was amended in 1996 so that these principles no longer hold (see § Present-day legal definitions: England, Northern Ireland, and Wales below).
Scottish common law:
Under the common law of Scotland, the law of treasure trove was and still is a specialized application of the general rule governing bona vacantia ("vacant goods") – that is, objects that are lost, forgotten or abandoned. The rule is quod nullius est fit domini regis: "that which belongs to nobody becomes our Lord the King's [or Queen's]". The Crown in Scotland has a prerogative right to treasure trove for it is one of the regalia minora ("minor things of the king"), that is, property rights which the Crown may exercise as it pleases and which it may alienate (transfer to another party). As the Scots law on the matter has not changed, it is discussed in the "Present-day legal definitions" section below, under the subheading "Scotland".
United States law:
Many states in the U.S. enacted statutes that received English common law into their legal systems. For example, in 1863 the legislature of Idaho enacted a statute that made "the common law of England ... the rule of decision in all courts" of the state. However, English common law principles of treasure trove were not applied in the U.S. Instead, courts applied rules relating to the finding of lost and ownerless items. The treasure trove rule was first given serious consideration by the Oregon Supreme Court in 1904 in a case involving boys who had discovered thousands of dollars in gold coins hidden in metal cans while cleaning out a henhouse. The Court wrongly believed that the rule operated in the same way as early rules that awarded possession – and, effectively, legal title as well – to innocent finders of items that had been hidden or concealed and the owners of which were unknown. By awarding the coins to the boys, the Court implied that finders were entitled to buried valuables, and that any claims by landowners should be disregarded. In subsequent years the legal position became unclear as a series of English and American cases decided that landowners were entitled to buried valuables. The Maine Supreme Judicial Court reconsidered the rule in 1908. The case before it involved three workers who had found coins while digging on their employer's land. The Court decided along the lines of the 1904 Oregon case and awarded the coins to the finders. For the next 30 years, the courts of a number of states, including Georgia, Indiana, Iowa, Ohio and Wisconsin, applied this modified "treasure trove" rule, most recently in 1948. Since that time, however, the rule has fallen out of favour. Modern legal texts regard it as "a recognized, if not controlling, rule of decision", but one commentator has called it "a minority rule of dubious heritage that was misunderstood and misapplied in a few states between 1904 and 1948".
Present-day legal definitions:
United Kingdom:
England, Northern Ireland, and Wales:
Throughout the ages, farmers, archaeologists and amateur treasure hunters have unearthed important treasures of immense historical, scientific and financial value. However, the strictness of the common law rules meant that such items were sometimes not treasure trove. The items risked being sold abroad, or were only saved for the nation by being purchased at a high price. Mention has already been made of the objects comprising the Sutton Hoo ship burial, which were not treasure trove as they had been interred without any intention to retrieve them. The objects were later presented to the nation by their owner, Edith May Pretty, in a 1942 bequest. In March 1973, a hoard of about 7,811 Roman coins was found buried in a field at Coleby in Lincolnshire. It was made up of antoniniani believed to have been minted between AD 253 and 281. The Court of Appeal of England and Wales held in the 1981 case of Attorney-General of the Duchy of Lancaster v. G.E. Overton (Farms) Ltd. that the hoard was not treasure trove as the coins were bronze and did not have a substantial silver content. Thus, it belonged to the owner of the field and could not be retained by the British Museum. To remedy the faults of the old treasure trove regime, the Treasure Act 1996 introduced a new scheme which came into effect on 24 September 1997. Any treasure found on and after that date, regardless of the circumstances in which it was deposited, even if it was lost or left with no intention of recovery, belongs to the Crown, subject to any prior interests or rights held by any franchisee of the Crown. The Secretary of State (currently meaning the Secretary of State for Culture, Media and Sport) may direct that any such treasure be transferred or disposed of, or that the Crown's title in it be disclaimed. The Act uses the term treasure instead of treasure trove; the latter term is now confined to objects found before the Act came into force.
Objects falling within the following definition are "treasure" under the Act: If the object is not a coin, it must be at least 300 years old and at least 10% precious metal (that is, gold or silver) by weight.
If the object is a coin, it must either be: one of at least two coins in the same find which are at least 300 years old at that time and are at least 10% precious metal by weight; or one of at least ten coins in the same find which are at least 300 years old at that time.
Any object at least 200 years old when found which belongs to a class of objects of outstanding historical, archaeological or cultural importance that has been designated as treasure by the Secretary of State. As of 2006, the following classes of objects had been so designated:
Any object, other than a coin, any part of which is base metal (that is, not gold or silver), which when found is one of at least two base metal objects in the same find which are of prehistoric date.
Any object, other than a coin, which is of prehistoric date, and any part of which is gold or silver.
Any object which would have been treasure trove if found before 24 September 1997.
Any object which, when found, is part of the same find as: an object within head (1), (2), (3) or (4) above found at the same time or earlier; or an object found earlier which would be within head (1), (2) or (3) above if it had been found at the same time. Treasure does not include unworked natural objects, or minerals extracted from a natural deposit, or objects that have been designated not to be treasure by the Secretary of State. Objects falling within the definition of wreck are also not treasure. Coroners continue to have jurisdiction to enquire into any treasure found in their districts, and into who are or are suspected to be its finders. Anyone finding an object he or she believes or has reasonable grounds to believe is treasure must notify the coroner for the district in which the object is found within 14 days starting from the day after the find or, if later, the day on which the finder first believes or has reason to believe the object is treasure. Not doing so is an offence. Inquests are held without a jury unless the coroner decides otherwise. The coroner must notify the British Museum if his or her district is in England, the Department of the Environment if it is in Northern Ireland, or the National Museum Wales if it is in Wales. The coroner must also take reasonable steps to notify any person who it appears may have found the treasure; any person who, at the time it was found, occupied land which it appears may be where the treasure was found; and any other interested persons, including persons involved in the find or having an interest in the land where the treasure was found at that time or since. However, coroners still have no power to make any legal determination as to whether the finder, landowner or occupier of the land has title to the treasure.
The courts have to resolve that issue, and may also review coroners' decisions in relation to treasure. When treasure has vested in the Crown and is to be transferred to a museum, the Secretary of State is required to determine whether a reward should be paid by the museum before the transfer to the finder or any other person involved in the finding of the treasure, the occupier of the land at the time of the find, or any person who had an interest in the land at the time of the find or has had such an interest at any time since then. If the Secretary of State determines that a reward should be paid, he or she must also determine the market value of the treasure (assisted by the Treasure Valuation Committee), the amount of the reward (which cannot exceed the market value), to whom the reward should be paid and, if more than one person should be paid, how much each person should receive.
In England and Wales, finders of objects that are not treasure or treasure trove are encouraged to voluntarily report them under the Portable Antiquities Scheme to finds liaison officers at county councils and local museums. Under the scheme, which started in September 1997, the officers examine finds and provide finders with information on them. They also record the finds, their functions, dates, materials and locations, and place this information into a database which can be analysed. The information on the findspots may be used to organize further research on the areas. Non-treasure finds remain the property of their finders or landowners, who are free to dispose of them as they wish. On 5 July 2009 the largest single Anglo-Saxon hoard as of that date, consisting of over 1,500 gold and precious metal pieces, helmets and sword decorations tentatively dated to around AD 600–800, was discovered by Terry Herbert in Staffordshire, England. Herbert reported the find to his local Portable Antiquities Scheme officer, and on 24 September 2009 it was declared to be treasure by the South Staffordshire coroner. In 2019 two metal detectorists, Lisa Grace and Adam Staples, discovered a hoard of 2,528 silver coins spanning the Norman Conquest of 1066. Around half the silver coins depicted the defeated Harold II and half depicted the victorious William the Conqueror. A small number of the coins were 'mule' coins with designs from both reigns, believed to have been the product of early tax evasion, where the minters failed to purchase the up-to-date die. As at 28 August 2019, the Avon Coroner is yet to rule on the find. The hoard has been described as extremely significant by experts, including the curator of medieval coinage at the British Museum. Avon and Somerset council has expressed a desire to obtain the collection for display in Bath, if it is declared treasure.
Scotland:
The Treasure Act 1996 does not apply in Scotland, where treasure trove is dealt with under the common law of Scotland. The general rule that governs bona vacantia ("vacant goods")—that is, objects that are lost, forgotten or abandoned—is quod nullius est fit domini regis ("that which belongs to nobody becomes our lord the king's [or queen's]"), and the law of treasure trove is a specialized application of that rule. As in England, the Crown in Scotland has a prerogative right to treasure trove for it is one of the regalia minora ("minor things of the king"), that is, property rights which the Crown may exercise as it pleases and which it may alienate (transfer to another party).
To qualify as treasure trove, an object must be precious, it must be hidden, and there must be no proof of its property or reasonable presumption of its former ownership. Unlike under English common law, treasure is not restricted to only gold and silver objects. In 1888 a prehistoric jet necklace and some other articles found in Forfarshire were claimed by the authorities though they were neither gold nor silver. A compromise was eventually reached, and the find was deposited in the National Museum of Scotland. In July 1958, a porpoise bone was found together with 28 other objects of silver alloy (12 brooches, seven bowls, a hanging bowl and other small metal work) underneath a stone slab marked with a cross on the floor of St. Ninian's Church on St. Ninian's Isle in Shetland. The objects were dated to c. AD 800. A dispute having arisen over ownership of the objects between the Crown on the one hand, and the finder (the University of Aberdeen, which had carried out the archaeological excavation) and the landowner on the other, in Lord Advocate v. University of Aberdeen (1963) the Court of Session held that the bone should be regarded as treasure trove together with the silver objects. Further, the requirement that an object must be "hidden" means no more than that it must be concealed; it refers to the condition in which the object was found and does not refer back to the intention which the owner of the object may have had in hiding it. Finally, the requirement that there must be no reasonable presumption of former ownership means that it must not be possible to trace the ownership of the object to a person or family currently existing. Even if an object does not qualify as treasure trove, it may be claimed by the Crown as bona vacantia. The King's and Lord Treasurer's Remembrancer (KLTR), an office held by the Crown Agent who is the senior officer of the Crown Office in Scotland, is responsible for claiming bona vacantia on behalf of the Crown in Scotland.
Finders of items are required to report such finds to the Crown Office or to the Treasure Trove Unit (TTU) at the National Museums of Scotland in Edinburgh. Each find is assessed by the Scottish Archaeological Finds Allocation Panel, which recommends whether the find should be claimed. If it is, the matter is referred by the TTU to the KLTR department at the Crown Office, which will inform the finder that it has accepted the Panel's recommendation to claim the objects in the find as treasure trove or bona vacantia. The Panel also recommends to the KLTR a reward for the find based on its current market value where appropriate, and the most appropriate museum in Scotland to allocate it to. The TTU then contacts all museums which have bid for finds to advise them of the Panel's recommendations. The museums have 14 days in which to accept or reject the proposed allocation and reward for the find. If the KLTR accepts the Panel's recommendations, it will notify the finder of the amount of any reward being paid and the museum to which the find has been allocated. The KLTR also asks the museum to pay the finder's reward. While a Treasury order of 1886 made provision for the preservation of suitable objects in various national museums and payment of rewards to their finders, the Crown is under no legal obligation to offer any rewards for treasure trove objects it has claimed. However, it usually does so, using the objects' market price as a guide. A reward may be withheld or reduced if the finder has inappropriately handled an object, for instance, damaged it by cleaning it or applying waxes and varnishes to it. Finders may elect to waive their rewards. Rewards are not paid for finds occurring during organized fieldwork.
Present-day legal definitions:
United States state laws:
The law of treasure trove in the United States varies from state to state, but certain general conclusions may be drawn. To be treasure trove, an object must be of gold or silver. Paper money is also deemed to be treasure trove, since it previously represented gold or silver. On the same reasoning, it might be imagined that coins and tokens in metals other than gold or silver are also included, but this has yet to be clearly established. The object must have been concealed for long enough that it is unlikely the true owner will reappear to claim it. The consensus appears to be that the object must be at least a few decades old. A majority of state courts, including those of Arkansas, Connecticut, Delaware, Georgia, Indiana, Iowa, Maine, Maryland, New York, Ohio, Oregon and Wisconsin, have ruled that the finder of treasure trove is entitled to it. The theory is that the English monarch's claim to treasure trove was based on a statutory enactment which replaced the finder's original right. When this statute was not re-enacted in the United States after its independence, the right to treasure trove reverted to the finder. In Idaho and Tennessee, courts have decided that treasure trove belongs to the owner of the place where it was found, the rationale being to avoid rewarding trespassers. In one Pennsylvania case, a lower court ruled that the common law did not vest treasure trove in the finder but in the sovereign, and awarded a find of US$92,800 cash to the state. However, this judgment was reversed by the Supreme Court of Pennsylvania on the basis that it had not yet been decided whether the law of treasure trove was part of Pennsylvania law. The Supreme Court deliberately refrained from deciding the issue. Finds of money and lost property are dealt with by other states through legislation. These statutes usually require finders to report their finds to the police and transfer the objects to their custody.
The police then advertise the finds to try to locate their true owner. If the objects remain unclaimed for a specified period of time, title in them vests in the finders. New Jersey vests buried or hidden property in the landowner, Indiana in the county, Vermont in the town, and Maine in the township and the finder equally. In Louisiana, French codes have been followed, so half of a found object goes to the finder and the other half to the landowner. The position in Puerto Rico, the laws of which are based on civil law, is similar. Finders who are trespassers generally lose all their rights to finds, unless the trespass is regarded as "technical or trivial". Where the finder is an employee, most cases hold that the find should be awarded to the employer if it has a heightened legal obligation to take care of its customers' property; otherwise it should go to the employee. A find occurring in a bank is generally awarded to the bank, as the owner is likely to have been a bank customer and the bank has a fiduciary duty to try to reunite lost property with its owners. For similar reasons, common carriers are preferred to passengers and hotels to guests (but only where finds occur in guest rooms, not common areas). The view has been taken that such a rule is suitable for recently misplaced objects, as it provides the best chance for them to be reunited with their owners. However, it effectively delivers title of old artefacts to landowners, since the older an object is, the less likely it is that the original depositor will return to claim it. The rule is therefore of little or no relevance to objects of archaeological value. Due to the potential for a conflict of interest, police officers, other persons working in law enforcement occupations, and members of the armed forces are not entitled to finds in some states.
Present-day legal definitions:
Federal law:
In the United States, recovery of treasure on public lands is governed at the federal level by the Archaeological Resources Protection Act of 1979 (ARPA). Under ARPA, "archaeological resources" more than one hundred years old on public lands belong to the government. The term "archaeological resource" means any material remains of past human life or activities which are "of archaeological interest", as determined by federal regulations. Such regulations include, but are not limited to: pottery, basketry, bottles, weapons, weapon projectiles, tools, structures or portions of structures, pit houses, rock paintings, rock carvings, intaglios, graves, human skeletal materials, or any portion or piece of any of the foregoing items. The definitions of "archaeological resource" and "archaeological interest" have been broadly interpreted under U.S. agency regulations in recent years to include nearly anything of human origin more than 100 years old, while permits to allow recovery of such items have been largely restricted to digs by credentialed archaeologists. The effect of ARPA, as currently defined by federal regulations, is to outlaw virtually all treasure hunting of items more than 100 years old, even treasure troves of gold and silver coin or scrip, under penalty of total forfeiture. Furthermore, the federal policy against spoliation and removal of "archaeological resources" of any type from federal or Indian lands, even coins and scrip less than 100 years old, means it is unlikely that a finder of gold or silver coinage on federal lands will prevail with an argument that the find constitutes a treasure trove of coinage; rather, the find will be treated as "embedded property" that belongs to the property owner, i.e. the government.
The broad use of ARPA to target not only archaeological looting but also to prohibit all treasure hunting on federal or Indian lands has been criticized on the grounds that total prohibition and forfeiture simply encourages concealment or misrepresentation of the age of the found coinage or treasure trove, thus hampering archaeological research, as archaeologists cannot study items that, when found, will never be reported.
**Quantum efficiency**
Quantum efficiency:
The term quantum efficiency (QE) may apply to the incident photon to converted electron (IPCE) ratio of a photosensitive device, or it may refer to the TMR effect of a magnetic tunnel junction.
Quantum efficiency:
This article deals with the term as a measurement of a device's electrical sensitivity to light. In a charge-coupled device (CCD) or other photodetector, it is the ratio between the number of charge carriers collected at either terminal and the number of photons hitting the device's photoreactive surface. As a ratio, QE is dimensionless, but it is closely related to the responsivity, which is expressed in amps per watt. Since the energy of a photon is inversely proportional to its wavelength, QE is often measured over a range of different wavelengths to characterize a device's efficiency at each photon energy level. For typical semiconductor photodetectors, QE drops to zero for photons whose energy is below the band gap. A photographic film typically has a QE of much less than 10%, while CCDs can have a QE of well over 90% at some wavelengths.
Quantum efficiency of solar cells:
A solar cell's quantum efficiency value indicates the amount of current that the cell will produce when irradiated by photons of a particular wavelength. If the cell's quantum efficiency is integrated over the whole solar electromagnetic spectrum, one can evaluate the amount of current that the cell will produce when exposed to sunlight. The ratio between this energy-production value and the highest possible energy-production value for the cell (i.e., if the QE were 100% over the whole spectrum) gives the cell's overall energy conversion efficiency value. Note that in the event of multiple exciton generation (MEG), quantum efficiencies of greater than 100% may be achieved since the incident photons have more than twice the band gap energy and can create two or more electron-hole pairs per incident photon.
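The integration described above can be sketched numerically: weight the EQE at each wavelength by the photon flux (spectral irradiance divided by photon energy), integrate over wavelength, and multiply by the elementary charge to obtain the short-circuit current density. The flat 80% EQE and flat irradiance below are arbitrary illustrative inputs, not a real solar spectrum:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
Q = 1.602176634e-19  # elementary charge, C

def short_circuit_current(wavelengths_nm, eqe, irradiance_w_m2_nm):
    """Short-circuit current density (A/m^2) from EQE and spectral irradiance.

    Photon flux per nm of spectrum = irradiance / (photon energy h*c/lambda);
    each photon yields, on average, EQE electrons of charge Q.
    """
    wl_m = wavelengths_nm * 1e-9
    photon_flux = irradiance_w_m2_nm * wl_m / (H * C)  # photons / (m^2 s nm)
    integrand = eqe * photon_flux
    # trapezoidal integration over wavelength (in nm)
    steps = np.diff(wavelengths_nm)
    integral = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * steps)
    return Q * integral

# Flat 80% EQE and a flat 1 W/m^2/nm irradiance between 400 and 1100 nm
wl = np.linspace(400.0, 1100.0, 71)
jsc = short_circuit_current(wl, np.full_like(wl, 0.8), np.full_like(wl, 1.0))
# jsc is roughly 339 A/m^2 for these made-up inputs
```

Running the same integral with a 100% EQE gives the "highest possible" current mentioned above; the ratio of the two results underlies the overall energy conversion efficiency figure.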
Quantum efficiency of solar cells:
Types: Two types of quantum efficiency of a solar cell are often considered: External Quantum Efficiency (EQE) is the ratio of the number of charge carriers collected by the solar cell to the number of photons of a given energy shining on the solar cell from outside (incident photons).
Quantum efficiency of solar cells:
Internal Quantum Efficiency (IQE) is the ratio of the number of charge carriers collected by the solar cell to the number of photons of a given energy that shine on the solar cell from outside and are absorbed by the cell. The IQE is always larger than the EQE in the visible spectrum. A low IQE indicates that the active layer of the solar cell is unable to make good use of the photons, most likely due to poor carrier collection efficiency. To measure the IQE, one first measures the EQE of the solar device, then measures its transmission and reflection, and combines these data to infer the IQE.
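The relationship between the two quantities can be sketched as a one-line calculation: only the fraction of incident light that is neither reflected nor transmitted is absorbed, so dividing the EQE by that fraction recovers the IQE. The numbers below are purely illustrative:

```python
def internal_qe(eqe, reflectance, transmittance):
    """IQE from measured EQE, reflectance R and transmittance T.

    The absorbed fraction of incident photons is (1 - R - T),
    so IQE = EQE / (1 - R - T); the IQE is always >= the EQE.
    """
    absorbed = 1.0 - reflectance - transmittance
    if absorbed <= 0.0:
        raise ValueError("R + T must be less than 1")
    return eqe / absorbed

# e.g. EQE = 0.60 with 20% reflection and 5% transmission gives IQE = 0.80
iqe = internal_qe(0.60, 0.20, 0.05)
```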
Quantum efficiency of solar cells:
The external quantum efficiency therefore depends on both the absorption of light and the collection of charges. Once a photon has been absorbed and has generated an electron-hole pair, these charges must be separated and collected at the junction. A "good" material avoids charge recombination. Charge recombination causes a drop in the external quantum efficiency.
Quantum efficiency of solar cells:
The ideal quantum efficiency graph has a square shape, where the QE value is fairly constant across the entire spectrum of wavelengths measured. However, the QE for most solar cells is reduced because of the effects of recombination, where charge carriers are not able to move into an external circuit. The same mechanisms that affect the collection probability also affect the QE. For example, modifying the front surface can affect carriers generated near the surface. Highly doped front surface layers can also cause 'free carrier absorption' which reduces QE in the longer wavelengths. And because high-energy (blue) light is absorbed very close to the surface, considerable recombination at the front surface will affect the "blue" portion of the QE. Similarly, lower energy (green) light is absorbed in the bulk of a solar cell, and a low diffusion length will affect the collection probability from the solar cell bulk, reducing the QE in the green portion of the spectrum. Generally, solar cells on the market today do not produce much electricity from ultraviolet and infrared light (<400 nm and >1100 nm wavelengths, respectively); these wavelengths of light are either filtered out or are absorbed by the cell, thus heating the cell. That heat is wasted energy, and could damage the cell.
Quantum efficiency of image sensors:
Quantum efficiency (QE) is the fraction of photon flux that contributes to the photocurrent in a photodetector or a pixel. Quantum efficiency is one of the most important parameters used to evaluate the quality of a detector and is often called the spectral response to reflect its wavelength dependence. It is defined as the number of signal electrons created per incident photon. In some cases it can exceed 100% (i.e. when more than one electron is created per incident photon).
Quantum efficiency of solar cells:
EQE mapping: Conventional measurement of the EQE will give the efficiency of the overall device. However, it is often useful to have a map of the EQE over a large area of the device. This mapping provides an efficient way to visualize the homogeneity and/or the defects in the sample. It was realized by researchers from the Institute of Research and Development on Photovoltaic Energy (IRDEP), who calculated the EQE mapping from electroluminescence measurements taken with a hyperspectral imager.
Spectral responsivity:
Spectral responsivity is a similar measurement, but it has different units: amperes per watt (A/W); (i.e. how much current comes out of the device per unit of incident light power). Responsivity is ordinarily specified for monochromatic light (i.e. light of a single wavelength). Both the quantum efficiency and the responsivity are functions of the photons' wavelength (indicated by the subscript λ).
Spectral responsivity:
To convert from responsivity (Rλ, in A/W) to QEλ (on a scale 0 to 1): QEλ = (Rλ/λ) × (h·c/e) ≈ Rλ × 1240/λ when λ is expressed in nm (the factor h·c/e is approximately 1240 W·nm/A), where h is the Planck constant, c is the speed of light in vacuum, and e is the elementary charge. Note that the unit W/A (watts per ampere) is equivalent to V (volts).
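The conversion can be sketched directly from the constants (using CODATA values; the 0.5 A/W responsivity at 800 nm below is an arbitrary example):

```python
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s
E = 1.602176634e-19  # elementary charge, C

def qe_from_responsivity(r_a_per_w, wavelength_nm):
    """QE on a 0-to-1 scale from responsivity R (A/W) at a given wavelength.

    QE = R * h * c / (lambda * e); since h*c/e is about 1240 W*nm/A,
    this is roughly R * 1240 / lambda for lambda in nm.
    """
    wavelength_m = wavelength_nm * 1e-9
    return r_a_per_w * H * C / (wavelength_m * E)

# A responsivity of 0.5 A/W at 800 nm corresponds to a QE of about 0.77
qe = qe_from_responsivity(0.5, 800.0)
```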
Determination:
QE = Ne/Nν, where Ne = number of electrons produced and Nν = number of photons absorbed.
Assuming each photon absorbed in the depletion layer produces a viable electron-hole pair, and all other photons do not, Nν = Φξ·t/(h·c/λ), so that QE = Ne·h·c/(Φξ·t·λ), where t is the measurement time (in seconds), λ is the wavelength, Φo = incident optical power in watts, and Φξ = optical power absorbed in the depletion layer, also in watts.
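Under that assumption the determination can be sketched as follows; the electron count, absorbed power, and wavelength are invented illustrative numbers:

```python
H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def qe_from_counts(n_electrons, absorbed_power_w, time_s, wavelength_nm):
    """QE = Ne / Nnu.

    The number of absorbed photons Nnu is the absorbed optical energy
    (Phi_xi * t) divided by the energy per photon (h*c/lambda).
    """
    wavelength_m = wavelength_nm * 1e-9
    n_photons = absorbed_power_w * time_s * wavelength_m / (H * C)
    return n_electrons / n_photons

# 2.0e12 electrons collected in 1 s, with 1 microwatt absorbed at 600 nm
qe = qe_from_counts(2.0e12, 1e-6, 1.0, 600.0)
```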
**Mock-heroic**
Mock-heroic:
Mock-heroic, mock-epic or heroi-comic works are typically satires or parodies that mock common Classical stereotypes of heroes and heroic literature. Typically, mock-heroic works either put a fool in the role of the hero or exaggerate the heroic qualities to such a point that they become absurd.
History:
Historically, the mock-heroic style was popular in 17th-century Italy, and in the post-Restoration and Augustan periods in Great Britain.
History:
The earliest example of the form is the Batrachomyomachia, ascribed to Homer by the Romans and parodying his work, but believed by most modern scholars to be the work of an anonymous poet in the time of Alexander the Great. A longstanding assumption on the origin of the mock-heroic in the 17th century is that the epic and pastoral genres had become used up and exhausted, and so they were reprised parodically. In the 17th century the epic genre was heavily criticized, because it was felt to be merely expressing the traditional values of feudal society.
History:
Among the new genres, closer to modern sensibilities and proposing new ideals, satirical literature was particularly effective in criticizing old habits and values. Besides the Spanish picaresque novel and the French burlesque novel, the poema eroicomico flourished in Italy. There, those who still wrote epic poems following the rules set by Torquato Tasso in his Discorsi del poema eroico (Discourses on the Heroic Poem), and realized in his masterwork, the Jerusalem Delivered, were seen as antiquated. The new mock-heroic poem adopted the same metre, vocabulary, and rhetoric as the epic, but turned the old epic upside down in meaning, setting its stories in more familiar situations in order to ridicule the traditional epics. In this context the parody of the epic genre was created.
History:
Lo scherno degli dèi (The Mockery of the Gods) by Francesco Bracciolini, printed in 1618, is often regarded as the first Italian poema eroicomico.
However, the best-known example of the form is La secchia rapita (The Rape of the Bucket) by Alessandro Tassoni (1622).
History:
Other Italian mock-heroic poems were La Gigantea by Girolamo Amelonghi (1566), La moscheide by Giovanni Battista Lalli (1624), the Viaggio di Colonia (Travel to Cologne) by Antonio Abbondanti (1625), L'asino (The donkey) by Carlo de' Dottori (1652), La Troja rapita by Loreto Vittori (1662), Il Malmantile racquistato by Lorenzo Lippi (1688), La presa di San Miniato by Ippolito Neri (1764).
History:
Mock-heroic poems were also written in Italian dialects. For example, in the Neapolitan dialect the best-known work of the form was La Vaiasseide by Giulio Cesare Cortese (1612). In Romanesco, Giovanni Camillo Peresio wrote Il maggio romanesco (1688), Giuseppe Berneri published Meo Patacca in 1695, and, finally, Benedetto Micheli printed La libbertà romana acquistata e defesa in 1765.
History:
After the translation of Don Quixote, by Miguel de Cervantes, English authors began to imitate the inflated language of Romance poetry and narrative to describe misguided or common characters. The most likely genesis for the mock-heroic, as distinct from the picaresque, burlesque, and satirical poem is the comic poem Hudibras (1662–1674), by Samuel Butler. Butler's poem describes a "trew blew" Puritan knight during the Interregnum, in language that imitates Romance and epic poetry. After Butler, there was an explosion of poetry that described a despised subject in the elevated language of heroic poetry and plays. Hudibras gave rise to a particular verse form, commonly called the "Hudibrastic". The Hudibrastic is poetry in closed rhyming couplets in iambic tetrameter, where the rhymes are often feminine rhymes or unexpected conjunctions. For example, Butler describes the English Civil War as a time which "Made men fight like mad or drunk/ For dame religion as for punk/ Whose honesty all durst swear for/ Tho' not one knew why or wherefore" ("punk" meaning a prostitute). The strained and unexpected rhymes increase the comic effect and heighten the parody. This formal indication of satire proved to separate one form of mock-heroic from the others. After Butler, Jonathan Swift is the most notable practitioner of the Hudibrastic, as he used that form for almost all of his poetry.
History:
Poet Laureate John Dryden is responsible for some of the dominance among satirical genres of the mock-heroic in the later Restoration era. While Dryden's own plays would themselves furnish later mock-heroics (specifically, The Conquest of Granada is satirized in the mock-heroic The Author's Farce and Tom Thumb by Henry Fielding, as well as The Rehearsal), Dryden's Mac Flecknoe is perhaps the locus classicus of the mock-heroic form as it would be practiced for a century to come. In that poem, Dryden indirectly compares Thomas Shadwell with Aeneas by using the language of the Aeneid to describe the coronation of Shadwell on the throne of Dullness formerly held by King Flecknoe. The parody of Virgil satirizes Shadwell. Dryden's prosody is identical to regular heroic verse: iambic pentameter closed couplets. The parody is not formal, but merely contextual and ironic. (For an excellent overview of the history of the mock-heroic in the 17th and 18th centuries, see "The English Mock-Heroic Poem of the 18th Century" by Grazyna Bystydzienska, published by Polish Scientific Publishers, 1982.) After Dryden, the form continued to flourish, and there are countless minor mock-heroic poems from 1680 to 1780. Additionally, there were a few attempts at a mock-heroic novel. The most significant later mock-heroic poems were by Alexander Pope. Pope’s The Rape of the Lock is a noted example of the mock-heroic style; indeed, Pope never deviates from mimicking epic poetry such as Homer's Iliad and Virgil's Aeneid. The overall form of the poem, written in cantos, follows the tradition of epics, along with the precursory “Invocation of the Muse”; in this case, Pope's Muse is literally the person who prodded him to write the poem, John Caryll: “this verse to Caryll, Muse, is due!” (line 3). Epics always include foreshadowing, which is usually given by an otherworldly figure, and Pope mocks tradition through Ariel the sprite, who sees some “dread event” (line 109) impending on Belinda.
These epic introductory tendencies give way to the main portion of the story, usually involving a battle of some kind (such as in the Iliad) that follows this pattern: dressing for battle (description of Achilles' shield, preparation for battle), altar sacrifice/libation to the gods, some battle change (perhaps involving drugs), treachery (Achilles' ankle is said to be his weak spot), a journey to the Underworld, and the final battle. All of these elements are followed eloquently by Pope in that specific order: Belinda readies herself for the card game (which includes a description of her hair and beauty), the Baron makes a sacrifice for her hair (the altar built for love and the deal with Clarissa), the “mock” battle of cards changes in the Baron’s favor, Clarissa's treachery to her supposed friend Belinda by slipping the Baron scissors, and finally the treatment of the card game as a battle and the Baron's victory. Pope's mastery of the mock-heroic is clear in every instance. Even the typical apotheosis found in the epics is mimicked in The Rape of the Lock, as “the stars inscribe Belinda’s name!” (line 150). He invokes the same mock-heroic style in The Dunciad, which also employs the language of heroic poetry to describe menial or trivial subjects. In this mock-epic the progress of Dulness over the face of the earth, the coming of stupidity and tastelessness, is treated in the same way as the coming of civilization is in the Aeneid (see also the metaphor of translatio studii). John Gay's Trivia and Beggar's Opera were mock-heroic (the latter in opera), and Samuel Johnson's London is a mock-heroic of a sort.
History:
By the time of Pope, however, the mock-heroic was giving ground to narrative parody, and authors such as Fielding led the mock-heroic novel into a more general novel of parody. The ascension of the novel drew a slow end to the age of the mock-heroic, which had originated in Cervantes's novel. After Romanticism's flourishing, mock-heroics like Byron's Don Juan were uncommon.
History:
Finally, the mock-heroic genre spread throughout Europe: in France, Scotland, Poland, Bohemia, and Russia. The most noted mock-heroic poems in French were Le Virgile travesti (Virgil Travestied) by Paul Scarron (1648–52) and The Maid of Orleans by Voltaire (1730). In macaronic Latin enriched with Scottish Gaelic expressions, William Drummond of Hawthornden wrote Polemo-Middinia inter Vitarvam et Nebernam in 1684. The main author of mock-heroic poems in Polish was Ignacy Krasicki, who wrote Myszeida (Mouseiad) in 1775 and Monacomachia (The War of the Monks) in 1778. In the same language Tomasz Kajetan Węgierski published Organy in 1775–77. The Bohemian poet Šebestiàn Hnĕvkovský in 1805 printed two mock-heroic poems: Dĕvin in Czech and Der böhmische Mägderkrieg in German. In 1791 the Russian poet N. P. Osipov published Eneida travestied (Russian: Вирги́лиева Энеи́да, вы́вороченная наизна́нку). Ivan Kotliarevsky's mock-epic poem Eneyida (Ukrainian: Енеїда), written in 1798, is considered to be the first literary work published wholly in the modern Ukrainian language.
**Isocodeine**
Isocodeine:
Isocodeine is an opioid research chemical related to codeine. It is an epimer of codeine that can be prepared from codeine via a Mitsunobu reaction. Dozens of derivatives and analogs of isocodeine and the related compound isomorphine have been produced. One of these, dihydroisocodeine, is a pharmaceutical four times stronger than dihydrocodeine and thus six times stronger than codeine, which was used more extensively in the past in Continental Europe and other locales. Other isomers of codeine include allocodeine, pseudocodeine, and heterocodeine; substances with intermediate qualities, such as pseudoallocodeine and formylallocodeine, can be prepared in the laboratory.
**Ultra Hockey**
Ultra Hockey:
Ultra Hockey is a video game developed and published by Konami for the arcade.
Gameplay:
Ultra Hockey is a simple competitive hockey game presented as a tabletop arcade machine with an overhead view.
Reception:
Next Generation reviewed the arcade version of the game, rating it three stars out of five, and described it as "essentially an exact duplicate of Konami's own Five A Side Soccer, except using hockey players and an icy white background instead of a green one".
**Chirplet transform**
Chirplet transform:
In signal processing, the chirplet transform is an inner product of an input signal with a family of analysis primitives called chirplets. Similar to the wavelet transform, chirplets are usually generated from (or can be expressed as being from) a single mother chirplet (analogous to the so-called mother wavelet of wavelet theory).
Definitions:
The term chirplet transform was coined by Steve Mann, as the title of the first published paper on chirplets. The term chirplet itself (apart from chirplet transform) was also used by Steve Mann, Domingo Mihovilovic, and Ronald Bracewell to describe a windowed portion of a chirp function. In Mann's words: A wavelet is a piece of a wave, and a chirplet, similarly, is a piece of a chirp. More precisely, a chirplet is a windowed portion of a chirp function, where the window provides some time localization property. In terms of time–frequency space, chirplets exist as rotated, sheared, or other structures that move from the traditional parallelism with the time and frequency axes that are typical for waves (Fourier and short-time Fourier transforms) or wavelets.
Definitions:
The chirplet transform thus represents a rotated, sheared, or otherwise transformed tiling of the time–frequency plane. Although chirp signals have been known for many years in radar, pulse compression, and the like, the first published reference to the chirplet transform described specific signal representations based on families of functions related to one another by time–varying frequency modulation or frequency varying time modulation, in addition to time and frequency shifting, and scale changes. In that paper, the Gaussian chirplet transform was presented as one such example, together with a successful application to ice fragment detection in radar (improving target detection results over previous approaches). The term chirplet (but not the term chirplet transform) was also proposed for a similar transform, apparently independently, by Mihovilovic and Bracewell later that same year.
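A minimal numerical sketch of a Gaussian chirplet analysis, assuming a unit-energy Gaussian window and a linear chirp rate (the parameter values below are illustrative): the transform coefficient at a given time centre, frequency, chirp rate, and window width is simply the inner product of the signal with that chirplet.

```python
import numpy as np

def gaussian_chirplet(t, tc, fc, c_rate, sigma):
    """Unit-energy Gaussian chirplet centred at time tc, with centre
    frequency fc (Hz), linear chirp rate c_rate (Hz/s), and width sigma (s)."""
    tau = t - tc
    g = np.exp(-tau**2 / (2.0 * sigma**2)) * \
        np.exp(2j * np.pi * (fc * tau + 0.5 * c_rate * tau**2))
    return g / np.linalg.norm(g)

def chirplet_coefficient(x, t, tc, fc, c_rate, sigma):
    """One coefficient of the chirplet transform: <chirplet, x>."""
    return np.vdot(gaussian_chirplet(t, tc, fc, c_rate, sigma), x)

# A test chirp sweeping through 50 Hz at t = 0.5 s with a 40 Hz/s chirp rate
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.exp(2j * np.pi * (50.0 * (t - 0.5) + 0.5 * 40.0 * (t - 0.5)**2))

matched = abs(chirplet_coefficient(x, t, 0.5, 50.0, 40.0, 0.1))
mismatched = abs(chirplet_coefficient(x, t, 0.5, 200.0, -40.0, 0.1))
# The matched chirplet yields a much larger coefficient magnitude
```

Scanning the parameters over a grid tiles the time–frequency plane with the rotated and sheared structures described above.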
Applications:
The first practical application of the chirplet transform was in water-human-computer interaction (WaterHCI) for marine safety, to assist vessels in navigating through ice-infested waters, using marine radar to detect growlers (iceberg fragments too small to be visible on conventional radar, yet large enough to damage a vessel). Other applications of the chirplet transform in WaterHCI include the SWIM (Sequential Wave Imprinting Machine). More recently, other practical applications have been developed, including image processing (e.g. where there is periodic structure imaged through projective geometry), as well as excising chirp-like interference in spread spectrum communications, EEG processing, and chirplet time-domain reflectometry.
Extensions:
The warblet transform is a particular example of the chirplet transform introduced by Mann and Haykin in 1992 and now widely used. It provides a signal representation based on cyclically varying frequency modulated signals (warbling signals).
**Focal choroidal excavation**
Focal choroidal excavation:
Focal choroidal excavation (FCE) is a concavity in the choroidal layer of the eye that can be detected by optical coherence tomography. The disease is usually unilateral and not associated with any accompanying systemic diseases.
Pathophysiology:
Focal choroidal excavation (FCE) is a concavity in the choroidal layer of the eye without posterior staphyloma or scleral ectasia, that can be detected by optical coherence tomography. The concavity is commonly seen in the macular region. The disease is usually unilateral and not associated with any accompanying systemic diseases. Choroidal vascular disorders which cause visual symptoms, including central serous chorioretinopathy (CSCR), choroidal neovascularization (CNV), and polypoidal choroidal vasculopathy (PCV), may also present with focal choroidal excavation.
Etiology:
The exact etiology of FCE is still (as of 2022) unknown. It was previously considered a congenital disease, but later it was suggested that FCEs can also occur with choroidal atrophy and choroiditis.
Signs and symptoms:
In FCE, visual acuity may be normal and the overlying retina may also appear normal.
Classification:
There are three classification systems used to classify FCE.
Classification:
If there is no separation between the photoreceptor outer segments and the retinal pigment epithelium (RPE), the lesion is classified as conforming; if there is a space, it is considered non-conforming. Based on the shape of the choroidal concavity, FCE can be classified as cone-shaped, bowl-shaped, or mixed morphology. Based on the location of the lesion, it can be classified as foveal or extrafoveal.
Treatment:
For asymptomatic FCE without any other choroidal or retinal changes, observation alone is recommended. If the lesion expands or the sclera thickens, other underlying causes should be ruled out and treated.
History:
Jampol et al. first identified the lesion in 2006. Margolis et al. named the condition focal choroidal excavation. Later, Shinojima et al. described a classification system based on the shape of the choroidal concavity.
**Veterinary pathology**
Veterinary pathology:
Veterinary pathologists are veterinarians who specialize in the diagnosis of diseases through the examination of animal tissue and body fluids. Like medical pathology, veterinary pathology is divided into two branches, anatomical pathology and clinical pathology. Other than the diagnosis of disease in food-producing animals, companion animals, zoo animals and wildlife, veterinary pathologists also have an important role in drug discovery and safety as well as scientific research.
Veterinary anatomical pathology:
Anatomical pathology (Commonwealth) or Anatomic pathology (U.S.) is concerned with the diagnosis of disease based on the gross, microscopic, and molecular examination of organs, tissues, and whole bodies (necropsy). The Indian, European, Japanese and American Colleges of Veterinary Pathologists certify veterinary pathologists through a certifying exam. The American College of Veterinary Pathologists certification exam consists of four parts - gross pathology, microscopic pathology, veterinary pathology, and general pathology. Only the general pathology section is shared between the anatomic and clinical pathology examinations. Anatomic pathologists are employed in a number of different positions, including diagnostics, teaching, research, and the pharmaceutical industry.
Veterinary clinical pathology:
Clinical pathology is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids such as blood, urine or cavitary effusions, or tissue aspirates, using the tools of chemistry, microbiology, hematology and molecular pathology. The Indian, European, Japanese and American Colleges of Veterinary Pathologists certify veterinary clinical pathologists. The American College of Veterinary Pathologists certification exam consists of four parts: General Pathology (shared with the Anatomic Pathology certifying examination), Cytology and Surgical Pathology, Hematology, and Clinical Chemistry. The credential, DACVP (Diplomate, American College of Veterinary Pathologists), is usually followed by a parenthetical notation of "(Clinical Pathology)" to distinguish it from DACVP counterparts certified in anatomic pathology. The European credential is DipECVCP (Diplomate of the European College of Veterinary Clinical Pathology). Clinical pathologists are employed in diagnostic pathology, veterinary and medical teaching, research, and the pharmaceutical industry.
**Environmental radioactivity**
Environmental radioactivity:
Environmental radioactivity is produced by radioactive materials in the human environment. While some radioisotopes, such as strontium-90 (90Sr) and technetium-99 (99Tc), are only found on Earth as a result of human activity, and some, like potassium-40 (40K), are only present due to natural processes, a few isotopes, e.g. tritium (3H), result from both natural processes and human activities. The concentration and location of some natural isotopes, particularly uranium-238 (238U), can be affected by human activity.
Background level in soils:
Radioactivity is present everywhere, and has been since the formation of the earth. Natural radioactivity detected in soil is predominantly due to the following four natural radioisotopes: 40K, 226Ra, 238U, and 232Th. In one kilogram of soil, the potassium-40 amounts to an average 370 Bq of radiation, with a typical range of 100–700 Bq; the others each contribute some 25 Bq, with typical ranges of 10–50 Bq (7–50 Bq for the 232Th). Some soils may vary greatly from these norms.
Background level in soils:
Sea and river silt:
A recent report on the Sava river in Serbia suggests that many of the river silts contain about 100 Bq kg−1 of natural radioisotopes (226Ra, 232Th, and 238U). According to the United Nations, the normal concentration of uranium in soil ranges between 300 μg kg−1 and 11.7 mg kg−1. It is well known that some plants, called hyperaccumulators, are able to absorb and concentrate metals within their tissues; iodine was first isolated from seaweed in France, which suggests that seaweed is an iodine hyperaccumulator.
Background level in soils:
Synthetic radioisotopes also can be detected in silt. Busby quotes a report on the plutonium activity in Welsh intertidal sediments by Garland et al. (1989), which suggests that the closer a site is to Sellafield, the higher is the concentration of plutonium in the silt. Some relationship between distance and activity can be seen in their data, when fitted to an exponential curve, but the scatter of the points is large (R2 = 0.3683).
Man-made:
The additional radioactivity in the biosphere caused by human activity due to the releases of man-made radioactivity and of Naturally Occurring Radioactive Materials (NORM) can be divided into several classes.
Normal licensed releases which occur during the regular operation of a plant or process handling man-made radioactive materials.
For instance, the release of 99mTc from a hospital's nuclear medicine department, which occurs when a person given a 99mTc imaging agent excretes the agent.
Releases of man-made radioactive materials which occur during an industrial or research accident.
For instance the Chernobyl accident.
Releases which occur as a result of military activity.
For example, a nuclear weapons test.
Releases which occur as a result of a crime.
For example, the Goiânia accident where thieves, unaware of its radioactive content, stole some medical equipment and as a result a number of people were exposed to radiation.
Releases of naturally occurring radioactive materials (NORM) as a result of mining etc.
For example, the release of the trace quantities of uranium and thorium in coal, when it is burned in power stations.
Farming and the transfer to humans of deposited radioactivity:
The fact that a radioisotope lands on the surface of the soil does not mean it will enter the human food chain. After release into the environment, radioactive materials can reach humans by a range of different routes, and the chemistry of the element usually dictates the most likely route.
Man-made:
Cows:
Jiří Hála claims in his textbook "Radioactivity, Ionizing Radiation and Nuclear Energy" that cattle only pass a minority of the strontium, caesium, plutonium and americium they ingest to the humans who consume milk and meat. Using milk as an example, if the cow has a daily intake of 1000 Bq of the preceding isotopes, then the milk will have the following activities.
Man-made:
90Sr: 2 Bq/L
137Cs: 5 Bq/L
239Pu: 0.001 Bq/L
241Am: 0.001 Bq/L

Soil:
Jiří Hála's textbook states that soils vary greatly in their ability to bind radioisotopes; clay particles and humic acids can alter the distribution of the isotopes between the soil water and the soil. The distribution coefficient Kd is the ratio of the soil's radioactivity (Bq g−1) to that of the soil water (Bq ml−1). If the radioactivity is tightly bound by the minerals in the soil, then less radioactivity can be absorbed by crops and grass growing in the soil.
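As a sketch of the arithmetic implied by Hála's figures, the per-litre milk activities above can be expressed as transfer factors (the fraction of the daily intake that appears per litre of milk). The factors below are simply back-calculated from the quoted numbers and are for illustration only:

```python
# Illustrative sketch: milk activity per litre implied by the figures above
# for a daily intake of 1000 Bq per isotope. The transfer factors are
# back-calculated from those numbers and are assumptions, not measured data.

DAILY_INTAKE_BQ = 1000.0

# fraction of daily intake appearing per litre of milk (Bq/L per Bq/day)
TRANSFER_FACTOR = {
    "Sr-90": 2.0 / 1000.0,
    "Cs-137": 5.0 / 1000.0,
    "Pu-239": 0.001 / 1000.0,
    "Am-241": 0.001 / 1000.0,
}

def milk_activity(intake_bq_per_day, isotope):
    """Activity per litre of milk for a given daily intake of one isotope."""
    return intake_bq_per_day * TRANSFER_FACTOR[isotope]

for iso in TRANSFER_FACTOR:
    print(iso, milk_activity(DAILY_INTAKE_BQ, iso), "Bq/L")
```

Note how strongly the actinides are held back relative to caesium: the implied plutonium and americium transfer is 5000 times smaller than that of 137Cs.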
Man-made:
Cs-137: Kd = 1000
Pu-239: Kd = 10,000 to 100,000
Sr-90: Kd = 80 to 150
I-131: Kd = 0.007 to 50

The Trinity test:
One dramatic source of man-made radioactivity is a nuclear weapons test. The glassy trinitite created by the first atom bomb contains radioisotopes formed by neutron activation and nuclear fission. In addition, some natural radioisotopes are present. A recent paper reports the levels of long-lived radioisotopes in the trinitite. The trinitite was formed from feldspar and quartz which were melted by the heat. Two samples of trinitite were used: the first (left-hand-side bars in the graph) was taken from between 40 and 65 meters from ground zero, while the other sample was taken from further away from the ground zero point.
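The distribution coefficient defined above is a simple ratio; a minimal sketch, using the tabulated Kd values only as assumed example inputs:

```python
# Sketch of the distribution coefficient Kd defined in the text:
# Kd = soil activity (Bq/g) divided by soil-water activity (Bq/mL).
# Higher Kd means the isotope is bound more tightly to the soil minerals,
# so less of it is available for uptake by crops and grass.

def distribution_coefficient(soil_bq_per_g, water_bq_per_ml):
    """Kd as defined in the text (units: mL/g)."""
    return soil_bq_per_g / water_bq_per_ml

# Example: a Cs-137 Kd of ~1000 means a soil activity of 1000 Bq/g
# corresponds to a soil-water activity of only 1 Bq/mL.
print(distribution_coefficient(1000.0, 1.0))
# Sr-90 (Kd ~ 80-150) stays far more mobile in the soil water than Pu-239
# (Kd ~ 10,000-100,000), consistent with the list above.
print(distribution_coefficient(100.0, 1.0))
```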
Man-made:
The 152Eu (half-life 13.54 years) and 154Eu (half-life 8.59 years) were mainly formed by the neutron activation of the europium in the soil; it is clear that the level of radioactivity for these isotopes is highest where the neutron dose to the soil was larger. Some of the 60Co (half-life 5.27 years) was generated by activation of the cobalt in the soil, but some was also generated by the activation of the cobalt in the steel (100-foot) tower. This 60Co from the tower would have been scattered over the site, reducing the difference in the soil levels.
Man-made:
The 133Ba (half life 10.5 year) and 241Am (half life 432.6 year) are due to the neutron activation of barium and plutonium inside the bomb. The barium was present in the form of the nitrate in the chemical explosives used while the plutonium was the fissile fuel used.
The 137Cs level is higher in the sample that was further away from the ground zero point – this is thought to be because the precursors to the 137Cs (137I and 137Xe) and, to a lesser degree, the caesium itself are volatile. The natural radioisotopes in the glass are about the same in both locations.
Man-made:
Activation products:
The action of neutrons on stable isotopes can form radioisotopes; for instance, the neutron bombardment (neutron activation) of nitrogen-14 forms carbon-14. This radioisotope can be released from the nuclear fuel cycle; it is the radioisotope responsible for the majority of the dose experienced by the population as a result of the activities of the nuclear power industry. Nuclear bomb tests have increased the specific activity of carbon, whereas the use of fossil fuels has decreased it. See the article on radiocarbon dating for further details.
Man-made:
Fission products:
Discharges from nuclear plants within the nuclear fuel cycle introduce fission products to the environment. The releases from nuclear reprocessing plants tend to be medium- to long-lived radioisotopes; this is because the nuclear fuel is allowed to cool for several years before being dissolved in nitric acid. The releases from nuclear reactor accidents and bomb detonations will contain a greater amount of the short-lived radioisotopes (when the amounts are expressed in activity, Bq).
Man-made:
Short-lived:
An example of a short-lived fission product is iodine-131; it can also be formed as an activation product by the neutron activation of tellurium.
Man-made:
In both bomb fallout and a release from a power reactor accident, the short-lived isotopes cause the dose rate on day one to be much higher than that which will be experienced at the same site many days later. This holds true even if no attempts at decontamination are made. In the graphs below, the total gamma dose rate and the share of the dose due to each main isotope released by the Chernobyl accident are shown.
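A hedged numerical illustration of why the day-one dose rate dominates: with standard half-lives and invented initial activities, the short-lived component decays away within weeks while the medium-lived component barely changes. (This is pure radioactive decay, ignoring weathering and migration.)

```python
# Illustration of the day-one vs. later dose-rate effect described above.
# Half-lives are standard values; the equal initial activities are made up.

HALF_LIFE_DAYS = {
    "I-131": 8.02,            # short-lived
    "Cs-137": 30.08 * 365.25,  # medium-lived (~30 years)
}

def activity(a0_bq, isotope, t_days):
    """Remaining activity after t_days of pure radioactive decay."""
    return a0_bq * 0.5 ** (t_days / HALF_LIFE_DAYS[isotope])

initial = {"I-131": 1000.0, "Cs-137": 1000.0}
for day in (1, 30, 365):
    total = sum(activity(a0, iso, day) for iso, a0 in initial.items())
    print(f"day {day}: total activity ~ {total:.0f} Bq")
```

By day 30 the iodine-131 contribution has fallen by more than 90%, while the caesium-137 contribution is essentially unchanged, matching the qualitative behaviour described in the text.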
Man-made:
Medium-lived:
An example of a medium-lived fission product is 137Cs, which has a half-life of 30 years. Caesium is released in bomb fallout and from the nuclear fuel cycle. A paper has been written on the radioactivity in oysters found in the Irish Sea; these were found by gamma spectroscopy to contain 141Ce, 144Ce, 103Ru, 106Ru, 137Cs, 95Zr and 95Nb. In addition, a zinc activation product (65Zn) was found; this is thought to be due to the corrosion of magnox fuel cladding in cooling ponds. The concentration of all these isotopes in the Irish Sea attributable to nuclear facilities such as Sellafield has decreased significantly in recent decades.
Man-made:
An important part of the Chernobyl release was the caesium-137; this isotope is responsible for much of the long-term (at least one year after the fire) external exposure which has occurred at the site. The caesium isotopes in the fallout have had an effect on farming.[2] A large amount of caesium was released during the Goiânia accident, where a radioactive source (made for medical use) was stolen and then smashed open during an attempt to convert it into scrap metal. The accident could have been stopped at several stages: first, the last legal owners of the source failed to make arrangements for the source to be stored in a safe and secure place; and second, the scrap metal workers who took it did not recognise the markings which indicated that it was a radioactive object.
Man-made:
Soudek et al. reported in 2006 details of the uptake of 90Sr and 137Cs into sunflowers grown under hydroponic conditions. The caesium was found in the leaf veins, in the stem and in the apical leaves. It was found that 12% of the caesium entered the plant, and 20% of the strontium. This paper also reports details of the effect of potassium, ammonium and calcium ions on the uptake of the radioisotopes.
Man-made:
Caesium binds tightly to clay minerals such as illite and montmorillonite; hence it remains in the upper layers of soil, where it can be accessed by plants with shallow roots (such as grass). As a result, grass and mushrooms can carry a considerable amount of 137Cs, which can be transferred to humans through the food chain. One of the best countermeasures in dairy farming against 137Cs is to mix up the soil by ploughing it deeply. This puts the 137Cs out of reach of the shallow roots of the grass, so the level of radioactivity in the grass will be lowered. Also, after a nuclear war or serious accident, removing the top few centimetres of soil and burying it in a shallow trench will reduce the long-term gamma dose to humans from 137Cs, as the gamma photons will be attenuated by their passage through the soil. The more remote the trench is from humans, and the deeper it is, the better the protection afforded to the human population.
Man-made:
In livestock farming, an important countermeasure against 137Cs is to feed animals a small amount of Prussian blue. This iron potassium cyanide compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to eat several grams of Prussian blue per day. The Prussian blue reduces the biological half-life (not to be confused with the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years, which is a constant and cannot be changed; however, the biological half-life will change according to the nature and habits of the organism for which it is expressed. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of the Prussian blue is that the caesium which is stripped from the animal in the droppings is in a form which is not available to plants; hence, it prevents the caesium from being recycled. The form of Prussian blue required for the treatment of humans or animals is a special grade. Attempts to use the pigment grade used in paints have not been successful.
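The physical/biological distinction above can be made quantitative through the effective half-life, which combines the two as reciprocals: 1/T_eff = 1/T_phys + 1/T_bio. The biological half-lives below are assumed round numbers for illustration, not measured values:

```python
# Sketch of the effective half-life of Cs-137 in an organism.
# The physical half-life is fixed; treatments such as Prussian blue act
# only on the biological half-life. Biological values here are assumptions.

def effective_half_life(t_physical, t_biological):
    """Effective half-life from physical and biological half-lives (same units)."""
    return 1.0 / (1.0 / t_physical + 1.0 / t_biological)

T_PHYS_CS137_DAYS = 30.08 * 365.25  # ~30 years, a physical constant
t_bio_untreated = 110.0             # within the 1-4 month human range (assumed)
t_bio_treated = 30.0                # shortened by Prussian blue (assumed)

print(round(effective_half_life(T_PHYS_CS137_DAYS, t_bio_untreated), 1))
print(round(effective_half_life(T_PHYS_CS137_DAYS, t_bio_treated), 1))
```

Because the physical half-life is so much longer than the biological one, the effective half-life is dominated by the biological term, which is exactly why shortening retention in the body is such an effective countermeasure.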
Man-made:
Long-lived:
Examples of long-lived isotopes include iodine-129 and Tc-99, which have nuclear half-lives of 15 million and 200,000 years, respectively.
Man-made:
Plutonium and the other actinides:
In popular culture, plutonium is credited with being the ultimate threat to life and limb, which is wrong: while ingesting plutonium is not likely to be good for one's health, other radioisotopes, such as radium, are more toxic to humans. Regardless, the introduction of transuranium elements such as plutonium into the environment should be avoided wherever possible. The activities of the nuclear reprocessing industry are currently the subject of great debate, as one of the fears of those opposed to the industry is that large amounts of plutonium will be either mismanaged or released into the environment.
Man-made:
In the past, one of the largest releases of plutonium into the environment has been nuclear bomb testing.
Those tests in the air scattered some plutonium over the entire globe; this great dilution means the threat to any one exposed person is very small, as each person is exposed to only a tiny amount.
The underground tests tend to form molten rock, which rapidly cools and seals the actinides into the rock, so rendering them unable to move; again the threat to humans is small unless the site of the test is dug up.
The safety trials where bombs were subject to simulated accidents pose the greatest threat to people; some areas of land used for such experiments (conducted in the open air) have not been fully released for general use despite in one case an extensive decontamination.
Natural:
Activation products from cosmic rays:
Cosmogenic isotopes (or cosmogenic nuclides) are rare isotopes created when a high-energy cosmic ray interacts with the nucleus of an in situ atom. These isotopes are produced within earth materials such as rocks or soil, in Earth's atmosphere, and in extraterrestrial items such as meteorites. By measuring cosmogenic isotopes, scientists are able to gain insight into a range of geological and astronomical processes. There are both radioactive and stable cosmogenic isotopes. Some of these radioisotopes are tritium, carbon-14 and phosphorus-32.
Natural:
Production modes:
Here is a list of radioisotopes formed by the action of cosmic rays on the atmosphere; the list also contains the production mode of the isotope. These data were obtained from the SCOPE 50 report; see table 1.9 of chapter 1.
Transfer to ground:
The level of beryllium-7 in the air is related to the sun spot cycle, as radiation from the sun forms this radioisotope in the atmosphere. The rate at which it is transferred from the air to the ground is controlled in part by the weather.
Natural:
Applications in geology, listed by isotope:
Applications of dating:
Because cosmogenic isotopes have long half-lives (anywhere from thousands to millions of years), scientists find them useful for geologic dating. Cosmogenic isotopes are produced at or near the surface of the Earth, and thus are commonly applied to problems of measuring ages and rates of geomorphic and sedimentary events and processes.
Natural:
Specific applications of cosmogenic isotopes include:
exposure dating of earth surfaces, including glacially scoured bedrock, fault scarps, and landslide debris
burial dating of sediment, bedrock, and ice
measurement of steady-state erosion rates
absolute dating of organic matter (radiocarbon dating)
absolute dating of water masses and measurement of groundwater transport rates
absolute dating of meteorites and lunar surfaces

Methods of measurement for the long-lived isotopes:
To measure cosmogenic isotopes produced within solid earth materials, such as rock, samples are generally first put through a process of mechanical separation. The sample is crushed and desirable material, such as a particular mineral (quartz in the case of Be-10), is separated from non-desirable material by using a density separation in a heavy liquid medium such as lithium sodium tungstate (LST). The sample is then dissolved, a common-isotope carrier added (Be-9 carrier in the case of Be-10), and the aqueous solution is purified down to an oxide or other pure solid.
Natural:
Finally, the ratio of the rare cosmogenic isotope to the common isotope is measured using accelerator mass spectrometry. The original concentration of cosmogenic isotope in the sample is then calculated using the measured isotopic ratio, the mass of the sample, and the mass of carrier added to the sample.
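The back-calculation described above can be sketched as follows. The ratio, carrier mass, and sample mass used here are invented illustrative values, not measurements:

```python
# Hedged sketch of the calculation described in the text: recovering the
# concentration of a rare cosmogenic isotope (e.g. Be-10) from the
# AMS-measured ratio to the added common-isotope carrier (e.g. Be-9).
# All input numbers are illustrative assumptions.

AVOGADRO = 6.02214076e23

def cosmogenic_concentration(ratio, carrier_mass_g, carrier_molar_mass, sample_mass_g):
    """Atoms of the cosmogenic isotope per gram of sample.

    ratio              -- measured rare/common isotope ratio (e.g. Be-10/Be-9)
    carrier_mass_g     -- mass of common-isotope carrier added (g)
    carrier_molar_mass -- molar mass of the carrier isotope (g/mol)
    sample_mass_g      -- mass of the dissolved sample (g)
    """
    carrier_atoms = carrier_mass_g / carrier_molar_mass * AVOGADRO
    return ratio * carrier_atoms / sample_mass_g

# e.g. a 2e-13 Be-10/Be-9 ratio, 0.25 mg of Be-9 carrier, 20 g of quartz
conc = cosmogenic_concentration(2e-13, 0.25e-3, 9.012, 20.0)
print(f"{conc:.3e} atoms/g")
```

The carrier supplies a known, large number of common-isotope atoms, so the tiny measured ratio converts directly into an absolute number of cosmogenic atoms in the sample.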
Radium and radon from the decay of long-lived actinides:
Radium and radon are in the environment because they are decay products of uranium and thorium.
The radon (222Rn) released into the air decays to 210Pb and other radioisotopes, and the levels of 210Pb can be measured. The rate of deposition of this radioisotope is dependent on the weather. Below is a graph of the deposition rate observed in Japan.
Natural:
Uranium-lead dating:
Uranium-lead dating is usually performed on the mineral zircon (ZrSiO4), though other materials can be used. Zircon incorporates uranium atoms into its crystalline structure as substitutes for zirconium, but strongly rejects lead. It has a high blocking temperature, is resistant to mechanical weathering and is chemically inert. Zircon also forms multiple crystal layers during metamorphic events, each of which may record an isotopic age of the event. These can be dated by a SHRIMP ion microprobe.
Natural:
One of the advantages of this method is that any sample provides two clocks, one based on uranium-235's decay to lead-207 with a half-life of about 703 million years, and one based on uranium-238's decay to lead-206 with a half-life of about 4.5 billion years, providing a built-in crosscheck that allows accurate determination of the age of the sample even if some of the lead has been lost.
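The "two clocks" idea can be sketched with the standard decay-age relation t = ln(1 + D/P)/λ, where D/P is the measured daughter/parent ratio and λ = ln(2)/half-life. The measured ratio below is invented for illustration:

```python
import math

# Hedged sketch of the two independent U-Pb clocks described above.
# Half-lives are the standard values quoted in the text; the Pb/U ratio
# is an invented example, not a real measurement.

HALF_LIFE_YR = {"U-238": 4.468e9, "U-235": 7.04e8}

def age_from_ratio(daughter_parent_ratio, parent):
    """Age in years from a measured daughter/parent atom ratio."""
    lam = math.log(2.0) / HALF_LIFE_YR[parent]      # decay constant (1/yr)
    return math.log(1.0 + daughter_parent_ratio) / lam

# Invented 206Pb/238U ratio corresponding to roughly a 1-billion-year age:
pb206_u238 = 0.1678
print(round(age_from_ratio(pb206_u238, "U-238") / 1e9, 2), "Gyr")
```

A concordant sample yields consistent ages from both the 238U→206Pb and 235U→207Pb chains; a discrepancy between the two clocks signals lead loss, which is the built-in crosscheck the text describes.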
**Internet-related prefixes**
Internet-related prefixes:
Internet-related prefixes such as e-, i-, cyber-, info-, techno- and net- are added to a wide range of existing words to describe new, Internet- or computer-related flavors of existing concepts, often electronic products and services that already have a non-electronic counterpart. The adjective virtual is often used in a similar manner.
Cyber-, e-, i, and virtual:
"Cyber-" Cyber- is derived from "cybernetic", from the Greek κυβερνητικός 'skilled in steering or governing'. Examples: cyberspace, cyberlaw, cyberbullying, cybercrime, cyberwarfare, cyberterrorism, cybersex, and cyberdelic. It is commonly used for policies and politics regarding computer systems and networks (as in the above cases), but also for information technology products and services.
"E-" E-, standing for electronic, is used in the terms e-mail, e-commerce, e-business, e-banking, e-sports, e-paper, e-cigarette, e-car, e-girl, e-reservation, and e-book. The lowercase initial e prefix was used as early as 1994 by eWorld, Apple's online service.
Cyber-, e-, i, and virtual:
"i-" The i- prefix appeared in "In Watermelon Sugar", Richard Brautigan's American postmodern post-apocalyptic novel, written in 1964 and published in 1968. Set in the aftermath of a fallen civilization, it focuses on a commune organized around a central gathering house named "iDEATH". The i- prefix was used as early as 1994 by iVillage, an internet community site by and for women. More recent examples include the BBC's iPlayer and Google's former iGoogle service. It has even been used by companies not in the IT sector for their websites, such as Coca-Cola's now-defunct icoke.com.
Cyber-, e-, i, and virtual:
Apple Inc. is especially connected to the i- prefix. The company first employed it for the iMac line of computers starting in 1998, and has since used it in many of its other product names, including iCal, iSync, iChat, iBook, iDVD, iLife, iMessage, iPod (and iPod Socks), iSight, iPhone, iWeb, iTunes, iCloud, and others. Apple has said it stands for "Internet". Promotional materials for the 2004 film I, Robot, inspired by Isaac Asimov's short-story collection of the same name, used a lowercase i as a cultural reference to the rising popularity at that time of the prefix in product names. The letter "i" was also used in the popular Nickelodeon show iCarly, which uses the internet as its main theme and parodies Apple's use of "i-" in almost all of its product names.
Cyber-, e-, i, and virtual:
"Virtual" The word virtual is used in a similar way to the prefixes above, but it is an adjective instead of a prefix. For example, it is used in the terms virtual reality, virtual world, and virtual sex.
Linguistic behaviour:
These prefixes are productive. Michael Quinion notes that most of these formations are nonce words that will never be seen again. He writes that new terms such as "e-health" are unneeded; in this case telemedicine already exists to describe the application of telecommunications to medicine. He similarly points out the redundancy of e-tail, e-commerce, and e-business. Martin likewise characterizes many of these words as "fad words" and believes many will disappear once the technology that resulted in their coinage becomes better accepted and understood. For example, he writes, "when using computers becomes the standard way to do business, there will be no need to call it 'e-business' — it may be just 'business.'"
Spelling controversies:
There is some confusion over whether these prefixes should be hyphenated and/or in upper case. In the case of e-mail, it was originally hyphenated and lowercase in general usage, but the hyphen is no longer common. In 1999, Michael Quinion attributed the forms "email", "E-mail" and "Email" to uncertainty on the part of newer Internet users. In 2003, Ronald Smith prescribed that the e- should always be lowercase and hyphenated. In 2013, the Associated Press Stylebook removed the hyphen from "e-mail", following the general usage of the word.
History:
The term 'cybernetics' was used in Norbert Wiener's book Cybernetics: Or Control and Communication in the Animal and the Machine (MIT Press, 1948). Wiener used the term in reference to the control of complex systems in the animal world and in mechanical networks, in particular self-regulating control systems. By 1960, doctors were performing research into surgically or mechanically augmenting humans or animals to operate machinery in space, leading to the coining of the term "cyborg", for "cybernetic organism".
History:
In 1965, the ABPC television series The Avengers introduced artificial humanoids called Cybernauts. In 1966, the BBC Doctor Who serial The Tenth Planet introduced monsters called the Cybermen.
History:
Fred J. Cook (winner of the 1961 Hillman Award), in his 1966 book "The Corrupted Land: The Social Morality of Modern America", introduces his book with "such ideals as free enterprise, 'rugged individualism' and laissez faire are anachronisms in this age of CYBERNATION." By the 1970s, the Control Data Corporation (CDC) sold the "Cyber" range of supercomputers, establishing the word cyber- as synonymous with computing. Robert Trappl credits William Gibson and his novel Neuromancer with triggering a "cyber- prefix flood" in the 1980s. McFedries observes that a backlash against the use of e- and cyber- can be traced to the late 1990s, quoting Hale and Scanlon, who in 1999 asked writers to "resist the urge to use this vowel-as-cliché" when it comes to e- and called cyber- "terminally overused". A comparable usage from outside the English language is the Japanese prefix denki (電気), meaning electricity, which was used in Meiji-era Japan to denote products exhibiting a Western sensibility.
**Céa's lemma**
Céa's lemma:
Céa's lemma is a lemma in mathematics. Introduced by Jean Céa in his Ph.D. dissertation, it is an important tool for proving error estimates for the finite element method applied to elliptic partial differential equations.
Lemma statement:
Let V be a real Hilbert space with the norm ‖⋅‖.
Let a:V×V→R be a bilinear form with the properties:
|a(v,w)| ≤ γ‖v‖‖w‖ for some constant γ>0 and all v,w in V (continuity);
a(v,v) ≥ α‖v‖2 for some constant α>0 and all v in V (coercivity or V-ellipticity).
Let L:V→R be a bounded linear operator. Consider the problem of finding an element u in V such that a(u,v) = L(v) for all v in V.
Consider the same problem on a finite-dimensional subspace Vh of V, so, uh in Vh satisfies a(uh,v)=L(v) for all v in Vh.
By the Lax–Milgram theorem, each of these problems has exactly one solution. Céa's lemma states that ‖u−uh‖ ≤ (γ/α)‖u−v‖ for all v in Vh.
That is to say, the subspace solution uh is "the best" approximation of u in Vh, up to the constant γ/α.
The proof is straightforward: α‖u−uh‖2 ≤ a(u−uh,u−uh) = a(u−uh,u−v) + a(u−uh,v−uh) = a(u−uh,u−v) ≤ γ‖u−uh‖‖u−v‖ for all v in Vh; dividing through by α‖u−uh‖ gives the stated bound.
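The proof's chain of inequalities can be typeset in LaTeX for readability; this is a direct transcription of the argument, with each step labelled:

```latex
\begin{aligned}
\alpha\,\lVert u-u_h\rVert^2
  &\le a(u-u_h,\,u-u_h) && \text{(coercivity)}\\
  &= a(u-u_h,\,u-v) + a(u-u_h,\,v-u_h)\\
  &= a(u-u_h,\,u-v) && (a\text{-orthogonality, since } v-u_h\in V_h)\\
  &\le \gamma\,\lVert u-u_h\rVert\,\lVert u-v\rVert && \text{(continuity)}.
\end{aligned}
```

Dividing through by α‖u−uh‖ yields ‖u−uh‖ ≤ (γ/α)‖u−v‖ for all v in Vh.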
Lemma statement:
We used the a-orthogonality of u−uh and v−uh ∈ Vh, that is, a(u−uh,v) = 0 for all v in Vh, which follows directly from Vh ⊂ V and a(u,v) = L(v) = a(uh,v) for all v in Vh. Note: Céa's lemma also holds on complex Hilbert spaces; one then uses a sesquilinear form a(⋅,⋅) instead of a bilinear one. The coercivity assumption then becomes |a(v,v)| ≥ α‖v‖2 for all v in V (notice the absolute value sign around a(v,v)).
Error estimate in the energy norm:
In many applications, the bilinear form a:V×V→R is symmetric, so a(v,w)=a(w,v) for all v,w in V.
This, together with the above properties of this form, implies that a(⋅,⋅) is an inner product on V.
The resulting norm ‖v‖a = √(a(v,v)) is called the energy norm, since it corresponds to a physical energy in many problems. This norm is equivalent to the original norm ‖⋅‖.
Using the a-orthogonality of u−uh and Vh and the Cauchy–Schwarz inequality, ‖u−uh‖a2 = a(u−uh,u−uh) = a(u−uh,u−v) ≤ ‖u−uh‖a⋅‖u−v‖a for all v in Vh. Hence, in the energy norm, the inequality in Céa's lemma becomes ‖u−uh‖a ≤ ‖u−v‖a for all v in Vh (notice that the constant γ/α on the right-hand side is no longer present).
This states that the subspace solution uh is the best approximation to the full-space solution u in respect to the energy norm. Geometrically, this means that uh is the projection of the solution u onto the subspace Vh in respect to the inner product a(⋅,⋅) (see the adjacent picture).
Using this result, one can also derive a sharper estimate in the norm ‖⋅‖. Since α‖u−uh‖2 ≤ a(u−uh,u−uh) = ‖u−uh‖a2 ≤ ‖u−v‖a2 ≤ γ‖u−v‖2 for all v in Vh, it follows that ‖u−uh‖ ≤ √(γ/α)‖u−v‖ for all v in Vh.
An application of Céa's lemma:
We will apply Céa's lemma to estimate the error of calculating the solution to an elliptic differential equation by the finite element method.
Consider the problem of finding a function u:[a,b]→R satisfying −u″(x) = f(x) in [a,b], with u(a) = u(b) = 0, where f:[a,b]→R is a given continuous function.
An application of Céa's lemma:
Physically, the solution u to this two-point boundary value problem represents the shape taken by a string under the influence of a force such that at every point x between a and b the force density is f(x)e (where e is a unit vector pointing vertically, while the endpoints of the string are on a horizontal line; see the adjacent picture). For example, that force may be gravity, when f is a constant function (since the gravitational force is the same at all points).
An application of Céa's lemma:
Let the Hilbert space V be the Sobolev space H01(a,b), which is the space of all square-integrable functions v defined on [a,b] that have a weak derivative on [a,b] with v′ also being square integrable, and v satisfies v(a) = v(b) = 0.
The inner product on this space is (v,w) = ∫ab (v(x)w(x) + v′(x)w′(x)) dx for all v and w in V.
After multiplying the original boundary value problem by v in this space and performing an integration by parts, one obtains the equivalent problem a(u,v) = L(v) for all v in V, with a(u,v) = ∫ab u′(x)v′(x) dx and L(v) = ∫ab f(x)v(x) dx.
It can be shown that the bilinear form a(⋅,⋅) and the operator L satisfy the assumptions of Céa's lemma.
In order to determine a finite-dimensional subspace Vh of V, consider a partition a=x0<x1<⋯<xn−1<xn=b of the interval [a,b], and let Vh be the space of all continuous functions that are affine on each subinterval in the partition (such functions are called piecewise-linear). In addition, assume that any function in Vh takes the value 0 at the endpoints of [a,b].
It follows that Vh is a vector subspace of V whose dimension is n−1 (the number of points in the partition that are not endpoints).
Let uh be the solution to the subspace problem a(uh,v) = L(v) for all v in Vh, so one can think of uh as a piecewise-linear approximation to the exact solution u.
By Céa's lemma, there exists a constant C>0 dependent only on the bilinear form a(⋅,⋅), such that ‖u−uh‖≤C‖u−v‖ for all v in Vh.
An application of Céa's lemma:
To explicitly calculate the error between u and uh, consider the function πu in Vh that has the same values as u at the nodes of the partition (so πu is obtained by linear interpolation on each interval [xi,xi+1] from the values of u at interval's endpoints). It can be shown using Taylor's theorem that there exists a constant K that depends only on the endpoints a and b, such that |u′(x)−(πu)′(x)|≤Kh‖u″‖L2(a,b) for all x in [a,b], where h is the largest length of the subintervals [xi,xi+1] in the partition, and the norm on the right-hand side is the L2 norm.
An application of Céa's lemma:
This inequality then yields an estimate for the error ‖u−πu‖.
Then, by substituting v=πu in Céa's lemma it follows that ‖u−uh‖≤Ch‖u″‖L2(a,b), where C is a different constant from the above (it depends only on the bilinear form, which implicitly depends on the interval [a,b] ).
This result is of fundamental importance, as it states that the finite element method can be used to approximately calculate the solution of our problem, and that the error in the computed solution decreases proportionally to the partition size h.
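A minimal numerical sketch of this convergence behaviour. The test problem (f(x) = π²sin(πx), exact solution u(x) = sin(πx) on [0,1]) and the lumped-load quadrature are assumptions for illustration, not taken from the text:

```python
import numpy as np

# Minimal sketch: solve -u'' = f on [0, 1] with u(0) = u(1) = 0 by
# piecewise-linear (P1) finite elements on a uniform mesh, and check that
# the error shrinks as the mesh is refined, as the estimate above predicts.

def solve_fem(n, f):
    """Return mesh nodes and the P1 finite element solution at the nodes."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix for a(u, v) = integral of u'v' (interior nodes only):
    # tridiagonal with 2/h on the diagonal and -1/h off-diagonal.
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    # Load vector L(v) = integral of f*v, approximated by lumped quadrature.
    F = h * f(x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, F)
    return x, u

f = lambda x: np.pi**2 * np.sin(np.pi * x)  # exact solution: sin(pi x)
for n in (10, 20, 40):
    x, uh = solve_fem(n, f)
    err = np.max(np.abs(uh - np.sin(np.pi * x)))
    print(n, err)
```

Halving h should roughly quarter the nodal error for this smooth problem; the O(h) bound from Céa's lemma is an upper bound, and nodal errors here happen to converge even faster.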
Céa's lemma can be applied along the same lines to derive error estimates for finite element problems in higher dimensions (here the domain of u was one-dimensional), and while using higher-order polynomials for the subspace Vh.
**Renal calyx**
Renal calyx:
The renal calyces are conduits in the kidney through which urine passes. The minor calyces form a cup-shaped drain around the apex of the renal pyramids. Urine formed in the kidney passes through a renal papilla at the apex into the minor calyx; 4-5 minor calyces converge to form a major calyx through which urine passes into the renal pelvis (which in turn drains urine out of the kidney through the ureter).
Function:
Peristalsis of the smooth muscle, originating in pacemaker cells in the walls of the calyces, propels urine through the renal pelvis and ureters to the bladder. Peristalsis is initiated by an increase in volume that stretches the walls of the calyces, causing them to fire impulses which stimulate rhythmical contraction and relaxation. Parasympathetic innervation enhances peristalsis while sympathetic innervation inhibits it.
Clinical significance:
A "staghorn calculus" is a kidney stone that may extend into the renal calyces.
A renal diverticulum is a diverticulum of the renal calyces.
**Millennium Run**
Millennium Run:
The Millennium Run, or Millennium Simulation (referring to its size), is a computer N-body simulation used to investigate how the distribution of matter in the Universe has evolved over time, in particular how the observed population of galaxies was formed. It is used by scientists working in physical cosmology to compare observations with theoretical predictions.
Overview:
A basic scientific method for testing theories in cosmology is to evaluate their consequences for the observable parts of the universe. One piece of observational evidence is the distribution of matter, including galaxies and intergalactic gas, which are observed today. Light emitted from more distant matter must travel longer in order to reach Earth, meaning looking at distant objects is like looking further back in time. This means the evolution in time of the matter distribution in the universe can also be observed directly.
The Millennium Simulation was run in 2005 by the Virgo Consortium, an international group of astrophysicists from Germany, the United Kingdom, Canada, Japan and the United States. It starts at the epoch when the cosmic background radiation was emitted, about 379,000 years after the universe began. The cosmic background radiation has been studied by satellite experiments, and the observed inhomogeneities in the cosmic background serve as the starting point for following the evolution of the corresponding matter distribution. Using the physical laws expected to hold in the currently known cosmologies and simplified representations of the astrophysical processes observed to affect real galaxies, the initial distribution of matter is allowed to evolve, and the simulation's predictions for formation of galaxies and black holes are recorded.
Since the completion of the Millennium Run simulation in 2005, a series of ever more sophisticated and higher fidelity simulations of the formation of the galaxy population have been built within its stored output and have been made publicly available over the internet. In addition to improving the treatment of the astrophysics of galaxy formation, recent versions have adjusted the parameters of the underlying cosmological model to reflect changing ideas about their precise values. To date (mid-2018) more than 950 published papers have made use of data from the Millennium Run, making it, at least by this measure, the highest impact astrophysical simulation of all time.
Size of the simulation:
For the first scientific results, published on June 2, 2005, the Millennium Simulation traced 2160³, or just over 10 billion, "particles." These are not particles in the particle-physics sense – each "particle" represents approximately a billion solar masses of dark matter. The region of space simulated was a cube about 2 billion light years on a side. This volume was populated by about 20 million "galaxies". A supercomputer located in Garching, Germany executed the simulation, which used a version of the GADGET code, for more than a month. The output of the simulation needed about 25 terabytes of storage.
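For a sense of what an N-body code does, here is a minimal direct-summation integrator in Python. This is a toy sketch with invented units and parameters, not the GADGET tree/PM code used for the Millennium Run: direct summation costs O(N²) per step, which is exactly why simulations with ~10¹⁰ particles need more sophisticated algorithms.

```python
import numpy as np

G, EPS = 1.0, 1e-2   # arbitrary gravitational constant and softening length

def accelerations(pos, mass):
    # Pairwise softened gravitational accelerations by direct O(N^2) summation.
    d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # d[i, j] = r_j - r_i
    r2 = (d ** 2).sum(axis=2) + EPS ** 2                # softened distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                       # no self-interaction
    w = inv_r3 * mass[np.newaxis, :]                    # weight by source mass
    return G * (d * w[:, :, np.newaxis]).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    # Kick-drift-kick leapfrog: symplectic and time-reversible,
    # the integrator family typically used in cosmological N-body codes.
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc          # half kick
        pos += dt * vel                # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc          # half kick
    return pos, vel
```

Because the pairwise forces are equal and opposite, total momentum is conserved to rounding error, a quick sanity check for any integrator of this kind.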
First results:
The Sloan Digital Sky Survey had challenged the current understanding of cosmology by finding black hole candidates in very bright quasars at large distances. This meant that they were created much earlier than initially expected. In successfully managing to produce quasars at early times, the Millennium Simulation demonstrated that these objects do not contradict our models of the evolution of the universe.
Millennium II:
In 2009, the same group ran the 'Millennium II' simulation (MS-II) on a smaller cube (about 400 million light years on a side), with the same number of particles but with each particle representing 6.9 million solar masses. This is a more demanding numerical task, since splitting the computational domain between processors becomes difficult when dense clumps of matter are present. MS-II used 1.4 million CPU hours over 2048 cores (i.e. about a month) on the Power-6 computer at Garching; a simulation was also run with the same initial conditions and fewer particles to check that features of the higher-resolution run also appeared at lower resolution.
Millennium XXL:
In 2010, the 'Millennium XXL' simulation (MXXL) was performed, this time using a much larger cube (over 13 billion light years on a side) and 6720³ particles, each representing 7 billion times the mass of the Sun. The MXXL spans a cosmological volume 216 and 27,000 times the size of the Millennium and MS-II simulation boxes, respectively. The simulation was run on JUROPA, one of the top 15 supercomputers in the world in 2010. It used more than 12,000 cores for the equivalent of 300 years of CPU time and 30 terabytes of RAM, and generated more than 100 terabytes of data. Cosmologists use the MXXL simulation to study the distribution of galaxies and dark matter halos on very large scales and how the rarest and most massive structures in the universe came about.
Millennium Run Observatory:
In 2012, the Millennium Run Observatory (MRObs) project was launched. The MRObs is a theoretical virtual observatory that integrates detailed predictions for the dark matter (from the Millennium simulations) and for the galaxies (from semi-analytical models) with a virtual telescope to synthesize artificial observations. Astrophysicists use these virtual observations to study how the predictions from the Millennium simulations compare to the real universe, to plan future observational surveys, and to calibrate the techniques used by astronomers to analyze real observations. A first set of virtual observations produced by the MRObs has been released to the astronomical community for analysis through the MRObs Web portal. The virtual universe can also be accessed through a new online tool, the MRObs browser, which allows users to interact with the Millennium Run Relational Database, where the properties of millions of dark matter halos and their galaxies from the Millennium project are stored. Upgrades to the MRObs framework, and its extension to other types of simulations, are currently being planned.
**Scriptcase**
Scriptcase:
Scriptcase is a rapid application development platform that works as a code generator for PHP web applications and is itself based on the same scripting language. It is web oriented and can be installed on an intranet or internet server. Developers use a graphical interface to design and generate code. The software was developed by NetMake in 2000 and can be used on Mac, Windows, and Linux operating systems.
Both the development and runtime environments use a web server such as Apache, together with PHP and an SQL database. Unlike with PHP frameworks, after deployment the development software is no longer necessary to run the application.
Features:
Scriptcase can be used as a mere CRUD (Create, Read, Update and Delete) tool for given database tables, but it also enables custom code to manage business rules and validation. It allows developers to create forms and queries, ranging from simple forms to highly complex elements, to manipulate data from databases such as MySQL, PostgreSQL, SQLite, Interbase, Firebird, Access, Oracle, MS SQL Server, IBM Db2, Sybase, Informix and ODBC connections.
The software facilitates development with JavaScript and allows developers to create applications with AJAX through a set of features and services, such as navigation between pages or sections, or automatic validation of fields.
Report output can be exported to MS Word, MS Excel or PDF, or printed. Complex SQL statements can be used, including sub-selects, joins and stored procedures. Scriptcase allows users to write PHP code to handle exceptions and create more complex validation. It is also possible to create infrastructure such as menus, login screens and a security system with authentication. Tabs in forms allow grouping form pages or queries on the same page. The package also includes a documentation generator intended to support the development team.
Platform development began in 2000, and the software has received regular updates since. The pricing model includes yearly subscriptions as well as lifetime options. Prices start at $400 per year for the 'starter' version and reach $1,400 for a lifetime 'enterprise' package, or $650 per developer for an edition that supports more database products.
**European Chemical Site Promotion Platform**
European Chemical Site Promotion Platform:
The European Chemical Site Promotion Platform (ECSPP) was founded in 2005 to promote new investments in Europe's chemical industrial complexes.
Origins:
The idea of ECSPP was developed at the 2002 European Petrochemical Association (EPCA) Annual Meeting in Berlin between Peter Anderton of the Port of Rotterdam and Neil Kenley of SembCorp Utilities UK Ltd.
One of the conclusions emerging from this discussion was that European chemical sites could benefit from having a forum where information and insights on matters of common interest could be exchanged. In particular, it was felt that Europe needed to be more pro-active in profiling itself as an attractive region for new chemical investment.
Brainstorming session:
In November 2003, Peter Anderton invited a number of chemical site management organizations to Rotterdam to take part in a brainstorming session to determine whether there was wider support for setting up such a forum, to be known as European Chemical Site Promotion Platform (ECSPP). There was enthusiastic backing for the idea and it was decided to develop the framework and conditions required to establish ECSPP as a formal organization.
Steering committee:
A steering committee was formed to manage this process and in the course of 2004, a Business Plan was drafted. At a plenary meeting in March 2005, the Business Plan was approved unanimously and the decision was taken to proceed with the formal establishment of ECSPP.
ECSPP was officially launched at a Press Conference held during EPCA's Annual Meeting in September 2005 in Vienna.
The North East of England Process Industry Cluster is a member of ECSPP.
**ORMDL1**
ORMDL1:
ORMDL sphingolipid biosynthesis regulator 1 is a protein that in humans is encoded by the ORMDL1 gene.
**Vocative case**
Vocative case:
In grammar, the vocative case (abbreviated VOC) is a grammatical case which is used for a noun that identifies a person (animal, object, etc.) being addressed, or occasionally for the noun modifiers (determiners, adjectives, participles, and numerals) of that noun. A vocative expression is an expression of direct address by which the identity of the party spoken to is set forth expressly within a sentence. For example, in the sentence "I don't know, John," John is a vocative expression that indicates the party being addressed, as opposed to the sentence "I don't know John" in which "John" is the direct object of the verb "know".
Historically, the vocative case was an element of the Indo-European case system and existed in Latin, Sanskrit and Ancient Greek. Many modern Indo-European languages (English, Spanish, etc.) have lost the vocative case, but others retain it, including the Baltic languages, some Celtic languages and most Slavic languages. Some linguists, such as Albert Thumb, argue that the vocative form is not a case but a special form of nouns not belonging to any case, as vocative expressions are not related syntactically to other words in sentences. Pronouns usually lack vocative forms.
Indo-European languages:
Comparison:
Distinct vocative forms are assumed to have existed in all early Indo-European languages and survive in some. Here is, for example, the Indo-European word for "wolf" in various languages: The elements separated with hyphens denote the stem, the so-called thematic vowel of the case and the actual suffix. In Latin, for example, the nominative case is lupus and the vocative case is lupe, but the accusative case is lupum. The asterisks before the Proto-Indo-European words mean that they are theoretical reconstructions not attested in any written source. The symbol ◌̩ (vertical line below) indicates a consonant serving as a vowel (it should appear directly below the "l" or "r" in these examples but may appear after them on some systems due to font display issues). All final consonants were lost in Proto-Slavic, so both the nominative and vocative Old Church Slavonic forms lack true endings, showing only reflexes of the old thematic vowels.
The vocative ending changes the stem consonant in Old Church Slavonic because of the so-called First Palatalization. Most modern Slavic languages that retain the vocative case have altered the ending to avoid the change: Bulgarian вълко occurs far more frequently than вълче.
Baltic languages:
Lithuanian:
The vocative is distinct in the singular and identical to the nominative in the plural for all inflected nouns. Nouns with a nominative singular ending in -a have a vocative singular that is usually written identically but distinct in accentuation.
In Lithuanian, the form that a given noun takes depends on its declension class and, sometimes, on its gender. There have been several changes in history, the last being the -ai ending formed between the 18th and 19th centuries. The older forms are listed under "other forms".
Some nouns of the e- and a-stem declensions (both proper nouns and otherwise) are stressed differently: "aikštė": "aikšte!" (square); "tauta": "tauta!". In addition, nouns of the e-stems have an ablaut of long vowel ė in the nominative and short vowel e /ɛ/ in the vocative. In pronunciation, ė is the close-mid vowel [eː], and e is the open-mid vowel /ɛ/.
The vocative of diminutive nouns with the suffix -(i)ukas most frequently has no ending: broliùk "brother!", etc. A less frequent alternative is the ending -ai, which is also slightly dialectal: broliùkai, etc.
Colloquially, some personal names with a masculine -(i)(j)o stem and diminutives with the suffixes -elis, -ėlis have an alternative vocative singular form characterized by a zero ending (i.e. the stem alone acts as the voc. sg.): Adõm "Adam!" in addition to Adõmai, Mýkol "Michael!" in addition to Mýkolai, vaikẽl "kid!" in addition to vaikẽli, etc.
Celtic languages:
Goidelic languages:
Irish:
The vocative case in Irish operates in a similar fashion to Scottish Gaelic. The principal marker is the vocative particle a, which causes lenition of the initial letter.
In the singular there is no special form, except for first declension nouns. These are masculine nouns that end in a broad (non-palatal) consonant, which is made slender (palatal) to build the singular vocative (as well as the singular genitive and plural nominative). Adjectives are also lenited. In many cases this means that (in the singular) masculine vocative expressions resemble the genitive and feminine vocative expressions resemble the nominative.
The vocative plural is usually the same as the nominative plural except, again, for first-declension nouns. In the standard language, first-declension nouns show the vocative plural by adding -a. In the spoken dialects, the vocative plural often has the same form as the nominative plural (as with nouns of other declensions) or the dative plural (e.g. a fhearaibh! = Men!).
Scottish Gaelic:
The vocative case in Scottish Gaelic follows the same basic pattern as Irish. The vocative case causes lenition of the initial consonant of nouns. Lenition changes the initial sound of the word (or name).
In addition, masculine nouns are slenderized if possible (that is, in writing, an 'i' is inserted before the final consonant). This also changes the pronunciation of the word.
Also, the particle a is placed before the noun unless it begins with a vowel (or f followed immediately by a vowel, which becomes silent when lenited). Examples of the vocative used with personal names (as in Irish): The name "Hamish" is just the English spelling of "Sheumais" (the vocative of "Seumas", pronounced "Hamish"), and thus is actually a Gaelic vocative. Likewise, the name "Vairi" is an English spelling of "Mhàiri", the vocative of Màiri.
Manx:
The basic pattern is similar to Irish and Scottish Gaelic. The vocative is confined to personal names, in which it is common. Foreign names (not of Manx origin) are not used in the vocative. The vocative case causes lenition of the initial consonant of names. It can be used with the particle "y".
The name "Voirrey" is actually the Manx vocative of "Moirrey" (Mary).
Brythonic languages:
Welsh:
Welsh lacks case declension but marks vocative constructions by lenition of the initial consonant of the word, with no obligatory particle. Although its use is less common, it is still found in formal address: the common phrase foneddigion a boneddigesau means "gentlemen and ladies", with the initial consonant of boneddigion undergoing a soft mutation; the same is true of gyfeillion ("[dear] friends"), in which cyfeillion has been lenited. It is often used to draw attention in public notices, both spoken and written: teachers will say "Blant" (mutation of "children"), and signage may show the mutation of "myfyrwyr" (students) to draw attention to the importance of the notice.
Germanic languages:
English:
The vocative is not generally marked in English in regular communication. A vocative expression in English may be marked by the particle "O" preceding the noun; this is often used in English translations of languages that do have the vocative case. It is often seen in the King James Version of the Bible: "O ye of little faith" (in Matthew 8:26). While it is not strictly archaic, it is sometimes used to "archaise" speech; it is often seen as very formal, and sees use in rhetoric and poetry, or as a comedic device to subvert modern speech. Another example is the recurrent use of the phrase "O (my) Best Beloved" by Rudyard Kipling in his Just So Stories. The use of O may be considered a form of clitic and should not be confused with the interjection oh. However, as the Oxford English Dictionary points out, "O" and "oh" were originally used interchangeably. With the advent of "oh" as a written interjection, however, "O" is the preferred modern spelling in vocative phrases.
Modern English commonly uses the objective case for vocative expressions but sets them off from the rest of the sentence with pauses as interjections, rendered in writing as commas (the vocative comma). Two common examples of vocative expressions in English are the phrases "Mr. President" and "Madam Chairwoman".
Some traditional texts use Jesu, the Latin vocative form of Jesus. One of the best-known examples is Jesu, Joy of Man's Desiring.
German dialects:
In some German dialects, such as the Ripuarian dialect of Cologne, it is common to use the (gender-appropriate) article before a person's name. In the vocative phrase, however, the article is omitted, as in Venetian and Catalan. Thus, the determiner precedes nouns in all cases except the vocative, and any noun not preceded by an article or other determiner is in the vocative case. It is most often used to address someone or some group of living beings, usually in conjunction with an imperative construct. It can also be used to address dead matter as if the matter could react, or to tell of something astonishing or just happening, such as "Your nose is dripping." Colognian examples:
Icelandic:
The vocative case generally does not appear in Icelandic, but a few words retain an archaic vocative declension from Latin, such as the word Jesús, which is Jesú in the vocative. That comes from Latin, as the Latin for Jesus in the nominative is Jesus and its vocative is Jesu. That is also the case in traditional English (without the accent) (see above). The native words sonur ("son") and vinur ("friend") also sometimes appear in the shortened forms son and vin in vocative phrases. Additionally, adjectives in vocative phrases are always weakly declined, whereas elsewhere with proper nouns they would usually be declined strongly.
Norwegian:
Nouns in Norwegian are not inflected for the vocative case, but adjectives qualifying those nouns are: adjectival adjuncts modifying vocative nouns are inflected for the definite (see: Norwegian language#Adjectives). The definite and plural inflections are in most cases identical, so it is more easily observable with adjectives that inflect differently for plural and definite, e.g. liten being lille when definite but små when plural, an instance of suppletion. In several Norwegian dialects, north of an isogloss running from Oslo to Bergen, names in argument position are associated with proprial articles, e.g. gendered pronouns such as han ('he') or hun ('she'), which either precede or follow the noun in question. This is not the case in vocative constructions.
Greek:
In Ancient Greek, the vocative case is usually identical to the nominative case, with the exception of masculine second-declension nouns (ending in -ος) and third-declension nouns.
Second-declension masculine nouns have a regular vocative ending in -ε. Third-declension nouns with one syllable ending in -ς have a vocative that is identical to the nominative (νύξ, night); otherwise, the stem (with necessary alterations, such as dropping final consonants) serves as the vocative (nom. πόλις, voc. πόλι; nom. σῶμα, gen. σώματος, voc. σῶμα). Irregular vocatives exist as well, such as nom. Σωκράτης, voc. Σώκρατες.
In Modern Greek, second-declension masculine nouns still have a vocative ending in -ε. However, the accusative case is often used as a vocative in informal speech for a limited number of nouns, and always used for certain modern Greek person names: "Έλα εδώ, Χρήστο" "Come here, Christos" instead of "...Χρήστε". Other nominal declensions use the same form in the vocative as the accusative in formal or informal speech, with the exception of learned Katharevousa forms that are inherited from Ancient Greek Ἕλλην (Demotic Έλληνας, "Greek man"), which have the same nominative and vocative forms instead.
Iranian languages:
Kurdish:
Kurdish has a vocative case. In the Kurmanji dialect, for instance, it is created by adding the suffix -o to the end of masculine words and the suffix -ê to the end of feminine ones. In the Jafi dialect of Sorani, it is created by adding the suffix -i to the end of names.
Instead of the vocative case, forms of address may be created by using the grammatical particles lê (feminine) and lo (masculine):
Indo-Aryan languages:
Hindi-Urdu:
In Hindi-Urdu (Hindustani), the vocative case has the same form as the nominative case for all singular nouns, except for singular masculine nouns that terminate in the vowel आ /a:/ (ā); for all nouns in their plural forms, the vocative case is always distinct from the nominative case. Adjectives in Hindi-Urdu also have a vocative case form. In the absence of a noun argument, some adjectives decline like masculine nouns that do not end in आ /a:/ (ā). The vocative case has many similarities with the oblique case in Hindustani.
Sanskrit:
In Sanskrit, the vocative (सम्बोधन विभक्ति sambodhana vibhakti) has the same form as the nominative except in the singular. In vowel-stem nouns, if there is a –ḥ in the nominative, it is omitted and the stem vowel may be altered: –ā and –ĭ become –e, –ŭ becomes –o, –ī and –ū become short, and –ṛ becomes –ar. Consonant-stem nouns have no ending in the vocative. The vocative form is the same as the nominative except in the masculine and feminine singular.
Slavic languages:
Old Church Slavonic:
Old Church Slavonic has a distinct vocative case for many stems of singular masculine and feminine nouns; otherwise it is identical to the nominative. When different from the nominative, the vocative is simply formed from the nominative by appending either -e (rabъ: rabe "slave") or -o (ryba: rybo "fish"), but occasionally -u (krai: kraju "border", synъ: synu "son", vračь: vraču "physician") or -i (kostь: kosti "bone", gostь: gosti "guest", dьnь: dьni "day", kamy: kameni "stone") appears. Nouns ending with -ьcь have a vocative ending of -če (otьcь: otьče "father", kupьcь: kupьče "merchant"); likewise, nouns ending with -dzь assume the vocative suffix -že (kъnědzь: kъněže "prince"). This is similar to Greek, Latin, Lithuanian, and Sanskrit, which also employ the -e suffix in vocatives.
Bulgarian:
Unlike most other Slavic languages, Bulgarian has lost case marking for nouns. However, Bulgarian preserves vocative forms. Traditional male names usually have a vocative ending.
More-recent names and foreign names may have a vocative form but it is rarely used (Ричарде, instead of simply Ричард Richard, sounds unusual or humorous to native speakers).
Vocative phrases like господине министре (Mr. Minister) have been almost completely replaced by nominative forms, especially in official writing. Proper nouns usually also have vocative forms, but they are used less frequently. Here are some proper nouns that are frequently used in the vocative: Vocative case forms also normally exist for female given names: except for forms that end in -е, they are considered rude and are normally avoided. For female kinship terms, the vocative is always used.
Czech:
In Czech, the vocative (vokativ, or 5. pád – "the fifth case") usually differs from the nominative in masculine and feminine nouns in the singular.
In older common Czech (19th century), the vocative form was sometimes replaced by the nominative form in the case of female names ("Lojzka, dej pokoj!") and of male nouns following a title ("pane učitel!", "pane továrník!", "pane Novák!"). This phenomenon was caused mainly by German influence and has almost disappeared from modern Czech. It can be felt as rude, discourteous or uncultivated, or as familiar, and is also associated with Slovak influence (from the Czechoslovak Army) or Russian. In informal speech, it is common (but grammatically incorrect) to use the male surname (see also Czech name) in the nominative to address men: pane Novák! instead of pane Nováku! (Female surnames are adjectives, and their nominative and vocative have the same form: see Czech declension.) Using the vocative is strongly recommended in official and written styles.
Polish:
In Polish, the vocative (wołacz) is formed with feminine nouns usually taking -o, except those that end in -sia, -cia, -nia, and -dzia, which take -u, and those that end in -ść, which take -i. Masculine nouns generally follow the complex pattern of the locative case, with the exception of a handful of words such as Bóg → Boże ("God"), ojciec → ojcze ("father") and chłopiec → chłopcze ("boy"). Neuter nouns and all plural nouns have the same form in the nominative and the vocative. The latter form of the vocative of człowiek (human) is now considered poetical.
The nominative is increasingly used instead of the vocative to address people by their proper names. In other contexts the vocative remains prevalent. It is used:
To address an individual with a function, title, other attribute, or family role: Panie doktorze (Doctor!), Panie prezesie! (Chairman!), Przybywasz za późno, pływaku (You arrive too late, swimmer), synu (son), mamo (mum), tato (dad)
After adjectives, demonstrative pronouns and possessive pronouns: Nie rozumiesz mnie, moja droga Basiu! (You don't understand me, my dear Basia!)
To address an individual in an offensive or condescending manner: Zamknij się, pajacu! ("Shut up, you buffoon!"), Co się gapisz, idioto? ("What are you staring at, idiot?"), Nie znasz się, baranie, to nie pisz! ("Stop writing, idiot, you don't know what you're doing!"), Spadaj, wieśniaku! ("Get lost, hillbilly!")
After "Ty" (second person singular pronoun): Ty kłamczuchu! (You liar!)
In set expressions: (O) Matko!, (O) Boże!, chłopie
The vocative is also often employed in affectionate and endearing contexts such as Kocham Cię, Krzysiu! ("I love you, Chris!") or Tęsknię za Tobą, moja Żono ("I miss you, my wife."). In addition, the vocative form sometimes takes the place of the nominative in informal conversations: Józiu przyszedł instead of Józio przyszedł ("Joey's arrived"). When referring to someone by their first name, the nominative commonly takes the place of the vocative as well: Ania, chodź tu! instead of Aniu, chodź tu! ("Anne, come here!").
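The feminine suffix rules described above can be sketched as a small Python function. This is a deliberate toy covering only the endings just listed (-o by default, -u after -sia/-cia/-nia/-dzia, -i for -ść); real Polish morphology has far more classes and exceptions.

```python
def feminine_vocative(noun):
    """Toy Polish feminine vocative, covering only the rules in the text."""
    if noun.endswith(("sia", "cia", "nia", "dzia")):
        return noun[:-1] + "u"        # Basia -> Basiu
    if noun.endswith("ść"):
        return noun[:-1] + "ci"       # -ść -> -ści (ć is spelled c before i)
    if noun.endswith("a"):
        return noun[:-1] + "o"        # mama -> mamo, żona -> żono
    return noun                       # anything else: leave unchanged

for w in ("mama", "Basia", "miłość", "żona"):
    print(w, "->", feminine_vocative(w))
```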
Russian:
Historic vocative:
The historic Slavic vocative has been lost in Russian and is now used only in archaic expressions. Several of them, mostly of Old Church Slavonic origin, are common in colloquial Russian: "Боже!" (Bože, vocative of "Бог" Bog, "God") and "Боже мой!" (Bože moj, "My God!"), and "Господи!" (Gospodi, vocative of "Господь" Gospodj, "Lord"), which can also be expressed as "Господи Иисусе!" (Gospodi Iisuse!, Iisuse being the vocative of "Иисус" Iisus, "Jesus"). The vocative is also used in prayers: "Отче наш!" (Otče naš, "Our Father!"). Such expressions are used to express strong emotions (much like English "O my God!"), and are often combined ("Господи, Боже мой"). More examples of the historic vocative can be found in other Biblical quotes that are sometimes used as proverbs: "Врачу, исцелися сам" (Vraču, iscelisia sam, "Physician, heal thyself", nom. "врач", vrač). Vocative forms are also used in modern Church Slavonic. The patriarch and bishops of the Russian Orthodox Church are addressed as "владыко" (vladyko, hegemon, nom. "владыка", vladyka). In the latter case, the vocative is often also incorrectly used for the nominative to refer to bishops and patriarchs.
New vocative:
In modern colloquial Russian, given names and a small family of terms often take a special "shortened" form that some linguists consider to be a re-emerging vocative case. It is used only for given names and nouns that end in -а and -я, which are sometimes dropped in the vocative form: "Лен, где ты?" ("Lena, where are you?"). It is basically equivalent to "Лена, где ты?" but suggests a positive personal and emotional bond between the speaker and the person being addressed. Names that end in -я then acquire a soft sign: "Оль!" = "Оля!" ("Olga!"). In addition to given names, the form is often used with words like "мама" (mom) and "папа" (dad), which would be respectively shortened to "мам" and "пап". The plural form is used with words such as "ребят", "девчат" (nom: "ребята", "девчата" guys, gals). Such usage differs from the historic vocative, which would be "Лено", and is not related to it.
Serbo-Croatian:
Distinct vocatives exist only for singular masculine and feminine nouns. Nouns of the neuter gender and all nouns in the plural have a vocative equal to the nominative. All vocative suffixes known from Old Church Slavonic also exist in Serbo-Croatian. The vocative in Serbo-Croatian is formed according to one of three types of declension, which are classes of nouns having the same declension suffixes.
First declension:
The first declension comprises masculine nouns that end with a consonant. These have a vocative suffix of either -e (doktor: doktore "doctor") or -u (gospodar: gospodaru "master").
Nouns terminating in -or have the -e vocative suffix: (doktor: doktore "doctor", major: majore "major", majstor: majstore "artisan") also nouns possessing an unsteady a (vetar: vetre "wind", svekar: svekre "father-in-law") and the noun car: care "emperor". All other nouns in this class form the vocative with -u: gospodar: gospodaru "master", pastir: pastiru "shepherd", inženjer: inženjeru "engineer", pisar: pisaru "scribe", sekretar: sekretaru "secretary".
In particular, masculine nouns ending with a palatal or prepalatal consonant j, lj, nj, č, dž, ć, đ, š, ž form vocatives with the -u suffix: heroj: heroju "hero", prijatelj: prijatelju "friend", konj: konju "horse", vozač: vozaču "driver", mladić: mladiću "youngster", kočijaš: kočijašu "coachman", muž: mužu "husband".
Nouns ending with the velars -k, -g and -h are palatalized to -č, -ž, -š in the vocative: vojnik: vojniče "soldier", drug: druže "comrade", duh: duše "ghost". A final -c becomes -č in the vocative: stric: striče "uncle", lovac: lovče "hunter". Likewise, a final -z becomes -ž in only two cases: knez: kneže "prince" and vitez: viteže "knight".
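The velar and -c/-z shifts described above are regular enough to sketch mechanically. A minimal Python illustration, assuming simple string handling and covering only the consonant shifts from this paragraph (the exceptions the section discusses for unsteady a, unchanged nominatives, and foreign names are deliberately ignored):

```python
SHIFTS = {"k": "č", "g": "ž", "h": "š", "c": "č"}
SPECIAL_Z = {"knez", "vitez"}  # the only two -z nouns that shift, per the text

def first_declension_vocative(noun: str) -> str:
    """Vocative of a first-declension masculine noun with the -e suffix,
    applying the velar/-c palatalization before the ending."""
    if noun in SPECIAL_Z:
        return noun[:-1] + "že"          # knez -> kneže, vitez -> viteže
    final = noun[-1]
    if final in SHIFTS:
        return noun[:-1] + SHIFTS[final] + "e"  # vojnik -> vojniče
    return noun + "e"                    # default -e pattern: doktor -> doktore
```

For example, `first_declension_vocative("drug")` yields "druže" and `first_declension_vocative("duh")` yields "duše", matching the forms given above.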
The loss of the unsteady a can trigger a sound change by hardening of consonants, as in vrabac: vrapče "sparrow" (not vrabče), lisac: lišče "male fox" (not lisče) and ženomrzac: ženomršče "misogynist" (not ženomrzče). There may be a loss of -t before -c, as in otac: oče "father" (instead of otče) and svetac: sveče "saint" (instead of svetče). When these phonetic alterations would substantially change the base noun, the vocative remains equal to the nominative, for example tetak "uncle", mačak "male cat", bratac "cousin". This also holds true for foreign names ending with -k, -g and -h like Džek (Jack), Dag (Doug), King, Hajnrih.
Male names ending with -o and -e have a vocative equal to the nominative: Marko, Mihailo, Danilo, Đorđe, Pavle, Radoje etc.
Second declension The second declension affects nouns with the ending -a. These are mainly of feminine but sometimes also of masculine gender. These nouns have a vocative suffix -o: riba: ribo "fish", sluga: slugo "servant", kolega: kolego "colleague", poslovođa: poslovođo "manager".
Exceptions to this rule are male and female names, which have a vocative equal to the nominative, e.g. Vera, Zorka, Olga, Marija, Gordana, Nataša, Nikola, Kosta, Ilija etc. However, this is different for disyllabic names with an ascending accent such as Nâda, Zôra, Mîca, Nêna and the male names Pêra, Bôža, Pâja etc., which form vocatives with -o: Nâdo, Zôro, Mîco, Pêro, Bôžo, Pâjo etc.
Denominations of relatives like mama "mom", tata "dad", deda "grandfather", tetka "aunt", ujna "aunt" (mother's brother's wife), strina "aunt" (father's brother's wife), baba "grandmother" have vocatives equal to the nominative. This also holds true for country names ending in -ska, -čka, -ška.
Nouns ending with the diminutive suffix -ica that consist of three or more syllables have a vocative with -e: učiteljica: učiteljice "female teacher", drugarica: drugarice "girlfriend", tatica: tatice "daddy", mamica: mamice "mommy". This also applies to female names Danica: Danice, Milica: Milice, Zorica: Zorice, and the male names Perica: Perice, Tomica: Tomice. Nouns of this class that can be applied to both males and females usually have a vocative ending of -ico (pijanica: pijanico "drunkard", izdajica: izdajico "traitor", kukavica: kukavico "coward"), but vocatives with -ice are also seen.
The use of vocative endings for names varies among Serbo-Croatian dialects. People in Croatia often use only nominative forms as vocatives, while others are more likely to use grammatical vocatives.
Third declension The third declension affects feminine nouns ending with a consonant. The vocative is formed by appending the suffix -i to the nominative (reč: reči "word", noć: noći "night").
Slovak Until the end of the 1980s, the existence of a distinct vocative case in Slovak was recognised and taught at schools. Today, the case is no longer considered to exist except for a few archaic examples of the original vocative remaining in religious, literary or ironic contexts: In everyday use, the Czech vocative is sometimes retrofitted to certain words: Another stamp of vernacular vocative is emerging, presumably under the influence of Hungarian for certain family members or proper names: Ukrainian Ukrainian has retained the vocative case mostly as it was in Proto-Slavic: There are some exceptions: It is used even for loanwords and foreign names: It is obligatory for all native names: It is used for patronymics: Latin In Latin, the form of the vocative case of a noun is often the same as the nominative. Exceptions include singular non-neuter second-declension nouns that end in -us in the nominative case. An example would be the famous line from Shakespeare, "Et tu, Brute?" (commonly translated as "And you, Brutus?"): Brute is the vocative case and Brutus would be the nominative.
Nouns that end in -ius end with -ī instead of the expected -ie. Thus, Julius becomes Julī and filius becomes filī. The shortening does not shift the accent so the vocative of Vergilius is Vergilī, with accent on the second syllable even though it is short. Nouns that end in -aius and -eius have vocatives that end in -aī or -eī even though the i in the nominative is consonantal.
First-declension and second-declension adjectives also have distinct vocative forms in the masculine singular if the nominative ends in -us, with the ending -e. Adjectives that end in -ius have vocatives in -ie so the vocative of eximius is eximie.
Nouns and adjectives that end in -eus do not follow the rules above. Meus forms the vocative irregularly as mī or meus, while Christian Deus does not have a distinct vocative and retains the form Deus. "My God!" in Latin is thus mī Deus!, but Jerome's Vulgate consistently used Deus meus as a vocative. Classical Latin did not use a vocative of deus either (in reference to pagan gods, the Romans used the suppletive form dive).
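The second-declension rules above are regular enough to sketch in code. A minimal Python illustration of just the patterns named in this section (macrons are omitted, so mī is written "mi"; all other declensions simply keep the nominative):

```python
def latin_vocative(nominative: str) -> str:
    """Second-declension masculine vocative, per the rules above.
    Irregular forms from the text are special-cased; macrons omitted."""
    irregular = {"deus": "deus", "meus": "mi"}   # Deus keeps its form; meus -> mī
    if nominative in irregular:
        return irregular[nominative]
    if nominative.endswith("ius"):
        return nominative[:-3] + "i"             # Julius -> Juli, filius -> fili
    if nominative.endswith("us"):
        return nominative[:-2] + "e"             # Brutus -> Brute
    return nominative                            # vocative equals nominative
```

For example, `latin_vocative("Brutus")` returns "Brute", matching the Shakespeare line quoted above.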
Romance languages West Iberian languages Portuguese drops the article to form the vocative. The vocative is always between commas and, like in many other languages, a particle Ó is commonly used: In Extremaduran and Fala, some post-tonical vowels open in vocative forms of nouns, a new development that is unrelated to the Latin vocative case.
Catalan Catalan drops the article to form the vocative.
French Like English, French sometimes uses (or historically used) a particle Ô to mark vocative phrases rather than by change to the form of the noun. A famous example is the title and first line of the Canadian national anthem, O Canada (French title: Ô Canada), a vocative phrase addressing Canada.
Romanian The vocative case in Romanian is partly inherited, occasionally causing other morphophonemic changes (see also the article on Romanian nouns): singular masculine/neuter: "-e" as in "om": "omule!" (man, human being), "băiat": "băiete!" or "băiatule!" (boy), "văr": "vere!" (cousin), "Ion": "Ioane!" (John); singular feminine: "-o" as in "soră": "soro!" (sister), "nebună": "nebuno!" (mad woman), also in masculine (nebunul) "deșteaptă": "deșteapto!" (smart one (f), often used sarcastically), "Ileana": "Ileano!" (Helen);Since there is no -o vocative in Latin, it must have been borrowed from Slavic: compare the corresponding Bulgarian forms сестро (sestro), откачалко (otkachalko), Елено (Eleno).
plural, all genders: "-lor" as in "frați": "fraților!" (brothers), "boi": "boilor!" (oxen, used toward people as an invective), "doamne și domni": "doamnelor și domnilor!" (ladies and gentlemen). In formal speech, the vocative often simply copies the nominative/accusative form even when it does have its own form. That is because the vocative is often perceived as very direct and so can seem rude.
Venetian Venetian has lost all case endings, like most other Romance languages. However, with feminine proper names the role of the vocative is played by the absence of the determiner: the personal article ła / l' usually precedes feminine names in other situations, even in predicates. Masculine names and other nouns lack articles and so rely on prosody to mark forms of address: Predicative constructions:
Arabic:
Properly speaking, Arabic has only three cases: nominative, accusative and genitive. However, a meaning similar to that conveyed by the vocative case in other languages is indicated by the use of the particle yā (Arabic: يا) placed before a noun inflected in the nominative case (or accusative if the noun is in construct form). In English translations, it is often translated literally as O instead of being omitted. A longer form used in Classical Arabic is أيّها ayyuhā (masculine), أيّتها ayyatuhā (feminine), sometimes combined with yā. The particle yā was also used in the old Castilian language because of Arabic influence via Mozarabic immigrations.
Mandarin:
Mandarin uses no special inflected forms for address. However, special forms and morphemes (that are not inflections) exist for addressing.
Mandarin has several particles that can be attached to the word of address to mark certain special vocative forces, where appropriate. A common one is 啊 a, attached to the end of the address word. For example, 日记 rìjì "diary" becomes 日记啊 rìjì'a.
Certain specialized vocative morphemes also exist, albeit with limited applicability. For instance, in the Beijing dialect of Mandarin Chinese, to express strong feelings (especially negative ones) to someone, a neutral-tone suffix -ei may be attached to certain address words. It is most commonly applied to the word 孙子 (sūnzi, "grandson"), to form sūnzei, meaning approximately "Hey you nasty one!". Another example is 小子 (xiǎozi, lit. "kid; young one"), resulting in xiǎozei "Hey kiddo!".
Japanese:
The vocative case is present in Japanese as the particle よ. This usage is often literary or poetic. For example: In conversational Japanese, this same particle is often used at the end of a sentence to indicate assertiveness, certainty or emphasis.
Georgian:
In Georgian, the vocative case is used to address the second-person singular and plural. For word roots that end with a consonant, the vocative case suffix is -o, and for the words that end with a vowel, it is -v like in Old Georgian, but for some words, it is considered archaic. For example, kats- is the root for the word "man". If one addresses someone with the word, it becomes katso.
Adjectives are also declined in the vocative case. Just like nouns, consonant final stem adjectives take the suffix -o in the vocative case, and the vowel final stems are not changed: lamazi kali "beautiful woman" (nominative case) lamazo kalo! "beautiful woman!" (vocative case)In the second phrase, both the adjective and the noun are declined. The personal pronouns are also used in the vocative case. Shen "you" (singular) and tkven "you" (plural) in the vocative case become she! and tkve, without the -n. Therefore, one could, for instance, say, with the declension of all of the elements: She lamazo kalo! "you beautiful woman!"
Korean:
The vocative case in Korean is commonly used with first names in casual situations by using the vocative case marker (호격 조사) 아 (a) if the name ends in a consonant and 야 (ya) if the name ends with a vowel: 미진이 집에 가? (Mijini jibe ga?) (Is Mijin going home?) 미진아, 집에 가? (Mijina, jibe ga?) (Mijin, are you going home?) 동배 뭐 해? (Dongbae mwo hae?) (What is Dongbae doing?) 동배야, 뭐 해? (Dongbaeya, mwo hae?) (Dongbae, what are you doing?) In formal Korean, the marker 여 (yeo) or 이여 (iyeo) is used, the latter if the root ends with a consonant. Thus, a quotation of William S. Clark would be translated as follows: 소년이여, 야망을 가져라. (sonyeoniyeo, yamangeul gajyeora.) (Boys, be ambitious.) The honorific infix 시 (si) is inserted in between the 이 (i) and 여 (yeo).
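The consonant-vs-vowel rule for the casual markers 아/야 can be computed directly from Hangul syllable structure, since Hangul syllables are encoded arithmetically. A small Python sketch of that rule (casual register only):

```python
def korean_casual_vocative(name: str) -> str:
    """Append the casual vocative marker: 아 after a final consonant
    (batchim), 야 after a vowel. In Unicode, precomposed Hangul syllables
    start at 0xAC00, and (codepoint - 0xAC00) % 28 is nonzero exactly
    when the syllable carries a final consonant."""
    last = ord(name[-1])
    has_batchim = (last - 0xAC00) % 28 != 0
    return name + ("아" if has_batchim else "야")

# korean_casual_vocative("미진") -> "미진아"
# korean_casual_vocative("동배") -> "동배야"
```

The same test on the final syllable would also select between the formal markers 이여 (after a consonant) and 여 (after a vowel).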
신이시여, 부디 저들을 용서하소서. (sinisiyeo, budi jeodeureul yongseohasoseo.) (Oh god, please forgive them.) In Middle Korean, there were three honorific classes of the vocative case:
Hungarian:
Hungarian has a number of vocative-like constructions, even though it lacks an explicit vocative inflection.
Noun phrases in a vocative context always take the zero article. While noun phrases can take zero articles for other reasons, the lack of an article otherwise expected marks a vocative construction. This is especially prominent in dialects of Hungarian where personal proper names and other personal animate nouns tend to take the appropriate definite article, similarly to certain dialects of German detailed above. For example: With certain words such as barát ("friend"), hölgy ("lady"), úr ("gentleman, lord"), vocation is, in addition to the zero article, always marked by the first person possessive: Words like testvér ("sibling, brother") and other words of relation do not require the first person possessive, but it is readily used in common speech, especially in familiar contexts: The second-person pronoun can be used to emphasize a vocation when appropriate: Hát miért nem adtad oda neki, te bolond? ("Why did you not give it to him, you fool?"), Te Karcsi, nem láttad a szemüvegem? ("Charlie, have you seen my glasses?"), Lógtok ezért még, ti gazemberek. ("You shall yet hang for this, crooks!"), etc.
**Adalu (food)**
Adalu (food):
Adalu is a Nigerian porridge prepared from corn and beans, native to and popular among the Yoruba people.
Preparation:
Corn and beans are boiled separately before being combined. Palm oil, onion, pepper and salt are then added to taste. Adalu is often served with plantain and smoked fish.
**Meet-me room**
Meet-me room:
A meet-me room (MMR) is a place within a colocation center (or carrier hotel) where telecommunications companies can physically connect to one another and exchange data without incurring local loop fees. Services provided across connections in an MMR may be voice circuits, data circuits, or Internet Protocol traffic.
An MMR provides a safe production environment where the carrier handover point equipment can be expected to run on a 24/7 basis with minimal risk of interruption. It is typically located within the data center.
To interconnect, companies order a patch from their cage or suite to the MMR and then arrange for the organization running the facility to connect them together. These physical connections may be an optical fiber cable, coaxial cable, twisted pair, or any other networking medium.
Typically, a meet-me room will discourage or disallow customers from installing large amounts of equipment. However, multiplexing equipment is often welcome in the meet-me room, so that a customer can have a single connection between the room and the rest of their equipment in the building, and the multiplexing equipment can then break that out to allow for direct, private connections to several other organizations present in the meet-me room.
An Internet exchange point can also be present in a meet-me room to allow many organizations in the meet-me room to interchange traffic without having to make physical interconnections between every possible pair of organizations.
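The saving an exchange point provides can be made concrete with a quick count: a full mesh of pairwise cross-connects grows quadratically with the number of participants, while a shared exchange fabric needs only one connection per participant. A short illustrative Python sketch:

```python
def full_mesh_links(n: int) -> int:
    # one dedicated cross-connect per pair of organizations
    return n * (n - 1) // 2

def exchange_links(n: int) -> int:
    # one connection per organization into the shared exchange fabric
    return n

for n in (10, 50, 100):
    print(n, full_mesh_links(n), exchange_links(n))
```

With 100 organizations, a full mesh would need 4,950 cross-connects, versus 100 connections into the exchange point.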
Examples:
One Wilshire: Los Angeles, California
Westin Building: Seattle, Washington
MAE-West (located in Market Post Tower): Downtown San Jose, California
60 Hudson St, New York
111 Eighth Ave, New York
Infomart: Dallas, Texas
350 E. Cermak Rd: Chicago
165 Halsey Street, Newark, New Jersey
399 Chai Wan Road, Hong Kong
**Cocktail hat**
Cocktail hat:
A cocktail hat is a small, extravagant, and typically brimless hat for a woman. It is usually a component of evening wear and is intended as an alternative to a large-brimmed hat. These hats are often decorated with beads, jewels or feathers, as well as a veil or netting. Cocktail hats were most popular between the 1930s and 1960s. Some fashion historians think that cocktail hats were the precursor to fascinators, hairpieces worn on the side of the head that gained popularity in the 1970s, while others argue that fascinators were worn during the day and cocktail hats in the late afternoon or evening. Unlike a fascinator, a cocktail hat has a fully formed and visible base. Cocktail hats can be of many shapes, ranging from modeled wool or felt or shaped straw to softer, turban-like constructions.
**Psychology and Alchemy**
Psychology and Alchemy:
Psychology and Alchemy, volume 12 in The Collected Works of C. G. Jung, is Carl Jung's study of the analogies between alchemy, Christian dogma, and psychological symbolism. Alchemy is central to Jung's hypothesis of the collective unconscious. This book begins with an outline of the process and aims of psychotherapy as seen by Jung. It then moves on to work out the analogies mentioned above and his own understanding of the analytic process. Jung reminds us of the dual nature of alchemy, comprising both the chemical process and a parallel mystical component. He also discusses the seemingly deliberate mystification of the alchemists. Finally, in using the alchemical process to provide insights into individuation, Jung emphasises the importance of alchemy in relating to us the transcendent nature of the psyche. Detailed abstracts of each chapter are available online. (Cover art by Mohammed Derbala)
Overview:
In this book, Jung argues for a reevaluation of the symbolism of Alchemy as being intimately related to the psychoanalytical process. Using a cycle of dreams of one of his patients he shows how the symbols used by the Alchemists occur in the psyche as part of the reservoir of mythological images drawn upon by the individual in their dream states. Jung draws an analogy between the Great Work of the Alchemists and the process of reintegration and individuation of the psyche in the modern psychiatric patient.
In drawing these parallels Jung reinforces the universal nature of his theory of the archetype and makes an impassioned argument for the importance of spirituality in the psychic health of the modern man. Lavishly illustrated with images, drawings and paintings from Alchemy and other mythological sources including Christianity the book is another example of Jung's immense erudition and fascination with the eso- and exoteric expressions of spirituality and the psyche in religion and mysticism.
Influenced by pioneering work by Ethan Allen Hitchcock and Herbert Silberer (who was in turn influenced by Jung), Psychology and Alchemy is a seminal work of reevaluation of a forgotten system of thought which did much to revitalise interest in Alchemy as a serious force in Western philosophical and esoteric culture.
Also interesting about this book is that the patient whose dreams are being analyzed in the second section is the physicist Wolfgang Pauli, who would go on to collaborate with Jung on such ideas as the acausal connecting principle of synchronicity. The dreams are interpreted as a series to elucidate the meanings of recurring motifs and symbols, with the series culminating in the vision of a 'world clock', which is actually several clocks on different planes operating on different scales and colours as a symbol of Pauli's unconscious apprehension of some grand cosmic order. Three of the best of these dreams were also mentioned by Jung in his Terry lectures Psychology of Religion.
Content:
The fundamental thesis Jung is advancing about the relationship between Alchemy and Psychology is that for pre-scientific humans there is not a sharp distinction between subject and object and thus this leads them to unconsciously project their own inner states onto external objects (especially objects that are mostly unknown to them), so a reflective analysis of alchemical symbols becomes revelatory about the unconscious psychic life of this time period. Prior to this rational segregation of experience the world was a totally different one, phenomenologically, as people did not distinguish between the qualities of the object they were perceiving and their own values, emotions, and beliefs. It is partly for this reason that the alchemists cannot say aloud exactly what the philosopher's stone really 'is' and why there are so many different symbols for the work.
For the alchemist trying to understand matter and develop base metals into their purest form, gold, substances are grouped as being alike based on their perceived value. Jung documents as these alchemists collectively come to understand that they themselves must embody the change they hope to effect within their materials: for instance, if they hope to achieve the philosopher's stone that can redeem 'base' or 'vulgar' metals, then the alchemist too must become a redeemer figure. It became apparent to the alchemists that they were trying to redeem nature as Christ had redeemed man, hence the identification of the Lapis Philosophorum with Christ the Redeemer. The Opus (work) of alchemy, viewed through this interpretation, becomes a symbolic account of the fundamental process the human psyche undergoes as it re-orients its value system and creates meaning out of chaos. The opus beginning with the nigredo (blackening, akin to depression or nihilistic loss of value) in order to descend back into the manipulable prima materia and proceeding through a process of spiritual purification that must unite seemingly irreconcilable opposites (the coniunctio) to achieve new levels of consciousness.
Part I. Introduction to the Religious and Psychological Problems of Alchemy:
Jung sets out the central thesis of the book: that Alchemy draws upon a vast array of symbols, images and patterns drawn from the Collective Unconscious. Jung defends his exploration of the Psyche and Soul against various critics who have accused him of being both religious and anti-religious depending on their point of view. He argues for a deeper understanding of the Western spiritual traditions e.g. Esoteric Christianity and Alchemy alongside an examination of the Eastern ones e.g. Buddhism, Hinduism etc. Jung diagnoses the spiritual laziness of the West in not truly embracing the Christian Myth as an inner journey of transformation. Alchemy, he argues, is a 'Western Yoga' which was designed to facilitate this. The book will begin with an account of a whole cycle of dreams recounted by an unnamed patient (to protect confidentiality), which will be interpreted in their archetypal and mythological sense by Jung. This is designed to illustrate the existence of Jung's theory of the Collective Unconscious and the psychological goal, or Great Work, of psychic and spiritual integration and wholeness through the individuation process, which transforms the state of the mind.
Part II. Individual Dream Symbolism in Alchemy:
Jung sets out his agenda and explains his method. The text that follows will contain several cycles of dreams recounted by a patient to a student of Jung. Each dream will be described and then analysed and interpreted with reference to Alchemical imagery and psychoanalytic theory. Jung is at pains to explain that the patient knew nothing of Jung's interpretations and so was not influenced in any way during the dream process.
Jung details an entire cycle of the patient's dreams, summarising the details of each then interpreting them in terms of their parallels with alchemical imagery to reveal their psychological content.
Part III. Religious Ideas in Alchemy:
Chapter 1 - Basic Concepts of Alchemy
Chapter 2 - The Psychic Nature of Alchemical Work
Chapter 3 - The Work
Chapter 4 - The Prima Materia
Chapter 5 - The Lapis-Christ Parallel
Chapter 6 - Alchemical Symbolism in the History of Religion
Quotations:
The real mystery does not behave mysteriously or secretively; it speaks a secret language, it adumbrates itself by a variety of images which all indicate its true nature. I am not speaking of a secret personally guarded by someone, with a content known to its possessor, but of a mystery, a matter or circumstance which is "secret," i.e., known only through vague hints but essentially unknown. The real nature of matter was unknown to the alchemist: he knew it only in hints. In seeking to explore it he projected the unconscious into the darkness of matter in order to illuminate it. In order to explain the mystery of matter he projected yet another mystery - his own psychic background - into what was to be explained: Obscurum per obscurius, ignotum per ignotius! This procedure was not, of course, intentional; it was an involuntary occurrence.
I am therefore inclined to assume that the real root of alchemy is to be sought less in philosophical doctrines than in the projections of individual investigators. I mean by this that while working on his chemical experiments the operator had certain psychic experiences which appeared to him as the particular behaviour of the chemical process. Since it was a question of projection, he was naturally unconscious of the fact that the experience had nothing to do with matter itself (that is, with matter as we know it today). He experienced his projection as a property of matter; but what he was in reality experiencing was his own unconscious. In this way he recapitulated the whole history of mankind's knowledge of nature.... Such projections repeat themselves whenever man tries to explore an empty darkness and involuntarily fills it with living form. When the alchemist speaks of Mercurius, on the face of it he means quicksilver (mercury), but inwardly he means the world-creating spirit concealed or imprisoned in matter. The dragon is probably the oldest pictorial symbol in alchemy of which we have documentary evidence. It appears as the Ouroboros, the tail-eater, in the Codex Marcianus, which dates from the tenth or eleventh century, together with the legend 'the One, the All'. Time and again the alchemists reiterate that the opus proceeds from the one and leads back to the one, that it is a sort of circle like a dragon biting its own tail. For this reason the opus was often called circulare (circular) or else rota (the wheel). Mercurius stands at the beginning and end of the work: he is the prima materia, the caput corvi, the nigredo; as dragon he devours himself and as dragon he dies, to rise again in the lapis. He is the play of colours in the cauda pavonis and the division into the four elements.
He is the hermaphrodite that was in the beginning, that splits into the classical brother-sister duality and is reunited in the coniunctio, to appear once again at the end in the radiant form of the lumen novum, the stone. He is metallic yet liquid, matter yet spirit, cold yet fiery, poison and yet healing draught - a symbol uniting all the opposites. Now, all these myth-pictures represent a drama of the human psyche on the further side of consciousness, showing man as both the one to be redeemed and the redeemer. The first formulation is Christian, the second alchemical. In the first case man attributes the need of redemption to himself and leaves the work of redemption, the actual opus, to the autonomous divine figure; in the latter case man takes upon himself the duty of carrying out the redeeming opus, and attributes the state of suffering and consequent need of redemption to the anima mundi imprisoned in matter. In both cases redemption is a work. In Christianity it is the life and death of the God-man which, by a unique sacrifice, bring about the reconciliation of man, who craves redemption and is sunk in materiality, with God. The mystical effect of the God-man's self-sacrifice extends, broadly speaking, to all men, though it is efficacious only for those who submit through faith or are chosen by divine grace; but in the Pauline acceptance it acts as an apocatastasis and extends also to non-human creation in general, which, in its imperfect state, awaits redemption like the merely natural man. From this point of view, alchemy seems like a continuation of Christian mysticism carried on in the subterranean darkness of the unconscious.... But this unconscious continuation never reached the surface, where the conscious mind could have dealt with it. All that appeared in consciousness were the symbolic symptoms of the unconscious process.
Had the alchemist succeeded in forming any concrete idea of his unconscious contents, he would have been obliged to recognize that he had taken the place of Christ - or, to be more exact, that he, regarded not as ego but as self, had taken over the work of redeeming not man but God. He would then have had to recognize not only himself as the equivalent of Christ, but Christ as a symbol of the self. This tremendous conclusion failed to dawn on the medieval mind.
Editions:
Jung, C. G. 1968. Psychology and Alchemy, Collected Works of C. G. Jung. Princeton, NJ: Princeton University Press. ISBN 978-0-691-09771-8
Jung, C. G. 1980. Psychology and Alchemy (2nd ed.), Collected Works of C. G. Jung. London: Routledge. ISBN 978-0-415-03452-4
Jung, C. G. 1980. Psychology and Alchemy, Arabic version (2023), translated by Salma Elsharkawy. ElRawy publishing house. (Cover art by Mohammed Derbala)
**Job ads aggregator**
Job ads aggregator:
Job ads aggregator - also known as a search engine for job ads - is a website that aggregates job ads from various job boards and multiposter sites, as well as from direct employers and employment agencies.
The job aggregation market was pioneered by Indeed, which remains the biggest job ads aggregator today, according to SimilarWeb rankings. However, its incumbent status has been challenged by many competitors. In 2017 Google joined the race, launching Google for Jobs.
**Medical logic module**
Medical logic module:
A medical logic module (MLM) is an independent unit in a healthcare knowledge base that encapsulates the published knowledge needed to make a single medical decision about treating a patient.
Possible usage is with an event monitor program in an intensive care ward, or with a hospital information system on occurrence of defined conditions. See the Arden syntax reference for examples. An early introduction is given in monographs.
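Arden syntax MLMs are organized around evoke, logic, and action slots. The sketch below mimics that event-monitor pattern in Python; the event shape, field names, and the potassium threshold are all hypothetical, purely to illustrate the structure, not a real Arden implementation:

```python
# Hypothetical MLM-style rule, following the Arden evoke/logic/action pattern.

def evoke(event: dict) -> bool:
    # the defined event that triggers this module (name is illustrative)
    return event.get("type") == "potassium_result"

def logic(event: dict) -> bool:
    # the single medical decision; the threshold here is illustrative only
    return event["value"] < 3.0

def action(event: dict) -> str:
    # what the module does when the logic slot concludes True
    return f"Alert: low potassium ({event['value']} mmol/L)"

def run_mlm(event: dict):
    if evoke(event) and logic(event):
        return action(event)
    return None
```

An event monitor would call `run_mlm` for each incoming result; events that do not trigger the evoke slot, or whose logic concludes False, simply produce no action.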
Implementation:
The Arden syntax has been defined as a grammar which could make MLMs swappable between various platforms. The XML representation of Arden (ArdenML) can be transformed by Extensible Stylesheet Language Transformations (XSLTs) to other forms. No reference is stated for a general implementation as a transfer method between different information systems.
**English modal verbs**
English modal verbs:
The English modal verbs are a subset of the English auxiliary verbs used mostly to express modality (properties such as possibility, obligation, etc.). They can be distinguished from other verbs by their defectiveness (they do not have participle or infinitive forms) and by their neutralization (that they do not take the ending -(e)s in the third-person singular).
The principal English modal verbs are can, could, may, might, shall, should, will, would, and must. Certain other verbs are sometimes classed as modals; these include ought, had better, and (in certain uses) dare and need. Verbs which share only some of the characteristics of the principal modals are sometimes called "quasi-modals", "semi-modals", or "pseudo-modals".
Modal verbs and their features:
The verbs customarily classed as modals in English have the following properties: They do not inflect (in the modern language) except insofar as some of them come in present–past (present–preterite) pairs. They do not add the ending -(e)s in the third-person singular (the present-tense modals therefore follow the preterite-present paradigm).
They are defective: they are not used as infinitives or participles (except occasionally in non-standard English; see § Double modals below), nor as imperatives, nor (in the standard way) as subjunctives.
They function as auxiliary verbs: they modify the modality of another verb, which they govern. This verb generally appears as a bare infinitive, although in some definitions, a modal verb can also govern the to-infinitive (as in the case of ought).
They have the syntactic properties associated with auxiliary verbs in English, principally that they can undergo subject–auxiliary inversion (in questions, for example) and can be negated by the appending of not after the verb.
The following verbs have all of the above properties, and can be classed as the principal modal verbs of English. They are listed here in present–preterite pairs where applicable: can and could; may and might; shall and should; will and would; and must (no preterite; see the etymology below). Note that the preterite forms are not necessarily used to refer to past time, and in some cases they are near-synonyms to the present forms. Most of these so-called preterite forms are most often used in the subjunctive mood in the present tense. The auxiliary verbs may and let are also often used in the subjunctive mood, as in the well-known examples "May the Force be with you." and "Let God bless you with good." Both sentences express some uncertainty and hence are subjunctive.
The verbs listed below mostly share the above features but with certain differences. They are sometimes, but not always, categorized as modal verbs. They may also be called "semi-modals".
The verb ought differs from the principal modals only in that it governs a to-infinitive rather than a bare infinitive (compare he should go with he ought to go).
The verbs dare and need can be used as modals, often in the negative (Dare he fight?; You dare not do that.; You need not go.), although they are more commonly found in constructions where they appear as ordinary inflected verbs (He dares to fight; You don't need to go). There is also a dialect verb, nearly obsolete but sometimes heard in Appalachia and the Deep South of the United States: darest, which means "dare not", as in "You darest do that." The verb had in the expression had better behaves like a modal verb, hence had better (considered as a compound verb) is sometimes classed as a modal or semi-modal.
The verb used in the expression used to (do something) can behave as a modal, but is more often used with do-support than with auxiliary-verb syntax: Did she used to do it? (or Did she use to do it?) and She didn't used to do it (or She didn't use to do it) are more common than Used she to do it? and She used not (usedn't) to do it.
Other English auxiliaries appear in a variety of different forms and are not regarded as modal verbs. These are: be, used as an auxiliary in passive voice and continuous aspect constructions (it follows auxiliary-verb syntax even when used as a copula, and in auxiliary-like formations such as be going to, is to and be about to); have, used as an auxiliary in perfect aspect constructions, including the idiom have got (to) (it is also used in have to, which has modal meaning, but here, as when denoting possession, have only rarely follows auxiliary-verb syntax; see also § Must and have to below); and do (see do-support).
For more general information about English verb inflection and auxiliary usage, see English verbs and English clause syntax. For details of the uses of the particular modals, see § Usage of specific verbs below.
Etymology:
The modals can and could are from Old English can(n) and cuþ, which were respectively present and preterite forms of the verb cunnan ("to be able"). The silent l in the spelling of could results from analogy with would and should.
Similarly, may and might are from Old English mæg and meahte, respectively present and preterite forms of magan ("may, to be able"); shall and should are from sceal and sceolde, respectively present and preterite forms of sculan ("to owe, be obliged"); and will and would are from wille and wolde, respectively present and preterite forms of willan ("to wish, want").
The aforementioned Old English verbs cunnan, magan, sculan, and willan followed the preterite-present paradigm (or, in the case of willan, a similar but irregular paradigm), which explains the absence of the ending -s in the third person on the present forms can, may, shall, and will. (The original Old English forms given above were first and third person singular forms; their descendant forms became generalized to all persons and numbers.) The verb must comes from Old English moste, part of the verb motan ("to be able to, be obliged to"). This was another preterite-present verb, of which moste was in fact the preterite (the present form mot gave rise to mote, which was used as a modal verb in Early Modern English; but must has now lost its past connotations and has replaced mote). Similarly, ought was originally a past form—it derives from ahte, preterite of agan ("to own"), another Old English preterite-present verb, whose present tense form ah has also given the modern (regular) verb owe (and ought was formerly used as a past tense of owe).
The verb dare also originates from a preterite-present verb, durran ("to dare"), specifically its present tense dear(r), although in its non-modal uses in Modern English it is conjugated regularly. However, need comes from the regular Old English verb neodian (meaning "to be necessary")—the alternative third person form need (in place of needs), which has become the norm in modal uses, became common in the 16th century.
Syntax:
A modal verb serves as an auxiliary to another verb, which appears in the infinitive form (the bare infinitive, or the to-infinitive in the cases of ought and used as discussed above). Examples: You must escape; This may be difficult.
The verb governed by the modal may be another auxiliary (necessarily one that can appear in infinitive form—this includes be and have, but not another modal, except in the non-standard cases described below under § Double modals). Hence a modal may introduce a chain (technically catena) of verb forms, in which the other auxiliaries express properties such as aspect and voice, as in He must have been given a new job.
Modals can appear in tag questions and other elliptical sentences without the governed verb being expressed: ...can he?; I mustn't.; Would they? Like other auxiliaries, modal verbs are negated by the addition of the word not after them. (The modification of meaning may not always correspond to simple negation, as in the case of must not.) The modal can combines with not to form the single word cannot. Most of the modals have contracted negated forms in n't which are commonly used in informal English: can't, mustn't, won't (from will), etc.
Again like other auxiliaries, modal verbs undergo inversion with their subject, in forming questions and in the other cases described in the article on subject–auxiliary inversion: Could you do this?; On no account may you enter. When there is negation, the contraction with n't may undergo inversion as an auxiliary in its own right: Why can't I come in? (or: Why can I not come in?).
More information on these topics can be found at English clause syntax.
Past forms:
The preterite (past) forms given above (could, might, should, and would, corresponding to can, may, shall, and will, respectively) do not always simply modify the meaning of the modal to give it past time reference. The only one regularly used as an ordinary past tense is could, when referring to ability: I could swim may serve as a past form of I can swim.
All the preterites are used as past equivalents for the corresponding present modals in indirect speech and similar clauses requiring the rules of sequence of tenses to be applied. For example, in 1960, it might have been said that People think that we will all be driving hovercars by the year 2000, whereas at a later date it might be reported that In 1960, people thought we would all be driving hovercars by the year 2000.
This "future-in-the-past" (also known as the past prospective, see: prospective) usage of would can also occur in independent sentences: I moved to Green Gables in 1930; I would live there for the next ten years.
In many cases, in order to give modals past reference, they are used together with a "perfect infinitive", namely the auxiliary have and a past participle, as in I should have asked her; You may have seen me. Sometimes these expressions are limited in meaning; for example, must have can refer only to certainty, whereas past obligation is expressed by an alternative phrase such as had to (see § Replacements for defective forms below).
Conditional sentences:
The preterite forms of modals are used in counterfactual conditional sentences, in the apodosis (then-clause). The modal would (sometimes should as a first-person alternative) is used to produce the conditional construction which is typically used in clauses of this type: If you loved me, you would support me. It can be replaced by could (meaning "would be able to") and might (meaning "would possibly") as appropriate.
When the clause has past time reference, the construction with the modal plus perfect infinitive (see above) is used: If they (had) wanted to do it, they would (could/might) have done it by now. (The would have done construction is called the conditional perfect.) The protasis (if-clause) of such a sentence typically contains the past tense of a verb (or the past perfect construction, in the case of past time reference), without any modal. The modal could may be used here in its role as the past tense of can (if I could speak French). However, all the modal preterites can be used in such clauses with certain types of hypothetical future reference: if I should lose or should I lose (equivalent to if I lose); if you would/might/could stop doing that (usually used as a form of request).
Sentences with the verb wish (and expressions of wish using if only...) follow similar patterns to the if-clauses referred to above, when they have counterfactual present or past reference. When they express a desired event in the near future, the modal would is used: I wish you would visit me; If only he would give me a sign.
For more information see English conditional sentences and English subjunctive.
Replacements for defective forms:
As noted above, English modal verbs are defective in that they do not have infinitive, participle, imperative, or (standard) subjunctive forms, and, in some cases, past forms. However in many cases there exist equivalent expressions that carry the same meaning as the modal, and can be used to supply the missing forms. In particular: The modals can and could, in their meanings expressing ability, can be replaced by am/is/are able to and was/were able to. Additional forms can thus be supplied: the infinitive (to) be able to, the subjunctive and (rarely) imperative be able to, and the participles being able to and been able to.
The modals may and might, in their meanings expressing permission, can be replaced by am/is/are allowed to and was/were allowed to.
The modal must in most meanings can be replaced by have/has to. This supplies the past and past participle form had to, and other forms (to) have to, having to.
Will can be replaced by am/is/are going to. This can supply the past and other forms: was/were going to, (to) be going to, being/been going to.
The modals should and ought to might be replaced by am/is/are supposed to, thus supplying the forms was/were supposed to, (to) be supposed to, being/been supposed to.
Contractions and reduced pronunciation:
As already mentioned, most of the modals in combination with not form commonly used contractions: can't, won't, etc. Some of the modals also have contracted forms themselves: The verb will is often contracted to 'll; the same contraction may also represent shall.
The verb would (or should, when used as a first-person equivalent of would) is often contracted to 'd.
The had of had better is also often contracted to 'd. (The same contraction is also used for other cases of had as an auxiliary.)
Certain of the modals generally have a weak pronunciation when they are not stressed or otherwise prominent; for example, can is usually pronounced /kən/. The same applies to certain words following modals, particularly auxiliary have: a combination like should have is normally reduced to /ʃʊd(h)əv/ or just /ʃʊdə/ "shoulda". Also ought to can become /ɔːtə/ "oughta". See weak and strong forms in English.
Usage of specific verbs:
Can and could:
The modal verb can expresses possibility in a dynamic, deontic, or epistemic sense, that is, in terms of innate ability, permissibility, or possible circumstance. For example: I can speak English means "I am able to speak English" or "I know how to speak English." You can smoke here means "you may (are permitted to) smoke here" (in formal English may or might is sometimes considered more correct than can or could in these senses).
There can be strong rivalry between siblings means that such rivalry is possible. The preterite form could is used as the past tense or conditional form of can in the above meanings (see § Past forms above). It is also used to express possible circumstance: We could be in trouble here. It is preferable to use could, may or might rather than can when expressing possible circumstance in a particular situation (as opposed to the general case, as in the "rivalry" example above, where can or may is used).
Both can and could can be used to make requests: Can/could you pass me the cheese? means "Please pass me the cheese" (where could indicates greater politeness).
It is common to use can with verbs of perception such as see, hear, etc., as in I can see a tree. Aspectual distinctions can be made, such as I could see it (ongoing state) vs. I saw it (event).
The use of could with the perfect infinitive expresses past ability or possibility, either in some counterfactual circumstance (I could have told him if I had seen him), or in some real circumstance where the act in question was not in fact realized: I could have told him yesterday (but in fact I didn't). The use of can with the perfect infinitive, can have..., is a rarer alternative to may have... (for the negative see below).
The negation of can is the single word cannot, only occasionally written separately as can not. Though cannot is preferred (as can not is potentially ambiguous), its irregularity (all other uncontracted verbal negations use at least two words) sometimes causes those unfamiliar with the nuances of English spelling to use the separated form. Its contracted form is can't (pronounced /kɑːnt/ in RP and some other dialects). The negation of could is the regular could not, contracted to couldn't.
The negative forms reverse the meaning of the modal (to express inability, impermissibility or impossibility). This differs from the case with may or might used to express possibility: it can't be true has a different meaning than it may not be true. Thus can't (or cannot) is often used to express disbelief in the possibility of something, as must expresses belief in the certainty of something. When the circumstance in question refers to the past, the form with the perfect infinitive is used: he can't (cannot) have done it means "I believe it impossible that he did it" (compare he must have done it).
Occasionally not is applied to the infinitive rather than to the modal (stress would then be applied to make the meaning clear): I could not do that, but I'm going to do it anyway.
May and might:
The verb may expresses possibility in either an epistemic or deontic sense, that is, in terms of possible circumstance or permissibility. For example: The mouse may be dead means that it is possible that the mouse is dead.
You may leave the room means that the listener is permitted to leave the room. In expressing possible circumstance, may can have future as well as present reference (he may arrive means that it is possible that he will arrive; I may go to the mall means that I am considering going to the mall).
The preterite form might is used as a synonym for may when expressing possible circumstance (just as could can be for can – see above). It is sometimes said that might and could express a greater degree of doubt than may. For uses of might in conditional sentences, and as a past equivalent to may in such contexts as indirect speech, see § Past forms above.
May (or might) can also express irrelevance in spite of certain or likely truth: He may be taller than I am, but he is certainly not stronger could mean "While it is (or may be) true that he is taller than I am, that does not make a difference, as he is certainly not stronger." May can indicate presently given permission for present or future actions: You may go now. Might used in this way is milder: You might go now if you feel like it. Similarly May I use your phone? is a request for permission (might would be more hesitant or polite).
A less common use of may is to express wishes, as in May you live long and happy or May the Force be with you (see also English subjunctive).
When used with the perfect infinitive, may have indicates uncertainty about a past circumstance, whereas might have can have that meaning, but it can also refer to possibilities that did not occur but could have in other circumstances (see also conditional sentences above).
She may have eaten the cake (the speaker does not know whether she ate cake).
She might have eaten cake (this means either the same as the above, or else that she did not eat cake but that it was or would have been possible for her to eat cake). Note that the above perfect forms refer to possibility, not permission (although the second sense of might have might sometimes imply permission).
The negated form of may is may not; this does not have a common contraction (mayn't is obsolete). The negation of might is might not; this is sometimes contracted to mightn't, mostly in tag questions and in other questions expressing doubt (Mightn't I come in if I took my boots off?).
The meaning of the negated form depends on the usage of the modal. When possibility is indicated, the negation effectively applies to the main verb rather than the modal: That may/might not be means "That may/might [not be]," i.e. "That may fail to be true." But when permission is being expressed, the negation applies to the modal or entire verb phrase: You may not go now means "You are not permitted to go now" (except in rare, spoken cases where not and the main verb are both stressed to indicate that they go together: You may go or not go, whichever you wish).
Shall and should:
The verb shall is used in some varieties of English in place of will, indicating futurity when the subject is first person (I shall, we shall).
With second- and third-person subjects, shall indicates an order, command or prophecy: Cinderella, you shall go to the ball! It is often used in writing laws and specifications: Those convicted of violating this law shall be imprisoned for a term of not less than three years; The electronics assembly shall be able to operate within a normal temperature range.
Shall is sometimes used in questions (in the first person) to ask for advice or confirmation of a suggestion: Shall I read now?; What shall we wear?
Should is sometimes used as a first-person equivalent for would (in its conditional and "future-in-the-past" uses), in the same way that shall can replace will. Should is also used to form a replacement for the present subjunctive in some varieties of English, and also in some conditional sentences with hypothetical future reference – see English subjunctive and English conditional sentences.
Should is often used to describe an expected or recommended behavior or circumstance. It can be used to give advice or to describe normative behavior, though without such strong obligatory force as must or have to. Thus You should never lie describes a social or ethical norm. It can also express what will happen according to theory or expectations: This should work. In these uses it is equivalent to ought to.
Both shall and should can be used with the perfect infinitive (shall/should have (done)) in their role as first-person equivalents of will and would (thus to form future perfect or conditional perfect structures). Also shall have may express an order with perfect aspect (you shall have finished your duties by nine o'clock). When should is used in this way it usually expresses something which would have been expected, or normatively required, at some time in the past, but which did not in fact happen (or is not known to have happened): I should have done that yesterday ("it would have been expedient, or expected of me, to do that yesterday").
The formal negations are shall not and should not, contracted to shan't and shouldn't. The negation effectively applies to the main verb rather than the auxiliary: you should not do this implies not merely that there is no need to do this, but that there is a need not to do this. The logical negation of I should is I ought not to or I am not supposed to.
Will and would:
Will as a tense marker is often used to express futurity (The next meeting will be held on Thursday). Since this is an expression of time rather than modality, constructions with will (or sometimes shall; see above and at shall and will) are often referred to as the future tense of English, and forms like will do, will be doing, will have done and will have been doing are often called the simple future, future progressive (or future continuous), future perfect, and future perfect progressive (continuous). With first-person subjects (I, we), in varieties where shall is used for simple expression of futurity, the use of will indicates particular willingness or determination. (Future events are also sometimes referred to using the present tense (see Uses of English verb forms), or using the going to construction.) Will can express habitual aspect; for example, he will make mistakes may mean that he frequently makes mistakes (here the word will is usually stressed somewhat, and often expresses annoyance).
Will also has these uses as a modal: it can express strong probability with present time reference, as in That will be John at the door.
It can be used to give an indirect order, as in You will do it right now.
Modal uses of the preterite form would include the following. Would is used in some conditional sentences. It expresses politeness, as in I would like to... (to politely state a preference) and Would you (be so kind as to) do this? (for "Please do this"). As a tense marker, would is used as the future of the past, as in I knew I would graduate two years later; this is a past form of future will as described above under § Past forms (it is sometimes replaced by should in the first person, in the same way that will is replaced by shall). As an aspect marker, would expresses habitual aspect in past time, as in Back then, I would eat early and would walk to school.
Both will and would can be used with the perfect infinitive (will have, would have), either to form the future perfect and conditional perfect forms already referred to, or to express perfect aspect in their other meanings (e.g. there will have been an arrest order, expressing strong probability).
The negated forms are will not (often contracted to won't) and would not (often contracted to wouldn't). In the modal meanings of will the negation is effectively applied to the main verb phrase and not to the modality (e.g. when expressing an order, you will not do it expresses an order not to do it, rather than just the absence of an order to do it). For contracted forms of will and would themselves, see § Contractions and reduced pronunciation above.
Must and have to:
The modal must expresses obligation or necessity: You must use this form; We must try to escape. It can also express a conclusion reached by indirect evidence (e.g. Sue must be at home).
An alternative to must is the expression have to (or has to, depending on the subject; in the present tense sometimes have got to), which is often more idiomatic in informal English when referring to obligation. It also provides the other forms in which must is defective (see § Replacements for defective forms above) and enables simple negation (see below).
When used with the perfect infinitive (i.e. with have and the past participle), must has only an epistemic flavor: Sue must have left means that the speaker concludes that Sue has left. To express obligation or necessity in the past, had to or some other synonym must be used.
The formal negation of must is must not (contracted to mustn't). However the negation effectively applies to the main verb, not the modality: You must not do this means that you are required not to do this, not just that you are not required to do this. To express the lack of requirement or obligation, the negative of have to or need (see below) can be used: You don't have to do this; You needn't do this.
The above negative forms are not usually used in the sense of a factual conclusion; here it is common to use can't to express confidence that something is not the case (as in It can't be here or, with the perfect, Sue can't have left).
Mustn't can nonetheless be used as a simple negative of must in tag questions and other questions expressing doubt: We must do it, mustn't we? Mustn't he be in the operating room by this stage?
Ought to and had better:
Ought is used with meanings similar to those of should, expressing expectation or requirement. The principal grammatical difference is that ought is used with the to-infinitive rather than the bare infinitive; hence we should go is equivalent to we ought to go. Because of this difference of syntax, ought is sometimes excluded from the class of modal verbs, or is classed as a semi-modal.
The reduced pronunciation of ought to (see § Contractions and reduced pronunciation above) is sometimes given the eye dialect spelling oughtta.
Ought can be used with perfect infinitives in the same way as should (but again with the insertion of to): you ought to have done that earlier.
The grammatically negated form is ought not or oughtn't, equivalent in meaning to shouldn't (but again used with to). The expression had better has similar meaning to should and ought when expressing recommended or expedient behavior: I had better get down to work (it can also be used to give instructions with the implication of a threat: you had better give me the money or else). The had of this expression is similar to a modal: it governs the bare infinitive, it is defective in that it is not replaceable by any other form of the verb have, and it behaves syntactically as an auxiliary verb. For this reason the expression had better, considered as a kind of compound verb, is sometimes classed along with the modals or as a semi-modal.
The had of had better can be contracted to 'd, or in some informal usage (especially American) can be omitted. The expression can be used with a perfect infinitive: you'd better have finished that report by tomorrow. There is a negative form hadn't better, used mainly in questions: Hadn't we better start now? It is more common for the infinitive to be negated by means of not after better: You'd better not do that (meaning that you are strongly advised not to do that).
Dare and need:
The verbs dare and need can be used both as modals and as ordinary conjugated (non-modal) verbs. As non-modal verbs they can take a to-infinitive as their complement (I dared to answer her; He needs to clean that), although dare may also take a bare infinitive (He didn't dare go). In their uses as modals they govern a bare infinitive, and are usually restricted to questions and negative sentences.
Examples of the modal use of dare, followed by equivalents using non-modal dare where appropriate: Dare he do it? ("Does he dare to do it?"); I daren't (or dare not) try. ("I don't dare to try"); How dare you! (idiomatic expression of outrage); I dare say. (another idiomatic expression, here exceptionally without negation or question syntax).
The modal use of need is close in meaning to must expressing necessity or obligation. The negated form need not (needn't) differs in meaning from must not, however; it expresses lack of necessity, whereas must not expresses prohibition. Examples: Need I continue? ("Do I need to continue? Must I continue?"); You needn't water the grass ("You don't have to water the grass"; compare the different meaning of You mustn't water...).
Modal need can also be used with the perfect infinitive: Need I have done that? It is most commonly used here in the negative, to denote that something that was done was (from the present perspective) not in fact necessary: You needn't have left that tip.
Used to:
The past-tense verbal expression used to expresses past states or past habitual actions, usually with the implication that they are no longer so. It is followed by the to-infinitive (that is, the full expression consists of the verb used plus the to-infinitive). Thus the statement I used to go to college means that the speaker formerly habitually went to college, and normally implies that this is no longer the case.
While used to does not express modality, it has some similarities with modal auxiliaries in that it is defective in form and can follow auxiliary-verb syntax: it is possible to form questions like Used he to come here? and negatives like He used not (rarely usedn't) to come here. More common, however (though not the most formal style), is the syntax that treats used as the past tense of an ordinary verb, forming questions and negatives with did: Did he use(d) to come here? He didn't use(d) to come here. Note the difference in pronunciation between the ordinary noun use /juːs/ and the verb forms described here: /juːst/.
The past-tense verbal use of used to should not be confused with the adjectival (participial) use of the same expression, meaning "familiar with", as in I am used to this, we must get used to the cold. When the adjectival form is followed by a verb, the gerund (-ing form) is used: I am used to going to college in the mornings.
Deduction:
In English, modal verbs and related expressions such as must, have to, have got to, and can't/couldn't are used to express deduction. These forms state how sure the speaker is about something.
You're shivering—you must be cold.
Someone must have taken the key: it is not here.
I didn't order ten books. This has to be a mistake.
These aren't mine—they've got to be yours.
It can't be a burglar. All the doors and windows are locked.
Double modals:
In formal standard English usage, modal verbs usually cannot be used consecutively: a modal must be followed by a bare infinitive, a form that defective verbs lack, so a modal can be followed only by a non-defective verb. Might have is acceptable (have is not a defective verb), but *might must is not, even though must and have to can often be used interchangeably. Two rules from different grammatical models supposedly disallow the construction: proponents of phrase structure grammar usually see the surface clause as allowing only one modal verb, while main-verb analysis would dictate that defective verbs occur only in finite forms. A greater variety of double modals appears in some regional dialects. In standard English, phrases such as would dare and should have are sometimes used in conversation and are grammatically correct. A quasi-double modal may appear in the future tense, as in We must be able to work, with must being the modal auxiliary and be able to serving as the infinitive phrase. Other examples include You may not dare to run or I would need to have help.
To put double modals in past tense, only the first modal is changed as in I could ought to. Double modals are also referred to as multiple modals.To form questions, the subject and the first verb are swapped if the verb requires no did/do-support, such as Will you be able to write? If the main auxiliary requires did/do-support, the appropriate form of did/do is added to the beginning, as in Did he use to need to fight? If modals are put in the perfect tense, the past participle of the infinitive is used, as in He had been going to swim or You have not been able to skate. In questions, the main verb and subject are swapped, as in Has she had to come? "I might could do something," for instance, is an example of a double modal construction that can be found in varieties of Southern American and Midland American English.
Comparison with other Germanic languages:
Many English modals have cognates in other Germanic languages, albeit with different meanings in some cases. Unlike the English modals, however, these verbs are not generally defective; they can inflect, and have forms such as infinitives, participles and future tenses (for example using the auxiliary werden in German). Examples of such cognates include: In German: mögen, müssen, können, sollen, wollen; cognates of may, must, can, shall, and will. Although German shares five modal verbs with English, their meanings are often quite different. Mögen does not mean "to be allowed" but "may" as epistemic modal and "to like" as a normal verb followed by a noun. It can be followed by an infinitive with the meaning of "to have a desire to". Wollen means "will" only in the sense of "to want to" and is not used to form the future tense, for which werden is used instead. Müssen, können, and sollen are used similarly to English "must", "can", and "shall". Note, however, that the negation of müssen is a literal one in German, not an inverse one as in English. This is to say that German ich muss ("I must") means "I need to", and ich muss nicht (literally the same as "I must not") accordingly means "I don't need to." In English, "to have to" behaves the same way, whereas English "must" expresses an interdiction when negated. Brauchen ("to need") is sometimes used like a modal verb, especially when negated (Er braucht nicht kommen, "He need not come.").
In Dutch: mogen, moeten, kunnen, zullen, willen; cognates of may, must, can, shall, and will.
In Danish: måtte, kunne, ville, skulle, cognates of may/must, can, will, shall. They generally have the same corresponding meanings in English, with the exception of ville, which usually means "to want to" (but which can also mean "will").
In Swedish: må (past tense: måtte), måsta, kunna, vilja, ska(ll), cognates of may/might, must, can, will, shall. They generally have the same corresponding meanings in English, with the exception of vilja, which means "to want to".
Since modal verbs in other Germanic languages are not defective, the problem of double modals (see above) does not arise: the second modal verb in such a construction simply takes the infinitive form, as would any non-modal verb in the same position. Compare the following translations of English "I want to be able to dance", all of which translate literally as "I want can dance" (except German, which translates as "I want dance can"): German: Ich will tanzen können.
Dutch: Ik wil kunnen dansen.
Danish: Jeg vil kunne danse.
Swedish: Jag vill kunna dansa. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ARL6IP4**
ARL6IP4:
ADP-ribosylation-like factor 6 interacting protein 4 (ARL6IP4), also called SRp25, is the product of the ARL6IP4 gene located on chromosome 12q24.31. Its function is unknown.
Structure:
It is 360 amino acids in length. It is expressed ubiquitously, but only in the G1/S phase of the cell cycle. The human and mouse mRNAs of this protein share 77% homology. Two types of amino acid clusters have been observed: a serine cluster and a basic cluster.
Function:
Its function(s) are unknown. However, due to sequence homology of its protein with SR splicing factors, it is widely believed that the protein is nuclear and may have a role in splicing regulation. The protein is believed to be a mediator in the RAC1 signalling pathway.
RNA editing:
The pre-mRNA of the ARL6IP4 gene product is subject to RNA Editing.
A-to-I RNA editing is catalyzed by a family of adenosine deaminases acting on RNA (ADARs) that specifically recognize adenosines within double-stranded regions of pre-mRNAs and deaminate them to inosine. Inosines are recognised as guanosine by the cellular translational machinery. ADAR1 and ADAR2 are the only enzymatically active members; ADAR3 is thought to have a regulatory role in the brain. ADAR1 and ADAR2 are widely expressed in tissues, while ADAR3 is restricted to the brain. The double-stranded regions of RNA are formed by base-pairing between residues in the region close to the editing site and residues usually in a neighboring intron, but sometimes in an exonic sequence. The region that base-pairs with the editing region is known as an Editing Complementary Sequence (ECS).
Location: Editing occurs at a K/R editing site at amino acid position 225 of the final protein. Using RT-PCR and sequencing of 100 individual clones, 7% of isoform 3 transcripts showed a G instead of an A at this position during sequencing. Other minor editing sites may also be present, including some in the same exon as the major editing site. As in the case of IGFBP7 pre-mRNA, editing here is unusual in that the RNA fold-back structure is made up of exonic sequence only.
Effects on protein structure: Editing at this site changes the codon from a lysine to an arginine. This occurs in a highly basic region of the protein.
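The codon change can be sketched in a few lines of Python. The codons below are illustrative only (the actual ARL6IP4 transcript context is not reproduced here); they show how deaminating one adenosine in a lysine codon, with the resulting inosine read as guanosine, yields an arginine codon:

```python
# Minimal sketch of A-to-I editing: inosine (I) is decoded as guanosine (G),
# so editing one adenosine in a lysine codon produces an arginine codon.
# The codon chosen is illustrative, not the actual ARL6IP4 sequence.
CODON_TABLE = {"AAA": "Lys", "AAG": "Lys", "AGA": "Arg", "AGG": "Arg"}

def edit_a_to_i(codon: str, pos: int) -> str:
    """Deaminate the adenosine at `pos`; the ribosome reads inosine as G."""
    if codon[pos] != "A":
        raise ValueError("only adenosines can be edited")
    return codon[:pos] + "G" + codon[pos + 1:]

unedited = "AAA"                    # lysine codon
edited = edit_a_to_i(unedited, 1)   # edit the middle adenosine -> "AGA"
print(CODON_TABLE[unedited], "->", CODON_TABLE[edited])  # Lys -> Arg
```

The same one-base change explains why a single editing event produces the conservative K-to-R substitution described above.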
Effects on protein function: The function of the unedited protein is largely uncharacterised; therefore, the effect of editing the pre-mRNA on the protein's function is also unknown. The amino acid change is conservative and is unlikely to greatly alter protein function. However, the editing site may still be important, since the altered residue is a lysine: lysines can be sites of post-translational modification, and conversion to an arginine could affect such modification.
**Topic-prominent language**
Topic-prominent language:
A topic-prominent language is a language that organizes its syntax to emphasize the topic–comment structure of the sentence. The term is best known in American linguistics from Charles N. Li and Sandra Thompson, who distinguished topic-prominent languages, such as Korean and Japanese, from subject-prominent languages, such as English.
In Li and Thompson's (1976) view, topic-prominent languages have morphology or syntax that highlights the distinction between the topic and the comment (what is said about the topic). Topic–comment structure may be independent of the syntactic ordering of subject, verb and object.
Common features:
Many topic-prominent languages share several syntactic features that have arisen because the languages have sentences that are structured around topics, rather than subjects and objects: They tend to downplay the role of the passive voice, if a passive construction exists at all, since the main idea of passivization is to turn an object into a subject in languages whose subject is understood to be the topic by default.
They rarely have expletives or "dummy subjects" (pleonastic pronouns) like English it in It's raining.
They often have sentences with so-called "double subjects", actually a topic plus a subject; such sentence patterns are common in topic-prominent languages such as Mandarin and Japanese. They do not have articles, which are another way of indicating old vs. new information.
The distinction between subject and object is not reliably marked. The Lolo–Burmese language Lisu has been described as highly topic-prominent, and Sara Rosen has demonstrated that "while every clause has an identifiable topic, it is often impossible to distinguish subject from direct object or agent from patient. There are no diagnostics that reliably identify subjects (or objects) in Lisu." This ambiguity is demonstrated in the following example.
Examples:
Examples of topic-prominent languages include East Asian languages such as Chinese, Japanese, Korean, Vietnamese, Malay, Indonesian, Singaporean English and Malaysian English. Turkish, Hungarian, Somali, and Native American languages like the Siouan languages are also topic-prominent. Modern linguistic studies have shown that Brazilian Portuguese is a topic-prominent or topic- and subject-prominent language (see Brazilian Portuguese#Topic-prominent language). American Sign Language is also considered to be topic-prominent.
Mandarin Chinese: Mandarin Chinese sentences are predominantly SVO, but the language allows the object to be promoted to the topic of the sentence, resulting in an apparently OSV word order.
Similar examples exist in Japanese, Lakota, and Turkish.
**Reid vapor pressure**
Reid vapor pressure:
Reid vapor pressure (RVP) is a common measure of the volatility of gasoline and other petroleum products. It is defined as the absolute vapor pressure exerted by the vapor of the liquid and any dissolved gases/moisture at 37.8 °C (100 °F) as determined by the test method ASTM-D-323, which was first developed in 1930 and has been revised several times (the latest version is ASTM D323-15a). The test method measures the vapor pressure of gasoline, volatile crude oil, jet fuels, naphtha, and other volatile petroleum products but is not applicable for liquefied petroleum gases. ASTM D323-15a requires that the sample be chilled to 0 to 1 degrees Celsius and then poured into the apparatus; for any material that solidifies at this temperature, this step cannot be performed. RVP is commonly reported in kilopascals (kPa) or pounds per square inch (psi) and represents volatilization at atmospheric pressure because ASTM-D-323 measures the gauge pressure of the sample in a non-evacuated chamber.
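Because RVP figures appear in both unit systems, conversion between them is a routine step. The sketch below uses the standard psi-to-kPa factor; the 9.0 psi figure is a hypothetical example value, not a limit quoted from any standard:

```python
# Converting an RVP reading between psi and kPa.
PSI_TO_KPA = 6.894757  # kilopascals per pound-force per square inch

def psi_to_kpa(psi: float) -> float:
    """Convert a vapor pressure reading from psi to kPa."""
    return psi * PSI_TO_KPA

# A hypothetical gasoline RVP of 9.0 psi expressed in kPa:
print(round(psi_to_kpa(9.0), 1))  # 62.1
```

The same factor works in reverse (divide by 6.894757) when a specification quoted in kPa must be compared with an instrument reading in psi.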
Vapor pressure is important to the function and operation of gasoline-powered, especially carbureted, vehicles, and for many other reasons. High levels of vaporization are desirable for winter starting and operation, and lower levels are desirable in avoiding vapor lock during summer heat. Fuel cannot be pumped when there is vapor in the fuel line (summer), and winter starting will be more difficult when liquid gasoline in the combustion chambers has not vaporized. Thus, oil refineries manipulate the Reid vapor pressure seasonally specifically to maintain gasoline engine reliability.
The Reid vapor pressure (RVP) can differ substantially from the true vapor pressure (TVP) of a liquid mixture, since (1) RVP is the vapor pressure measured at 37.8 °C (100 °F) and the TVP is a function of the temperature; (2) RVP is defined as being measured at a vapor-to-liquid ratio of 4:1, whereas the TVP of mixtures can depend on the actual vapor-to-liquid ratio; (3) RVP will include the pressure associated with the presence of dissolved water and air in the sample (which is excluded by some but not all definitions of TVP); and (4) the RVP method is applied to a sample which has had the opportunity to volatilize somewhat prior to measurement: i.e., the sample container is required to be only 70-80% full of liquid (so that whatever volatilizes into the container headspace is lost prior to analysis); the sample then again volatilizes into the headspace of the D323 test chamber before it is heated to 37.8 degrees Celsius.
**Unconventional (oil & gas) reservoir**
Unconventional (oil & gas) reservoir:
Unconventional (oil & gas) reservoirs, or unconventional resources (resource plays), are accumulations where oil & gas phases are tightly bound to the rock fabric by strong capillary forces, requiring specialised measures for evaluation and extraction.
Conventional reservoir:
Oil and gas are generated naturally at depths of around 4 or 5 km below Earth's surface. Being lighter than the water that saturates rocks below the water table, the oil and gas percolate up through aquifer pathways towards Earth's surface over time by buoyancy. Some of the oil and gas percolate all the way to the surface as natural seepages, either on land or on the sea floor. The rest remain trapped underground, where the oil and gas are prevented from reaching the surface by geological barriers, in a range of trap geometries. In this way, underground pockets of oil & gas accumulate by displacing water in porous rocks, which, if permeable, are referred to as conventional reservoirs. A well drilled into these reservoirs normally flows oil and gas through natural buoyancy, driven to the well bore where pressure differences are relatively high. Where the pressures are low, flow can be assisted with pumps (e.g. nodding donkeys).
History:
In the early days of the oil industry, there was no need for stimulation to improve recovery efficiency, because supply vastly outstripped demand and leaving "difficult" oil in the ground was economically expedient. Two world wars, followed by huge economic growth, resulted in surging demand for cheap portable energy, while the availability of new conventional oil and gas resources declined. The industry initially sought to enhance recovery of trapped oil and gas, using techniques like restricted, or low volume, hydraulic fracturing to stimulate the reservoir further, thereby reducing the volume of oil and gas left in the ground to an economic minimum. By the turn of the millennium, a new kind of energy resource was required, particularly by the USA, which was driven to achieve energy independence. The USA turned to unconventional reservoirs to achieve its goals; these reservoirs had been known about for decades but had previously been too costly to be economically attractive. Today, unconventional reservoirs include basin-centered gas, shale gas, coalbed methane (CBM), gas hydrates, tar sands, light tight oil and oil shale, mostly from North America.
Essential differences between conventional and unconventional reservoirs:
The distinction between conventional and unconventional resources reflects differences in the qualities of the reservoir and/or the physical properties of the oil and gas (i.e. permeability and/or viscosity). These characteristics significantly impact predictability (risk to find, appraise and develop) and in turn the methods of extraction from those reservoirs such as fracking.
Conventional oil & gas accumulations are concentrated by buoyancy-driven aquifer pathways into discrete geological traps, which are detectable from the surface. These traps constitute relatively small but high resource density fields. Most conventional oil or gas fields initially flow naturally by buoyancy alone into the well bore, with their limits defined by fluid mechanics measurable from the well bore (e.g. fluid pressure, OWC/GWC etc.). In general, the technical and commercial risk associated with discrete conventional reservoirs can be reduced using relatively inexpensive remote techniques such as reflection seismology, and the resource extracted with relatively few appraisal and development wells.
Unconventional reservoirs, in contrast, are regionally dispersed over large areas with no indicative trap geometry that can be used for predictive purposes. The oil and gas in unconventional reservoirs are generally low-density resources, frequently trapped in the rock by strong capillary forces and incapable of flowing naturally through buoyancy. The limits of an unconventional field are therefore usually defined by relatively expensive well testing for delivery. Extraction from unconventional reservoirs requires changing the physical properties of the reservoir, or the flow characteristics of the fluid, using techniques such as fracking or steam injection. The technical and commercial risk associated with unconventional reservoirs is generally higher than for conventional reservoirs, owing to the lack of predictability of the trap extent and of the reservoir quality, which requires extensive well placement and testing to determine the economic reserves/well limit defined by well delivery.
Environmental differences: As with all forms of fossil fuel, there are established issues with greenhouse gas emissions through export (distribution) as well as consumption (combustion), which are identical whether the oil or gas is derived from conventional or unconventional reservoirs. Their carbon footprints, however, are radically different: conventional reservoirs use the natural energy in the environment to flow oil and gas to the surface unaided; unconventional reservoirs require putting energy into the ground for extraction, either as heat (e.g. tar sands and oil shales) or as pressure (e.g. shale gas and CBM). The artificial transfer of heat and pressure requires the use of large volumes of fresh water, creating supply and disposal issues. The distribution of the resource over large areas creates land use issues, with implications for local communities on infrastructure, freight traffic and local economies. Impact on the environment is an unavoidable consequence of all human activity, but the difference between the impact of conventional reservoirs and that of unconventional reservoirs is significant, measurable and predictable.
**Platform-independent GUI library**
Platform-independent GUI library:
A PIGUI (Platform Independent Graphical User Interface) package is a software library that a programmer uses to produce GUI code for multiple computer platforms. The package presents subroutines and/or objects (along with a programming approach) which are independent of the GUIs that the programmer is targeting. For software to qualify as PIGUI it must support several GUIs under at least two different operating systems (e.g. just supporting OPEN LOOK and X11 on two Unix boxes doesn't count). The package does not necessarily provide any additional portability features. Native look and feel is a desirable feature, but is not essential for PIGUIs.
Considerations:
Using a PIGUI has limitations: the PIGUI deals only with the GUI aspects of the program, so the programmer remains responsible for other portability issues; most PIGUIs slow the execution of the resulting code; and programmers are largely limited to the feature set provided by the PIGUI.
Dependence on a PIGUI can lead to project difficulties since fewer people know how to code any specific PIGUI than do a platform-specific GUI, limiting the number of people who can give advanced help, and if the vendor goes out of business there may be no further support, including future OS enhancements, though availability of source code can ease but not eliminate this problem. Also, bugs in any package, including the PIGUI, filter down to production code.
Alternative approaches:
Web browsers offer a convenient alternative for many applications. Web browsers utilize HTML as a presentation layer for applications hosted on a central server, and web browsers are available for pretty much every platform. However, some applications do not lend themselves well to the web paradigm, requiring a local application with GUI capabilities. Where such applications must support multiple platforms, PIGUI can be more appropriate.
Instead of using a PIGUI, developers could partition their applications into GUI and non-GUI objects, and implement the GUI objects in the native API. Then, when porting, only the GUI objects need to be rewritten for the new platform. There are some software developers who recommend this course of action, as it produces a better fit on each platform and eliminates the overhead often associated with PIGUI toolkits. Obviously, this may require more effort in both the initial development and in ongoing maintenance (no single base of source code). It also means learning how to code for every target platform, which is not (usually) a trivial task, hence the market for PIGUI packages.
User interface approaches:
Most, if not all, PIGUI packages take one of three approaches to providing platform independence. The two most common approaches are the "layered" and the "emulated" user interface, but an up-and-coming approach is the "API emulated" interface.
Packages using a layered interface access native, third party, GUI-building toolkits to provide the look-and-feel compliance for each particular GUI. Layered user interfaces have the advantage that, since they depend on other products which concentrate on a single GUI, they have to provide less software (and, hence, are usually less expensive) than emulated interfaces. Layered interfaces are also more likely to get the native look-and-feel correct on all platforms.
In an emulated user interface, the PIGUI's resultant code produces low-level calls and all the look-and-feel compliance is handled by the PIGUI software itself (e.g., for OpenWindows support, the software would not produce an XView program that must be compiled with the XView toolkit; the software would produce code that interfaces directly with X intrinsics). To provide an emulated user interface, a package provider has to develop a lot of extra code for look-and-feel support. Emulated user interfaces have the advantage that someone on an X11 workstation, for example, can see how the Macintosh-style UI will look (since the look-and-feel is part of the product). Emulated interfaces have the opportunity to provide a faster GUI than does a layered interface; in addition, they do not require purchasing (or learning how to use) other packages to build GUI software.
A third approach to platform independence is emulating one of the supported target's APIs (usually, the Microsoft Windows API) to target other GUIs. With one of these products, one would program using the emulated API and the code would be (to the extent to which the product provides portability) portable to other GUIs.
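The layered approach described above can be pictured as a thin abstraction that dispatches to whichever native toolkit is present on the current platform. The sketch below is illustrative only; the backend classes are hypothetical stand-ins, not bindings to real toolkits:

```python
# Sketch of a "layered" PIGUI: one portable widget API, with the native
# toolkit backend selected per platform. Backend classes are hypothetical.
import sys
from abc import ABC, abstractmethod

class Button(ABC):
    """Portable button interface exposed to application code."""
    @abstractmethod
    def draw(self) -> str: ...

class Win32Button(Button):
    def draw(self) -> str:
        return "button drawn with the native Win32 look-and-feel"

class X11Button(Button):
    def draw(self) -> str:
        return "button drawn with a native X11 toolkit look-and-feel"

def make_button() -> Button:
    # The PIGUI layer picks the backend; application code never sees it.
    return Win32Button() if sys.platform == "win32" else X11Button()

print(make_button().draw())
```

An emulated PIGUI would instead implement `draw` itself on top of low-level drawing calls, which is why it must carry its own look-and-feel code for every supported style.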
Features:
PIGUI packages are pretty similar in their basic functionality; they each provide subroutines or objects that allow the user to build windows, buttons (regular as well as radio buttons and check boxes), menus, and the like. Some areas of differentiation are: support for the platforms needed, the choice of implementation language, availability of source code, support for printers and other devices, support for various character encoding schemes including Unicode, capability to support draw-package-like features, bitmap (and icon) support, the approach to platform independence, nifty high-level widgets, and price (complete price, including royalties and distribution charges).
**SPINT1**
SPINT1:
Kunitz-type protease inhibitor 1 is a protein that in humans is encoded by the SPINT1 gene. The protein encoded by this gene is a member of the Kunitz family of serine protease inhibitors. The protein is a potent inhibitor specific for HGF activator and is thought to be involved in the regulation of the proteolytic activation of HGF in injured tissues. Alternative splicing results in multiple variants encoding different isoforms.
**JAUS Tool Set**
JAUS Tool Set:
The JAUS Tool Set (JTS) is a software engineering tool for the design of software services used in a distributed computing environment. JTS provides a Graphical User Interface (GUI) and supporting tools for the rapid design, documentation, and implementation of service interfaces that adhere to the Society of Automotive Engineers' standard AS5684A, the JAUS Service Interface Design Language (JSIDL). JTS is designed to support the modeling, analysis, implementation, and testing of the protocol for an entire distributed system.
Overview:
The JAUS Tool Set (JTS) is a set of open source software specification and development tools, accompanied by an open source software framework, to develop Joint Architecture for Unmanned Systems (JAUS) designs and compliant interface implementations for simulations and control of robotic components per SAE AS-4 standards. JTS consists of the following components:
GUI-based Service Editor: The Service Editor (referred to as the GUI in this document) provides a user-friendly interface with which a system designer can specify and analyze formal specifications of Components and Services defined using the JAUS Service Interface Definition Language (JSIDL).
Validator: A syntactic and semantic validator, integrated into the GUI, provides on-the-fly validation of specifications entered (or imported) by the user with respect to JSIDL syntax and semantics.
Specification Repository: A repository (or database) that is integrated into the GUI that allows for the storage of and encourages the reuse of existing formal specifications.
C++ Code Generator: The Code Generator automatically generates C++ code that has a 1:1 mapping to the formal specifications. The generated code includes all aspects of the service, including the implementations of marshallers and unmarshallers for messages, and implementations of finite-state machines for protocol behavior that are effectively decoupled from application behavior.
Document Generator: The Document Generator automatically generates documentation for sets of Service Definitions. Documents may be generated in several formats.
Software Framework: The software framework implements the transport layer specification AS5669A, and provides the interfaces necessary to integrate the auto-generated C++ code with the transport layer implementation. Present transport options include UDP and TCP in wired or wireless networks, as well as serial connections. The transport layer itself is modular, and allows end-users to add additional support as needed.
Wireshark Plugin: The Wireshark plugin implements a plugin to the popular network protocol analyzer called Wireshark. This plugin allows for the live capture and offline analysis of JAUS message-based traffic across the wire at runtime. The JAUS Tool Set can be downloaded from www.jaustoolset.org; user documentation and a community forum are also available at the site.
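The marshallers and unmarshallers that the Code Generator produces can be pictured with a small sketch. The message layout below is invented for illustration and does not follow the actual AS5684A or AS5669A wire formats:

```python
# Illustrative marshaller/unmarshaller for a hypothetical binary message.
# The field layout (little-endian uint16 message ID + uint8 sequence
# number) is invented, not taken from the JAUS/SAE specifications.
import struct

HEADER = struct.Struct("<HB")  # message ID (uint16), sequence number (uint8)

def marshal(msg_id: int, seq: int, payload: bytes) -> bytes:
    """Pack a message into its on-the-wire byte representation."""
    return HEADER.pack(msg_id, seq) + payload

def unmarshal(data: bytes):
    """Recover (msg_id, seq, payload) from wire bytes."""
    msg_id, seq = HEADER.unpack_from(data)
    return msg_id, seq, data[HEADER.size:]

wire = marshal(0x4202, 7, b"\x01\x02")
assert unmarshal(wire) == (0x4202, 7, b"\x01\x02")  # round-trips cleanly
```

In JTS the equivalent C++ code is generated automatically from the formal JSIDL specification, so the hand-written layout above is exactly the kind of error-prone step the tool eliminates.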
Release history:
Following a successful Beta test, Version 1.0 of the JAUS Tool Set was released in July 2010. The initial offering focused on core areas of User Interface, HTML document generation, C++ code generation, and the software framework. The Version 1.1 update was released in October 2010. In addition to bug fixes and UI improvements, this version offered several important upgrades including enhancement to the Validator, Wireshark plug-in, and generated code.
The JTS 2.0 release is scheduled for the second quarter of 2011 and further refines the Tool Set functionality: Protocol Validation: Currently, JTS provides validation for message creation, to ensure users cannot create invalid messages specifications. That capability does not currently exist for protocol definitions, but is being added. This will help ensure that users create all necessary elements of a service definition, and reduce user error.
C# and Java Code Generation: Currently, JTS generates cross-platform C++ code. However, other languages including Java and C# are seeing a dramatic increase in their use in distributed systems, particularly in the development of graphical clients to embedded services.
MS Word Document Generation: HTML and JSIDL output is supported, but native Office-Open-XML (OOXML) based MS Word generation has advantages in terms of output presentation and ease of use for integration with other documents. Therefore, we plan to integrate MS Word service document generation. In addition, the development team has several additional goals that are not yet scheduled for a particular release window: Protocol Verification: This involves converting the JSIDL definition of a service into a PROMELA model, for validation by the SPIN model checking tool. Using PROMELA to model client and server interfaces will allow developers to formally validate JAUS services.
End User Experience: We plan to conduct formal User Interface testing. This involves defining a set of tasks and use cases, asking users with various levels of JAUS experience to accomplish those tasks, and measuring performance and collecting feedback, to look for areas where the overall user experience can be improved.
Improved Service Re-Use: JSIDL allows for inheritance of protocol descriptions, much like object-oriented programming languages allow child classes to re-use and extend behaviors defined by the parent class. At present, the generated code 'flattens' these state machines into a series of nested states which gives the correct interface behavior, but only if each single leaf (child) service is generated within its own component. This limits service re-use and can lead to a copy-and-paste of the same implementation across multiple components. The team is evaluating other inheritance solutions that would allow for multiple leaf (child) services to share access to a common parent, but at present the approach is sufficient to address the requirements of the JAUS Core Service Set.
Domains and application:
The JAUS Tool Set is based on the JAUS Service Interface Definition Language (JSIDL), which was originally developed for application within the unmanned systems, or robotics, communities. As such, JTS has quickly gained acceptance as a tool for generation of services and interfaces compliant with the SAE AS-4 "JAUS" publications. Although usage statistics are not available, the Tool Set has been downloaded by representatives of US Army, Navy, Marines, and numerous defense contractors. It was also used in a commercial product called the JAUS Expansion Module sold by DeVivo AST, Inc.
Domains and application:
Since the JSIDL schema is independent of the data being exchanged, however, the Tool Set can be used for the design and implementation of a Service Oriented Architecture for any distributed-systems environment that uses binary-encoded message exchange. JSIDL is built on a two-layered architecture that separates the application layer from the transport layer, effectively decoupling the data being exchanged from the details of how that data moves from component to component.
Domains and application:
Furthermore, since the schema itself is largely generic, it is possible to define messages for any number of domains, including but not limited to industrial control systems, remote monitoring and diagnostics, and web-based applications.
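The two-layer split can be sketched roughly as follows (an illustration under assumed message layouts, not the actual JSIDL wire format; the message fields and the loopback transport are invented): the application layer encodes and decodes binary messages, while a pluggable transport moves the bytes.

```python
# Sketch of decoupling application-layer encoding from the transport layer.
# The binary format (message ID + battery percentage) is an assumption made
# up for this example, not a real JAUS message definition.
import struct

def encode_status(msg_id: int, battery_pct: int) -> bytes:
    # Application layer: fixed little-endian binary wire format
    return struct.pack("<HB", msg_id, battery_pct)

def decode_status(data: bytes):
    return struct.unpack("<HB", data)

class LoopbackTransport:
    """Transport-layer stub; a real deployment might use UDP, serial, etc."""
    def send(self, payload: bytes) -> bytes:
        return payload  # echo back, standing in for the network

transport = LoopbackTransport()
wire = transport.send(encode_status(0x4001, 87))
print(decode_status(wire))  # (16385, 87)
```

Swapping `LoopbackTransport` for any other byte mover leaves the message definitions untouched, which is the decoupling the paragraph above describes.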
Licensing:
JTS is released under the open source BSD license. The JSIDL Standard is available from the SAE. The Jr Middleware on which the Software Framework (Transport Layer) is based is open source under LGPL. Other packages distributed with JTS may have different licenses.
Sponsors:
Development of the JAUS Tool Set was sponsored by several United States Department of Defense organizations: Office of the Under Secretary of Defense for Acquisition, Technology & Logistics / Unmanned Warfare.
Navy Program Executive Officer Littoral and Mine
Navy Program Executive Officer Unmanned Aviation and Strike Weapons
Office of Naval Research
Air Force Research Lab
**GTx Incorporated**
GTx Incorporated:
GTx, Inc. is a pharmaceutical company that is working on drugs in the selective estrogen receptor modulator (SERM) and selective androgen receptor modulator (SARM) classes. Its drugs in development include enobosarm (ostarine) and GTx-758.
GTx Incorporated:
The company was founded in Memphis in 1997 by Mitch Steiner and Marc S. Hanover. The company was originally called Genotherapeutics, changed its name to GTx, Inc. in 2001, and reincorporated in Delaware in 2003. The company licensed toremifene from Orion Corporation, and licensed andarine, enobosarm and prostarine from the University of Tennessee Research Foundation; the SARM compounds from Tennessee had been invented by Karen Veverka and Michael Whitt, each of whom later joined the company. The company held its IPO in February 2004. In 2006 GTx signed a partnership with Ipsen to develop toremifene, a selective estrogen receptor modulator, to prevent prostate cancer and to prevent bone loss in men with prostate cancer; the FDA rejected the application to market the drug for this use in 2009, and Ipsen terminated the arrangement in 2011. In 2012 GTx sold its rights to toremifene to ProStrakan, a subsidiary of Kyowa Hakko Kirin, for around $19 million, and terminated its agreement with Orion. By 2007 enobosarm was in a Phase II trial, and that year GTx signed an exclusive license agreement for its SARM program with Merck; Merck bought $30M in GTx stock, paid an upfront fee of $40M, and agreed to fund $15M in research over the next three years. The agreement also included royalties on any product brought to market and around $400M in potential milestone payments ("biodollars"). The companies ended the deal in 2010. In August 2013 GTx announced that enobosarm had failed in two Phase III clinical trials to treat wasting in people with lung cancer. In October 2013 the company laid off around 60% of its 88-person workforce, and Steiner resigned 6 months later. The company had invested around $35 million in the development of the drug.
The company said at that time that it planned to pursue approval of enobosarm in Europe; the company was also still developing GTx-758 for castration-resistant prostate cancer. In 2016 GTx began Phase II trials to see if enobosarm might be effective in treating stress urinary incontinence in women. In June 2019, GTx combined with Oncternal Therapeutics in a reverse merger. The combined company operates under the name Oncternal Therapeutics, Inc.
**Tetrytol**
Tetrytol:
Tetrytol is a high explosive comprising a mixture of tetryl and TNT. Typically, the proportion of ingredients (by weight) is 65%, 70%, 75% or 80% tetryl to 35%, 30%, 25% or 20% TNT. Tetryl and TNT form a eutectic with a setting point of 67.5 °C, which consists of 55% tetryl and 45% TNT. Hence, cast tetrytol charges consist of solidified suspensions of crystalline tetryl in the solid tetryl-TNT eutectic. Tetrytol is more sensitive than TNT and less sensitive than tetryl to impact. The detonation velocity of unconfined cast cylindrical charges (1 inch diameter) of tetrytol is between 7290 and 7410 m/s, with an average of 7350 m/s for tetrytol 75/25 and 7340 m/s for tetrytol 65/35. For comparison, cylindrical charges of cast pure TNT of similar dimensions are reported to detonate at between 6680 and 6990 m/s.
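The phase composition implied by the eutectic figure can be worked out with a simple mass balance (an illustrative calculation, not quoted from the source): for tetrytol 75/25, all of the TNT ends up in the 55/45 tetryl-TNT eutectic, and the remainder is the free crystalline tetryl held in suspension.

```python
# Illustrative mass balance for cast tetrytol 75/25 (assumption: every gram
# of TNT is bound in the eutectic, which is 45% TNT by weight).
tnt = 25.0                      # g TNT per 100 g of tetrytol 75/25
eutectic = tnt / 0.45           # total mass of tetryl-TNT eutectic
free_tetryl = 100.0 - eutectic  # crystalline tetryl suspended in it
print(round(eutectic, 1), round(free_tetryl, 1))  # 55.6 44.4
```

So a little over half of a 75/25 casting is eutectic matrix, with the balance as suspended tetryl crystals, consistent with the description above.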
Tetrytol:
Applications of tetrytol are usually military in nature, e.g. burster tubes for chemical weapons (such as nerve agent shells), blocks of demolition explosives and cast shaped charges. Dry tetrytol is compatible with copper, brass, aluminum, magnesium, stainless steel, mild steel coated with acid-proof paint and mild steel plated with copper, cadmium, zinc or nickel. Magnesium-aluminum alloys are slightly affected by dry tetrytol. Wet tetrytol is compatible with stainless steel and mild steel coated with acid-proof black paint. Copper, brass, aluminum, magnesium, magnesium-aluminum alloy, mild steel and mild steel plated with cadmium, copper, zinc or nickel are slightly affected by wet tetrytol. When stored below 65 °C (149 °F), tetrytol does not change in stability, acid content, sensitivity or brisance. However, temperatures of 65 °C or above allow the formation of an oily exudate and distortion of blocks. Although tetryl undergoes partial decomposition on melting, the melting of tetrytol does not have the same effect; even when tetrytol is melted and solidified numerous times, there is no change in freezing point, sensitivity to impact or 100 °C vacuum stability test value. Tetrytol has been discontinued by the U.S. due to the exudation and low stability at elevated storage temperatures.
**Media Space**
Media Space:
Media Space is a 642-square-metre (6,910 sq ft) exhibition space at the London Science Museum, developed in association with the National Media Museum.
Media Space:
Opened in September 2013, the space comprises an extensive gallery surrounded by cultural spaces for display and participation, for mixing across the arts, sciences and creative industries. It is also intended to be a showcase for the National Media Museum's collections in photography, as well as cinematography and broadcast technology, and an arena where audiences engage with how new technologies have affected today's creative industries. It is intended for adult audiences.
Exhibitions:
2013/2014: Only in England: Photographs by Tony Ray-Jones and Martin Parr. With material from the National Media Museum's Ray-Jones archive, curated by Martin Parr and Greg Hobson.
2015: Revelations: Experiments in Photography. Toured to National Media Museum, Bradford. Curated by Greg Hobson and Ben Burbridge.
2015/2016: Julia Margaret Cameron: Influence and Intimacy. Photographs by Julia Margaret Cameron.
2015/2016: Gathered Leaves: Photographs by Alec Soth. Photographs and books by Alec Soth. Curated by Kate Bush.
**Phosphite ester**
Phosphite ester:
In organic chemistry, a phosphite ester or organophosphite usually refers to an organophosphorus compound with the formula P(OR)3. They can be considered as esters of an unobserved tautomer of phosphorous acid, H3PO3, with the simplest example being trimethylphosphite, P(OCH3)3. Some phosphites can be considered esters of the dominant tautomer of phosphorous acid (HP(O)(OH)2). The simplest representative is dimethylphosphite, with the formula HP(O)(OCH3)2. Both classes of phosphites are usually colorless liquids.
Synthesis:
From PCl3: Phosphite esters are typically prepared by treating phosphorus trichloride with an alcohol. For alkyl alcohols, the displaced chloride ion can attack the phosphite, causing dealkylation to give a dialkyl phosphite and an organochlorine compound. The overall reaction is as follows: PCl3 + 3 C2H5OH → (C2H5O)2P(O)H + 2 HCl + C2H5Cl. Alternatively, when the alcoholysis is conducted in the presence of a proton acceptor (typically an amine base), one obtains the C3-symmetric trialkyl derivatives: PCl3 + 3 C2H5OH + 3 R3N → (C2H5O)3P + 3 R3NHCl. A base is not essential when using aromatic alcohols such as phenols, as they are not susceptible to attack by chloride; however, a base does catalyse the esterification reaction and is therefore often included.
Synthesis:
By transesterification: Phosphite esters can also be prepared by transesterification, as they undergo alcohol exchange upon heating with other alcohols. This process is reversible and can be used to produce mixed alkyl phosphites. Alternatively, if the phosphite of a volatile alcohol is used, such as trimethyl phosphite, then the by-product (methanol) can be removed by distillation, allowing the reaction to be driven to completion.
Reactions and applications of tris(organo)phosphites:
Reactions: Phosphites are oxidized to phosphate esters: P(OR)3 + [O] → OP(OR)3. This reaction underpins the commercial use of some phosphite esters as stabilizers in polymers. Alkyl phosphite esters are used in the Perkow reaction for the formation of vinyl phosphonates, and in the Michaelis–Arbuzov reaction to form phosphonates. Aryl phosphite esters may not undergo these reactions and hence are commonly used as stabilizers in halogen-bearing polymers such as PVC.
Reactions and applications of tris(organo)phosphites:
Phosphite esters may be used as reducing agents in more specialised cases. For example, triethylphosphite is known to reduce certain hydroperoxides to alcohols formed by autoxidation (scheme). In this process the phosphite is converted to a phosphate ester. This reaction type is also utilized in the Wender Taxol total synthesis.
Homogeneous catalysis: Phosphite esters are Lewis bases and hence can form coordination complexes with various metal ions. Representative phosphite ligands include trimethylphosphite ((MeO)3P), triethylphosphite ((EtO)3P), trimethylolpropane phosphite, and triphenylphosphite ((PhO)3P). Phosphites exhibit smaller ligand cone angles than the structurally related phosphine ligand family. Phosphite ligands are components of industrial catalysts for hydroformylation and hydrocyanation.
Chemistry of HP(O)(OR)2:
Diorganophosphites are derivatives of phosphorus(V) and can be viewed as the di-esters of phosphorous acid ((HO)2P(O)H). They exhibit tautomerism; however, the equilibrium overwhelmingly favours the right-hand (phosphonate-like) form: (RO)2POH ⇌ (RO)2P(O)H. The P–H bond is the site of high reactivity in these compounds (for example in the Atherton–Todd reaction and Hirao coupling), whereas in tri-organophosphites the lone pair on phosphorus is the site of high reactivity. Diorganophosphites do, however, undergo transesterification.
**Michael Athans**
Michael Athans:
Michael Athans (born Michael Athanassiades in Drama, Greece, May 3, 1937 - May 26, 2020) was a Greek-American control theorist and a Professor Emeritus in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. He was a Fellow of the IEEE (1973) and a Fellow of the AAAS (1977), and the recipient of numerous awards for his contributions to the field of control theory. A pioneer in the field, he helped shape modern control theory and spearheaded the fields of multivariable control system design and robust control. Athans was a member of the technical staff at Lincoln Laboratory from 1961 to 1964, and a Department of Electrical Engineering and Computer Science faculty member from 1964 to 1998. Upon retirement, Athans moved to Lisbon, Portugal, where he was an Invited Research Professor in the Institute for Systems and Robotics, Instituto Superior Técnico, and where he received an honoris causa doctorate from the Universidade Técnica de Lisboa in 2011.
Education:
Athans received his B.S., M.S., and Ph.D. in Electrical Engineering from the University of California, Berkeley in 1958, 1959, and 1961, respectively.
Academic career:
From 1961 to 1964, Athans was employed as a member of the technical staff at the MIT Lincoln Laboratory, Lexington, Mass., where he conducted research in optimal control and estimation theory. From 1964 until his early retirement in 1998, he was a faculty member in the MIT Electrical Engineering and Computer Science department, where he held the rank of Professor. He also was the director of the MIT Laboratory for Information and Decision Systems (LIDS) from 1974 to 1981. In 1978 he co-founded ALPHATECH Inc., Burlington, Mass., where he served as Chairman of the Board of Directors. He also consulted for numerous other industrial organizations and government panels. In 1995 he was Visiting Professor in the Department of Electrical and Computer Engineering at the National Technical University of Athens, Greece. From 1997 to 2011 he was an Invited Research Professor in the Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal. Athans was the co-author of Optimal Control (McGraw Hill, 1966), Systems, Networks and Computation: Basic Concepts (McGraw Hill, 1972) and Systems, Networks and Computation: Multivariable Methods (McGraw Hill, 1974). In 1974 he developed 65 color TV lectures and study guides on Modern Control Theory. In addition, he authored or co-authored over 350 technical papers and reports. His research interests and contributions spanned the areas of optimal system and estimation theory, robust and adaptive multivariable control systems, and the application of these methodologies to defense, large space structures, IVHS transportation systems, aerospace, marine, automotive, power, manufacturing, economic, and military C3 systems. His last research interests focused on dynamic models of the human immune system and robust adaptive control methodologies.
Academic career:
In 1964 Athans was the first recipient of the American Automatic Control Council's Donald P. Eckman Award "for outstanding contributions to the field of automatic control". In 1969 he was the first recipient of the Frederick E. Terman Award of the American Society for Engineering Education as "the outstanding young electrical engineering educator." In 1980 he received the second Education Award of the American Control Council for his "outstanding contributions and distinguished leadership in automatic control education." In 1973 he was elected Fellow of the IEEE and in 1977 Fellow of the AAAS. In 1983 he was elected Distinguished Member of the IEEE Control Systems Society. He received the 1993 H.W. Bode Prize from the IEEE Control Systems Society, which also included the delivery of the Bode Plenary Lecture at the 1993 IEEE Conference on Decision and Control. He was the recipient of the Richard E. Bellman Control Heritage Award of the American Automatic Control Council "In Recognition of a Distinguished Career in Automatic Control; As a Leader and Champion of Innovative Research; As a Contributor to Fundamental Knowledge in Optimal, Adaptive, Robust, Decentralized and Distributed Control; and as a Mentor to his Students", presented in June 1995 at the American Control Conference. In 1996 he was awarded honorary doctorates from the National Technical University of Athens, Greece, and from the Technical University of Crete, Chania, Crete, Greece. In July 2002 he was awarded the Ktisivos Award, "in recognition of contributions to control and estimation theory," by the Mediterranean Control and Automation Association. He was the recipient of a Polish Academy of Sciences Medal, "For contributions to Control Theory", in Warsaw, Poland, on June 30, 2005.
In 2006 the Institute of Electrical and Electronics Engineers (IEEE) elected him Life Fellow. Athans served on numerous committees of the IEEE, IFAC, AACC and AAAS; he was president of the IEEE Control Systems Society from 1972 to 1974. In addition, he was a member of AIAA, Phi Beta Kappa, Eta Kappa Nu, and Sigma Xi. He served as Associate Editor of the IEEE Transactions on Automatic Control, the Journal of Dynamic Systems and Control, and the IFAC journal Automatica.
Awards:
American Automatic Control Council Donald P. Eckman Award "for outstanding contributions to the field of automatic control" in 1964.
American Society for Engineering Education Frederick Emmons Terman Award as "the outstanding young electrical engineering educator" in 1969.
American Control Council's Education Award for "outstanding contributions and distinguished leadership in automatic control education" in 1980.
IEEE Control Systems Society's 1993 Hendrik Wade Bode Prize.
American Automatic Control Council's Richard E. Bellman Control Heritage Award in 1995.
Honoris causa doctorate from Universidade Técnica de Lisboa in 2011.