**CR-4056**
CR-4056:
CR-4056 is an analgesic drug candidate with a novel mechanism of action, acting as a ligand for the imidazoline receptor I2. It showed promising results in animal studies against various types of neuropathic pain, and has reached Phase II human clinical trials as a potential treatment for pain associated with osteoarthritis.
**Inelastic collision**
Inelastic collision:
An inelastic collision, in contrast to an elastic collision, is a collision in which kinetic energy is not conserved due to the action of internal friction.
In collisions of macroscopic bodies, some kinetic energy is turned into vibrational energy of the atoms, causing a heating effect, and the bodies are deformed.
Inelastic collision:
The molecules of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules' translational motion and their internal degrees of freedom with each collision. At any one instant, half the collisions are – to a varying extent – inelastic (the pair possesses less kinetic energy after the collision than before), and half could be described as "super-elastic" (possessing more kinetic energy after the collision than before). Averaged across an entire sample, molecular collisions are elastic. Although inelastic collisions do not conserve kinetic energy, they do obey conservation of momentum. Simple ballistic pendulum problems obey the conservation of kinetic energy only when the block swings to its largest angle.
Inelastic collision:
In nuclear physics, an inelastic collision is one in which the incoming particle causes the nucleus it strikes to become excited or to break up. Deep inelastic scattering is a method of probing the structure of subatomic particles in much the same way as Rutherford probed the inside of the atom (see Rutherford scattering). Such experiments were performed on protons in the late 1960s using high-energy electrons at the Stanford Linear Accelerator (SLAC). As in Rutherford scattering, deep inelastic scattering of electrons by proton targets revealed that most of the incident electrons interact very little and pass straight through, with only a small number bouncing back. This indicates that the charge in the proton is concentrated in small lumps, reminiscent of Rutherford's discovery that the positive charge in an atom is concentrated at the nucleus. However, in the case of the proton, the evidence suggested three distinct concentrations of charge (quarks) and not one.
Formula:
The formula for the velocities after a one-dimensional collision is:

\(v_a = \frac{C_R\, m_b (u_b - u_a) + m_a u_a + m_b u_b}{m_a + m_b}\)

\(v_b = \frac{C_R\, m_a (u_a - u_b) + m_a u_a + m_b u_b}{m_a + m_b}\)

where
- \(v_a\) is the final velocity of the first object after impact
- \(v_b\) is the final velocity of the second object after impact
- \(u_a\) is the initial velocity of the first object before impact
- \(u_b\) is the initial velocity of the second object before impact
- \(m_a\) is the mass of the first object
- \(m_b\) is the mass of the second object
- \(C_R\) is the coefficient of restitution; if it is 1 we have an elastic collision; if it is 0 we have a perfectly inelastic collision (see below).

In a center of momentum frame the formulas reduce to:

\(v_a = -C_R u_a, \qquad v_b = -C_R u_b\)

For two- and three-dimensional collisions the velocities in these formulas are the components perpendicular to the tangent line/plane at the point of contact.
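To make the formula concrete, here is a minimal Python sketch (illustrative, not part of the original article; the function name and test values are assumptions):

```python
# Post-collision velocities in one dimension with a coefficient of restitution.
def collide_1d(ma, mb, ua, ub, cr):
    """ma, mb: masses; ua, ub: initial velocities;
    cr: coefficient of restitution (1 = elastic, 0 = perfectly inelastic)."""
    va = (cr * mb * (ub - ua) + ma * ua + mb * ub) / (ma + mb)
    vb = (cr * ma * (ua - ub) + ma * ua + mb * ub) / (ma + mb)
    return va, vb

# Equal masses, elastic collision: velocities are exchanged.
print(collide_1d(1.0, 1.0, 5.0, -3.0, 1.0))  # (-3.0, 5.0)
# Perfectly inelastic: both bodies move at the common, momentum-conserving velocity.
print(collide_1d(1.0, 1.0, 5.0, -3.0, 0.0))  # (1.0, 1.0)
```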
Formula:
If assuming the objects are not rotating before or after the collision, the normal impulse is:

\(J = \frac{m_a m_b}{m_a + m_b} (1 + C_R)\, (\vec u_b - \vec u_a) \cdot \vec n\)

where \(\vec n\) is the normal vector.

Assuming no friction, this gives the velocity updates:

\(\Delta \vec v_a = \frac{J}{m_a} \vec n, \qquad \Delta \vec v_b = -\frac{J}{m_b} \vec n\)
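A hedged sketch of this update in Python follows (vector names and the separating-contact guard are illustrative assumptions, not from the text):

```python
import numpy as np

def apply_normal_impulse(ma, mb, va, vb, n, cr):
    """Frictionless impulse update for two non-rotating bodies.
    va, vb: velocity vectors; n: unit contact normal pointing from body a to body b;
    cr: coefficient of restitution."""
    v_rel = np.dot(va - vb, n)            # closing speed along the normal
    if v_rel <= 0:                        # bodies separating or resting: no impulse
        return va, vb
    j = -(1 + cr) * v_rel / (1 / ma + 1 / mb)    # scalar impulse on body a
    return va + (j / ma) * n, vb - (j / mb) * n
```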
Perfectly inelastic collision:
A perfectly inelastic collision occurs when the maximum amount of kinetic energy of a system is lost. In a perfectly inelastic collision, i.e., a zero coefficient of restitution, the colliding particles stick together. In such a collision, kinetic energy is lost by bonding the two bodies together. This bonding energy usually results in a maximum kinetic energy loss of the system. It is necessary to consider conservation of momentum:

\(m_a u_a + m_b u_b = (m_a + m_b) v\)

(Note: In the sliding block example above, momentum of the two-body system is only conserved if the surface has zero friction. With friction, momentum of the two bodies is transferred to the surface that the two bodies are sliding upon. Similarly, if there is air resistance, the momentum of the bodies can be transferred to the air.) This equation holds true for the two-body (Body A, Body B) collision in the example above; in that example, momentum of the system is conserved because there is no friction between the sliding bodies and the surface.
Perfectly inelastic collision:
where v is the final velocity, which is hence given by

\(v = \frac{m_a u_a + m_b u_b}{m_a + m_b}\)

The reduction of total kinetic energy is equal to the total kinetic energy before the collision in a center of momentum frame with respect to the system of two particles, because in such a frame the kinetic energy after the collision is zero. In this frame most of the kinetic energy before the collision is that of the particle with the smaller mass. In another frame, in addition to the reduction of kinetic energy there may be a transfer of kinetic energy from one particle to the other; the fact that this depends on the frame shows how relative this is. The reduction of kinetic energy \(E_r\) is hence:

\(E_r = \frac{1}{2} \frac{m_a m_b}{m_a + m_b} (u_a - u_b)^2\)

With time reversed we have the situation of two objects pushed away from each other, e.g. shooting a projectile, or a rocket applying thrust (compare the derivation of the Tsiolkovsky rocket equation).
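As a worked example, a short sketch (illustrative values, not from the article) showing the final velocity and the kinetic-energy reduction for a perfectly inelastic collision:

```python
def perfectly_inelastic(ma, mb, ua, ub):
    v = (ma * ua + mb * ub) / (ma + mb)           # conservation of momentum
    ke_before = 0.5 * ma * ua**2 + 0.5 * mb * ub**2
    ke_after = 0.5 * (ma + mb) * v**2
    return v, ke_before - ke_after                # final velocity, energy lost

v, e_lost = perfectly_inelastic(2.0, 1.0, 3.0, 0.0)
print(v, e_lost)  # 2.0 and 3.0: the loss equals 0.5 * (ma*mb/(ma+mb)) * (ua-ub)**2
```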
Partially inelastic collisions:
Partially inelastic collisions are the most common form of collisions in the real world. In this type of collision, the objects involved in the collisions do not stick, but some kinetic energy is still lost. Friction, sound and heat are some ways the kinetic energy can be lost through partial inelastic collisions.
**Champ (food)**
Champ (food):
Champ (brúitín in Irish) is an Irish dish of mashed potatoes with scallions, butter and milk.
Description:
Champ is made by combining mashed potatoes and chopped scallions with butter, milk and, optionally, salt and pepper. It was sometimes made with stinging nettle rather than scallions. In some areas the dish is also called "poundies". Champ is similar to another Irish dish, colcannon, which uses kale or cabbage in place of scallions. Champ is popular in Ulster, whilst colcannon is more so in the other three provinces of Ireland. It was customary to make champ with the first new potatoes harvested. The word champ has also been adopted into the popular Hiberno-English phrase to be "as thick as champ", meaning to be stupid, ill-tempered or sullen.
Samhain:
The dish is associated with Samhain, and would be served on that night. In many parts of Ireland, it was tradition to offer a portion of champ to the fairies by placing a dish of champ with a spoon at the foot of a hawthorn.
**Juice Cubes**
Juice Cubes:
Juice Cubes is a match 3 puzzle video game developed by Pocket PlayLab and published by Rovio Stars, as its second title, for iOS and Android. It is also available as a Facebook app, so it may be played on PC with a web browser.
Gameplay:
In Juice Cubes, the player connects chains of at least three fruits of the same color while trying to complete level goals, e.g. earning a certain number of points or removing sand tiles from the grid. Connecting four or more fruits in a line creates a bomb fruit that destroys a whole row or column when matched; a diagonal connection creates a bomb fruit that destroys fruit in a 3×3 area. Connecting eight or more fruits creates a special fruit that removes from the grid all fruit of the same color it is matched with. As of April 2023, there are 910 levels. Additional lives and in-game power-ups may be purchased with real money.
**Ammonium formate**
Ammonium formate:
Ammonium formate, NH4HCO2, is the ammonium salt of formic acid. It is a colorless, hygroscopic, crystalline solid.
Reductive amination:
Acetone can be transformed into isopropylamine as follows:

CH3C(O)CH3 + 2 HCO2NH4 → (CH3)2CHNHCHO + 2 H2O + NH3 + CO2

(CH3)2CHNHCHO + H2O → (CH3)2CHNH2 + HCO2H
Uses:
Pure ammonium formate decomposes into formamide and water when heated, and this is its primary use in industry. Formic acid can also be obtained by reacting ammonium formate with a dilute acid, and since ammonium formate is also produced from formic acid, it can serve as a way of storing formic acid.
Uses:
Ammonium formate can also be used in palladium on carbon (Pd/C) reduction of functional groups. In the presence of Pd/C, ammonium formate decomposes to hydrogen, carbon dioxide, and ammonia. This hydrogen gas is adsorbed onto the surface of the palladium metal, where it can react with various functional groups. For example, alkenes can be reduced to alkanes, formaldehyde to methanol, and nitro compounds to amines. Activated single bonds to heteroatoms can also be replaced by hydrogens (hydrogenolysis).
Uses:
Ammonium formate can be used for the reductive amination of aldehydes and ketones (the Leuckart reaction), as illustrated for acetone above. Ammonium formate can also be used as a mobile phase additive in high performance liquid chromatography (HPLC), and is suitable for use with liquid chromatography-mass spectrometry (LC/MS). The pKa values of formic acid and the ammonium ion are 3.8 and 9.2, respectively.
Reactions:
When heated, ammonium formate eliminates water, forming formamide. Upon further heating, it forms hydrogen cyanide (HCN) and water. A side reaction of this is the decomposition of formamide to carbon monoxide (CO) and ammonia.
**AllClear ID**
AllClear ID:
AllClear ID (also known as AllClear, and formerly Debix) provides products and services meant to protect people and their personal information from threats related to identity theft. The company delivers these services through its technology and customer service teams.
Data breach response services:
The breach response services from AllClear ID include notification, call center & customer support, and identity protection products. Notification provides access to identity protection services. The call center provides a team experienced in managing the anxiety of breach victims to answer questions about the incident, reassure individuals, and explain the identity protection services offered. Products are available to mitigate risk from different types of breaches including compromised credit cards, passwords, health information, and Social Security numbers. AllClear ID has worked with large companies to manage sensitive and highly-visible breach responses including The Home Depot, P.F. Chang's, Michael's–Aaron Brothers, The UPS Store, Dairy Queen, Albertson's–SuperValu, and Anthem BCBS.
Child identity theft research:
In April 2011 AllClear ID released a report with Richard Power, a distinguished fellow at Carnegie Mellon University CyLab, on the prevalence of child ID theft. Using the data supplied by AllClear ID, Power completed the largest report ever done on child identity theft. From the database of over 40,000 children, Power found that 4,311 had someone else using their Social Security numbers. The Today Show led a follow-up investigation, interviewing victims of child identity theft. Investigators found some of the thieves who were still living and working using a child's Social Security number. In July 2011, CEO Bo Holland, along with leaders from the Social Security Administration, Identity Theft 911, The Identity Theft Resource Center, and more, spoke at Stolen Futures, the FTC forum on Child Identity Theft. There he presented the findings from the CyLab report on child identity theft, as well as findings from follow-up data sampling since the report release. In May 2012, AllClear ID released a follow-up report on child ID theft data involving 27,000 minors. This report further confirmed the growing problem of child identity theft, indicating that children were targeted at a rate 35 times greater than that of adults.
Awards and recognition:
2010 – Debix was recognized as an AlwaysOn Global 250 winner "signifying leadership amongst its peers and game-changing approaches and technologies that are likely to disrupt existing markets and entrenched players in the Global Silicon Valley".
2011 – AllClear ID Pro was ranked second overall, with Identity Guard placing first. In the category of Restoration, AllClear ID tied for first alongside Identity Force and Royal.
August 2011 – Awarded "Best in Resolution" by Javelin Strategy & Research.
February 2012 – Awarded 5 Stevie Awards for Sales & Customer Service: Customer Service Department of the Year, Contact Center of the Year, Best Use of Technology in Customer Service, Front-Line Customer Service Professional of the Year (Investigator Christy McCarley), Customer Service Leader of the Year (VP of Customer Services & Chief Investigator Jamie May).
February 2013 – Awarded 5 Stevie Awards for Customer Service: Contact Center of the Year, Best Use of Technology in Customer Service, Front-Line Customer Service Professional of the Year, Contact Center Manager of the Year, and Customer Service Department of the Year.
February 2014 – Awarded 5 Stevie Awards for Customer Service: Young Customer Service Professional of the Year, Customer Service Department of the Year, Innovation in Customer Service, Contact Center of the Year, Customer Service Professional of the Year.
History:
2004: Founded by Bo Holland, originally named Debix, Inc. After working in the financial industry, Holland used his knowledge of how banks and institutions handled credit requests to create Debix's identity protection network. Holland was previously founder and CEO of Works, Inc., which was acquired by Bank of America in 2005. Works is an electronic payment solutions provider, and Holland invented the patent-pending technology that enables large organizations to approve and control payments for operating expenses via credit cards.
History:
April 2011: Carnegie Mellon CyLab and AllClear ID released "Child Identity Theft" research reporting that child identity theft is a faster-growing crime than adult identity theft.
April 2011: Debix introduced AllClear ID, the first free identity theft protection service for families. AllClear ID offers a free service which monitors data for stolen personal information and provides free identity repair in addition to a premium product.
May 2011: Partnered with Sony for PlayStation Network outage in April.
July 2011: Debix was granted U.S. Patent No. 7,983,979 for its multi-band, multi-factor authentication design.
July 2011: Bo Holland presents Child Identity Theft research to Federal Trade Commission.
March 2012: Debix company name changed to AllClear ID, Inc.
May 2012: Released "Child Identity Theft" research reporting "Criminals are targeting the youngest children. 15% of victims were five years old and younger, an increase of 105% over the 2011 findings".
August 2014: AllClear ID Plus offered to victims of the Home Depot Credit Card breach of 2014.
February 2015: AllClear ID Secure and Pro offered to victims of the Anthem Inc. data breach of 2015.
January 2018: AllClear ID offered to victims of the Guaranteed Rate Data Security Breach of September 14, 2017.
April 2018: AllClear ID offered to Delta Air Lines victims of the [24]7.ai data breach in September – October 2017.
April 2018: The Massachusetts State Tax Department/Child Support Division exposed private data of 6,100 people due to an apparent coding error in the COMETS HD system built by Accenture. The software vendor covered the full cost of AllClear ID for the affected people for 24 months.
**GPR101**
GPR101:
Probable G-protein coupled receptor 101 is a protein that in humans is encoded by the GPR101 gene. G protein-coupled receptors (GPCRs, or GPRs) contain 7 transmembrane domains and transduce extracellular signals through heterotrimeric G proteins.
Clinical significance:
A duplication event in GPR101 is implicated in cases of gigantism and acromegaly.
**Median palatal cyst**
Median palatal cyst:
The median palatal cyst is a rare cyst that may occur anywhere along the median palatal raphe. It may produce swelling because of infection and is treated by excision or surgical removal.
Some investigators now believe that this cyst represents a more posterior presentation of a nasopalatine duct cyst, rather than a separate cystic degeneration of epithelial rests at the line of fusion of the palatine shelves.
**Compulsive behavior**
Compulsive behavior:
Compulsive behavior (or compulsion) is defined as performing an action persistently and repetitively. Compulsive behaviors could be an attempt to make obsessions go away. The act is usually a small, restricted and repetitive behavior, yet not disturbing in a pathological way. Compulsive behaviors are a need to reduce apprehension caused by internal feelings a person wants to abstain from or control. A major cause of compulsive behavior is said to be obsessive–compulsive disorder (OCD). "The main idea of compulsive behavior is that the likely excessive activity is not connected to the purpose to which it appears directed." There are many different types of compulsive behaviors, including shopping, hoarding, eating, gambling, trichotillomania (hair pulling), skin picking, itching, checking, counting, washing, sex, and more. Also, there are cultural examples of compulsive behavior.
Disorders in which it is seen:
Addiction and obsessive–compulsive disorder (OCD) feature compulsive behavior as core features. Addiction is simply a compulsion toward a rewarding stimulus, whereas in OCD a compulsion is one facet of the disorder. The most common compulsions for people with OCD are washing and checking. While not all compulsive behaviors are addictions, some, such as compulsive sexual behavior, have been identified as behavioral addictions.
Occurrence:
About 50 million people in the world today appear to have some type of obsessive-compulsive disorder. Affected people are often more secretive than other people with psychological problems, so the more serious psychological disorders are diagnosed more often. Many who exhibit compulsive behavior will claim it is not a problem and may endure the condition for years before seeking help.
Types:
Shopping: Compulsive shopping is characterized by excessive shopping that causes impairment in a person's life, such as financial problems or an inability to commit to a family. The prevalence rate for this compulsive behavior is 5.8% worldwide, and a majority of the people affected are women (approximately 80%). There is no proven treatment for this type of compulsive behavior.
Types:
Hoarding: Hoarding is characterized by excessive saving of possessions and difficulty throwing these belongings away. Major features of hoarding include not being able to use one's living quarters to their capacity, difficulty moving throughout the home due to the massive number of possessions, and blocked exits that can pose a danger to the hoarder, their family and guests. Items typically saved by hoarders include clothes, newspapers, containers, junk mail, books, and craft items. Hoarders believe these items will be useful in the future or are too sentimental to throw away. Other reasons include fear of losing important documents and information, and object characteristics.
Types:
Eating: Compulsive overeating is the inability to control one's amount of nutritional intake, resulting in excessive weight gain. This overeating is usually a coping mechanism to deal with issues in the individual's life such as stress. Most compulsive over-eaters know that what they are doing is not good for them. The compulsive behavior usually develops in early childhood. People who struggle with compulsive eating usually do not have proper coping skills to deal with the emotional issues that cause their overindulgence in food. They indulge in binges, periods of varying duration in which they eat and/or drink without pause until the compulsion passes or they are unable to consume any more. These binges are usually accompanied by feelings of guilt and shame about using food to avoid emotional stress. This compulsive behavior can have deadly side effects including, but not limited to, binge eating, depression, withdrawal from activities due to weight, and spontaneous dieting. Though this is a very serious compulsive behavior, getting treatment and a proper diet plan can help individuals overcome these behaviors.
Types:
In eating disorders such as anorexia nervosa and bulimia nervosa, the person is preoccupied with weight, body image and caloric intake. Certain behaviors in these disorders are maladaptive and persistent and can be viewed as compulsive: for instance, restricting what one eats, vomiting, abusing laxatives and over-exercising.
A person who engages in these behaviors, however, typically sees them as a necessary mechanism for controlling weight rather than as problematic.
Binge eating may also be considered compulsive behavior; here, a person may realize that they are overeating and regret it some time afterwards.
Types:
Gambling: Compulsive gambling is characterized by the desire to gamble and the inability to resist that desire. The gambling leads to serious personal and social issues in the individual's life. This compulsive behavior usually begins in early adolescence for men and between the ages of 20 and 40 for women. People who have trouble controlling compulsions to gamble usually have an even harder time resisting during stressful periods of life. People who gamble compulsively tend to run into issues with family members, the law, and the places and people they gamble with. The majority of the issues with this compulsive behavior are due to lack of money to continue gambling or to pay off debt from previous gambling. Compulsive gambling can be helped with various forms of treatment, such as cognitive behavioral therapy, self-help or twelve-step programs, and potentially medication.
Types:
Body-focused repetitive behaviours: Trichotillomania is classified as the compulsive pulling of hair from the body. It can be from any place on the body that has hair. This pulling results in bald spots. Most people who have mild trichotillomania can overcome it via concentration and more self-awareness. Those with compulsive skin picking have issues with picking, rubbing, digging, or scratching the skin. These activities are usually aimed at removing unwanted blemishes or marks on the skin. These compulsions tend to leave abrasions and irritation on the skin, which can lead to infection or other issues in healing. These acts tend to be prevalent in times of anxiety, boredom, or stress. Reviews recommend behavioral interventions such as habit reversal training and decoupling.
Types:
Checking, counting, washing, and repeating: Compulsive checking can include compulsively checking items such as locks, switches, and appliances. This type of compulsion usually deals with checking whether harm to oneself or others is possible. Usually, most checking behaviors occur due to wanting to keep others and the individual safe; this condition is also known as obsessive-compulsive behavior.
Types:
People with compulsive counting tend to have a specific number that is of importance in the situation they are in. When a number is considered significant, the individual has a desire to do the behavior such as wiping one's face off the number of times that is significant. Compulsive counting can include instances of counting things such as steps, items, behaviors, and mental counting. Compulsive washing is usually found in individuals that have a fear of contamination. People that have compulsive hand washing behaviors wash their hands repeatedly throughout the day. These hand washings can be ritualized and follow a pattern. People that have problems with compulsive hand washing tend to have problems with chapped or red hands due to the excessive amount of washing done each day. Compulsive repeating is characterized by doing the same activity multiple times over. These activities can include re-reading a part of a book multiple times, re-writing something multiple times, repeating routine activities, or saying the same phrase over and over.
Types:
Sexual behavior: This type of compulsive behavior is characterized by feelings, thoughts, and behaviors about anything related to sex. These thoughts have to be pervasive and cause problems in health, occupation, socialization, or other parts of life. These feelings, thoughts, and behaviors can include normal sexual behaviors or behaviors that are considered illegal and/or morally and culturally unacceptable. This disorder is also known as hypersexuality, hypersexual disorder, nymphomania or sexual addiction. Controversially, some scientists have characterized compulsive sexual behavior as sexual addiction, although no such condition is recognized by mainstream medical diagnostic manuals.
Types:
Talking: Compulsive talking goes beyond the bounds of what is considered to be a socially acceptable amount of talking. The two main factors in determining if someone is a compulsive talker are talking in a continuous manner, only stopping when the other person starts talking, and others perceiving their talking as a problem. Personality traits that have been positively linked to this compulsion include assertiveness, willingness to communicate, self-perceived communication competence, and neuroticism. Studies have shown that most people who are talkaholics are aware of the amount of talking they do, are unable to stop, and do not see it as a problem.
**Uranium–lead dating**
Uranium–lead dating:
Uranium–lead dating, abbreviated U–Pb dating, is one of the oldest and most refined of the radiometric dating schemes. It can be used to date rocks that formed and crystallised from about 1 million years to over 4.5 billion years ago, with routine precisions in the 0.1–1 percent range. The method is usually applied to zircon. This mineral incorporates uranium and thorium atoms into its crystal structure, but strongly rejects lead when forming. As a result, newly formed zircon crystals will contain no lead, meaning that any lead found in the mineral is radiogenic. Since the exact rate at which uranium decays into lead is known, the current ratio of lead to uranium in a sample of the mineral can be used to reliably determine its age.
Uranium–lead dating:
The method relies on two separate decay chains: the uranium series from 238U to 206Pb, with a half-life of 4.47 billion years, and the actinium series from 235U to 207Pb, with a half-life of 710 million years.
Decay routes:
Uranium decays to lead via a series of alpha and beta decays, in which 238U and its daughter nuclides undergo a total of eight alpha and six beta decays, whereas 235U and its daughters only experience seven alpha and four beta decays. The existence of two 'parallel' uranium–lead decay routes (238U to 206Pb and 235U to 207Pb) leads to multiple feasible dating techniques within the overall U–Pb system. The term U–Pb dating normally implies the coupled use of both decay schemes in the 'concordia diagram' (see below).
Decay routes:
However, use of a single decay scheme (usually 238U to 206Pb) leads to the U–Pb isochron dating method, analogous to the rubidium–strontium dating method.
Finally, ages can also be determined from the U–Pb system by analysis of Pb isotope ratios alone. This is termed the lead–lead dating method. Clair Cameron Patterson, an American geochemist who pioneered studies of uranium–lead radiometric dating methods, used it to obtain one of the earliest estimates of the age of the Earth.
Mineralogy:
Although zircon (ZrSiO4) is most commonly used, other minerals such as monazite (see: monazite geochronology), titanite, and baddeleyite can also be used.
Where crystals such as zircon with uranium and thorium inclusions cannot be obtained, uranium–lead dating techniques have also been applied to other minerals such as calcite / aragonite and other carbonate minerals. These types of minerals often produce lower-precision ages than igneous and metamorphic minerals traditionally used for age dating, but are more commonly available in the geologic record.
Mechanism:
During the alpha decay steps, the zircon crystal experiences radiation damage, associated with each alpha decay. This damage is most concentrated around the parent isotope (U and Th), expelling the daughter isotope (Pb) from its original position in the zircon lattice.
In areas with a high concentration of the parent isotope, damage to the crystal lattice is quite extensive, and will often interconnect to form a network of radiation damaged areas. Fission tracks and micro-cracks within the crystal will further extend this radiation damage network.
These fission tracks act as conduits deep within the crystal, providing a method of transport to facilitate the leaching of lead isotopes from the zircon crystal.
Computation:
Under conditions where no lead loss or gain from the outside environment has occurred, the age of the zircon can be calculated by assuming exponential decay of uranium. That is

\(N_{\text{now}} = N_{\text{orig}}\, e^{-\lambda t}\)

where:
- \(N_{\text{now}} = U\) is the number of uranium atoms measured now;
- \(N_{\text{orig}}\) is the number of uranium atoms originally present, equal to the sum of uranium and lead atoms \(U + Pb\) measured now;
- \(\lambda = \lambda_U\) is the decay constant of uranium;
- \(t\) is the age of the zircon, which one wants to determine.

This gives \(U = (U + Pb)\, e^{-\lambda_U t}\), which can be rearranged to

\(\frac{Pb}{U} = e^{\lambda_U t} - 1. \tag{1}\)
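A brief sketch of solving equation (1) for t numerically (the decay constant is derived from the 4.47-billion-year half-life quoted above; the example ratio is an assumption):

```python
import math

LAMBDA_238 = math.log(2) / 4.47e9       # decay constant of 238U, per year

def u_pb_age(pb_over_u):
    """Age in years from a measured radiogenic 206Pb/238U ratio."""
    return math.log(1 + pb_over_u) / LAMBDA_238

print(u_pb_age(0.5) / 1e9)              # ~2.6 billion years for Pb/U = 0.5
```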
Computation:
The more commonly used decay chains of uranium and lead give the following equations:

\(\frac{^{206}\text{Pb}^*}{^{238}\text{U}} = e^{\lambda_{238} t} - 1 \tag{1}\)

\(\frac{^{207}\text{Pb}^*}{^{235}\text{U}} = e^{\lambda_{235} t} - 1 \tag{2}\)

(The notation \(\text{Pb}^*\), sometimes used in this context, refers to radiogenic lead. For zircon, the original lead content can be assumed to be zero, and the notation can be ignored.) These are said to yield concordant ages (t from each of equations 1 and 2). It is these concordant ages, plotted over a series of time intervals, that result in the concordia line. Loss (leakage) of lead from the sample will result in a discrepancy in the ages determined by each decay scheme. This effect is referred to as discordance and is demonstrated in Figure 1. If a series of zircon samples has lost different amounts of lead, the samples generate a discordant line. The upper intercept of the concordia and the discordia line will reflect the original age of formation, while the lower intercept will reflect the age of the event that led to open-system behavior and therefore the lead loss; although there has been some disagreement regarding the meaning of the lower-intercept ages.
Computation:
Undamaged zircon retains the lead generated by radioactive decay of uranium and thorium up to very high temperatures (about 900 °C), though accumulated radiation damage within zones of very high uranium can lower this temperature substantially. Zircon is very chemically inert and resistant to mechanical weathering – a mixed blessing for geochronologists, as zones or even whole crystals can survive melting of their parent rock with their original uranium–lead age intact. Thus, zircon crystals with prolonged and complicated histories can contain zones of dramatically different ages (usually with the oldest zone forming the core, and the youngest zone forming the rim of the crystal), and so are said to demonstrate "inherited characteristics". Unraveling such complexities (which can also exist within other minerals, depending on their maximum lead-retention temperature) generally requires in situ micro-beam analysis using, for example, ion microprobe (SIMS), or laser ICP-MS.
**ENOX2**
ENOX2:
ENOX2 is a gene located on the long arm of the X chromosome in humans. The gene encodes the protein Ecto-NOX disulfide-thiol exchanger 2, a member of the NOX family of NADPH oxidases. Ecto-NOX disulfide-thiol exchanger 2 is a growth-related cell surface protein. It was identified because it reacts with the monoclonal antibody K1 in cells, such as the ovarian carcinoma line OVCAR-3, also expressing the CAKI surface glycoprotein. The encoded protein has two enzymatic activities: catalysis of hydroquinone or NADH oxidation, and protein disulfide interchange. The two activities alternate with a period length of about 24 minutes. The encoded protein also displays prion-like properties. Two transcript variants encoding different isoforms have been found for this gene.
Gene Location:
The human ENOX2 gene is located on the long (q) arm of the X chromosome, at region 2 band 6 sub-band 1, from base pair 130,622,330 to 130,903,317 (build GRCh38.p7). The gene is conserved in chimpanzee, Rhesus monkey, dog, mouse, rat, chicken, and zebrafish.
Function:
ENOX2 and related NOX proteins exhibit two distinct oscillating functions: the oxidation of NADH to NAD+ and a protein disulfide isomerase-like activity, unprecedented in the biochemical literature. Regarding NADH oxidation, the protein has a specific activity of 10–20 μmol/min/mg of protein with a turnover number of 200–500. The oscillations are independent of temperature, with a period of 24 minutes, completing 60 cycles in a 24-hour day. The period of oscillation changes to 22 and 26 minutes in the cancer-related (tNOX) and age-related (arNOX) forms, respectively. This regular oscillation is attributed to the maintenance of the biological clock.
Interactions: The NADH oxidase activity of ENOX2 has been shown to be stimulated by various hormones and growth factors, including insulin, EGF, transferrin, lactoferrin, vasopressin and glucagon. This stimulation is not seen in protein samples recovered from cancer cells, suggesting the regular NADH oxidase activity of ENOX2 is decoupled in cancer. ENOX2 also has a number of protein-protein interactions, with ENOX1 and SOX2, among others.
Function:
Cell growth: Numerous studies in the 1990s correlated NADH oxidase activity with cell growth. Conditions which stimulated cell growth also stimulated NADH oxidase activity, and conditions that inhibited cell growth inhibited NADH oxidase activity. Further experimental evidence showed that the rate of cell enlargement oscillates within the 24-minute oscillation of ENOX function. Maximum cell growth rates correspond to the portion of the ENOX cycle involved in protein disulfide bridge formation. Theories suggest that ENOX is responsible for the breakup and formation of disulfide bonds in membrane proteins; thus maximum cell growth coincides with maximum protein disulfide interchange activity.
Role In Disease:
Cancer: The cancer-associated, drug-responsive variant of ENOX, tNOX, arises as a splice variant and is found on the cell surface of human cancers. tNOX exhibits a periodicity of 22 minutes, compared to the native 24 minutes, and can be inhibited by a number of anticancer drugs without affecting the native ENOX. These properties of tNOX are being used to develop early detection and intervention mechanisms for human cancers.
**Single-occupancy vehicle**
Single-occupancy vehicle:
A single-occupancy vehicle (SOV) is a privately operated vehicle whose only occupant is the driver. The drivers of SOVs use their vehicles primarily for personal travel, daily commuting and for running errands. The types of vehicles include, but are not limited to, sport utility vehicles (SUVs), light-duty trucks, and any combination thereof, along with all the various van and car sizes, but would generally be taken to exclude human-powered vehicles such as bicycles. This term is used by transportation engineers and planners. SOVs contrast with high-occupancy vehicles (HOVs), which have two or more occupants. Note that SOV in this context refers to occupancy status and usage, not to a type of vehicle.
**Fearless Photog**
Fearless Photog:
Fearless Photog is a character created for Mattel's Masters of the Universe toyline. A heroic warrior with a robotic camera-shaped head, he has the ability to ‘focus in’ on his enemies and drain their strength. His chest plate displays silhouettes of his defeated enemies.
Development:
In 1986, Mattel sponsored a contest for children to send in designs for new characters. Five finalists were chosen, and people were then allowed to vote for their favorite. In the last page of the Spring 1986 issue of The Masters of the Universe Magazine, they announced 12-year-old Nathan Bitner as the winner of the contest with his submission of Fearless Photog. Nathan was awarded a $100,000 scholarship, plus a five-day trip to Disneyland. Despite the contest's premise, however, Fearless Photog never actually went into production. While claiming they no longer have the rights to produce an action figure of this character, Mattel gave it a small nod in 2011 on the bio of its Masters of the Universe Classics figure Captain Glenn. At San Diego Comic-Con International 2011, Mattel later revealed that Fearless Photog would finally receive a figure as the first entry in their six-figure Masters of the Universe Classics 30th Anniversary series.
**Categorical distribution**
Categorical distribution:
In probability theory and statistics, a categorical distribution (also called a generalized Bernoulli distribution or multinoulli distribution) is a discrete probability distribution that describes the possible results of a random variable that can take on one of K possible categories, with the probability of each category separately specified. There is no innate underlying ordering of these outcomes, but numerical labels are often attached for convenience in describing the distribution (e.g. 1 to K). The K-dimensional categorical distribution is the most general distribution over a K-way event; any other discrete distribution over a size-K sample space is a special case. The parameters specifying the probabilities of each possible outcome are constrained only by the fact that each must be in the range 0 to 1, and all must sum to 1.
Categorical distribution:
The categorical distribution is the generalization of the Bernoulli distribution for a categorical random variable, i.e. for a discrete variable with more than two possible outcomes, such as the roll of a die. On the other hand, the categorical distribution is a special case of the multinomial distribution, in that it gives the probabilities of potential outcomes of a single drawing rather than multiple drawings.
Terminology:
Occasionally, the categorical distribution is termed the "discrete distribution". However, this properly refers not to one particular family of distributions but to a general class of distributions.
Terminology:
In some fields, such as machine learning and natural language processing, the categorical and multinomial distributions are conflated, and it is common to speak of a "multinomial distribution" when a "categorical distribution" would be more precise. This imprecise usage stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range 1 to K; in this form, a categorical distribution is equivalent to a multinomial distribution for a single observation (see below).
Terminology:
However, conflating the categorical and multinomial distributions can lead to problems. For example, in a Dirichlet-multinomial distribution, which arises commonly in natural language processing models (although not usually with this name) as a result of collapsed Gibbs sampling where Dirichlet distributions are collapsed out of a hierarchical Bayesian model, it is very important to distinguish categorical from multinomial. The joint distribution of the same variables with the same Dirichlet-multinomial distribution has two different forms depending on whether it is characterized as a distribution whose domain is over individual categorical nodes or over multinomial-style counts of nodes in each particular category (similar to the distinction between a set of Bernoulli-distributed nodes and a single binomial-distributed node). Both forms have very similar-looking probability mass functions (PMFs), which both make reference to multinomial-style counts of nodes in a category. However, the multinomial-style PMF has an extra factor, a multinomial coefficient, that is a constant equal to 1 in the categorical-style PMF. Confusing the two can easily lead to incorrect results in settings where this extra factor is not constant with respect to the distributions of interest. The factor is frequently constant in the complete conditionals used in Gibbs sampling and the optimal distributions in variational methods.
Formulating distributions:
A categorical distribution is a discrete probability distribution whose sample space is the set of k individually identified items. It is the generalization of the Bernoulli distribution for a categorical random variable.
Formulating distributions:
In one formulation of the distribution, the sample space is taken to be a finite sequence of integers. The exact integers used as labels are unimportant; they might be {0, 1, ..., k − 1} or {1, 2, ..., k} or any other arbitrary set of values. In the following descriptions, we use {1, 2, ..., k} for convenience, although this disagrees with the convention for the Bernoulli distribution, which uses {0, 1}. In this case, the probability mass function f is:

\(f(x = i \mid p) = p_i\),

where \(p = (p_1, \ldots, p_k)\), \(p_i\) represents the probability of seeing element i, and \(\sum_{i=1}^{k} p_i = 1\).

Another formulation that appears more complex but facilitates mathematical manipulations is as follows, using the Iverson bracket:

\(f(x \mid p) = \prod_{i=1}^{k} p_i^{[x=i]}\),

where \([x = i]\) evaluates to 1 if \(x = i\) and to 0 otherwise. There are various advantages of this formulation:
- It is easier to write out the likelihood function of a set of independent identically distributed categorical variables.
- It connects the categorical distribution with the related multinomial distribution.
- It shows why the Dirichlet distribution is the conjugate prior of the categorical distribution, and allows the posterior distribution of the parameters to be calculated.

Yet another formulation makes explicit the connection between the categorical and multinomial distributions by treating the categorical distribution as a special case of the multinomial distribution in which the parameter n of the multinomial distribution (the number of sampled items) is fixed at 1. In this formulation, the sample space can be considered to be the set of 1-of-K encoded random vectors x of dimension k having the property that exactly one element has the value 1 and the others have the value 0. The particular element having the value 1 indicates which category has been chosen. The probability mass function f in this formulation is:

\(f(x \mid p) = \prod_{i=1}^{k} p_i^{x_i}\),

where \(p_i\) represents the probability of seeing element i and \(\sum_i p_i = 1\). This is the formulation adopted by Bishop.
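A small sketch tying the two PMF formulations together (array values and function names are illustrative assumptions):

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])        # category probabilities, summing to 1

def pmf_label(i, p):
    """Integer-label formulation: f(x = i | p) = p_i, categories numbered 1..k."""
    return p[i - 1]

def pmf_one_hot(x, p):
    """1-of-K formulation: f(x | p) = prod_i p_i ** x_i."""
    return float(np.prod(p ** x))

x = np.array([0, 1, 0])              # one-hot encoding of category 2
print(pmf_label(2, p), pmf_one_hot(x, p))   # both print 0.5
```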
Properties:
The distribution is completely given by the probabilities associated with each number i: \(p_i = P(X = i)\), i = 1,...,k, where \(\sum_i p_i = 1\). The possible sets of probabilities are exactly those in the standard \((k-1)\)-dimensional simplex; for k = 2 this reduces to the possible probabilities of the Bernoulli distribution being the 1-simplex, \(p_1 + p_2 = 1\), \(0 \le p_1, p_2 \le 1\).
The distribution is a special case of a "multivariate Bernoulli distribution" in which exactly one of the k 0-1 variables takes the value one.
Properties:
\(\operatorname E[x] = p\), where x is the 1-of-K encoded random vector. Let X be the realisation from a categorical distribution. Define the random vector Y as composed of the elements \(Y_i = I(X = i)\), where I is the indicator function. Then Y has a distribution which is a special case of the multinomial distribution with parameter \(n = 1\). The sum of n independent and identically distributed such random variables Y constructed from a categorical distribution with parameter p is multinomially distributed with parameters n and p.
Properties:
The conjugate prior distribution of a categorical distribution is a Dirichlet distribution. See the section below for more discussion.
The sufficient statistic from n independent observations is the set of counts (or, equivalently, proportion) of observations in each category, where the total number of trials (=n) is fixed.
The indicator function of an observation having a value i, equivalent to the Iverson bracket function [x=i] or the Kronecker delta function δxi, is Bernoulli distributed with parameter pi.
Bayesian inference using conjugate prior:
In Bayesian statistics, the Dirichlet distribution is the conjugate prior distribution of the categorical distribution (and also the multinomial distribution). This means that in a model consisting of a data point having a categorical distribution with unknown parameter vector p, and (in standard Bayesian style) we choose to treat this parameter as a random variable and give it a prior distribution defined using a Dirichlet distribution, then the posterior distribution of the parameter, after incorporating the knowledge gained from the observed data, is also a Dirichlet. Intuitively, in such a case, starting from what is known about the parameter prior to observing the data point, knowledge can then be updated based on the data point, yielding a new distribution of the same form as the old one. As such, knowledge of a parameter can be successively updated by incorporating new observations one at a time, without running into mathematical difficulties.
Bayesian inference using conjugate prior:
Formally, this can be expressed as follows. Given a model

\(\boldsymbol\alpha = (\alpha_1, \ldots, \alpha_K)\) — concentration hyperparameter
\(\mathbf p \mid \boldsymbol\alpha = (p_1, \ldots, p_K) \sim \operatorname{Dir}(K, \boldsymbol\alpha)\)
\(\mathbb X \mid \mathbf p = (x_1, \ldots, x_N) \sim \operatorname{Cat}(K, \mathbf p)\)

then the following holds:

\(c_i = \text{number of occurrences of category } i\), so that
\(\mathbf p \mid \mathbb X, \boldsymbol\alpha \sim \operatorname{Dir}(K, c_1 + \alpha_1, \ldots, c_K + \alpha_K)\)

This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of N samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
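A minimal sketch of this conjugate update (the prior, data and names are illustrative assumptions):

```python
import numpy as np

alpha = np.array([1.0, 1.0, 1.0])    # flat Dirichlet prior over K = 3 categories
data = [0, 2, 2, 1, 2, 0, 2]         # N = 7 categorical observations

counts = np.bincount(data, minlength=len(alpha))   # c = (2, 1, 4)
posterior_alpha = alpha + counts                   # Dir(K, c + alpha)
print(posterior_alpha)                             # [3. 2. 5.]
```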
Bayesian inference using conjugate prior:
Further intuition comes from the expected value of the posterior distribution (see the article on the Dirichlet distribution):

\(\operatorname E[p_i \mid \mathbb X, \boldsymbol\alpha] = \frac{c_i + \alpha_i}{N + \sum_k \alpha_k}\)

This says that the expected probability of seeing a category i among the various discrete distributions generated by the posterior distribution is simply equal to the proportion of occurrences of that category actually seen in the data, including the pseudocounts in the prior distribution. This makes a great deal of intuitive sense: if, for example, there are three possible categories, and category 1 is seen in the observed data 40% of the time, one would expect on average to see category 1 40% of the time in the posterior distribution as well.
Bayesian inference using conjugate prior:
(This intuition is ignoring the effect of the prior distribution. Furthermore, the posterior is a distribution over distributions. The posterior distribution in general describes the parameter in question, and in this case the parameter itself is a discrete probability distribution, i.e. the actual categorical distribution that generated the data. For example, if 3 categories in the ratio 40:5:55 are in the observed data, then ignoring the effect of the prior distribution, the true parameter – i.e. the true, underlying distribution that generated our observed data – would be expected to have the average value of (0.40,0.05,0.55), which is indeed what the posterior reveals. However, the true distribution might actually be (0.35,0.07,0.58) or (0.42,0.04,0.54) or various other nearby possibilities. The amount of uncertainty involved here is specified by the variance of the posterior, which is controlled by the total number of observations – the more data observed, the less uncertainty about the true parameter.)

(Technically, the prior parameter \(\alpha_i\) should actually be seen as representing \(\alpha_i - 1\) prior observations of category i. Then, the updated posterior parameter \(c_i + \alpha_i\) represents \(c_i + \alpha_i - 1\) posterior observations. This reflects the fact that a Dirichlet distribution with \(\boldsymbol\alpha = (1, 1, \ldots)\) has a completely flat shape — essentially, a uniform distribution over the simplex of possible values of p. Logically, a flat distribution of this sort represents total ignorance, corresponding to no observations of any sort. However, the mathematical updating of the posterior works fine if we ignore the \(-1\) term and simply think of the α vector as directly representing a set of pseudocounts. Furthermore, doing this avoids the issue of interpreting \(\alpha_i\) values less than 1.)

MAP estimation: The maximum-a-posteriori estimate of the parameter p in the above model is simply the mode of the posterior Dirichlet distribution, i.e.,

\(\operatorname*{arg\,max}_{\mathbf p}\, p(\mathbf p \mid \mathbb X):\quad p_i = \frac{\alpha_i + c_i - 1}{\sum_i (\alpha_i + c_i - 1)}, \qquad \forall i:\ \alpha_i + c_i > 1\)

In many practical applications, the only way to guarantee the condition that \(\forall i:\ \alpha_i + c_i > 1\) is to set \(\alpha_i > 1\) for all i.
Bayesian inference using conjugate prior:
Marginal likelihood: In the above model, the marginal likelihood of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution:

\(p(\mathbb X \mid \boldsymbol\alpha) = \int_{\mathbf p} p(\mathbb X \mid \mathbf p)\, p(\mathbf p \mid \boldsymbol\alpha)\, d\mathbf p = \frac{\Gamma\!\left(\sum_k \alpha_k\right)}{\Gamma\!\left(N + \sum_k \alpha_k\right)} \prod_{k=1}^{K} \frac{\Gamma(c_k + \alpha_k)}{\Gamma(\alpha_k)}\)

This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.
Bayesian inference using conjugate prior:
Posterior predictive distribution: The posterior predictive distribution of a new observation in the above model is the distribution that a new observation \(\tilde x\) would take given the set \(\mathbb X\) of N categorical observations. As shown in the Dirichlet-multinomial distribution article, it has a very simple form:

\(p(\tilde x = i \mid \mathbb X, \boldsymbol\alpha) = \int_{\mathbf p} p(\tilde x = i \mid \mathbf p)\, p(\mathbf p \mid \mathbb X, \boldsymbol\alpha)\, d\mathbf p = \frac{c_i + \alpha_i}{N + \sum_k \alpha_k} = \operatorname E[p_i \mid \mathbb X, \boldsymbol\alpha] \propto c_i + \alpha_i.\)
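Continuing the hypothetical numbers from the earlier sketch, the posterior predictive is just the normalized pseudocount vector:

```python
import numpy as np

alpha = np.array([1.0, 1.0, 1.0])
counts = np.array([2, 1, 4])
N = counts.sum()

posterior_predictive = (counts + alpha) / (N + alpha.sum())
print(posterior_predictive)   # [0.3 0.2 0.5], which equals E[p_i | X, alpha]
```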
Bayesian inference using conjugate prior:
There are various relationships between this formula and the previous ones: The posterior predictive probability of seeing a particular category is the same as the relative proportion of previous observations in that category (including the pseudo-observations of the prior). This makes logical sense — intuitively, we would expect to see a particular category according to the frequency already observed of that category.
Bayesian inference using conjugate prior:
The posterior predictive probability is the same as the expected value of the posterior distribution. This is explained more below.
Bayesian inference using conjugate prior:
As a result, this formula can be expressed as simply "the posterior predictive probability of seeing a category is proportional to the total observed count of that category", or as "the expected count of a category is the same as the total observed count of the category", where "observed count" is taken to include the pseudo-observations of the prior. The reason for the equivalence between posterior predictive probability and the expected value of the posterior distribution of p is evident with re-examination of the above formula. As explained in the posterior predictive distribution article, the formula for the posterior predictive probability has the form of an expected value taken with respect to the posterior distribution:

\(\begin{aligned} p(\tilde x = i \mid \mathbb X, \boldsymbol\alpha) &= \int_{\mathbf p} p(\tilde x = i \mid \mathbf p)\, p(\mathbf p \mid \mathbb X, \boldsymbol\alpha)\, d\mathbf p \\ &= \operatorname E_{\mathbf p \mid \mathbb X, \boldsymbol\alpha}\left[ p(\tilde x = i \mid \mathbf p) \right] \\ &= \operatorname E_{\mathbf p \mid \mathbb X, \boldsymbol\alpha}[p_i] \\ &= \operatorname E[p_i \mid \mathbb X, \boldsymbol\alpha]. \end{aligned}\)
Bayesian inference using conjugate prior:
The crucial line above is the third. The second follows directly from the definition of expected value. The third line is particular to the categorical distribution, and follows from the fact that, in the categorical distribution specifically, the expected value of seeing a particular value i is directly specified by the associated parameter pi. The fourth line is simply a rewriting of the third in a different notation, using the notation farther up for an expectation taken with respect to the posterior distribution of the parameters.
Bayesian inference using conjugate prior:
Observe data points one by one and each time consider their predictive probability before observing the data point and updating the posterior. For any given data point, the probability of that point assuming a given category depends on the number of data points already in that category. In this scenario, if a category has a high frequency of occurrence, then new data points are more likely to join that category — further enriching the same category. This type of scenario is often termed a preferential attachment (or "rich get richer") model. This models many real-world processes, and in such cases the choices made by the first few data points have an outsize influence on the rest of the data points.
Bayesian inference using conjugate prior:
Posterior conditional distribution In Gibbs sampling, one typically needs to draw from conditional distributions in multi-variable Bayes networks where each variable is conditioned on all the others. In networks that include categorical variables with Dirichlet priors (e.g. mixture models and models including mixture components), the Dirichlet distributions are often "collapsed out" (marginalized out) of the network, which introduces dependencies among the various categorical nodes dependent on a given prior (specifically, their joint distribution is a Dirichlet-multinomial distribution). One of the reasons for doing this is that in such a case, the distribution of one categorical node given the others is exactly the posterior predictive distribution of the remaining nodes.
Bayesian inference using conjugate prior:
That is, for a set of nodes \(\mathbb X\), if the node in question is denoted as \(x_n\) and the remainder as \(\mathbb X^{(-n)}\), then

\(p(x_n = i \mid \mathbb X^{(-n)}, \boldsymbol\alpha) = \frac{c_i^{(-n)} + \alpha_i}{N - 1 + \sum_i \alpha_i} \propto c_i^{(-n)} + \alpha_i\)

where \(c_i^{(-n)}\) is the number of nodes having category i among the nodes other than node n.
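A sketch of one collapsed-Gibbs resampling step using this conditional (assignments and the seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([1.0, 1.0, 1.0])
z = np.array([0, 2, 2, 1, 2, 0, 2])   # current category assignments of N nodes

n = 3                                  # index of the node being resampled
counts_minus_n = np.bincount(np.delete(z, n), minlength=len(alpha))
weights = counts_minus_n + alpha       # proportional to c_i^(-n) + alpha_i
z[n] = rng.choice(len(alpha), p=weights / weights.sum())
```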
Sampling:
There are a number of methods, but the most common way to sample from a categorical distribution uses a type of inverse transform sampling. Assume a distribution is expressed as "proportional to" some expression, with unknown normalizing constant. Before taking any samples, one prepares some values as follows:
1. Compute the unnormalized value of the distribution for each category.
2. Sum them up and divide each value by this sum, in order to normalize them.
3. Impose some sort of order on the categories (e.g. by an index that runs from 1 to k, where k is the number of categories).
4. Convert the values to a cumulative distribution function (CDF) by replacing each value with the sum of all of the previous values. This can be done in time O(k). The resulting value for the first category will be 0.

Then, each time it is necessary to sample a value:
1. Pick a uniformly distributed number between 0 and 1.
2. Locate the greatest number in the CDF whose value is less than or equal to the number just chosen. This can be done in time O(log(k)) by binary search.
3. Return the category corresponding to this CDF value.

If it is necessary to draw many values from the same categorical distribution, the following approach is more efficient. It draws n samples in O(n) time (assuming an O(1) approximation is used to draw values from the binomial distribution).
Sampling:
function draw_categorical(n)  // where n is the number of samples to draw from the categorical distribution
    r = 1
    s = 0
    for i from 1 to k  // where k is the number of categories
        v = draw from a binomial(n, p[i] / r) distribution  // where p[i] is the probability of category i
        for j from 1 to v
            z[s++] = i  // where z is an array in which the results are stored
        n = n - v
        r = r - p[i]
    shuffle (randomly re-order) the elements in z
    return z

Sampling via the Gumbel distribution:
In machine learning it is typical to parametrize the categorical distribution \(p_1, \ldots, p_k\) via an unconstrained representation in \(\mathbb R^k\), whose components are given by:

\(\gamma_i = \log p_i + \alpha\)

where \(\alpha\) is any real constant. Given this representation, \(p_1, \ldots, p_k\) can be recovered using the softmax function, which can then be sampled using the techniques described above. There is however a more direct sampling method that uses samples from the Gumbel distribution. Let \(g_1, \ldots, g_k\) be k independent draws from the standard Gumbel distribution; then

\(c = \operatorname*{arg\,max}_i (\gamma_i + g_i)\)

will be a sample from the desired categorical distribution. (If \(u_i\) is a sample from the standard uniform distribution, then \(g_i = -\log(-\log u_i)\) is a sample from the standard Gumbel distribution.)
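A short numpy sketch of the Gumbel-max trick just described (probabilities, offset and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = np.log(np.array([0.2, 0.5, 0.3])) + 7.0   # log p_i plus any constant alpha

u = rng.uniform(size=gamma.shape)
g = -np.log(-np.log(u))                # standard Gumbel draws
category = int(np.argmax(gamma + g))   # one sample from the categorical distribution
```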
**Fitness approximation**
Fitness approximation:
Fitness approximation aims to approximate the objective or fitness functions in evolutionary optimization by building machine learning models based on data collected from numerical simulations or physical experiments. The machine learning models used for fitness approximation are also known as meta-models or surrogates, and evolutionary optimization based on approximated fitness evaluations is also known as surrogate-assisted evolutionary approximation. Fitness approximation in evolutionary optimization can be seen as a sub-area of data-driven evolutionary optimization.
Approximate models in function optimization:
Motivation
In many real-world optimization problems, including engineering problems, the number of fitness function evaluations needed to obtain a good solution dominates the optimization cost. In order to obtain efficient optimization algorithms, it is crucial to use the prior information gained during the optimization process. Conceptually, a natural approach to utilizing this prior information is building a model of the fitness function to assist in the selection of candidate solutions for evaluation. A variety of techniques for constructing such a model – often referred to as surrogates, metamodels or approximation models – for computationally expensive optimization problems have been considered.
Approximate models in function optimization:
Approaches
Common approaches to constructing approximate models, based on learning and interpolation from known fitness values of a small population, include:
Low-degree polynomials and regression models
Fourier surrogate modeling
Artificial neural networks, including multilayer perceptrons, radial basis function networks, and support vector machines
Due to the limited number of training samples and high dimensionality encountered in engineering design optimization, constructing a globally valid approximate model remains difficult. As a result, evolutionary algorithms using such approximate fitness functions may converge to local optima. Therefore, it can be beneficial to selectively use the original fitness function together with the approximate model.
Adaptive fuzzy fitness granulation:
Adaptive fuzzy fitness granulation (AFFG) is a proposed solution for constructing an approximate model of the fitness function in place of traditional computationally expensive large-scale problem analysis (L-SPA), as in the finite element method, or iterative fitting of a Bayesian network structure.
Adaptive fuzzy fitness granulation:
In adaptive fuzzy fitness granulation, an adaptive pool of solutions, represented by fuzzy granules with exactly computed fitness values, is maintained. If a new individual is sufficiently similar to an existing fuzzy granule, then that granule's fitness is used instead as an estimate; otherwise, the individual is added to the pool as a new fuzzy granule. The pool size as well as each granule's radius of influence are adaptive and grow or shrink depending on the utility of each granule and the overall population fitness. To encourage fewer exact function evaluations, each granule's radius of influence is initially large and is gradually shrunk in later stages of evolution. This encourages more exact fitness evaluations when competition is fierce among more similar, converging solutions. Furthermore, to prevent the pool from growing too large, granules that are not used are gradually eliminated; a minimal sketch of this bookkeeping follows.
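The following is a simplified sketch of the granule pool under assumed names; the adaptive radius updates and granule elimination described above are omitted.

```python
import math

def affg_fitness(pool, candidate, exact_fitness, init_radius=1.0):
    """Return an estimated or exact fitness for a candidate solution.

    pool: list of granules, each {"center": point, "fitness": value, "radius": r}.
    """
    for g in pool:
        if math.dist(candidate, g["center"]) <= g["radius"]:
            return g["fitness"]                  # sufficiently similar: reuse the estimate
    f = exact_fitness(candidate)                 # expensive exact evaluation
    pool.append({"center": tuple(candidate), "fitness": f, "radius": init_radius})
    return f
```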
Adaptive fuzzy fitness granulation:
Additionally, AFFG mirrors two features of human cognition: (a) granularity and (b) similarity analysis. This granulation-based fitness approximation scheme has been applied to various engineering optimization problems, including detecting hidden information in a watermarked signal, as well as several structural optimization problems.
**Absorption (psychology)**
Absorption (psychology):
Absorption is a disposition or personality trait in which a person becomes absorbed in their mental imagery, particularly fantasy. This trait thus correlates highly with a fantasy prone personality. The original research on absorption was by American psychologist Auke Tellegen. The construct of absorption was developed in order to relate individual differences in hypnotisability to broader aspects of personality. Absorption has a variable correlation with hypnotisability (r = 0.13–0.89) perhaps because in addition to broad personality dispositions, situational factors play an important role in performance on tests of hypnotic susceptibility. Absorption is one of the traits assessed in the Multidimensional Personality Questionnaire.
Measurement:
Absorption is most commonly measured by the Tellegen Absorption Scale (TAS). Several versions of this scale are available, the most recent being by Graham Jamieson, who provides a copy of his modified scale. The TAS comprises nine content clusters or subscales:
responsiveness to engaging stimuli
responsiveness to inductive stimuli
imagistic thought
ability to summon vivid and suggestive images
cross-modal experiences (e.g. synesthesia)
absorption in thoughts and imaginings
vivid memories of the past
episodes of expanded awareness
altered states of consciousness
A 1991 study by Glisky et al. concluded that responsiveness to the engaging or inductive stimuli subscales of the TAS were more strongly related to hypnotisability than were imagistic thought, episodes of expanded awareness, or absorption in thoughts and imaginings. A revised version of the TAS has been included in Tellegen's Multidimensional Personality Questionnaire (MPQ), in which it is considered both a primary and a broad trait. In the MPQ, absorption has two subscales, called "sentient" and "prone to imaginative and altered states" respectively.
Measurement:
Tellegen has assigned the copyright of the TAS to the University of Minnesota Press (UMP). From the 1990s it was generally believed that the TAS was in the public domain, and various improved versions were circulated. Recently, however, the UMP has reasserted its copyright; it regards these later versions as unauthorised and disputes whether they are in fact improvements.
Relationship to other personality traits:
Absorption is strongly correlated with openness to experience. Studies using factor analysis have suggested that the fantasy, aesthetics, and feelings facets of the NEO PI-R Openness to Experience scale are closely related to absorption and predict hypnotisability, whereas the remaining three facet scales of ideas, actions, and values are largely unrelated to these constructs. Absorption is unrelated to extraversion or neuroticism. One study found a positive correlation between absorption and need for cognition. Absorption has a strong relationship to self-transcendence in the Temperament and Character Inventory.
Emotional experience:
Absorption can facilitate the experience of both positive and negative emotions. Positive experiences facilitated by absorption include the enjoyment of music, art, and natural beauty (e.g. sunsets) and pleasant forms of daydreaming. Absorption has also been linked to forms of maladjustment, such as nightmare frequency and anxiety sensitivity (fear of one's own anxiety symptoms), and dissociative symptoms. Absorption may act to amplify minor somatic symptoms, leading to an increased risk of conditions associated with hypersensitivity to internal bodily sensations, such as somatoform disorders and panic disorder. People may have a particular risk of the aforementioned problems when they are prone to both high absorption and to personality traits associated with negative emotionality.
Altered states of consciousness:
A core feature of absorption is an experience of focused attention wherein: "objects of absorbed attention acquire an importance and intimacy that are normally reserved for the self and may, therefore, acquire a temporary self-like quality. These object identifications have mystical overtones." This capacity for focused attention facilitates the experience of altered states of consciousness. In addition to individual differences in hypnotizability, absorption is associated with differential responses to other procedures for inducing altered states of consciousness, including meditation, marijuana use, and biofeedback. A review of studies on differential response to the drug psilocybin found that absorption had the largest effect of all the psychological variables assessed on the intensity of individual experiences of altered states of consciousness. Absorption was strongly associated with overall consciousness alteration and with mystical-type experiences and visual effects induced by psilocybin. Researchers have suggested that individual differences in both absorption and responsiveness to hallucinogenic drugs could be related to the binding potential of serotonin receptors (specifically 5-HT2A), which are the main site of action of classic hallucinogens, such as LSD and psilocybin. A series of studies has found that people higher in absorption have a greater propensity towards having religious experiences (also known as spiritual experiences), which may have a sensory-like character (e.g., reporting the Holy Spirit "rush" through them). Higher levels of absorption have been found to predict people reporting more and stronger mystical experiences when wearing a placebo version of a God helmet—that is, a helmet that supposedly induces spiritual experiences through magnetic stimulation of the temporal lobes of the brain but in fact provides no magnetic stimulation. Furthermore, in most studies people higher in absorption report experiencing greater levels of awe when viewing vast landscapes, art exhibitions, and other potentially awe-inducing things. Given these findings on spiritual experiences, placebo god helmets, and awe, the authors of a 2019 research paper suggest that higher levels of absorption may give individuals a greater "talent" for "experienc[ing] as real what must be imagined". The authors argue that this is a key aspect of most religious or spiritual traditions, while noting that they are not necessarily dismissing the reality of what is reported in spiritual experiences.
Altered states of consciousness:
Dream recall
Research has found that frequency of dream recall is associated with absorption and related personality traits, such as openness to experience and proneness to dissociation. A proposed explanation is the continuity model of human consciousness. This model proposes that people who are prone to vivid and unusual experiences during the day, such as fantasy and daydreaming, will tend to have vivid and memorable dream content, and hence will be more likely to remember their dreams.
**Nomological network**
Nomological network:
A nomological network (or nomological net) is a representation of the concepts (constructs) of interest in a study, their observable manifestations, and the interrelationships between these. The term "nomological" derives from the Greek, meaning "lawful", or, in philosophy of science terms, "law-like". It was Cronbach and Meehl's view of construct validity that in order to provide evidence that a measure has construct validity, a nomological network must be developed for its measure. The necessary elements of a nomological network are:
At least two constructs;
One or more theoretical propositions, specifying linkages between constructs, for example: "As age increases, memory loss increases".
Nomological network:
Correspondence rules, allowing each construct to be measured empirically. Such a rule is said to "operationalize" the construct, as for example in the operationalization: "age" is measured by asking "How old are you?" Empirical linkages represent hypotheses before data collection and empirical generalizations after data collection.
Validity evidence based on nomological validity is a general form of construct validity. It is the degree to which a construct behaves as it should within a system of related constructs (the nomological network). Nomological networks are used in theory development and take a modernist approach.
**Breakaway rim**
Breakaway rim:
A breakaway rim is a basketball rim that contains a hinge and a spring at the point where it attaches to the backboard so that it can bend downward when a player dunks a basketball, and then quickly snaps back into a horizontal position when the player releases it. It allows players to dunk the ball without shattering the backboard, and it reduces the possibility of wrist injuries. Breakaway rims were invented in the mid-1970s and are now an essential element of high-level basketball.
Breakaway rim:
In the early days of basketball, dunking was considered ungentlemanly, and was rarely used outside of practice or warm-up drills. A broken backboard or distorted rim could delay a game for hours. During the 1970s, however, players like Julius Erving and David Thompson of the American Basketball Association popularized the dunk with their athletic flights to the basket, increasing the demand for flexible rims.
While several men claim to have created the breakaway rim, Arthur Ehrat is recognized as the inventor by the Smithsonian Institution's Lemelson Center for the Study of Invention & Innovation. A resident of Lowder, Illinois, Ehrat worked at a grain elevator for most of his life and barely knew anything about basketball. In 1975, his nephew, an assistant basketball coach at Saint Louis University, asked him to help design a rim that could support slam dunks. Using a spring from a John Deere cultivator, Ehrat designed a rim that could bend and spring back after 125 pounds of force were applied to it. He called his device "The Rebounder". In 1982, the US patent office accepted his 1976 application to patent a "deformation-preventing swingable mount for basketball goals". The breakaway rim was first used by the NCAA during the 1978 Final Four in St. Louis. Although Darryl Dawkins shattered two backboards with his dunks in 1979, the old-style bolted rim structure was not phased out of the NBA until the 1981–82 season, when breakaway rims debuted as a uniform equipment upgrade.
**MicroDNA**
MicroDNA:
MicroDNA is the most abundant subtype of extrachromosomal circular DNA (eccDNA) in humans, typically ranging from 200 to 400 base pairs in length and enriched in non-repetitive genomic sequences with a high density of exons. Additionally, microDNA has been found to come from regions with CpG islands, which are commonly found within the 5' and 3' UTRs. Because it is produced from regions of active transcription, it is hypothesized that microDNA may be formed as a by-product of transcriptional DNA damage repair. MicroDNA is also thought to arise from other DNA repair pathways, mainly because the parental sequences of microDNA have 2- to 15-bp direct repeats at their ends, implicating replication slippage repair. Although microDNA was only recently discovered, the role it plays in and out of the cell is still not completely understood. However, microDNA is currently thought to affect cellular homeostasis through transcription factor binding and has been used as a cancer biomarker.
Discovery:
MicroDNA was discovered through protocols similar to those used for eccDNA extraction. Specifically, eccDNA clones were generated through multiple displacement amplification and sequenced with Sanger sequencing, leading to microDNA's discovery. Now that high-throughput sequencing is a more common practice, the complete genomic sequence of mammalian eccDNA has been obtained by sequencing the rolling-circle amplification products of eccDNA. Computational methods were then used to identify junctional sequences in the DNA. Peaks at lengths of 180 and 380 bp were identified as microDNA and characterized by their CpG islands and flanking 2- to 15-bp direct repeats.
Since its discovery, microDNA has been identified in all tissue types and various samples, including mouse tissues and human cancer cell lines. However, different species have unique genomic sites that specifically produce microDNA. Because there are common genomic spots that produce microDNA in multiple cell and tissue types within a given species, there is evidence that it may not be produced solely as a DNA synthesis by-product. However, studies have revealed separate clustering of microDNA extracted from cell lines of different tissues, suggesting that formation may be linked to cell lineage and the unique transcriptional environments found in different cell types.
Biogenesis:
While the formation of microDNA is still uncertain, it has been linked to transcriptional activity and multiple DNA repair pathways. As microDNA is produced from areas of high transcription activity and exon density, it could be formed from DNA repair during transcription. Interestingly, the triple-stranded DNA:RNA hybrids formed during transcription, termed R-loops, tend to form at CpG islands within the 5' and 3' UTRs, similar to microDNA. R-loops are correlated with DNA damage and genetic instability, suggesting that microDNA may form from the single-stranded DNA (ssDNA) loop during the DNA damage response to R-loops.
Biogenesis:
In DNA replication of short direct repeats (as found in the flanking regions of microDNA gene sources), it is possible for DNA loops to form, on the parent or product strand, through replication slippage. To repair this, the mismatch repair (MMR) pathway can remove the loop, and upon ligation of the repeating ends, single-stranded microDNA can be produced. The single-stranded microDNA is then converted to double-stranded DNA by a process that is still unknown. It is important to note that if the loop forms on the newly replicated strand, there is no consequential deletion in the genome, while microdeletions can form from excisions in the template strand. To understand the role MMR may have in microDNA biogenesis, analysis of microDNA abundance was performed in DT40 cells upon removal of MSH3, an essential protein in MMR. The resulting microDNA from the DT40 MSH3-/- cell line had a higher enrichment of CpG islands compared to the wild type, as well as an over 80% reduction of double-stranded microDNA. Thus, it is hypothesized that the MMR pathway is essential for microDNA production from non-CpG islands in the genome, while CpG-enriched microDNA is formed by a different repair pathway. Again, because of the microhomology on the template genome, if there is a DNA break or a pause in replication (replication fork stalling), the newly synthesized DNA can circularize into single-stranded microDNA. This means that when the template DNA is repaired after the creation of the microDNA, there is no deletion.
MicroDNA created through the MMR pathway and replication fork stalling results from errors in DNA replication; however, there is evidence of microDNA being present in non-dividing cells as well. This means that some microDNA is produced through repair pathways that also occur in quiescent cells, such as from the 5' ends of LINE1 elements that are known to transpose. To move around the genome, DNA transposons require transposase to remove the transposon from its original site and catalyze its insertion elsewhere in the genome. Thus, the transposon is created by two double-stranded DNA breaks, also creating a microdeletion in the DNA. This dsDNA fragment can be circularized through microhomology-mediated circularization, creating a double-stranded microDNA.
Implications:
Transcription factor binding
Being 200–400 bp long, microDNA is too small to encode proteins; however, it may be important for molecular sponging. Transcription factors often bind to promoter or regulatory sequences at the 5' end of DNA to initiate transcription. These transcription factors can also bind to their respective recognition sites on microDNA, because microDNA often originates from the 5' UTRs of its parental gene, thereby acting as a sponge for transcription factors. This means microDNA can indirectly control gene expression and transcription homeostasis.
Implications:
Cancer applications
In general, nucleic acid molecules found in the bloodstream, termed circulating or cell-free, are a relatively new disease biomarker being investigated, including for the diagnosis and progression of cancer. These molecules, such as cell-free DNA (cfDNA), are released into the blood upon cell death and, in cases of cancer, can be identified based on known mutations in oncogenes.
Recent studies have extended the use of cell-free nucleic acids as cancer biomarkers to microDNA. The cfmicroDNA was obtained from human and mouse serum, and because of its similarities to cell-derived microDNA, as described above, it was concluded that cfmicroDNA is produced in the cell. Similarly, when comparing lung tissue pre- and post-tumor removal, no difference was found in the key characteristics of circulating microDNA, other than an unexpected trend of longer circulating microDNA sequences in cancer patients before tumor removal; the length of cfmicroDNA was found to be shorter post-surgery.
Cell-free DNA is quickly cleared from the blood, making it a difficult cancer biomarker. However, because circular DNA is not susceptible to DNA breakage by RNase and exonuclease, it is more stable than linear DNA. In combination with the observed lengthening of cfmicroDNA in cancer patient serum, this makes circulating microDNA a good cancer biomarker for both diagnosis and progression after treatment.
**USB4**
USB4:
USB4 (official style), sometimes referred to as USB 4.0, is a technical specification that the USB Implementers Forum (USB-IF) released on 29 August 2019. USB4 is based on the Thunderbolt 3 protocol specification, which Intel has donated to the USB-IF, but is aligned with the Thunderbolt 4 specification. The USB4 architecture can share a single, high-speed link with multiple hardware endpoints dynamically, best serving each transfer by data type and application.
USB4:
In contrast to prior USB protocol standards, USB4 mandates the exclusive use of the Type-C connector and USB Power Delivery (USB-PD) specification. USB4 products must support 20 Gbit/s throughput and can support 40 Gbit/s throughput, but because of tunneling, even nominal 20 Gbit/s can result in higher effective data rates in USB4, compared to USB 3.2, when sending mixed data. In contrast to USB 3.2, it allows tunneling of DisplayPort and PCI Express.
USB4:
Support of interoperability with Thunderbolt 3 products is optional for USB4 hosts and USB4 peripheral devices, but it is mandatory for USB4 hubs on all of their downstream facing ports (DFP), and for USB4-based docks on their upstream facing port (UFP) in addition to all of their downstream facing ports. On the other hand, support for USB4 is required in Thunderbolt 4.
The USB4 specification was updated on 18 October 2022 by the USB Implementers Forum, adding a new 80 Gbit/s bi-directional mode and a 120 Gbit/s asymmetric mode.
History:
USB4 was announced in March 2019. The USB4 specification version 1.0, released 29 August 2019, uses "Universal Serial Bus 4" and specifically "USB4"; that is, the short-name branding deliberately omits the separating space used by prior versions. Several news reports before the release of that version used the terminology "USB 4.0" and "USB 4". Even after publication of rev. 1.0, some sources write "USB 4", claiming "to reflect the way readers search".
On 1 September 2022, the USB Promoter Group announced the pending release of the USB4 Version 2.0 specification, and the specification was subsequently released on 18 October 2022.
At the time of publication of version 1.0, the promoter companies whose employees participated in the USB4 Specification technical work group were: Apple Inc., Hewlett-Packard, Intel, Microsoft, Renesas Electronics, STMicroelectronics, and Texas Instruments.
History:
Goals stated in the USB4 specification are increasing bandwidth, helping to converge the USB-C connector ecosystem, and "minimiz[ing] end-user confusion". Some of the key means of achieving this are using a single USB-C connector type and retaining compatibility with existing USB and Thunderbolt products.
On 29 April 2020, DisplayPort Alt Mode version 2.0 was released, supporting DisplayPort 2.0 over USB4.
Data transfer modes:
USB4 by itself does not provide any generic data transfer mechanism or device classes like USB 3.x, but serves mostly as a way to tunnel other protocols like USB 3.2, DisplayPort, and optionally PCIe. While it does provide a native Host-to-Host protocol, as the name implies it is only available between two connected hosts; it is used to implement Host IP Networking. With the USB4 1.0 specification, when the host and device do not support optional PCIe tunneling, the non-display bandwidth is limited to the mandatory USB 3.2 10 Gbit/s, with optional support for USB 3.2 20 Gbit/s. The USB4 2.0 specification named this USB3 Gen X tunneling and introduced optional support for a new USB3 Gen T tunneling that extends the USB3 protocol to use the maximum available bandwidth. USB4 version 2.0 specifies tunneling of:
USB 3.2 ("Enhanced SuperSpeed")
DisplayPort 2.1
PCI Express (PCIe)
USB4 also requires support of DisplayPort Alternate Mode; that is, DP can be sent via USB4 tunneling or by DP Alternate Mode. USB4 supports DisplayPort 2.0 over its alternate mode. DisplayPort 2.0 can support 8K resolution at 60 Hz with HDR10 color and can use up to 80 Gbit/s, which is the same amount available to USB data, but unidirectional. Legacy USB (1.x–2.0) is always supported, using the dedicated wires in the USB-C connector.
Data transfer modes:
Some transfer modes are supported by all USB4 devices, support for others is optional. The requirements for supported modes depend on the type of device.
Although USB4 is required to support dual-lane modes, it uses single-lane operation during initialization of a dual-lane link; a single-lane link can also be used as a fallback mode in case of a lane bonding error.
In Thunderbolt compatibility mode, the lanes are driven slightly faster, at 10.3125 Gbit/s (for Gen 2) and 20.625 Gbit/s (for Gen 3), as required by the Thunderbolt specifications (these are called legacy speeds and rounded speeds). After removal of 64b/66b encoding, those also become round: 20.625 × 64/66 = 20.000 Gbit/s.
Power delivery:
USB4 requires USB Power Delivery (USB PD). A USB4 connection needs to negotiate a USB PD contract before being established. A USB4 source must at least provide 7.5 W (5 V, 1.5 A) per port. A USB4 sink must require less than 250 mA (default), 1.5 A, or 3 A @ 5 V of power (depending on USB-C resistor configuration) before USB PD negotiation. With USB PD, up to 240 W of power is possible with 'Extended power range' (5 A at 48 V). For 'Standard Power range' up to 100 W is possible (5 A at 20 V).
Thunderbolt 3 compatibility:
The USB4 specification states that a design goal is to "Retain compatibility with existing ecosystem of USB and Thunderbolt products." Compatibility with Thunderbolt 3 is required for USB4 hubs; it is optional for USB4 hosts and USB4 peripheral devices. Compatible products need to implement 40 Gbit/s mode, at least 15 W of supplied power, and the different clock; implementers need to sign the license agreement and register a Vendor ID with Intel.
Pinout:
USB4 has 24 pins in a symmetrical USB Type-C shell, with 12 A pins on the top and 12 B pins on the bottom.
USB4 has two lanes of differential SuperSpeed pairs. Lane one uses TX1+, TX1-, RX1+, RX1-, and lane two uses TX2+, TX2-, RX2+, RX2-. USB4 transfers data at 20 Gbit/s per lane. USB4 also keeps the differential D+ and D- pair for USB 2.0 transfer.
The CC configuration channel has the roles of creating a relationship between attached ports, detecting plug orientation (due to the reversible USB Type-C shell), discovering the VBUS power supply pins, and determining the lane ordering of the SuperSpeed lanes; finally, the USB protocol makes the CC configuration channel responsible for entering USB4 operation.
Software support:
USB4 is supported by: Linux kernel 5.6, released on 29 March 2020 macOS Big Sur (11.0), released on 12 November 2020 Windows 11, released on 5 October 2021
Hardware support:
During CES 2020, USB-IF and Intel stated their intention to allow USB4 products that support all the optional functionality to be branded as Thunderbolt 4 products. The first products compatible with USB4 were Intel's Tiger Lake processors, with more devices appearing around the end of 2020.
Brad Saunders, CEO of the USB Promoter Group, anticipates that most PCs with USB4 will support Thunderbolt 3, but that phone manufacturers are less likely to implement Thunderbolt 3 support.
On 3 March 2020, Cypress Semiconductor announced new Type-C power delivery (PD) controllers supporting USB4: CCG6DF (dual-port) and CCG6SF (single-port).
In November 2020, Apple unveiled the MacBook Air (M1, 2020), MacBook Pro (13-inch, M1, 2020), and Mac mini (M1, 2020), featuring two USB4 ports.
Hardware support:
Apple devices featuring USB4 ports include:
MacBook Air (M2, 2022)
MacBook Pro (13-inch, M2, 2022)
iMac (24-inch, M1, 2021)
MacBook Pro (13-inch, M1, 2020)
MacBook Air (M1, 2020)
Mac mini (M1, 2020)
AMD also stated that Zen 3+ (Rembrandt) processors would support USB4, and released products do have this feature after a chipset driver update. However, AMD has only announced support for USB 3.2 Gen 2x2 in the Zen 4 processors released in September 2022.
**Triangle mesh**
Triangle mesh:
In computer graphics, a triangle mesh is a type of polygon mesh. It comprises a set of triangles (typically in three dimensions) that are connected by their common edges or vertices.
Triangle mesh:
Many graphics software packages and hardware devices can operate more efficiently on triangles that are grouped into meshes than on a similar number of triangles presented individually. This is typically because graphics systems operate on the vertices at the corners of triangles. With individual triangles, the system has to operate on three vertices for every triangle. In a large mesh, there could be eight or more triangles meeting at a single vertex; by processing those vertices just once, it is possible to do a fraction of the work and achieve an identical effect.
Triangle mesh:
In many computer graphics applications it is necessary to manage a mesh of triangles. The mesh components are vertices, edges, and triangles. An application might require knowledge of the various connections between the mesh components. These connections can be managed independently of the actual vertex positions. The following describes a simple data structure that is convenient for managing the connections. This is not the only possible data structure; many other types exist and support various queries about meshes.
Representation:
Various methods of storing and working with a mesh in computer memory are possible. With the OpenGL and DirectX APIs there are two primary ways of passing a triangle mesh to the graphics hardware, triangle strips and index arrays.
Representation:
Triangle strip
One way of sharing vertex data between triangles is the triangle strip. With strips of triangles, each triangle shares one complete edge with one neighbour and another with the next. Another way is the triangle fan, which is a set of connected triangles sharing one central vertex. With these methods, vertices are dealt with efficiently, resulting in the need to process only N+2 vertices in order to draw N triangles.
Representation:
Triangle strips are efficient; however, the drawback is that it may not be obvious how, or convenient, to translate an arbitrary triangle mesh into strips.
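As a small illustration of the N+2 property, the following sketch (names illustrative) expands a strip of vertex indices into individual triangles, alternating the winding so that orientation stays consistent.

```python
def strip_to_triangles(strip):
    """Expand a triangle strip of n + 2 vertex indices into n triangles."""
    tris = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        # alternate winding so all triangles keep a consistent orientation
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris

# 4 triangles from 6 indices
print(strip_to_triangles([0, 1, 2, 3, 4, 5]))
# [(0, 1, 2), (2, 1, 3), (2, 3, 4), (4, 3, 5)]
```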
Representation:
The data structure
The data structure representing the mesh provides support for two basic operations: inserting triangles and removing triangles. It also supports an edge collapse operation that is useful in triangle decimation schemes. The structure provides no support for the vertex positions, but it does assume that each vertex is assigned a unique integer identifier, typically the index of that vertex in an array of contiguous vertex positions. A mesh vertex is defined by a single integer and is denoted by ⟨v⟩. A mesh edge is defined by a pair of integers ⟨v0, v1⟩, each integer corresponding to an end point of the edge. To support edge maps, the edges are stored so that v0 = min(v0, v1). A triangle component is defined by a triple of integers ⟨v0, v1, v2⟩, each integer corresponding to a vertex of the triangle. To support triangle maps, the triangles are stored so that v0 = min(v0, v1, v2). Observe that ⟨v0, v1, v2⟩ and ⟨v0, v2, v1⟩ are treated as different triangles; an application requiring double-sided triangles must insert both triples into the data structure. To avoid constant reminders about the order of indices, in the remainder of this description the pair/triple notation does not imply that the vertices are ordered in any way (although the implementation does handle the ordering).
Representation:
Connectivity between the components is completely determined by the set of triples representing the triangles. A triangle t = ⟨v0, v1, v2⟩ has vertices v0, v1, and v2. It has edges e0 = ⟨v0, v1⟩, e1 = ⟨v1, v2⟩, and e2 = ⟨v2, v0⟩. The inverse connections are also known: vertex v0 is adjacent to edges e0 and e2 and to triangle t; vertex v1 is adjacent to edges e0 and e1 and to triangle t; vertex v2 is adjacent to edges e1 and e2 and to triangle t. All three edges e0, e1, and e2 are adjacent to t.
Representation:
How much of this information a data structure stores depends on the needs of the application. Moreover, the application might want to store additional information with the components. The information stored at a vertex, edge, or triangle is referred to as the vertex attribute, edge attribute, or triangle attribute. The abstract representations of these for the simple data structure described here are:

Vertex = <integer>;  // v
Edge = <integer, integer>;  // v0, v1
Triangle = <integer, integer, integer>;  // v0, v1, v2
VData = <application-specific vertex data>;
EData = <application-specific edge data>;
TData = <application-specific triangle data>;
VAttribute = <VData, set<Edge>, set<Triangle>>;  // data, eset, tset
EAttribute = <EData, set<Triangle>>;  // data, tset
TAttribute = <TData>;  // data
VPair = pair<Vertex, VAttribute>;
EPair = pair<Edge, EAttribute>;
TPair = pair<Triangle, TAttribute>;
VMap = map<VPair>;
EMap = map<EPair>;
TMap = map<TPair>;
Mesh = <VMap, EMap, TMap>;  // vmap, emap, tmap

The maps support the standard insertion and removal functions for a hash table. Insertion occurs only if the item does not already exist; removal occurs only if the item does exist.
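A minimal Python sketch of these maps follows; attribute payloads are omitted and all names are illustrative. Triangles are stored rotated so that the smallest vertex comes first, which gives a canonical key while still distinguishing the two orientations.

```python
def edge_key(a, b):
    return (a, b) if a < b else (b, a)   # store edges so that v0 = min(v0, v1)

class Mesh:
    def __init__(self):
        self.vmap = {}   # vertex -> {"edges": set of edge keys, "tris": set of triangles}
        self.emap = {}   # edge key -> set of adjacent triangles
        self.tmap = {}   # canonical vertex triple -> triangle data

    def insert_triangle(self, v0, v1, v2, data=None):
        # rotate so the smallest vertex comes first (orientation preserved)
        tri = min((v0, v1, v2), (v1, v2, v0), (v2, v0, v1))
        if tri in self.tmap:
            return                        # insert only if not already present
        self.tmap[tri] = data
        for a, b in ((v0, v1), (v1, v2), (v2, v0)):
            e = edge_key(a, b)
            self.emap.setdefault(e, set()).add(tri)
            for v in (a, b):
                rec = self.vmap.setdefault(v, {"edges": set(), "tris": set()})
                rec["edges"].add(e)
                rec["tris"].add(tri)
```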
Representation:
Edge collapse
This operation involves identifying an edge ⟨vk, vt⟩, where vk is called the keep vertex and vt is called the throw vertex. The triangles that share this edge are removed from the mesh. The vertex vt is also removed from the mesh. Any triangles that shared vt have that vertex replaced by vk. Figure 1 shows a triangle mesh and a sequence of three edge collapses applied to the mesh.
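Continuing the sketch above, an edge collapse can be outlined as follows; remove_triangle is an assumed counterpart of insert_triangle (not spelled out here), and cleanup of the now-unused throw vertex vt is omitted.

```python
def edge_collapse(mesh, vk, vt):
    """Collapse edge (vk, vt): drop triangles sharing it, then re-point vt to vk."""
    for tri in list(mesh.emap.get(edge_key(vk, vt), ())):
        remove_triangle(mesh, *tri)               # drop triangles sharing the edge
    for tri in list(mesh.vmap.get(vt, {"tris": set()})["tris"]):
        remove_triangle(mesh, *tri)
        a, b, c = [vk if v == vt else v for v in tri]
        mesh.insert_triangle(a, b, c)             # vt replaced by vk
```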
Representation:
Index array
With index arrays, a mesh is represented by two separate arrays: one array holding the vertices, and another holding sets of three indices into that array, each of which defines a triangle. The graphics system processes the vertices first and renders the triangles afterwards, using the index sets to work on the transformed data. In OpenGL, this is supported by the glDrawElements() primitive when using a Vertex Buffer Object (VBO).
Representation:
With this method, any arbitrary set of triangles sharing any arbitrary number of vertices can be stored, manipulated, and passed to the graphics API, without any intermediary processing.
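For instance, a unit square can be stored as four vertices and two index triples (a minimal sketch; the data is illustrative):

```python
# indexed-triangle representation of a unit square made of two triangles
vertices = [
    (0.0, 0.0, 0.0),   # 0
    (1.0, 0.0, 0.0),   # 1
    (1.0, 1.0, 0.0),   # 2
    (0.0, 1.0, 0.0),   # 3
]
indices = [
    (0, 1, 2),   # first triangle
    (0, 2, 3),   # second triangle, sharing vertices 0 and 2
]
# each shared vertex is transformed once, then referenced by index
```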
**Hidden attractor**
Hidden attractor:
In bifurcation theory, a bounded oscillation that is born without loss of stability of a stationary set is called a hidden oscillation. In nonlinear control theory, the birth of a hidden oscillation in a time-invariant control system with bounded states means crossing a boundary, in the domain of the parameters, where local stability of the stationary states implies global stability (see, e.g., Kalman's conjecture). If a hidden oscillation (or a set of such hidden oscillations filling a compact subset of the phase space of the dynamical system) attracts all nearby oscillations, then it is called a hidden attractor. For a dynamical system with a unique equilibrium point that is globally attractive, the birth of a hidden attractor corresponds to a qualitative change in behaviour from monostability to bi-stability. In the general case, a dynamical system may turn out to be multistable and have coexisting local attractors in the phase space. While trivial attractors, i.e. stable equilibrium points, can be easily found analytically or numerically, the search for periodic and chaotic attractors can turn out to be a challenging problem (see, e.g., the second part of Hilbert's 16th problem).
Classification of attractors as being hidden or self-excited:
To identify a local attractor in a physical or numerical experiment, one needs to choose an initial system state in the attractor's basin of attraction and observe how the system's state, starting from this initial state, visualizes the attractor after a transient process. The classification of attractors as hidden or self-excited reflects the difficulty of revealing basins of attraction and searching for local attractors in the phase space.
Classification of attractors as being hidden or self-excited:
Definition. An attractor is called a hidden attractor if its basin of attraction does not intersect with a certain open neighbourhood of equilibrium points; otherwise it is called a self-excited attractor.
Classification of attractors as being hidden or self-excited:
The classification of attractors as hidden or self-excited was introduced by G. Leonov and N. Kuznetsov in connection with the discovery of the hidden Chua attractor for the first time in 2009. Similarly, an arbitrary bounded oscillation, not necessarily having an open neighborhood as its basin of attraction in the phase space, is classified as a self-excited or hidden oscillation.
Classification of attractors as being hidden or self-excited:
Self-excited attractors
For a self-excited attractor, the basin of attraction is connected with an unstable equilibrium; therefore, self-excited attractors can be found numerically by a standard computational procedure in which, after a transient process, a trajectory starting in a neighbourhood of an unstable equilibrium is attracted to the state of oscillation and then traces it (see, e.g., the self-oscillation process). Thus, self-excited attractors, even coexisting in the case of multistability, can be easily revealed and visualized numerically. In the Lorenz system, for classical parameters, the attractor is self-excited with respect to all existing equilibria and can be visualized by any trajectory from their vicinities; however, for some other parameter values there are two trivial attractors coexisting with a chaotic attractor, which is self-excited with respect to the zero equilibrium only. Classical attractors in the Van der Pol, Belousov–Zhabotinsky, Rössler, Chua, and Hénon dynamical systems are self-excited.
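For example, the following minimal sketch (forward Euler integration with an illustrative step size) visualizes the self-excited Lorenz attractor by integrating from a point near the unstable zero equilibrium:

```python
def lorenz_trajectory(x=1e-3, y=0.0, z=0.0, dt=1e-3, steps=200_000,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system from a point near the zero equilibrium."""
    pts = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        pts.append((x, y, z))
    return pts  # after a transient, the points trace the chaotic attractor
```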
Classification of attractors as being hidden or self-excited:
A conjecture is that the Lyapunov dimension of a self-excited attractor does not exceed the Lyapunov dimension of one of the unstable equilibria, the unstable manifold of which intersects with the basin of attraction and visualizes the attractor.
Classification of attractors as being hidden or self-excited:
Hidden attractors
Hidden attractors have basins of attraction that are not connected with equilibria and are "hidden" somewhere in the phase space. For example, hidden attractors occur in systems without equilibria (e.g. rotating electromechanical dynamical systems with the Sommerfeld effect (1902)) and in systems with only one equilibrium, which is stable (e.g. counterexamples to Aizerman's conjecture (1949) and Kalman's conjecture (1957) on the monostability of nonlinear control systems). One of the first related theoretical problems is the second part of Hilbert's 16th problem on the number and mutual disposition of limit cycles in two-dimensional polynomial systems, where nested stable limit cycles are hidden periodic attractors. The notion of a hidden attractor has become a catalyst for the discovery of hidden attractors in many applied dynamical models.
In general, the problem with hidden attractors is that there are no general, straightforward methods to trace or predict such states for the system's dynamics. While for two-dimensional systems hidden oscillations can be investigated using analytical methods (see, e.g., the results on the second part of Hilbert's 16th problem), for the study of stability and oscillations in complex nonlinear multidimensional systems, numerical methods are often used.
Classification of attractors as being hidden or self-excited:
In the multi-dimensional case the integration of trajectories with random initial data is unlikely to provide a localization of a hidden attractor, since a basin of attraction may be very small, and the attractor dimension itself may be much less than the dimension of the considered system.
Therefore, for the numerical localization of hidden attractors in multi-dimensional space, it is necessary to develop special analytical-numerical computational procedures, which allow one to choose initial data in the attraction domain of the hidden oscillation (which does not contain neighborhoods of equilibria), and then to perform trajectory computation.
Classification of attractors as being hidden or self-excited:
There are corresponding effective methods based on homotopy and numerical continuation: a sequence of similar systems is constructed, such that for the first (starting) system, the initial data for numerical computation of an oscillating solution (starting oscillation) can be obtained analytically, and then the transformation of this starting oscillation in the transition from one system to another is followed numerically.
Theory of hidden oscillations:
The classification of attractors as self-excited or hidden was a fundamental premise for the emergence of the theory of hidden oscillations, which represents the modern development of Andronov's theory of oscillations. It is key to determining the exact boundaries of global stability, parts of which are classified by N. Kuznetsov as trivial (i.e., determined by local bifurcations) or as hidden (i.e., determined by non-local bifurcations and by the birth of hidden oscillations).
Books:
Chaotic Systems with Multistability and Hidden Attractors (Eds.: Wang, Kuznetsov, Chen), Springer, 2021 (doi:10.1007/978-3-030-75821-9)
Nonlinear Dynamical Systems with Self-Excited and Hidden Attractors (Eds.: Pham, Vaidyanathan, Volos et al.), Springer, 2018 (doi:10.1007/978-3-319-71243-7)
Selected lectures:
N. Kuznetsov, invited lecture: The theory of hidden oscillations and stability of dynamical systems, Int. Workshop on Applied Mathematics, Czech Republic, 2021
Afraimovich Award plenary lecture: N. Kuznetsov, The theory of hidden oscillations and stability of dynamical systems, Int. Conference on Nonlinear Dynamics and Complexity, 2021
**Solar eclipse of December 13, 1974**
Solar eclipse of December 13, 1974:
A partial solar eclipse occurred on December 13, 1974. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
Related eclipses:
Eclipses in 1974
A partial lunar eclipse on Tuesday, 4 June 1974.
A total solar eclipse on Thursday, 20 June 1974.
A total lunar eclipse on Friday, 29 November 1974.
A partial solar eclipse on Friday, 13 December 1974.
Solar eclipses of 1971–1974
This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit.
Note: Partial solar eclipses on February 25, 1971 and August 20, 1971 occur in the next lunar year set.
Metonic series
The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur on nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that, or every 3.8 years (1387.94 days).
**Kaniadakis distribution**
Kaniadakis distribution:
In statistics, a Kaniadakis distribution (also known as a κ-distribution) is a statistical distribution that emerges from the Kaniadakis statistics. There are several families of Kaniadakis distributions, related to the different constraints used in the maximization of the Kaniadakis entropy, such as the κ-Exponential distribution, κ-Gaussian distribution, Kaniadakis κ-Gamma distribution and κ-Weibull distribution. The κ-distributions have been applied to model a vast phenomenology of experimental statistical distributions in natural or artificial complex systems, such as in epidemiology, quantum statistics, astrophysics and cosmology, geophysics, economics, and machine learning. The κ-distributions are written as functions of the κ-deformed exponential, taking the form $\exp_\kappa(-\beta E_i + \beta\mu)$, which enables the power-law description of complex systems following the consistent κ-generalized statistical theory, where $\exp_\kappa(x) = \left(\sqrt{1+\kappa^2 x^2} + \kappa x\right)^{1/\kappa}$ is the Kaniadakis κ-exponential function.
Kaniadakis distribution:
The κ-distribution becomes the common Boltzmann distribution at low energies, while it has a power-law tail at high energies, a feature of great interest to many researchers.
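A minimal numerical sketch of the κ-exponential defined above, showing the ordinary exponential recovered as κ → 0 and the power-law tail for κ > 0 (the printed values are approximate):

```python
import math

def exp_kappa(x, kappa):
    """Kaniadakis κ-exponential: (sqrt(1 + κ²x²) + κx)^(1/κ); κ = 0 gives exp(x)."""
    if kappa == 0.0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

print(exp_kappa(-2.0, 1e-6))    # ≈ exp(-2) ≈ 0.1353: classical limit
print(exp_kappa(-50.0, 0.5))    # ≈ 4.0e-4 ≈ (2 · 0.5 · 50)^(-1/0.5): power-law tail
```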
List of κ-statistical distributions:
Supported on the whole real line
The Kaniadakis Gaussian distribution, also called the κ-Gaussian distribution. The normal distribution is a particular case in the limit κ → 0.
The Kaniadakis double exponential distribution, also known as the Kaniadakis κ-double exponential distribution or κ-Laplace distribution. The Laplace distribution is a particular case in the limit κ → 0.
Supported on semi-infinite intervals, usually [0, ∞)
The Kaniadakis Exponential distribution, also called the κ-Exponential distribution. The exponential distribution is a particular case in the limit κ → 0.
The Kaniadakis Gamma distribution, also called the κ-Gamma distribution, a four-parameter (κ, α, β, ν) deformation of the generalized Gamma distribution.
The κ-Gamma distribution becomes a:
κ-Exponential distribution of Type I, when α = ν = 1;
κ-Erlang distribution, when α = 1 and ν = n = positive integer;
κ-Half-Normal distribution, when α = 2 and ν = 1/2;
Generalized Gamma distribution, when α = 1.
In the limit κ → 0, the κ-Gamma distribution becomes a ...
List of κ-statistical distributions:
Erlang distribution, when α = 1 and ν = n = positive integer;
Chi-Squared distribution, when α = 1 and ν = half integer;
Nakagami distribution, when α = 2 and ν > 0;
Rayleigh distribution, when α = 2 and ν = 1;
Chi distribution, when α = 2 and ν = half integer;
Maxwell distribution, when α = 2 and ν = 3/2;
Half-Normal distribution, when α = 2 and ν = 1/2;
Weibull distribution, when α > 0 and ν = 1;
Stretched Exponential distribution, when α > 0 and ν = 1/α.
Common Kaniadakis distributions:
κ-Exponential distribution
κ-Gaussian distribution
κ-Gamma distribution
κ-Weibull distribution
κ-Logistic distribution
κ-Erlang distribution
κ-Distribution Type IV
The Kaniadakis distribution of Type IV (or κ-Distribution Type IV) is a three-parameter family of continuous statistical distributions. Its probability density function is expressed in terms of $\exp_\kappa(-\beta x^\alpha)$ and is valid for $x \geq 0$, where $0 \leq |\kappa| < 1$ is the entropic index associated with the Kaniadakis entropy, $\beta > 0$ is the scale parameter, and $\alpha > 0$ is the shape parameter.
Common Kaniadakis distributions:
The cumulative distribution function of the κ-Distribution Type IV likewise assumes a form expressed in terms of $\exp_\kappa(-\beta x^\alpha)$. The κ-Distribution Type IV does not admit a classical version, since its probability function and cumulative distribution reduce to zero in the classical limit $\kappa \to 0$. Its moment of order $m$ is given by

$$\operatorname{E}[X^m] = \frac{(2\kappa\beta)^{-m/\alpha}}{1+\kappa\frac{m}{2\alpha}}\,\frac{\Gamma\!\left(\frac{1}{\kappa}+\frac{m}{\alpha}\right)\Gamma\!\left(1-\frac{m}{2\alpha}\right)}{\Gamma\!\left(\frac{1}{\kappa}+\frac{m}{2\alpha}\right)}$$

The moment of order $m$ of the κ-Distribution Type IV is finite for $m < 2\alpha$.
**Usability**
Usability:
Usability can be described as the capacity of a system to provide a condition for its users to perform tasks safely, effectively, and efficiently while enjoying the experience. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use. The object of use can be a software application, website, book, tool, machine, process, vehicle, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel, and others. It is widely used in consumer electronics, communication, and knowledge transfer objects (such as a cookbook, a document, or online help) and mechanical objects such as a door handle or a hammer.
Usability:
Usability includes methods of measuring usability, such as needs analysis and the study of the principles behind an object's perceived efficiency or elegance. In human-computer interaction and computer science, usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed. Usability considers user satisfaction and utility as quality components, and aims to improve user experience through iterative design.
Introduction:
The primary notion of usability is that an object designed with the users' generalized psychology and physiology in mind is, for example:
More efficient to use—takes less time to accomplish a particular task
Easier to learn—operation can be learned by observing the object
More satisfying to use
Complex computer systems are finding their way into everyday life, and at the same time the market is saturated with competing brands. This has made usability more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-oriented methods instead of technology-oriented methods. By understanding and researching the interaction between product and user, the usability expert can also provide insight that is unattainable by traditional company-oriented market research. For example, after observing and interviewing users, the usability expert may identify needed functionality or design flaws that were not anticipated. A method called contextual inquiry does this in the naturally occurring context of the users' own environment. In the user-centered design paradigm, the product is designed with its intended users in mind at all times. In the user-driven or participatory design paradigm, some of the users become actual or de facto members of the design team. The term user friendly is often used as a synonym for usable, though it may also refer to accessibility. Usability describes the quality of user experience across websites, software, products, and environments. There is no consensus about the relation of the terms ergonomics (or human factors) and usability. Some think of usability as the software specialization of the larger topic of ergonomics. Others view these topics as tangential, with ergonomics focusing on physiological matters (e.g., turning a door handle) and usability focusing on psychological matters (e.g., recognizing that a door can be opened by turning its handle). Usability is also important in website development (web usability). According to Jakob Nielsen, "Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't want to wait. And they don't want to learn how to use a home page. There's no such thing as a training class or a manual for a Web site. People have to be able to grasp the functioning of the site immediately after scanning the home page—for a few seconds at most." Otherwise, most casual users simply leave the site and browse or shop elsewhere.
Introduction:
Usability can also include the concept of prototypicality, which is how much a particular thing conforms to the expected shared norm, for instance, in website design, users prefer sites that conform to recognised design norms.
Definition:
ISO defines usability as "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." The word "usability" also refers to methods for improving ease-of-use during the design process. Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman have written (separately) about a framework of system acceptability, where usability is a part of "usefulness" and is composed of: Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design? Efficiency: Once users have learned the design, how quickly can they perform tasks? Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency? Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors? Satisfaction: How pleasant is it to use the design?Usability is often associated with the functionalities of the product (cf. ISO definition, below), in addition to being solely a characteristic of the user interface (cf. framework of system acceptability, also below, which separates usefulness into usability and utility). For example, in the context of mainstream consumer products, an automobile lacking a reverse gear could be considered unusable according to the former view, and lacking in utility according to the latter view. When evaluating user interfaces for usability, the definition can be as simple as "the perception of a target user of the effectiveness (fit for purpose) and efficiency (work or time required to use) of the Interface". Each component may be measured subjectively against criteria, e.g., Principles of User Interface Design, to provide a metric, often expressed as a percentage. It is important to distinguish between usability testing and usability engineering. Usability testing is the measurement of ease of use of a product or piece of software. In contrast, usability engineering (UE) is the research and design process that ensures a product with good usability. Usability is a non-functional requirement. As with other non-functional requirements, usability cannot be directly measured but must be quantified by means of indirect measures or attributes such as, for example, the number of reported problems with ease-of-use of a system.
Definition:
Intuitive interaction or intuitive use
The term intuitive is often listed as a desirable trait in usable interfaces, sometimes used as a synonym for learnable. In the past, Jef Raskin discouraged using this term in user interface design, claiming that easy-to-use interfaces are often easy because of the user's exposure to previous similar systems, thus the term 'familiar' should be preferred. As an example: two vertical lines "||" on media player buttons do not intuitively mean "pause"—they do so by convention. This association between intuitive use and familiarity has since been empirically demonstrated in multiple studies by a range of researchers across the world, and intuitive interaction is accepted in the research community as being use of an interface based on past experience with similar interfaces or something else, often not fully conscious, and sometimes involving a feeling of "magic", since the source of the knowledge itself may not be consciously available to the user. Researchers have also investigated intuitive interaction for older people, people living with dementia, and children. Some have argued that aiming for "intuitive" interfaces (based on reusing existing skills with interaction systems) could lead designers to discard a better design solution only because it would require a novel approach, and to stick with boring designs. However, applying familiar features in a new interface has been shown not to result in boring design if designers use creative approaches rather than simple copying. The throwaway remark that "the only intuitive interface is the nipple; everything else is learned" is still occasionally mentioned, but it is inaccurate: the nipple does in fact require learning on both sides. In 1992, Bruce Tognazzini even denied the existence of "intuitive" interfaces, since such interfaces must be able to intuit, i.e., "perceive the patterns of the user's behavior and draw inferences." Instead, he advocated the term "intuitable," i.e., "that users could intuit the workings of an application by seeing it and using it". However, the term intuitive interaction has become well accepted in the research community over the past 20 or so years and, although not perfect, it should probably be accepted and used.
ISO standards:
ISO/TR 16982:2002 standard ISO/TR 16982:2002 ("Ergonomics of human-system interaction—Usability methods supporting human-centered design") is an International Standards Organization (ISO) standard that provides information on human-centered usability methods that can be used for design and evaluation. It details the advantages, disadvantages, and other factors relevant to using each usability method. It explains the implications of the stage of the life cycle and the individual project characteristics for the selection of usability methods and provides examples of usability methods in context. The main users of ISO/TR 16982:2002 are project managers. It therefore addresses technical human factors and ergonomics issues only to the extent necessary to allow managers to understand their relevance and importance in the design process as a whole. The guidance in ISO/TR 16982:2002 can be tailored for specific design situations by using the lists of issues characterizing the context of use of the product to be delivered. Selection of appropriate usability methods should also take account of the relevant life-cycle process. ISO/TR 16982:2002 is restricted to methods that are widely used by usability specialists and project managers. It does not specify the details of how to implement or carry out the usability methods described.
ISO standards:
ISO 9241 standard ISO 9241 is a multi-part standard that covers a number of aspects of people working with computers. Although originally titled Ergonomic requirements for office work with visual display terminals (VDTs), it has been retitled to the more generic Ergonomics of Human System Interaction. As part of this change, ISO is renumbering some parts of the standard so that it can cover more topics, e.g. tactile and haptic interaction. The first part to be renumbered was part 10 in 2006, now part 110.
ISO standards:
IEC 62366 IEC 62366-1:2015 + COR1:2016 & IEC/TR 62366-2 provide guidance on usability engineering specific to a medical device.
Designing for usability:
Any system or device designed for use by people should be easy to use, easy to learn, easy to remember (the instructions), and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow three design principles: early focus on end users and the tasks they need the system/device to do; empirical measurement using quantitative or qualitative measures; and iterative design, in which the designers work in a series of stages, improving the design each time. Early focus on users and tasks The design team should be user-driven and should be in direct contact with potential users. Several evaluation methods, including personas, cognitive modeling, inspection, inquiry, prototyping, and testing methods, may contribute to understanding potential users and their perceptions of how well the product or process works. Usability considerations, such as who the users are and their experience with similar systems, must be examined. As part of understanding users, this knowledge must "...be played against the tasks that the users will be expected to perform." This includes the analysis of what tasks the users will perform, which are most important, and what decisions the users will make while using the system. Designers must understand how cognitive and emotional characteristics of users will relate to a proposed system. One way to stress the importance of these issues in the designers' minds is to use personas, which are made-up representative users. See below for further discussion of personas. Another more expensive but more insightful method is to have a panel of potential users work closely with the design team from the early stages.
Designing for usability:
Empirical measurement Test the system early on, and test the system on real users using behavioral measurements. This includes testing the system for both learnability and usability (see Evaluation methods). It is important at this stage to use quantitative usability specifications, such as the time and errors needed to complete tasks and the number of users to test, as well as to examine the performance and attitudes of the users testing the system. Finally, "reviewing or demonstrating" a system before the user tests it can produce misleading results. The emphasis of empirical measurement is on measurement, both informal and formal, which can be carried out through a variety of evaluation methods.
Designing for usability:
Iterative design Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research for informing and evolving a project, as successive versions, or iterations, of a design are implemented. The key requirements for iterative design are: identification of required changes, an ability to make changes, and a willingness to make changes. When a problem is encountered, there is no set method to determine the correct solution. Rather, there are empirical methods that can be used during system development or after the system is delivered, usually a more inopportune time. Ultimately, iterative design works towards meeting goals such as making the system user-friendly, easy to use, easy to operate, and simple.
Evaluation methods:
There are a variety of usability evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. For a brief overview of methods, see Comparison of usability evaluation methods or continue reading below. Usability methods can be further classified into the subcategories below.
Evaluation methods:
Cognitive modeling methods Cognitive modeling involves creating a computational model to estimate how long it takes people to perform a given task. Models are based on psychological principles and experimental studies to determine times for cognitive processing and motor movements. Cognitive models can be used to improve user interfaces or predict problem errors and pitfalls during the design process. A few examples of cognitive modeling methods are described below. Parallel design With parallel design, several people create an initial design from the same set of requirements. Each person works independently, and when finished, shares concepts with the group. The design team considers each solution, and each designer uses the best ideas to further improve their own solution. This process helps generate many different, diverse ideas, and ensures that the best ideas from each design are integrated into the final concept. This process can be repeated several times until the team is satisfied with the final concept.
Evaluation methods:
GOMS GOMS stands for goals, operators, methods, and selection rules. It is a family of techniques that analyzes the user complexity of interactive systems. Goals are what the user must accomplish. An operator is an action performed in pursuit of a goal. A method is a sequence of operators that accomplishes a goal. Selection rules specify which method satisfies a given goal, based on context.
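To make the GOMS vocabulary concrete, here is a minimal sketch in Python; the task, method names, and operator lists are hypothetical illustrations of the structure, not drawn from any published GOMS analysis:

```python
# A goal maps to candidate methods; each method is a sequence of operators;
# a selection rule picks a method based on context.
goals = {
    "delete-word": {
        "mouse-method": ["point-to-word", "double-click", "press-delete"],
        "keyboard-method": ["move-cursor-to-word", "select-word", "press-delete"],
    }
}

def select_method(goal: str, hands_on_keyboard: bool) -> list[str]:
    # Selection rule: prefer the keyboard method when hands are already homed.
    methods = goals[goal]
    return methods["keyboard-method"] if hands_on_keyboard else methods["mouse-method"]

print(select_method("delete-word", hands_on_keyboard=True))
# ['move-cursor-to-word', 'select-word', 'press-delete']
```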
Evaluation methods:
Human processor model Sometimes it is useful to break a task down and analyze each individual aspect separately. This helps the tester locate specific areas for improvement. To do this, it is necessary to understand how the human brain processes information. The model of the human processor distinguishes perceptual, cognitive, and motor processors, each with associated memories and characteristic cycle times.
Many studies have been done to estimate the cycle times, decay times, and capacities of each of these processors. Variables that affect these can include the subject's age, aptitude, ability, and the surrounding environment. Reasonable estimates exist for a younger adult for each processor's cycle time and memory parameters; long-term memory, in particular, is believed to have an effectively infinite capacity and decay time.
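As a rough illustration, the cycle-time estimates most often cited for the Model Human Processor (after Card, Moran and Newell) can be summed serially to approximate a simple reaction time; the figures vary widely between individuals, so treat them as order-of-magnitude values only:

```python
# Commonly cited Model Human Processor cycle times (milliseconds);
# each value has a wide plausible range across individuals.
PERCEPTUAL_MS = 100  # often quoted with a range of roughly 50-200 ms
COGNITIVE_MS = 70    # roughly 25-170 ms
MOTOR_MS = 70        # roughly 30-100 ms

# Classic back-of-envelope calculation: simple reaction time as one
# serial pass through the perceptual, cognitive, and motor processors.
simple_reaction_ms = PERCEPTUAL_MS + COGNITIVE_MS + MOTOR_MS
print(f"predicted simple reaction time: {simple_reaction_ms} ms")  # 240 ms
```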
Keystroke level modeling Keystroke level modeling is essentially a less comprehensive version of GOMS that makes simplifying assumptions in order to reduce calculation time and complexity.
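A keystroke-level estimate is just a sum of standard operator times. The sketch below uses operator values commonly cited in the KLM literature (keystroke, pointing, button press, homing, mental preparation); the task encoding itself is a hypothetical example:

```python
# Commonly cited KLM operator times in seconds (treat as rough averages).
KLM_TIMES = {
    "K": 0.2,   # keystroke, average skilled typist
    "P": 1.1,   # point at a target with the mouse
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # home hands between mouse and keyboard
    "M": 1.35,  # mental preparation
}

def klm_estimate(ops: str) -> float:
    """Sum the operator times for an operator string such as 'MPBHKKKK'."""
    return sum(KLM_TIMES[op] for op in ops)

# Hypothetical task: think, point at a text field, click, home to the
# keyboard, then type four characters.
print(round(klm_estimate("MPBHKKKK"), 2))  # 3.75 seconds
```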
Inspection methods These usability evaluation methods involve observation of users by an experimenter, or the testing and evaluation of a program by an expert reviewer. They provide more quantitative data as tasks can be timed and recorded.
Evaluation methods:
Card sorts Card sorting is a way to involve users in grouping information for a website's usability review. Participants in a card sorting session are asked to organize the content from a website in a way that makes sense to them. Participants review items from the website and then group these items into categories. Card sorting helps to learn how users think about the content and how they would organize the information on the website. It helps to build the structure for a website, decide what to put on the home page, and label the home page categories. It also helps to ensure that information is organized on the site in a way that is logical to users.
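Open card sorts are often analyzed by counting how frequently each pair of cards is grouped together across participants. A minimal sketch (the card names and sort data are hypothetical):

```python
from collections import defaultdict
from itertools import combinations

# Each participant's sort is a list of groups; each group is a set of cards.
sorts = [
    [{"shipping", "returns"}, {"account", "login"}],
    [{"shipping", "returns", "account"}, {"login"}],
]

pair_counts = defaultdict(int)
for sort in sorts:
    for group in sort:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Similarity = fraction of participants who grouped a pair together;
# high-similarity pairs suggest candidate categories for the site structure.
n = len(sorts)
for pair, count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(pair, count / n)
```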
Evaluation methods:
Tree tests Tree testing is a way to evaluate the effectiveness of a website's top-down organization. Participants are given "find it" tasks, then asked to drill down through successive text lists of topics and subtopics to find a suitable answer. Tree testing evaluates the findability and labeling of topics in a site, separate from its navigation controls or visual design.
Evaluation methods:
Ethnography Ethnographic analysis is derived from anthropology. Field observations are taken at the site of a possible user, tracking the artifacts of work such as Post-it notes, items on the desktop, shortcuts, and items in trash bins. These observations also capture the sequence of work and interruptions that shape the user's typical day.
Evaluation methods:
Heuristic evaluation Heuristic evaluation is a usability engineering method for finding and assessing usability problems in a user interface design as part of an iterative design process. It involves having a small set of evaluators examine the interface using recognized usability principles (the "heuristics"). It is the most popular of the usability inspection methods, as it is quick, cheap, and easy. Heuristic evaluation was developed to aid in the design of computer user interfaces. It relies on expert reviewers to discover usability problems and then categorize and rate them by a set of principles (heuristics). It is widely used because of its speed and cost-effectiveness. Jakob Nielsen's list of ten heuristics is the most commonly used in industry. These are ten general principles for user interface design, called "heuristics" because they are more in the nature of rules of thumb than specific usability guidelines.
Evaluation methods:
Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention: Even better than good error messages is a careful design that prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use: Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Evaluation methods:
Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large. Thus, by determining which guidelines are violated, the usability of a device can be assessed.
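Heuristic evaluations are often summarized by averaging evaluator severity ratings (Nielsen's scale runs from 0, not a problem, to 4, usability catastrophe) so the worst violations are fixed first. A sketch with hypothetical evaluator ratings:

```python
from statistics import mean

# Hypothetical severity ratings from three evaluators, on the 0-4 scale.
ratings = {
    "no undo after delete": [4, 3, 4],
    "error messages use internal codes": [2, 3, 2],
    "inconsistent button ordering": [1, 2, 1],
}

# Rank problems by mean severity, worst first.
for problem, scores in sorted(ratings.items(), key=lambda kv: -mean(kv[1])):
    print(f"{mean(scores):.2f}  {problem}")
```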
Evaluation methods:
Usability inspection Usability inspection is a review of a system based on a set of guidelines. The review is conducted by a group of experts who are deeply familiar with the concepts of usability in design. The experts focus on a list of areas in design that have been shown to be troublesome for users.
Pluralistic inspection Pluralistic inspections are meetings where users, developers, and human factors people meet together to discuss and evaluate a task scenario step by step. The more people who inspect the scenario for problems, the higher the probability of finding problems. In addition, the more interaction in the team, the faster the usability issues are resolved.
Consistency inspection In consistency inspection, expert designers review products or projects to ensure consistency across multiple products, checking whether a product does things in the same way as their own designs.
Evaluation methods:
Activity analysis Activity analysis is a usability method used in the preliminary stages of development to get a sense of the situation. It involves an investigator observing users as they work in the field. Also referred to as user observation, it is useful for specifying user requirements and studying currently used tasks and subtasks. The data collected are qualitative and useful for defining the problem. It should be used when you wish to frame what is needed, or "What do we want to know?" Inquiry methods The following usability evaluation methods involve collecting qualitative data from users. Although the data collected are subjective, they provide valuable information on what the user wants.
Evaluation methods:
Task analysis Task analysis means learning about users' goals and users' ways of working. Task analysis can also mean figuring out what more specific tasks users must do to meet those goals and what steps they must take to accomplish those tasks. Along with user and task analysis, a third analysis is often used: understanding users' environments (physical, social, cultural, and technological environments).
Evaluation methods:
Focus groups A focus group is a focused discussion where a moderator leads a group of participants through a set of questions on a particular topic. Although typically used as a marketing tool, focus groups are sometimes used to evaluate usability. Used in the product definition stage, a group of 6 to 10 users is gathered to discuss what they desire in a product. An experienced focus group facilitator is hired to guide the discussion to areas of interest for the developers. Focus groups are typically videotaped to help capture verbatim quotes, and clips are often used to summarize opinions. The data gathered are not usually quantitative, but can help get an idea of a target group's opinions.
Evaluation methods:
Questionnaires/surveys Surveys have the advantages of being inexpensive, requiring no testing equipment, and reflecting the users' opinions. When written carefully and given to actual users who have experience with the product and knowledge of design, surveys provide useful feedback on the strong and weak areas of the usability of a design. This is a very common method and often does not appear to be a survey at all, but just a warranty card.
Evaluation methods:
Prototyping methods It is often very difficult for designers to conduct usability tests with the exact system being designed. Cost, size, and design constraints usually lead the designer to create a prototype of the system. Instead of creating the complete final system, the designer may test different sections of the system, thus making several small models of each component of the system. Prototyping is both an attitude and an output: it is a process for generating and reflecting on tangible ideas by allowing failure to occur early. Prototyping helps people to see what could be, communicating a shared vision and giving shape to the future. The types of usability prototypes may vary from paper models, index cards, and hand-drawn models to storyboards. Prototypes can be modified quickly, are often faster and easier to create with less time invested by designers, and are more amenable to design changes; however, they are sometimes not an adequate representation of the whole system, are often not durable, and testing results may not parallel those of the actual system.
Evaluation methods:
The Tool Kit Approach This tool kit is a large library of methods that uses a traditional programming language; it is primarily developed for computer programmers. The code created for testing in the tool kit approach can be used in the final product. However, to get the highest benefit from the tool, the user must be an expert programmer.
Evaluation methods:
The Parts Kit Approach The two elements of this approach are a parts library and a method for identifying the connections between the parts. This approach can be used by almost anyone, and it is a great asset for designers with repetitive tasks.
Animation Language Metaphor This approach is a combination of the tool kit approach and the parts kit approach. Both dialogue designers and programmers are able to interact with this prototyping tool.
Evaluation methods:
Rapid prototyping Rapid prototyping is a method used in early stages of development to validate and refine the usability of a system. It can be used to quickly and cheaply evaluate user-interface designs without the need for an expensive working model. This can help remove hesitation to change the design, since it is implemented before any real programming begins. One such method of rapid prototyping is paper prototyping.
Evaluation methods:
Testing methods These usability evaluation methods involve testing of subjects for the most quantitative data. Usually recorded on video, they provide task completion times and allow for observation of attitude. Regardless of how carefully a system is designed, all theories must be tested using usability tests. Usability tests involve typical users using the system (or product) in a realistic environment (see simulation). Observation of the user's behavior, emotions, and difficulties while performing different tasks often identifies areas of improvement for the system.
Evaluation methods:
Metrics While conducting usability tests, designers must decide what they are going to measure, i.e., choose usability metrics. These metrics are often variable and change in conjunction with the scope and goals of the project. The number of subjects being tested can also affect usability metrics, as it is often easier to focus on specific demographics. Qualitative design phases, such as general usability (can the task be accomplished?) and user satisfaction, are also typically done with smaller groups of subjects. Using inexpensive prototypes on small user groups provides more detailed information, because of the more interactive atmosphere and the designer's ability to focus more on the individual user.
Evaluation methods:
As designs become more complex, the testing must become more formalized. Testing equipment becomes more sophisticated and testing metrics become more quantitative. With a more refined prototype, designers often test effectiveness, efficiency, and subjective satisfaction by asking the user to complete various tasks. These categories are measured by the percentage of users who complete the task, how long it takes to complete the tasks, ratios of success to failure in completing the task, time spent on errors, the number of errors, satisfaction rating scales, the number of times the user seems frustrated, and so on. Additional observations of the users give designers insight into navigation difficulties, controls, conceptual models, etc. The ultimate goal of analyzing these metrics is to find or create a prototype design that users like and use to successfully perform given tasks. After conducting usability tests, it is important for a designer to record what was observed, in addition to why such behavior occurred, and to modify the model according to the results. It is often quite difficult to distinguish the source of design errors from what the user did wrong. However, effective usability tests will not generate a solution to the problems, but provide modified design guidelines for continued testing.
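The quantitative measures named above reduce to simple arithmetic over per-participant observations. A sketch with hypothetical test data:

```python
from statistics import mean

# Hypothetical per-participant results for a single task.
results = [
    {"completed": True,  "seconds": 48,  "errors": 1},
    {"completed": True,  "seconds": 62,  "errors": 0},
    {"completed": False, "seconds": 120, "errors": 4},
    {"completed": True,  "seconds": 55,  "errors": 2},
]

completion_rate = 100 * sum(r["completed"] for r in results) / len(results)
mean_time = mean(r["seconds"] for r in results if r["completed"])
mean_errors = mean(r["errors"] for r in results)

print(f"completion rate: {completion_rate:.0f}%")           # 75%
print(f"mean time on task (successes): {mean_time:.1f} s")  # 55.0 s
print(f"mean errors per participant: {mean_errors:.2f}")    # 1.75
```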
Evaluation methods:
Remote usability testing Remote usability testing (also known as unmoderated or asynchronous usability testing) involves the use of a specially modified online survey, allowing the quantification of user testing studies by providing the ability to generate large sample sizes, or a deep qualitative analysis without the need for dedicated facilities. Additionally, this style of user testing provides an opportunity to segment feedback by demographic, attitudinal, and behavioral type. The tests are carried out in the user's own environment (rather than labs), helping further simulate real-life scenarios. This approach also provides a vehicle to easily solicit feedback from users in remote areas. There are two types, quantitative and qualitative. Quantitative studies use large sample sizes and task-based surveys; these are useful for validating suspected usability issues. Qualitative studies are best used as exploratory research, with small sample sizes but frequent, even daily, iterations. Qualitative testing usually allows for observing respondents' screens and verbal think-aloud commentary (Screen Recording Video, SRV), and, for a richer level of insight, may also include the webcam view of the respondent (Video-in-Video, ViV, sometimes referred to as Picture-in-Picture, PiP). Remote usability testing for mobile devices The growth in mobile and associated platforms and services (e.g., mobile gaming experienced 20x growth from 2010 to 2012) has generated a need for unmoderated remote usability testing on mobile devices, for websites but especially for app interactions. One methodology consists of shipping cameras and special camera-holding fixtures to dedicated testers and having them record the screens of the mobile smartphone or tablet device, usually using an HD camera. A drawback of this approach is that the finger movements of the respondent can obscure the view of the screen, in addition to the bias and logistical issues inherent in shipping special hardware to selected respondents. A newer approach uses a wireless projection of the mobile device screen onto the computer desktop screen of the respondent, who can then be recorded through their webcam; this yields a combined Video-in-Video view of the participant and the screen interactions viewed simultaneously, while incorporating the verbal think-aloud commentary of the respondents.
Evaluation methods:
Thinking aloud The think-aloud protocol is a method of gathering data that is used in both usability and psychology studies. It involves getting a user to verbalize their thought processes (i.e., expressing their opinions, thoughts, anticipations, and actions) as they perform a task or set of tasks. As a widespread method of usability testing, think-aloud provides researchers with the ability to discover what users really think during task performance and completion. Often an instructor is present to prompt the user into being more vocal as they work. Similar to the subjects-in-tandem method, it is useful in pinpointing problems and is relatively simple to set up. Additionally, it can provide insight into the user's attitude, which cannot usually be discerned from a survey or questionnaire.
Evaluation methods:
RITE method Rapid Iterative Testing and Evaluation (RITE) is an iterative usability method similar to traditional "discount" usability testing. The tester and team must define a target population for testing, schedule participants to come into the lab, decide how the users' behaviors will be measured, construct a test script, and have participants engage in a verbal protocol (e.g., think aloud). However, it differs from these methods in that it advocates that changes to the user interface are made as soon as a problem is identified and a solution is clear. Sometimes this can occur after observing as few as one participant. Once the data for a participant have been collected, the usability engineer and team decide if they will make any changes to the prototype prior to the next participant. The changed interface is then tested with the remaining users.
Evaluation methods:
Subjects-in-tandem or co-discovery Subjects-in-tandem (also called co-discovery) is the pairing of subjects in a usability test to gather important information on the ease of use of a product. Subjects tend to discuss the tasks they have to accomplish out loud, and through these discussions observers learn where the problem areas of a design are. To encourage co-operative problem-solving between the two subjects, and the attendant discussions leading to it, the tests can be designed to make the subjects dependent on each other by assigning them complementary areas of responsibility (e.g., for software testing, one subject may be put in charge of the mouse and the other of the keyboard). Component-based usability testing Component-based usability testing is an approach which aims to test the usability of elementary units of an interaction system, referred to as interaction components. The approach includes component-specific quantitative measures based on user interaction recorded in log files, and component-based usability questionnaires.
Evaluation methods:
Other methods Cognitive walkthrough Cognitive walkthrough is a method of evaluating the user interaction of a working prototype or final product. It is used to evaluate the system's ease of learning. Cognitive walkthrough is useful for understanding the user's thought processes and decision making when interacting with a system, especially for first-time or infrequent users.
Evaluation methods:
Benchmarking Benchmarking creates standardized test materials for a specific type of design. Four key characteristics are considered when establishing a benchmark: time to do the core task, time to fix errors, time to learn applications, and the functionality of the system. Once there is a benchmark, other designs can be compared to it to determine the usability of the system. Many of the common objectives of usability studies, such as trying to understand user behavior or exploring alternative designs, must be put aside. Unlike many other usability methods or types of labs studies, benchmark studies more closely resemble true experimental psychology lab studies, with greater attention to detail on methodology, study protocol and data analysis.
Evaluation methods:
Meta-analysis Meta-analysis is a statistical procedure for combining results across studies to integrate the findings. The term was coined in 1976 to describe a quantitative literature review. This type of evaluation is very powerful for determining the usability of a device because it combines multiple studies to provide very accurate quantitative support.
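In the simplest (fixed-effect) case, a meta-analysis pools study estimates with inverse-variance weights, so more precise studies count for more:

```latex
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{\hat{\sigma}_i^{2}},
```

where $\hat{\theta}_i$ is the effect estimate from study $i$ and $\hat{\sigma}_i^{2}$ its variance. Random-effects models extend this by adding a between-study variance component to each weight.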
Evaluation methods:
Persona Personas are fictitious characters created to represent a site or product's different user types and their associated demographics and technographics. Alan Cooper introduced the concept of using personas as a part of interaction design in 1998 in his book The Inmates Are Running the Asylum, but had used the concept since as early as 1975. Personas are a usability evaluation method that can be used at various design stages. The most typical time to create personas is at the beginning of design, so that designers have a tangible idea of who the users of their product will be. Personas are archetypes that represent actual groups of users and their needs, which can be a general description of a person, context, or usage scenario. This technique turns marketing data on the target user population into a few concrete representations of users to create empathy among the design team, with the final aim of tailoring a product more closely to how the personas will use it. To gather the marketing data that personas require, several tools can be used, including online surveys, web analytics, customer feedback forms, usability tests, and interviews with customer-service representatives.
Benefits:
The key benefits of usability are: higher revenues through increased sales; increased user efficiency and user satisfaction; reduced development costs; reduced support costs; and corporate integration. An increase in usability generally positively affects several facets of a company's output quality. In particular, the benefits fall into several common areas: increased productivity; decreased training and support costs; increased sales and revenues; reduced development time and costs; reduced maintenance costs; and increased customer satisfaction. Increased usability in the workplace fosters several responses from employees: "Workers who enjoy their work do it better, stay longer in the face of temptation, and contribute ideas and enthusiasm to the evolution of enhanced productivity." To create standards, companies often implement experimental design techniques that create baseline levels. Areas of concern in an office environment include (though are not necessarily limited to): working posture, design of workstation furniture, screen displays, input devices, organizational issues, office environment, and software interface. By working to improve these factors, corporations can achieve their goals of increased output at lower costs, while potentially creating optimal levels of customer satisfaction. There are numerous reasons why each of these factors correlates to overall improvement. For example, making software user interfaces easier to understand reduces the need for extensive training. An improved interface tends to lower the time needed to perform tasks, and so would both raise the productivity levels for employees and reduce development time (and thus costs). The aforementioned factors are not mutually exclusive; rather, they should be understood to work in conjunction to form the overall workplace environment. In the 2010s, usability became recognized as an important software quality attribute, earning its place among more traditional attributes such as performance, robustness, and aesthetic appearance. Various academic programs focus on usability. Several usability consultancy companies have emerged, and traditional consultancy and design firms offer similar services.
Benefits:
There is some resistance to integrating usability work in organisations. Usability is seen as a vague concept; it is difficult to measure, and other areas are prioritised when IT projects run out of time or money.
Professional development:
Usability practitioners are sometimes trained as industrial engineers, psychologists, kinesiologists, systems design engineers, or with a degree in information architecture, information or library science, or Human-Computer Interaction (HCI). More often, though, they are people trained in specific applied fields who have taken on a usability focus within their organization. Anyone who aims to make tools easier to use and more effective for their desired function within the context of work or everyday living can benefit from studying usability principles and guidelines. For those seeking to extend their training, the User Experience Professionals' Association offers online resources, reference lists, courses, conferences, and local chapter meetings. The UXPA also sponsors World Usability Day each November. Related professional organizations include the Human Factors and Ergonomics Society (HFES) and the Association for Computing Machinery's special interest groups in Computer Human Interaction (SIGCHI), Design of Communication (SIGDOC), and Computer Graphics and Interactive Techniques (SIGGRAPH). The Society for Technical Communication also has a special interest group on Usability and User Experience (UUX). They publish a quarterly newsletter called Usability Interface. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Interstellar Wars**
Interstellar Wars:
Interstellar Wars is a 1982 board game published by Attactix.
Gameplay:
Interstellar Wars is a strategic game for two players, focusing on conflict between galactic empires.
Reception:
Tony Watson reviewed Interstellar Wars in Space Gamer No. 66. Watson commented that "As a first SF game from a new company, Interstellar Wars is adequate, but not outstanding. It certainly avoids the appellation of 'turkey' - but it hits wide of the 'classic' mark as well." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Irbesartan**
Irbesartan:
Irbesartan, sold under the brand name Avapro among others, is a medication used to treat high blood pressure, heart failure, and diabetic kidney disease. It is a reasonable initial treatment for high blood pressure. It is taken by mouth. Versions are available as the combination irbesartan/hydrochlorothiazide. Common side effects include dizziness, diarrhea, feeling tired, muscle pain, and heartburn. Serious side effects may include kidney problems, low blood pressure, and angioedema. Use in pregnancy may harm the baby, and use when breastfeeding is not recommended. It is an angiotensin II receptor antagonist and works by blocking the effects of angiotensin II. Irbesartan was patented in 1990 and approved for medical use in 1997. It is available as a generic medication. In 2020, it was the 148th most commonly prescribed medication in the United States, with more than 4 million prescriptions.
Structure activity relationship:
Irbesartan has the common structural features seen within the angiotensin II receptor blocker (ARB) class. The molecule has an extended biphenyl group with a tetrazole at the 2' position. At the 4' position, the molecule carries a diazaspiro[4.4]nonenone ring attached via a methylene group.
Medical uses:
Irbesartan is used for the treatment of hypertension. It may also delay progression of diabetic nephropathy, and is indicated for the reduction of renal disease progression in patients with type 2 diabetes, hypertension, and microalbuminuria (>30 mg/24 h) or proteinuria (>900 mg/24 h).
Combination with diuretic Irbesartan is also available in a fixed-dose combination formulation with hydrochlorothiazide, a thiazide diuretic, to achieve an additive antihypertensive effect. Irbesartan/hydrochlorothiazide combination preparations are marketed under various brand names.
Society and culture:
Brand names It was developed by Sanofi Research (part of Sanofi-Aventis). It is jointly marketed by Sanofi-Aventis and Bristol-Myers Squibb under the brand names Aprovel, Karvea, and Avapro.
Recalls In 2018, the US Food and Drug Administration (FDA) reported that some versions of the angiotensin II receptor blocker medicines (including valsartan, losartan, irbesartan and other "-sartan" drugs) contain nitrosamine impurities. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kamaelia**
Kamaelia:
Kamaelia is a free software/open source Python-based systems-development tool and concurrency framework produced by BBC Research & Development.
Kamaelia applications are produced by linking independent components together. These components communicate entirely through "inboxes" and "outboxes" (queues), largely removing the burdens of thread-safety and IPC from the developer. This also makes components reusable in different systems, allows easy unit testing, and results in parallelism (between components) by default.
Components are generally implemented as generators, an approach more lightweight than allocating a thread to each (though threads are also supported). As a result, switching between the execution of components in Kamaelia systems is very fast.
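A minimal sketch of a Kamaelia-style component, based on the documented Axon API (component subclass, generator main(), inbox/outbox messaging); exact details may vary between Kamaelia versions:

```python
from Axon.Component import component

class Uppercaser(component):
    """Reads strings from its inbox and sends uppercased copies to its outbox."""
    def main(self):
        while True:
            # Drain any waiting messages, then yield control to the scheduler.
            while self.dataReady("inbox"):
                msg = self.recv("inbox")
                self.send(msg.upper(), "outbox")
            yield 1  # generator yield lets other components run

# Components are typically composed with a chassis such as Pipeline, e.g.:
#   from Kamaelia.Chassis.Pipeline import Pipeline
#   Pipeline(SomeSource(), Uppercaser(), SomeSink()).run()
# (SomeSource and SomeSink are hypothetical placeholder components.)
```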
Applications that have been produced using Kamaelia include a Freeview digital video recorder, a network-shared whiteboard, a 3D GUI, an HTTP Server, an audio mixer, a stream multicasting system and a simple BitTorrent client.
License change:
Kamaelia's license changed in July 2010 from the Mozilla tri-license (MPL, GPL and LGPL) to the Apache License, with a note that usage under the old licensing scheme remained permitted if necessary (due to license incompatibilities), since the rationale for the change was to make the codebase more usable by developers, not less. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Malvern Panalytical**
Malvern Panalytical:
Malvern Panalytical is a Spectris plc company. The company is a manufacturer and supplier of laboratory analytical instruments. It has been influential in the development of the Malvern Correlator, and it remains notable for its work in the advancement of particle sizing technology. The company produces technology for materials analysis; its principal instruments are designed to measure the size, shape, and charge of particles. Additional areas of development include equipment for rheology measurements, chemical imaging, and chromatography. In 2017, they merged with PANalytical to form Malvern Panalytical Ltd.
History:
Malvern Instruments Ltd. was incorporated in 1971. In 1977, Malvern Instruments was recognised by the Royal Academy of Engineering, jointly with the Royal Signals and Radar Establishment (RSRE), for developing the Malvern Correlator. It also received the MacRobert Award for Outstanding Technical Innovation (1977), the Queen's Award for Technological Achievement (1977), the Queen's Award for Export Achievement (1981), and the Queen's Award for Export & Technology (1988). In 1992, Burnfield acquired Malvern Instruments from Cray Electronics Holdings, and in 1996 it acquired A3 Water Solutions GmbH, a Stuttgart-based specialist in the design, marketing, and manufacturing of air and liquid particle counters. In 1997, the Fairey Aviation Company acquired Malvern Instruments, along with Insitec Inc, from Burnfield PLC. The holding company changed its name to Spectris plc in 2001.
History:
In 2003, they acquired Bohlin Instruments Ltd, a Gloucestershire-based manufacturer of rheology and viscosity instruments. They also acquired Spectral Dimensions Inc, a manufacturer of infrared chemical imaging instruments, in 2006. Malvern received the 2006 Queen's Award for International Trade. Viscotek Corp, a manufacturer of chromatographic and laboratory equipment and supplies, was acquired in 2008, as well as Reologica Instruments AB, a Lund-based manufacturer of rheology and viscometry instrumentation, in 2010. Malvern received the 2010 Queen's Award for Innovation. The company was also listed as a 2010 winner of the annual Queen's Awards for Enterprise for its work measuring particles in fluids. In 2013, they acquired NanoSight, a Wiltshire-based manufacturer of nanoparticle characterization instruments, and, in 2014, the Northampton-based manufacturer of thermodynamic analysis instruments, MicroCal Instruments, was acquired from GE Lifesciences. In 2017, they merged with PANalytical to form Malvern Panalytical Ltd. That same year, Malvern Panalytical released their X-ray fluorescence (XRF) spectrometer Epsilon, which was specifically designed for small spot analysis. In 2018, Malvern Panalytical unveiled Empyrean, the first fully automated multipurpose X-ray diffractometer; Claisse LeDoser-12, an automatic dispensing balance; Morphologi Range, a new morphologically-directed Raman spectroscopy system; and Epsilon 4, a benchtop X-ray fluorescence spectrometer. Malvern Panalytical launched a new partnership with SCOTT Technology Ltd., a supplier of sample preparation equipment, in 2020. Their contract included engineering a fully automated robotic analytical system, incorporating fusion bead sample preparation, implementing X-ray spectrometry instrumentation, and developing thermogravimetric analysis (TGA) equipment. The company also entered a partnership with Concept Life Sciences that year. Netzsch acquired Malvern Panalytical’s rheometer product lines in February 2020. In this acquisition, Malvern Panalytical extended Netzsch’s product portfolio by providing Kinexus rotational rheometers and Rosand capillary rheometers. In September 2020, Malvern Panalytical received Physikalisch-Technische Bundesanstalt (PTB) type approval, as a “full-protection” X-ray instrument, for its Aeris range of benchtop XRD diffractometers.
Business model:
Malvern Instruments began with a focus on particle sizing. As it grew, this focus broadened toward developing a "broad portfolio of analytical solutions". In 2014, the company's CTO expressed the company's focus as "We want to solve analytical bottlenecks". In order to maintain agility and currency in product development, the company built an isolated internal division, the Bioscience Development Initiative, based in Columbia, Maryland, which has an entrepreneurial character and freedom from corporate management constraints; the unit aims to rapidly develop technologies in partnership with scientists and engineers from the pharmaceutical and other industries and academia. This unit focuses on the biopharmaceutical sector, specifically formulation of drug products. PANalytical originally began in 1948 as a branch of Philips under the name of Philips Analytical, which developed XRF (X-ray fluorescence) and XRD (X-ray diffraction) equipment. In 2002, Philips Analytical was officially renamed PANalytical after Spectris’ acquisition of this X-ray analytical branch. Malvern later merged with PANalytical to become Malvern Panalytical Ltd. in 2017.
Operations:
As part of the materials analysis sector, Malvern Panalytical derives most of its revenue through sales of a range of particle and material characterisation instruments. These systems have applications across many industries including: pharmaceuticals, life sciences, metallurgy, mining, semiconductors, polymer science, protein science and food production.
Products:
Spraytec droplet size systems, Mastersizer laser diffraction systems, Zetasizer particle size systems, wavelength dispersive X-ray fluorescence (WDXRF) wafer analyzers and spectrometers, Morphologi G4 particle characterisation systems, the Sysmex FPIA-3000 particle characterisation system, near-infrared chemical imaging (NIR-CI) systems, rheometry systems, and Viscotek chromatography systems. Full products include: 2830 ZT, Aeris, Archimedes, ASD range, Axios FAST, Claisse range, CNA range, Empyrean range, Epsilon range, Insitec range, Mastersizer range, MicroCal range, NanoSight range, OMNISEC, Parsum range, Spraytec, X’Pert, Zetasizer, and Zetium. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hush house**
Hush house:
A hush house is an enclosed, noise-suppressed facility used for testing aircraft systems, including propulsion, mechanics, electronics, pneumatics, and others. Installed or uninstalled jet engines can be run under actual load conditions.
Testing:
A hush house is large enough to accommodate an entire crewed or uncrewed aircraft. Some facilities are also equipped to test additional capabilities such as weight and balance, night vision and low lighting, water intrusion, heat soaking, and wind evaluation. Jet engines can be run while installed in the aircraft, which must be restrained by holdback devices to resist the engine thrust. Uninstalled engines (without the aircraft) can be tested while held in place by thrust frames.
Testing:
The air intake and exhaust systems of indoor engine test cells and hush houses are designed to block the transmission of noise, while optimizing the engine air flows. The engine exhaust, after having been thoroughly mixed with cooling air, is generally discharged through a vertical stack. The gas path incorporates acoustic damping panels (often containing fibrous insulation protected from gas stream erosion by metal mesh) to reduce the sound energy of the gas stream and attenuate the noise transmitted to the surrounding outdoor area.
Testing:
Because the engine exhaust flow is "augmented" with a relatively large flow of cooling air induced by a Venturi effect into the exhaust silencing system, the exhaust muffler of an indoor test facility is generally referred to as an augmenter tube, although the term "detuner" is commonly used in the UK. Some outdoor run-up facilities used to test aircraft engines (installed or uninstalled) may also be outfitted with noise control structures, called Ground Run-Up Enclosures.
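The entrainment mechanism can be sketched with Bernoulli's principle; treating the mixing region as approximately incompressible (a simplification for illustration), along a streamline

```latex
p + \tfrac{1}{2}\rho v^{2} = p_0 ,
```

so the high velocity $v$ of the exhaust jet depresses its static pressure $p$ below the ambient pressure $p_0$, drawing secondary cooling air into the augmenter tube. A common figure of merit for such ejector-style systems is the entrainment ratio $\phi = \dot{m}_{\text{cooling}} / \dot{m}_{\text{exhaust}}$ (our notation, used here only to illustrate the idea).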
Examples:
Marine Corps Air Station Miramar in San Diego, California; Marine Corps Air Station Iwakuni in Japan; Naval Air Station Jacksonville in Jacksonville, Florida; Naval Air Station Joint Reserve Base Fort Worth in Fort Worth, Texas; Naval Air Station Oceana in Virginia Beach, Virginia; and Naval Air Station Patuxent River in Maryland. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reliabilism**
Reliabilism:
Reliabilism, a category of theories in the philosophical discipline of epistemology, has been advanced as a theory both of justification and of knowledge. Process reliabilism has been used as an argument against philosophical skepticism, such as the brain in a vat thought experiment.
Process reliabilism is a form of epistemic externalism.
Overview:
A broadly reliabilist theory of knowledge is roughly as follows: One knows that p (p stands for any proposition—e.g., that the sky is blue) if and only if p is true, one believes that p is true, and one has arrived at the belief that p through some reliable process.
A broadly reliabilist theory of justified belief can be stated as follows: One has a justified belief that p if, and only if, the belief is the result of a reliable process.
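Stated schematically (the notation here is ours, not standard in the literature), with $B_S(p)$ for "S believes that p" and $\mathrm{Rel}(\cdot)$ for "was produced by a reliable process":

```latex
K_S(p) \iff p \,\wedge\, B_S(p) \,\wedge\, \mathrm{Rel}\big(B_S(p)\big),
\qquad
J_S(p) \iff B_S(p) \,\wedge\, \mathrm{Rel}\big(B_S(p)\big).
```

The justification condition $J_S(p)$ simply drops the truth requirement from the knowledge condition $K_S(p)$.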
Moreover, a similar account can be given (and an elaborate version of this has been given by Alvin Plantinga) for such notions as 'warranted belief' or 'epistemically rational belief'.
Overview:
Leading proponents of reliabilist theories of knowledge and justification have included Alvin Goldman, Marshall Swain, Kent Bach and more recently, Alvin Plantinga. Goldman's article "A Causal Theory of Knowing" (Journal of Philosophy, 64 (1967), pp. 357–372) is generally credited as being the first full treatment of the theory, though D. M. Armstrong is also regarded as an important source, and (according to Hugh Mellor) Frank Ramsey was the very first to state the theory, albeit in passing.
Overview:
One classical or traditional analysis of 'knowledge' is justified true belief. In order to have a valid claim of knowledge for any proposition, one must be justified in believing "p" and "p" must be true. Since Gettier proposed his counterexamples, the traditional analysis has included the further claim that knowledge must be more than justified true belief. Reliabilist theories of knowledge are sometimes presented as an alternative to that theory: rather than justification, all that is required is that the belief be the product of a reliable process. But reliabilism need not be regarded as an alternative; it can instead be seen as a further explication of the traditional analysis. On this view, those who offer reliabilist theories of justification further analyze the 'justification' part of the traditional analysis of 'knowledge' in terms of reliable processes. Not all reliabilists agree with such accounts of justification, but some do.
Objections:
Some find reliabilism of justification objectionable because it entails externalism, which is the view that one can have knowledge, or have a justified belief, despite not knowing (having "access" to) the evidence, or other circumstances, that make the belief justified. Most reliabilists maintain that a belief can be justified, or can constitute knowledge, even if the believer does not know about or understand the process that makes the belief reliable. In defending this view, reliabilists (and externalists generally) are apt to point to examples from simple acts of perception: if one sees a bird in the tree outside one's window and thereby gains the belief that there is a bird in that tree, one might not at all understand the cognitive processes that account for one's successful act of perception; nevertheless, it is the fact that the processes worked reliably that accounts for why one's belief is justified. In short, one finds one holds a belief about the bird, and that belief is justified if any is, but one is not acquainted at all with the processes that led to the belief. Another of the most common objections to reliabilism, made first to Goldman's reliable process theory of knowledge and later to other reliabilist theories, is the so-called generality problem. For any given justified belief (or instance of knowledge), one can easily identify many different (concurrently operating) "processes" from which the belief results. My belief that there is a bird in the tree outside my window might be regarded as a result of the process of forming beliefs on the basis of sense-perception, of visual sense-perception, of visual sense-perception through non-opaque surfaces in daylight, and so forth, down to a variety of very specifically described processes. Some of these processes might be statistically reliable, while others might not. It would no doubt be better to say, in any case, that we are choosing not which process to say resulted in the belief, but instead how to describe the process, out of the many different levels of generality on which it can be accurately described.
Objections:
A similar objection was formulated by Stephen Stich in The Fragmentation of Reason. Reliabilism usually holds that, to generate justified beliefs, a process needs to be reliable in a set of relevant possible scenarios. However, according to Stich, these scenarios are chosen in a culturally biased manner. Stich does not defend any alternative theory of knowledge or justification; instead, he argues that all accounts of normative epistemic terms are culturally biased and that only a pragmatic account can be given.
Objections:
Another objection to reliabilism is called the new evil demon problem. The evil demon problem originally motivated skepticism, but can be repurposed to object to reliabilist accounts as follows: if our experiences are controlled by an evil demon, it may be the case that we believe ourselves to be doing things that we are not doing. Intuitively, these beliefs still seem justified, yet on a reliabilist account they would not be, since the processes producing them are unreliable in the demon scenario. Robert Brandom has called for a clarification of the role of belief in reliabilist theories. Brandom is concerned that unless the role of belief is stressed, reliabilism may attribute knowledge to things that would otherwise be considered incapable of possessing it. Brandom gives the example of a parrot that has been trained to consistently respond to red visual stimuli by saying 'that's red'. The proposition is true, the mechanism that produced it is reliable, but Brandom is reluctant to say that the parrot knows it is seeing red because he thinks it cannot believe that it is. For Brandom, beliefs pertain to concepts: without the latter there can be no former. Concepts are products of the 'game of giving and asking for reasons'. Hence, only those entities capable of reasoning, through language in a social context, can for Brandom believe and thus have knowledge. Brandom may be regarded as hybridising externalism and internalism, allowing knowledge to be accounted for by a reliable external process so long as a knower possesses some internal understanding of why the belief is reliable. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cranioplasty**
Cranioplasty:
Cranioplasty is a surgical operation to repair cranial defects caused by previous injuries or operations, such as decompressive craniectomy. It is performed by filling the defective area with a range of materials, usually a bone piece from the patient or a synthetic material. Cranioplasty is carried out by incision and reflection of the scalp after applying anaesthetics and antibiotics to the patient. The temporalis muscle is reflected, and all surrounding soft tissues are removed, thus completely exposing the cranial defect. The cranioplasty flap is placed and secured on the cranial defect, and the wound is then sealed. Cranioplasty was closely related to trephination, and the earliest operation is dated to 3000 BC. Currently, the procedure is performed for both cosmetic and functional purposes. Cranioplasty can restore the normal shape of the skull and prevent other complications caused by a sunken scalp, such as the "syndrome of the trephined". Cranioplasty is a risky operation, with potential complications such as bacterial infection and bone flap resorption.
Etymology:
The word cranioplasty can be broken down into two parts: cranio- and -plasty. Cranio- originates from the Ancient Greek word κρανίον, meaning "cranium", while -plasty comes from the Ancient Greek word πλαστός, meaning "moulded" or "fashioned".
Medical uses:
The operation has cosmetic value, as the normal shape of the cranium is restored instead of a sunken skin flap, which may affect the confidence of patients. It also has therapeutic value, as the operation provides structure to the skull and protection to the brain from physical damage. The surgery restores regular cerebrospinal fluid (CSF) and cerebral blood flow dynamics, along with normal intracranial pressure. Cranioplasty may improve neurological function in some individuals. Furthermore, it can reduce the occurrence of headaches caused by injury or previous surgery. The optimal timing of cranioplasty is controversial in the literature. Some literature states that the time between a craniectomy and a cranioplasty is usually between 6 months and a year, while other sources state that the two operations should be more than a year apart. The timing of cranioplasty is affected by multiple factors. Sufficient time is required for the recovery of the incision from the previous operation, as well as to clear any infections (both systemic and cranial). Some findings showed that a greater infection rate is associated with early cranioplasty due to interruption of wound healing, as well as an increased incidence of hydrocephalus. Contrarily, there is evidence of early cranioplasty limiting complications caused by "syndrome of the trephined", including changes in cerebral blood flow and abnormal cerebrospinal fluid hydrodynamics. Other researchers reported no significant difference in infection rate with different operational timings. Contraindications are circumstances that indicate the treatment or operation should not be provided due to potential harm. Contraindications for cranioplasty include the presence of bacterial infection, brain swelling, and hydrocephalus. Cranioplasty is withheld until all contraindications are cleared.
Procedure:
Before the operation, CT scans and MRIs are taken to study the cranial defect, and the patient is given antibiotics to prevent bacterial infection. The patient is positioned on a foam donut or a horseshoe head holder for the operation. The patient is then anaesthetised, and an incision is made following the incision of the previous operation. The scalp and the temporalis muscle are reflected to completely reveal the cranial defect. Significant blood loss is observed, as new blood vessels formed in scar tissue are damaged by the incision. Any soft tissues at the edge of the defect are removed and the defect is cleaned. The cranioplasty material is placed on the defect and fixed to the surrounding skull with standard titanium plates and screws. CSF may be drained from the brain to reduce herniation. Small holes may be drilled in the bone graft or the prosthesis to prevent the accumulation of fluid under the repaired defect. Soft tissues, the temporalis, and the scalp are then fixed back in place. A subgaleal drain and dressing are applied to control facial swelling. After the operation, a CT scan is taken, and patients may stay in intensive care for at least a night for better neurological observation, or be placed in a regular care unit. The subgaleal drain and dressing are removed before the patient is discharged.
Procedure:
Children. Special considerations are made for children undergoing cranioplasty to accommodate their growing cranium, and certain materials are favoured compared with adult cranioplasty. Autologous bone grafts are the most preferred materials for paediatric cranioplasty, as they are accepted by the host and the bone flap can be integrated into the host's body. However, autologous bone pieces may be unavailable or unsuitable on certain occasions: children's bodies may be too small for bone flaps to be stored in their subcutaneous spaces, and cryopreservation facilities for bone grafts are not widely available. The use of autografts is also associated with a high rate of bone resorption. Synthetic materials are used for paediatric cranioplasty when the use of autografts is unavailable or not recommended. Hydroxyapatite is one option, as it allows the expansion of the cranium in children and can be moulded smoothly; it is less commonly used than autografts due to its brittle nature, high infection rate, and poor ability to integrate with the human cranium. Bilateral cranioplasties are more prone to complications than unilateral cranioplasties in children, which may be explained by the larger scalp wound area, higher volume of blood loss, and greater complexity and duration of the operation.
Risks:
Cranioplasty is an operation with a complication rate ranging from 15 to 41%. The cause of such a high complication rate compared with other neurosurgical operations is unclear; male patients and older patients are groups with higher complication rates. Complications occurring after cranioplasty include bacterial infection, bone flap resorption, wound dehiscence, hematoma, seizures, hygroma, and cerebrospinal fluid (CSF) leakage. The risk of bacterial infection in cranioplasty ranges from 5 to 12.8%. Multiple factors affect the risk of infection, one being the materials used for the operation. Titanium, whether custom-made or as a mesh, is associated with a lower infection rate; on the other hand, materials such as methyl methacrylate and autologous bone are associated with higher infection rates. Another risk factor is the location of the operation: bifrontal cranioplasties are associated with significantly higher infection rates and higher rates of reoperation. Other risk factors for infection include previous infections, contact between sinuses and the operation site, a devascularized scalp (loss of blood supply in the scalp), previous operations, and the type of injury. Bone resorption is another complication of cranioplasty, with a rate of 0.7–17.4%. Bone resorption occurs when the autologous graft has no blood supply due to devitalisation, or when scar tissue or soft tissue remains on the edge of the cranial defect during cranioplasty. Paediatric patients have a higher risk of resorption, with rates of up to 50%; resorption is more likely in this group when the cranioplasty is carried out more than 6 weeks after the previous operation. Fragmented bone flaps, as well as large bone flaps (>70 cm²), are associated with a higher resorption rate.
History:
Ancient history. The earliest cranioplasty operation is dated to 3000 BC in pre-Columbian Peruvian civilisation, where precious metals, gourds, and shells were found next to trepanned skulls in graveyards, suggesting that cranioplasty had been performed. In the Paracas region of present-day Peru, a skull from 2000 BC was found with a thin plate of gold covering a cranial defect. Moreover, defective skulls covered with coconut shells or palm leaves were found in ancient tribes of the Polynesian Islands. Sanan and Haines stated that the materials used for cranioplasty were associated with the status of the patient.
History:
Modern history. Research on cranioplasty was not emphasised among early surgical authors in ancient Asia, Egypt, Greece, and Rome, even though trephination was researched and practised in ancient Greece and Rome. More emphasis was placed on developing skills in dressing open wounds.
History:
The earliest modern description of cranioplasty was written by the surgeon Ibrahim bin Abdullah of the Ottoman Empire in his surgical book Alâim-i Cerrâhîn in 1505. The book mentioned the use of xenografts from Kangal dogs or goats as materials for cranioplasty; such materials were used because these animals were accessible near battlefields, where the procedure was likely to be performed. The first true description of cranioplasty in Europe was made by Fallopius in the 16th century, stating that the fractured cranium should be removed and a gold plate inserted if the dura was damaged. This was questioned by other practitioners of his time, out of concern that surgeons might keep the gold instead of using it for surgery. The first cranioplasty was reported by the Dutch surgeon Job Janszoon van Meekeren. The report described the use of a segment of a canine cranium as material for cranioplasty on a nobleman in Moscow. The operation was successful; however, the use of canine bone was not accepted by the church, and the man was forced to leave Russia. Since that first operation, bones from more animal species have been used as xenografts for cranioplasty, including dogs, apes, geese, rabbits, calves, eagles, oxen, and buffalos. In 1917, William Wayne Babcock reported the use of "soup bone", a piece of cooked and perforated animal bone, as a xenograft.
History:
Development of modern materials. The prevalence of head injuries increased in the 20th century with the advancement of armaments, particularly the use of hand grenades in trench warfare during World War I (WWI). Mortality from such injuries decreased thanks to the development of debris removal, wound closure, and the use of antibiotics, and cranioplasty techniques improved accordingly. Autografts, allografts, and synthetic materials are the main types of materials used for cranioplasty.
History:
Autografts, or autologous grafts, are body tissues taken from the patient. The first successful cranioplasty using an autograft was recorded in 1821, with the bone piece reinserted into the cranium; the operation achieved partial healing. Subsequently, more studies and operations were carried out with autografts. A successful case of reimplantation of cranial bone was reported by Sir William Macewen in 1885, popularising autografts as a material for cranioplasty. Succeeding operations used autografts taken from different parts of the patient's body, such as the tibia (leg bone), scapula (shoulder blade), ilium (hip bone), and sternum (chest bone), along with fat tissue and fascia. Allografts are tissues from another individual of the same species. The first use of allografts was reported in 1915 with cadaver cartilage by Morestin; another 32 cases of cranioplasty performed with cadaver cartilage were reported by Gosset in 1916. Cadaver cartilage was favoured during World War I for its malleability and resistance to infection, but its use declined because of its lack of significant calcification and strength. Cadaver skull was another type of allograft, reported as a cranioplasty material multiple times by Sicard and Dambrin from 1917 to 1919; it was not favoured owing to the high infection rate associated with its use. In the 1980s, the use of cadaver allograft discs for filling small holes gave satisfactory results, and there was a resurgence in the use of cadaver bone. However, cadaver bone, and allografts in general, are not the preferred materials in modern operations. The use of methyl methacrylate (PMMA) for cranioplasty was developed during World War II, and the material has been used extensively since 1954, when a large number of injuries created a high demand for cranioplasty. It becomes malleable when an exothermic reaction occurs between its powder form and benzoyl peroxide, allowing it to be moulded to the cranial defect. The advantages of PMMA are its malleability, low cost, high strength, and high durability; its disadvantages include vulnerability to infection, as bacteria may adhere to its fibrous layer, as well as its brittleness and lack of growth potential. Other common synthetic materials for cranioplasty include titanium and hydroxyapatite. Titanium was first used for cranioplasty in 1965. It can be used as a plate or a mesh, or 3D printed in a porous form. Titanium is non-ferromagnetic and non-corrosive, so it provokes little inflammatory reaction in the host. It is also robust, protecting the patient from trauma, and its use is associated with a lower infection rate. Disadvantages of titanium include its high cost, poor malleability, and disruption of CT scan images. Hydroxyapatite is a calcium phosphate compound arranged in a hexagonal structure. It bonds well chemically with bone, provokes little inflammatory reaction in the host, and has good osteointegration. It can expand and is used in paediatric cranioplasty; it can be moulded smoothly and gives appealing cosmetic results. However, the material is brittle, has low tensile strength, and is only suitable for small cranial defects, and its use is associated with a high infection rate. Hydroxyapatite is therefore often used with a titanium mesh to prevent fractures and improve osteointegration. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Enchant (software)**
Enchant (software):
Enchant is a free software project developed as part of the AbiWord word processor with the aim of unifying access to the various existing spell-checker software. Enchant wraps a common set of functionality present in a variety of existing products/libraries, and exposes a stable API/ABI for doing so. Where a library doesn't implement some specific functionality, Enchant will emulate it.
Enchant (software):
Enchant is capable of having multiple backends loaded at once. As of January 2021 it has support for 7 backends: Hunspell (the spell checker used by LibreOffice, Firefox and Google Chrome), Nuspell (a modern spell checker compatible with Hunspell dictionaries), Aspell (intended to replace Ispell), Hspell (Hebrew), Voikko (Finnish), Zemberek (Turkish), and AppleSpell (macOS). GNOME LaTeX and gedit rely on the gspell library, which uses Enchant. Enchant is currently licensed under the GNU Lesser General Public License (LGPL), with an additional permission notice saying that any plugin backend can be loaded and used by Enchant. This ensures that it can use the native spell checkers on various platforms (Mac OS X, Microsoft Office, Amazon Kindle, etc.), and users can use their favorite third-party product to do the job. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
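Enchant itself is a C library, but bindings exist for many languages. The following is a minimal sketch using the pyenchant Python binding, assuming pyenchant and an en_US dictionary (from any installed backend) are available:

```python
import enchant  # pyenchant, a Python binding to the Enchant C library

d = enchant.Dict("en_US")      # Enchant picks whichever backend provides en_US
print(d.check("unify"))        # True  -> correctly spelled
print(d.check("unifyy"))       # False -> misspelled
print(d.suggest("unifyy"))     # suggestions, e.g. ['unify', ...]; vary by backend
```

Because Enchant hides the backend behind a common API, the same code works whether the dictionary is ultimately served by Hunspell, Aspell, or a platform-native spell checker.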
**Place-permutation action**
Place-permutation action:
In mathematics, there are two natural interpretations of the place-permutation action of symmetric groups, in which the group elements act on positions or places. Each may be regarded as either a left or a right action, depending on the order in which one chooses to compose permutations. There are just two interpretations of the meaning of "acting by a permutation $\sigma$", but these lead to four variations, depending on whether maps are written on the left or right of their arguments. The presence of so many variations often leads to confusion. When regarding the group algebra of a symmetric group as a diagram algebra it is natural to write maps on the right so as to compute compositions of diagrams from left to right.
Maps written on the left:
First we assume that maps are written on the left of their arguments, so that compositions take place from right to left. Let $\mathfrak{S}_n$ be the symmetric group on $n$ letters, with compositions computed from right to left.
Maps written on the left:
Imagine a situation in which elements of $\mathfrak{S}_n$ act on the “places” (i.e., positions) of something. The places could be the vertices of a regular polygon of $n$ sides, the tensor positions of a simple tensor, or even the inputs of a polynomial of $n$ variables. So we have $n$ places, numbered in order from 1 to $n$, occupied by $n$ objects that we can number $x_1, \dots, x_n$. In short, we can regard our items as a word $x = x_1 \cdots x_n$ of length $n$ in which the position of each element is significant. Now what does it mean to act by “place-permutation” on $x$? There are two possible answers: an element $\sigma \in \mathfrak{S}_n$ can move the item in the $j$th place to the $\sigma(j)$th place, or it can do the opposite, moving an item from the $\sigma(j)$th place to the $j$th place. Each of these interpretations of the meaning of an “action” by $\sigma$ (on the places) is equally natural, and both are widely used by mathematicians. Thus, when encountering an instance of a "place-permutation" action one must take care to determine from the context which interpretation is intended, if the author does not give specific formulas.
Maps written on the left:
Consider the first interpretation. The following descriptions are all equivalent ways to describe its rule: For each $j$, move the item in the $j$th place to the $\sigma(j)$th place.
For each $j$, move the item in the $\sigma^{-1}(j)$th place to the $j$th place.
Maps written on the left:
For each $j$, replace the item in the $j$th position by the one that was in the $\sigma^{-1}(j)$th place. This action may be written as the rule $x_1 \cdots x_n \overset{\sigma}{\longrightarrow} x_{\sigma^{-1}(1)} \cdots x_{\sigma^{-1}(n)}$.
Maps written on the left:
Now if we act on this by another permutation $\tau$ then we need first to relabel the items by writing $y_1 \cdots y_n = x_{\sigma^{-1}(1)} \cdots x_{\sigma^{-1}(n)}$. Then $\tau$ takes this to
$$y_{\tau^{-1}(1)} \cdots y_{\tau^{-1}(n)} = x_{\sigma^{-1}\tau^{-1}(1)} \cdots x_{\sigma^{-1}\tau^{-1}(n)} = x_{(\tau\sigma)^{-1}(1)} \cdots x_{(\tau\sigma)^{-1}(n)}.$$
This proves that the action is a left action: $\tau \cdot (\sigma \cdot x) = (\tau\sigma) \cdot x$.
Now we consider the second interpretation of the action of $\sigma$, which is the opposite of the first. The following descriptions of the second interpretation are all equivalent: For each $j$, move the item in the $j$th place to the $\sigma^{-1}(j)$th place.
For each $j$, move the item in the $\sigma(j)$th place to the $j$th place.
Maps written on the left:
For each $j$, replace the item in the $j$th position by the one that was in the $\sigma(j)$th place. This action may be written as the rule $x_1 \cdots x_n \overset{\sigma}{\longrightarrow} x_{\sigma(1)} \cdots x_{\sigma(n)}$.
Maps written on the left:
In order to act on this by another permutation $\tau$, again we first relabel the items by writing $y_1 \cdots y_n = x_{\sigma(1)} \cdots x_{\sigma(n)}$. Then the action of $\tau$ takes this to
$$y_{\tau(1)} \cdots y_{\tau(n)} = x_{\sigma\tau(1)} \cdots x_{\sigma\tau(n)} = x_{(\sigma\tau)(1)} \cdots x_{(\sigma\tau)(n)}.$$
This proves that our second interpretation of the action is a right action: $(x \cdot \sigma) \cdot \tau = x \cdot (\sigma\tau)$.
Maps written on the left:
Example. If $\sigma = (1,2,3)$ is the 3-cycle $1 \to 2 \to 3 \to 1$ and $\tau = (1,3)$ is the transposition $1 \to 3 \to 1$, then since we write maps on the left of their arguments we have
$$\sigma\tau = (1,2,3)(1,3) = (2,3), \quad \tau\sigma = (1,3)(1,2,3) = (1,2).$$
Maps written on the left:
Using the first interpretation we have $x = x_1x_2x_3 \overset{\sigma}{\longrightarrow} x_3x_1x_2 \overset{\tau}{\longrightarrow} x_2x_1x_3$, which agrees with the action of $\tau\sigma = (1,2)$ on $x = x_1x_2x_3$. So $\tau \cdot (\sigma \cdot x) = (\tau\sigma) \cdot x$.
Maps written on the left:
On the other hand, if we use the second interpretation, we have $x = x_1x_2x_3 \overset{\sigma}{\longrightarrow} x_2x_3x_1 \overset{\tau}{\longrightarrow} x_1x_3x_2$, which agrees with the action of $\sigma\tau = (2,3)$ on $x = x_1x_2x_3$. So $(x \cdot \sigma) \cdot \tau = x \cdot (\sigma\tau)$.
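These example computations are easy to check by machine. Below is a quick sketch in Python (the function names and the dict-based permutation encoding are our own choices, not from the source) that implements both interpretations and verifies the left- and right-action laws for $\sigma = (1,2,3)$, $\tau = (1,3)$:

```python
def compose(g, f):
    """(g∘f)(j) = g(f(j)) -- the 'maps written on the left' composition."""
    return {j: g[f[j]] for j in f}

def inverse(f):
    return {v: k for k, v in f.items()}

def act_first(sigma, word):
    """First interpretation: the item in place j moves to place sigma(j),
    so the new place j holds the item from place sigma^{-1}(j)."""
    inv = inverse(sigma)
    return tuple(word[inv[j] - 1] for j in range(1, len(word) + 1))

def act_second(sigma, word):
    """Second interpretation: the new place j holds the item from place sigma(j)."""
    return tuple(word[sigma[j] - 1] for j in range(1, len(word) + 1))

sigma = {1: 2, 2: 3, 3: 1}   # the 3-cycle (1,2,3)
tau   = {1: 3, 2: 2, 3: 1}   # the transposition (1,3)
x = ("x1", "x2", "x3")

# First interpretation is a left action: tau.(sigma.x) == (tau*sigma).x
assert act_first(tau, act_first(sigma, x)) == act_first(compose(tau, sigma), x)
# Second interpretation is a right action: (x.sigma).tau == x.(sigma*tau)
assert act_second(tau, act_second(sigma, x)) == act_second(compose(sigma, tau), x)

print(act_first(sigma, x))   # ('x3', 'x1', 'x2')
print(act_second(sigma, x))  # ('x2', 'x3', 'x1')
```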
Maps written on the right:
Sometimes people like to write maps on the right of their arguments. This is a convenient convention to adopt when working with symmetric groups as diagram algebras, for instance, since then one may read compositions from left to right instead of from right to left. The question is: how does this affect the two interpretations of the place-permutation action of a symmetric group? The answer is simple. By writing maps on the right instead of on the left we are reversing the order of composition, so in effect we replace $\mathfrak{S}_n$ by its opposite group $\mathfrak{S}_n^{\text{op}}$. This is the same group, but with the order of compositions reversed.
Maps written on the right:
Reversing the order of compositions evidently changes left actions into right ones, and vice versa, changes right actions into left ones. This means that our first interpretation becomes a right action while the second becomes a left one.
Maps written on the right:
In symbols, this means that the action $x_1 \cdots x_n \overset{\sigma}{\longrightarrow} x_{1\sigma^{-1}} \cdots x_{n\sigma^{-1}}$ is now a right action, while the action $x_1 \cdots x_n \overset{\sigma}{\longrightarrow} x_{1\sigma} \cdots x_{n\sigma}$ is now a left action.
Maps written on the right:
Example. We let $\sigma = (1,2,3)$ be the 3-cycle $1 \to 2 \to 3 \to 1$ and $\tau = (1,3)$ the transposition $1 \to 3 \to 1$, as before. Since we now write maps on the right of their arguments we have
$$\sigma\tau = (1,2,3)(1,3) = (1,2), \quad \tau\sigma = (1,3)(1,2,3) = (2,3).$$
Maps written on the right:
Using the first interpretation we have $x = x_1x_2x_3 \overset{\sigma}{\longrightarrow} x_3x_1x_2 \overset{\tau}{\longrightarrow} x_2x_1x_3$, which agrees with the action of $\sigma\tau = (1,2)$ on $x = x_1x_2x_3$. So $(x \cdot \sigma) \cdot \tau = x \cdot (\sigma\tau)$.
Maps written on the right:
On the other hand, if we use the second interpretation, we have $x = x_1x_2x_3 \overset{\sigma}{\longrightarrow} x_2x_3x_1 \overset{\tau}{\longrightarrow} x_1x_3x_2$, which agrees with the action of $\tau\sigma = (2,3)$ on $x = x_1x_2x_3$. So $\tau \cdot (\sigma \cdot x) = (\tau\sigma) \cdot x$.
Summary:
In conclusion, we summarize the four possibilities considered in this article. Although there are four variations, there are still only two different ways of acting; the four variations arise from the choice of writing maps on the left or right, a choice which is purely a matter of convention. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rubidium chloride**
Rubidium chloride:
Rubidium chloride is the chemical compound with the formula RbCl. This alkali metal halide salt is composed of rubidium and chlorine, and finds diverse uses ranging from electrochemistry to molecular biology.
Structure:
In its gas phase, RbCl is diatomic with a bond length estimated at 2.7868 Å. This distance increases to 3.285 Å for cubic RbCl, reflecting the higher coordination number of the ions in the solid phase. Depending on conditions, solid RbCl exists in one of three arrangements or polymorphs, as determined with holographic imaging: Sodium chloride (octahedral 6:6). The sodium chloride (NaCl) polymorph is most common: a cubic close-packed arrangement of chloride anions with rubidium cations filling the octahedral holes. Both ions are six-coordinate in this arrangement. The lattice energy of this polymorph is only 3.2 kJ/mol less than that of the following structure.
Structure:
Caesium chloride (cubic 8:8). At high temperature and pressure, RbCl adopts the caesium chloride (CsCl) structure (NaCl and KCl undergo the same structural change at high pressures). Here, the chloride anions form a simple cubic arrangement, occupying the vertices of a cube surrounding a central Rb+. This is RbCl's densest packing motif. Because a cube has eight vertices, both ions' coordination numbers equal eight; this is RbCl's highest possible coordination number. Therefore, according to the radius ratio rule, cations in this polymorph reach their largest apparent radius, because the anion–cation distances are greatest.
Structure:
Sphalerite (tetrahedral 4:4). The sphalerite polymorph of rubidium chloride has not been observed experimentally. This is consistent with theory: the lattice energy is predicted to be nearly 40.0 kJ/mol smaller in magnitude than those of the preceding structures.
Synthesis and reaction:
The most common preparation of pure rubidium chloride involves the reaction of its hydroxide with hydrochloric acid, followed by recrystallization: RbOH + HCl → RbCl + H2O. Because RbCl is hygroscopic, it must be protected from atmospheric moisture, e.g. using a desiccator. RbCl is primarily used in laboratories; numerous suppliers produce it in smaller quantities as needed, and it is offered in a variety of forms for chemical and biomedical research.
Synthesis and reaction:
Rubidium chloride reacts with sulfuric acid to give rubidium hydrogen sulfate.
Radioactivity:
Every 18 mg of rubidium chloride is equivalent to approximately one banana equivalent dose due to the large fraction (27.8%) of naturally-occurring radioactive isotope rubidium-87.
Uses:
Rubidium chloride is used as a gasoline additive to improve its octane number.
Rubidium chloride has been shown to modify coupling between circadian oscillators via reduced photic input to the suprachiasmatic nuclei. The outcome is a more equalized circadian rhythm, even for stressed organisms.
Rubidium chloride is an excellent non-invasive biomarker. The compound dissolves well in water and can readily be taken up by organisms. Once in the body, Rb+ replaces K+ in tissues, because the two elements are from the same chemical group. An example of this is the use of a radioactive isotope to evaluate perfusion of heart muscle.
Rubidium chloride transformation of competent cells is arguably the compound's most common use. Cells treated with a hypotonic solution containing RbCl expand; the resulting expulsion of membrane proteins allows negatively charged DNA to bind.
Rubidium chloride has shown antidepressant effects in experimental human studies, in doses ranging from 180 to 720 mg. It purportedly works by elevating dopamine and norepinephrine levels, resulting in a stimulating effect, which would be useful for anergic and apathetic depression. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pitchers (ceramic material)**
Pitchers (ceramic material):
Pitchers are pottery that has been broken in the course of manufacture. Biscuit (unglazed) pitchers can be crushed, ground, and re-used, either as a low-percentage addition to the virgin raw materials at the same factory, or elsewhere as grog. Because of the adhering glaze, glost pitchers find less use. The crushed material can also be used in other industries as an inert filler.
Pitchers (ceramic material):
Archaeologists call ancient pitchers sherds or shards; fragments bearing inscriptions are known as ostraca or ostracons. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Evelyn effect**
Evelyn effect:
The Evelyn effect is defined as the phenomenon in which the product ratios of a chemical reaction change as the reaction proceeds. This appears to contradict a fundamental principle of organic chemistry: reactions always proceed by the lowest-energy pathway, so the favored product should remain the same throughout a reaction run at constant conditions. However, the ratio of alkenes collected before the synthesis is complete shows that the product favored early on is not the product favored later. The basic idea is that the proportions of the various alkene products change as a function of time, reflecting a change in mechanism.
Background on discovery:
Professor David Todd at Pomona College was testing the dehydration of 2-methylcyclohexanol or 4-methylcyclohexanol c. 1994 and unexpectedly interrupted the alkene distillation midway to have lunch with his secretary, Evelyn Jacoby. After lunch, he continued the distillation but kept the early products separate from the later ones. Analysis showed two different alkene ratios: the reaction products, and the pathways to them, seemed to have changed over time. Dr. Todd called this phenomenon the “Evelyn effect.”
Dehydration of 2-methylcyclohexanol or 4-methylcyclohexanol:
A simple example of the Evelyn effect is the sophomore-level chemistry lab experiment involving the two popular dehydrations listed below.
a) Dehydration of 4-methylcyclohexanol
b) Dehydration of 2-methylcyclohexanol
c) Mechanism for the dehydration of 2-methylcyclohexanol
Possible explanations of different ratio formations:
In general, if more than one alkene can be formed in an elimination reaction, the more stable alkene is the major product. There are two types of elimination reactions, E1 and E2. An E2 reaction is a one-step mechanism in which the carbon–hydrogen and carbon–leaving-group bonds break simultaneously to form a C=C pi bond. An E1 reaction proceeds by ionization of the carbon–leaving-group bond to give a carbocation intermediate, followed by deprotonation of the carbocation.
Possible explanations of different ratio formations:
For these two reactions, there are three possible products: 3-methylcyclohexene, 1-methylcyclohexene, and methylenecyclohexane. Each is produced at a different rate, and their ratios change over time. It is well known that the dehydration of the cis isomer is 30 times faster than that of the trans isomer. It then appears that the reaction proceeds mainly by a trans (anti-periplanar) elimination mechanism and, following the Zaitsev rule, 1-methylcyclohexene is preferentially formed in the early stages of the reaction. Indeed, if only about 10% of the total distillate is collected as the first fraction, the alkene is about 93% 1-methylcyclohexene; by the end of the distillation one finds values as low as 55% of the 1-methyl isomer.
Possible explanations of different ratio formations:
From these results, the phenomenon of the Evelyn effect can be observed and a conclusion can be drawn that a change of mechanism occurs somewhere during the synthesis.
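The time dependence can be illustrated with a toy kinetic model. The sketch below is illustrative only: the rate constants and product selectivities are assumptions chosen to mimic the reported behaviour (the cis alcohol reacting about 30 times faster than the trans, and early fractions much richer in 1-methylcyclohexene than late ones), not measured values.

```python
# Toy model of the Evelyn effect: two substrate pools consumed at different
# rates, each with its own selectivity for 1-methylcyclohexene (assumed values).

k_cis, k_trans = 30.0, 1.0        # assumed relative rate constants (cis 30x faster)
sel_cis, sel_trans = 0.95, 0.55   # assumed selectivity for 1-methylcyclohexene

cis, trans = 0.5, 0.5             # 50:50 starting mixture of the two alcohols
made_1mc = made_other = 0.0
dt = 0.01

for step in range(1, 2001):
    d_cis, d_trans = k_cis * cis * dt, k_trans * trans * dt
    cis, trans = cis - d_cis, trans - d_trans
    made_1mc += sel_cis * d_cis + sel_trans * d_trans
    made_other += (1 - sel_cis) * d_cis + (1 - sel_trans) * d_trans
    if step % 400 == 0:
        pct = 100 * made_1mc / (made_1mc + made_other)
        print(f"t = {step * dt:5.2f}   cumulative % 1-methylcyclohexene = {pct:5.1f}")
```

As the fast-reacting pool is exhausted, the cumulative product composition drifts from the early, Zaitsev-dominated ratio toward the lower selectivity of the slow pathway, just as interrupting the distillation revealed.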
Additional study on the Evelyn effect:
A kinetic and regiochemical study of the Evelyn effect has been described. The results, in the Journal of Chemical Education, made claims about the mechanism by which the dehydrations occur. The article examines the claim that both E1 and E2 mechanisms operate in the reaction.
Additional study on the Evelyn effect:
The researchers measured the kinetics of the formation of tertiary (3°) carbocations and compared them to the theoretical calculations expected if the experiment ran as an E2 reaction. Instead, the reaction showed a mechanism that initially formed a secondary (2°) carbocation, utilizing an E1 pathway. Their conclusion was that the mechanism is neither E1 nor E2 but rather “E2-like”, exhibiting first-order kinetics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tyrosine kinase 2**
Tyrosine kinase 2:
Non-receptor tyrosine-protein kinase TYK2 is an enzyme that in humans is encoded by the TYK2 gene.TYK2 was the first member of the JAK family that was described (the other members are JAK1, JAK2, and JAK3). It has been implicated in IFN-α, IL-6, IL-10 and IL-12 signaling.
Function:
This gene encodes a member of the tyrosine kinase family and, more specifically, of the Janus kinase (JAK) family. This protein associates with the cytoplasmic domain of type I and type II cytokine receptors and propagates cytokine signals by phosphorylating receptor subunits. It is also a component of both the type I and type III interferon signaling pathways, and as such may play a role in anti-viral immunity. Cytokines play pivotal roles in immunity and inflammation by regulating the survival, proliferation, differentiation, and function of immune cells, as well as cells from other organ systems. Hence, targeting cytokines and their receptors is an effective means of treating such disorders. Type I and II cytokine receptors associate with Janus family kinases (JAKs) to effect intracellular signaling. Cytokines including interleukins, interferons, and hemopoietins activate the Janus kinases, which associate with their cognate receptors. The mammalian JAK family has four members: JAK1, JAK2, JAK3, and tyrosine kinase 2 (TYK2). The connection between JAKs and cytokine signaling was first revealed when a screen for genes involved in type I interferon (IFN-I) signaling identified TYK2 as an essential element, which is activated by an array of cytokine receptors. TYK2 has broader and more profound functions in humans than previously appreciated on the basis of murine models, which indicate that TYK2 functions primarily in IL-12 and type I IFN signaling; TYK2 deficiency has more dramatic effects in human cells than in mouse cells. In addition to IFN-α/β and IL-12 signaling, TYK2 has major effects on the transduction of IL-23, IL-10, and IL-6 signals. Since IL-6 signals through the gp130 receptor chain that is common to a large family of cytokines, including IL-6, IL-11, IL-27, IL-31, oncostatin M (OSM), ciliary neurotrophic factor, cardiotrophin 1, cardiotrophin-like cytokine, and LIF, TYK2 might also affect signaling through these cytokines. It has also been recognized that IL-12 and IL-23 share ligand and receptor subunits that activate TYK2. IL-10 is a critical anti-inflammatory cytokine, and IL-10−/− mice suffer from fatal, systemic autoimmune disease.
Function:
TYK2 is activated by IL-10, and its deficiency affects the ability to generate and respond to IL-10. Under physiological conditions, immune cells are, in general, regulated by the action of many cytokines and it has become clear that cross-talk between different cytokine-signalling pathways is involved in the regulation of the JAK–STAT pathway.
Role in inflammation:
It is now widely accepted that atherosclerosis results from cellular and molecular events characteristic of inflammation. Vascular inflammation can be caused by upregulation of Ang-II, which is produced locally by inflamed vessels and induces synthesis and secretion of IL-6, a cytokine responsible for induction of angiotensinogen synthesis in the liver through the JAK/STAT3 pathway. This pathway is activated through high-affinity membrane receptors on target cells, termed the IL-6R chain, recruiting gp130, which is associated with tyrosine kinases (JAK1/2 and TYK2). The cytokines IL-4 and IL-13 are elevated in the lungs of chronic asthmatics. Signalling through IL-4/IL-13 complexes is thought to occur through the IL-4Rα chain, which is responsible for activation of the JAK1 and TYK2 kinases. A role of TYK2 in rheumatoid arthritis is directly observed in TYK2-deficient mice, which were resistant to experimental arthritis. TYK2−/− mice displayed a lack of responsiveness to small amounts of IFN-α, but they respond normally to high concentrations of IFN-α/β. In addition, these mice respond normally to IL-6 and IL-10, suggesting that TYK2 is dispensable for IL-6 and IL-10 signaling and does not play a major role in IFN-α signaling. Although TYK2−/− mice are phenotypically normal, a variety of cells isolated from them exhibit abnormal responses to inflammatory challenges. The most remarkable phenotype observed in TYK2-deficient macrophages was a lack of nitric oxide production upon stimulation with LPS. Further elucidation of the molecular mechanisms of LPS signaling showed that TYK2 and IFN-β deficiency leads to resistance to LPS-induced endotoxin shock, whereas STAT1-deficient mice are susceptible. Development of a TYK2 inhibitor therefore appears to be a rational approach in drug discovery.
Clinical significance:
A mutation in this gene has been associated with hyperimmunoglobulin E syndrome (HIES), a primary immunodeficiency characterized by elevated serum immunoglobulin E. TYK2 appears to play a central role in the inflammatory cascade responses in the pathogenesis of immune-mediated inflammatory diseases such as psoriasis. The drug deucravacitinib (marketed as Sotyktu), a small-molecule TYK2 inhibitor, was approved for moderate-to-severe plaque psoriasis in 2022.
Clinical significance:
The P1104A allele of TYK2 has been shown to increase risk of tuberculosis when carried as a homozygote; population genetic analyses suggest that the arrival of tuberculosis in Europe drove the frequency of that allele down three-fold about 2,000 years before present.
Interactions:
Tyrosine kinase 2 has been shown to interact with FYN, PTPN6, IFNAR1, Ku80 and GNB2L1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Uridine diphosphate N-acetylglucosamine**
Uridine diphosphate N-acetylglucosamine:
Uridine diphosphate N-acetylglucosamine, or UDP-GlcNAc, is a nucleotide sugar and a coenzyme in metabolism. It is used by glycosyltransferases to transfer N-acetylglucosamine residues to substrates. D-Glucosamine is made naturally in the form of glucosamine-6-phosphate, and is the biochemical precursor of all nitrogen-containing sugars. Specifically, glucosamine-6-phosphate is synthesized from fructose 6-phosphate and glutamine in the first step of the hexosamine biosynthesis pathway. The end product of this pathway is UDP-GlcNAc, which is then used for making glycosaminoglycans, proteoglycans, and glycolipids. UDP-GlcNAc is extensively involved in intracellular signaling as a substrate for O-linked N-acetylglucosamine transferases (OGTs), which install the O-GlcNAc post-translational modification in a wide range of species. It is also involved in nuclear pore formation and nuclear signalling. OGTs and O-GlcNAcases play an important role in the structure of the cytoskeleton. In mammals, OGT transcripts are enriched in pancreatic beta cells, and UDP-GlcNAc is thought to be part of the glucose-sensing mechanism; there is also evidence that it plays a part in insulin sensitivity in other cells. In plants, it is involved in the control of gibberellin production. Clostridium novyi type A alpha-toxin is an O-linked N-acetylglucosamine transferase acting on Rho proteins and causing the collapse of the cytoskeleton. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Talking Machine News**
Talking Machine News:
Talking Machine News was an English trade publication dedicated to gramophones and gramophone records.
History:
The periodical was established in London, England, in May 1903 as the Talking Machine News and Record Exchange. After the second edition, its title was changed to Talking Machine News and Cinematograph Chronicle. From October 1905 (the thirtieth edition), it was titled simply Talking Machine News. From issue 157, it became Talking Machine News and Journal of Amusements. It ceased publication at some point in the 1930s.
Content:
The periodical described itself as "The recognized organ of the trade". It contained record reviews, articles, and technical information about the use and care of gramophones and records. The publication was issued monthly during some periods and semimonthly during others. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**96 equal temperament**
96 equal temperament:
In music, 96 equal temperament, called 96-TET, 96-EDO ("Equal Division of the Octave"), or 96-ET, is the tempered scale derived by dividing the octave into 96 equal steps (equal frequency ratios). Each step represents a frequency ratio of $\sqrt[96]{2}$ (the 96th root of 2), or 12.5 cents. Since 96 is divisible by 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, and 96, it contains all of the equal temperaments on those divisions. Most humans can only hear differences of about 6 cents or more between notes played sequentially, and this threshold varies with pitch, so the use of still finer divisions of the octave can be considered unnecessary; smaller differences in pitch may be perceived as vibrato or stylistic devices.
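The step size and note frequencies follow directly from the definition. A short sketch (the A4 = 440 Hz reference is our assumption for illustration):

```python
ratio = 2 ** (1 / 96)        # frequency ratio of a single 96-EDO step
cents_per_step = 1200 / 96   # = 12.5 cents
print(round(ratio, 7), cents_per_step)   # 1.0072464 12.5

def freq(steps_from_a4: int, a4: float = 440.0) -> float:
    """Frequency of the note a given number of 96-EDO steps above A4."""
    return a4 * 2 ** (steps_from_a4 / 96)

# 8 steps of 12.5 cents = 100 cents = one 12-EDO semitone:
print(round(freq(8), 2))     # 466.16 Hz
```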
History and use:
96-EDO was first advocated by Julián Carrillo in 1924, with a 16th-tone piano. It was also advocated more recently by Pascale Criton and Vincent-Olivier Gagnon.
Notation:
Since 96 = 24 × 4, quarter-tone notation can be used and split into four parts.
Notation:
One can split it into four parts like this: C, C↑, C↑↑/C↓↓, C↓, C, ..., C↓, C. As it can become confusing with so many accidentals, Julián Carrillo proposed referring to notes by step number from C (e.g. 0, 1, 2, 3, 4, ..., 95, 0). Since the 16th-tone piano has a 97-key layout arranged in 8 conventional piano "octaves", music for it is usually notated according to the key the player has to strike. While the entire range of the instrument is only C4–C5, the notation ranges from C0 to C8. Thus, written D0 corresponds to sounding C↑↑4 or note 2, and written A♭/G♯2 corresponds to sounding E4 or note 32.
Interval size:
Below are some intervals in 96-EDO and how well they approximate just intonation.
Moving from 12-EDO to 96-EDO allows the better approximation of a number of intervals, such as the minor third and major sixth.
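As a rough check of that claim, one can compare the just minor third (ratio 6/5) with its nearest approximation in each tuning (a small sketch; rounding to the nearest step is the standard way to measure such error):

```python
import math

def cents(ratio: float) -> float:
    return 1200 * math.log2(ratio)

just_m3 = cents(6 / 5)                        # ~315.64 cents
nearest_12edo = round(just_m3 / 100) * 100    # 300.0 cents  -> error ~15.6 cents
nearest_96edo = round(just_m3 / 12.5) * 12.5  # 312.5 cents  -> error ~3.1 cents
print(just_m3, nearest_12edo, nearest_96edo)
```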
Scale diagram:
Modes. 96-EDO contains all of the 12-EDO modes. However, it contains better approximations to some intervals (such as the minor third). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ancient Symbols (Unicode block)**
Ancient Symbols (Unicode block):
Ancient Symbols is a Unicode block containing Roman characters for currency, weights, and measures.
History:
The following Unicode-related documents record the purpose and process of defining specific characters in the Ancient Symbols block: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Trapper (ice hockey)**
Trapper (ice hockey):
A trapper, also referred to as catch glove or simply glove, is a piece of equipment that an ice hockey goaltender wears on the non-dominant hand to assist in catching and stopping the puck.
Evolution:
The trapper originally had the same shape as a baseball glove, but evolved into a highly specialized piece of equipment designed specifically for catching the puck. Changes made over time include the addition of a "string mesh" in the pocket of the trapper and substantially more palm and wrist protection. The "cheater" portion of the glove covers the wrist; it evolved from the gauntlet-like gloves of the 1920s.
Technique:
The pocket is the area of the trapper between the thumb and first finger of the glove, and is where most goaltenders try to catch the puck, as it reduces the discomfort the goaltender experiences and minimizes the chance of the puck falling out of the glove, creating the possibility of a rebound.
Positioning. Worn on the non-dominant hand, the trapper can be held in a variety of positions depending upon individual style and preference. Younger goaltenders tend to hold the glove with the palm facing towards the shooter, instead of in the traditional "shake hands" position. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**International Journal of Data Warehousing and Mining**
International Journal of Data Warehousing and Mining:
The International Journal of Data Warehousing and Mining (IJDWM) is a quarterly peer-reviewed academic journal covering data warehousing and data mining. It was established in 2005 and is published by IGI Global. The editor-in-chief is David Taniar (Monash University, Australia).
Abstracting and indexing:
The journal is abstracted and indexed in: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Philip Pilkington**
Philip Pilkington:
Philip Pilkington is an Irish economist working in investment finance. He became well-known for his critiques of neoclassical economics on his blog Fixing the Economists. Since then he has written a book entitled The Reformation in Economics outlining these critiques, developed an empirical methodology to assess general equilibrium theory and created a new means to estimate both potential output and inflationary pressure in the labor market.
Life:
Pilkington was born in Dublin. He attended C.B.C. Monkstown. He received his Master's in Economics from Kingston University. His thesis focused on a stock-flow approach to asset price modelling and was subsequently published by the Levy Economics Institute.
Works:
In his article "The Miracle of General Equilibrium" Pilkington argues that all of contemporary macroeconomics is dominated by general equilibrium theory. He argues that both the New Keynesian and New Classical schools take as their starting point the assumption that general equilibrium can and eventually will be reached. Pilkington argues that they only disagree on how easy this is to achieve. Pilkington then goes on to argue that this case has never been argued empirically. He points out that the theory emerges with the French economist Léon Walras in 1899 but that Walras never laid out clear empirical criteria for his theory. Pilkington proceeds to lay out a test case based on the 'hats-in-ring' problem in probability theory.In his paper How Far Can We Push This Thing? Pilkington develops a novel framework for estimating and understanding potential output. Pilkington argues that contemporary approaches to potential output either involve crude trend estimates that simply assume that economies usually operate at full capacity or they utilize the flawed NAIRU framework that has been disproved in practice in the past. He advocates a more intuitive and statistically-grounded approach to estimating potential output. Pilkington points out that statistics on capacity utilization are readily available for most countries. He argues that "the true constraint on economic growth at any moment in time is the utilisation rate of plant and machine" and so we should simply estimate the sensitivity of GDP to capacity utilization and in doing so derive an estimate of potential output. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multichannel marketing**
Multichannel marketing:
Multichannel marketing is the blending of different distribution and promotional channels for the purpose of marketing. Distribution channels include a retail storefront, a website, or a mail-order catalogue.
Multichannel marketing:
Multichannel marketing is about choice: the objective of the companies doing the marketing is to make it easy for a consumer to buy from them in whatever way is most appropriate. To be effective, multichannel marketing needs to be supported by good supply chain management systems, so that the details and prices of goods on offer are consistent across the different channels. It might also be supported by a detailed analysis of the return on investment from each channel, measured in terms of customer response and conversion of sales. The contribution each channel delivers to sales can be assessed via Marketing Mix Modeling or attribution modelling. Some companies target certain channels at different demographic segments of the market or at different socio-economic groups of consumers.
Multichannel marketing:
Multichannel marketing allows the retail merchant to reach its prospective or current customer through a channel of his/her liking.
Coordination of online and offline channels:
Companies that sell branded products and services through local businesses market through both online and offline channels to local audiences. Online and offline multichannel marketing campaigns can either inform one another or be executed in isolation. A proportion of companies use their online marketing efforts to inform their offline advertising (i.e. they test keywords online to understand if they fit with customer intent before printing them in offline ads).
Comparison with traditional forms of marketing:
While multichannel marketing focuses primarily on new media platforms, traditional approaches use old media such as print, telemarketing, direct mail, and broadcast stations such as radio and television. Multichannel marketing does not only use web 2.0 forms but also integrates media convergence models, targeting customer interaction through different platforms such as text messaging, websites, email, online video campaigns, and GPS tracking of a customer's location and proximity to the product or service. Being able to reach out to customers directly is an important marketing strategy because it is convenient and enhances direct customer interaction.
Benefits:
Some of the long-term benefits of this style of marketing include: Better management of results and sales: Using many communication platforms to reach the audience increases the chances of receiving feedback from a variety of customers on overall performance. This feedback gives companies an idea of what the customer wants and what they can improve upon. Higher revenues: The more diverse the platforms used to reach customers, the more likely potential customers are to reach out and purchase goods and services. If a company advertises its brand only on the internet, it will be very hard to capture the attention of potential customers who do not use the internet regularly and rely on other media, such as television.
Benefits:
Better understanding of customers: From customers' responses, it is easier to understand what they expect from a product or service and how a brand can be improved. To satisfy the needs of a niche, it is necessary to identify the channels and platforms which work for a certain group.
Increased brand visibility and reach: About 36% of shoppers search for products on one channel but purchase them through a different channel.
Optimize media spend: Data retrieval and centralization enable companies to better target consumer segments and provide them with more effective marketing campaigns, therefore optimizing media spend. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Robert Swendsen**
Robert Swendsen:
Robert Haakon Swendsen is Professor of Physics at Carnegie Mellon University. He is known in the computational physics community for the Swendsen–Wang algorithm, the Monte Carlo renormalization group, and related methods that enable efficient computational studies of equilibrium phenomena near phase transitions. He is the 2014 recipient of the Aneesur Rahman Prize for Computational Physics from the American Physical Society. Swendsen completed his undergraduate studies at Yale University and his PhD at the University of Pennsylvania.
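For readers unfamiliar with the algorithm: Swendsen–Wang replaces single-spin flips with collective cluster flips, which greatly reduces critical slowing down near the transition. The following is a minimal illustrative sketch for the 2D Ising model (our own simplified version, with free boundary conditions and J = 1; it is not taken from Swendsen's publications):

```python
import math, random

def swendsen_wang_sweep(spins, beta, J=1.0):
    """One Swendsen-Wang cluster update on an L x L Ising lattice of +/-1 spins.

    Bonds between equal neighbouring spins are activated with probability
    p = 1 - exp(-2*beta*J); each resulting cluster is then flipped with
    probability 1/2. Free boundary conditions, for simplicity.
    """
    L = len(spins)
    p = 1.0 - math.exp(-2.0 * beta * J)
    parent = {(i, j): (i, j) for i in range(L) for j in range(L)}

    def find(s):  # union-find root with path halving
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    for i in range(L):
        for j in range(L):
            for ni, nj in ((i + 1, j), (i, j + 1)):  # right and down neighbours
                if ni < L and nj < L and spins[i][j] == spins[ni][nj]:
                    if random.random() < p:
                        parent[find((i, j))] = find((ni, nj))  # join clusters

    flip = {}  # decide once per cluster whether to flip it
    for i in range(L):
        for j in range(L):
            root = find((i, j))
            if root not in flip:
                flip[root] = random.random() < 0.5
            if flip[root]:
                spins[i][j] = -spins[i][j]

L = 16
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(100):
    swendsen_wang_sweep(spins, beta=0.4407)  # near the 2D Ising critical point
print(sum(sum(row) for row in spins))        # net magnetization of the sample
```

Activating bonds with probability $1 - e^{-2\beta J}$ and flipping each cluster independently preserves the Boltzmann distribution, which is why the update is a valid Monte Carlo move.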
Robert Swendsen:
Swendsen is also known for his pedagogy: he received the Ashkin Teaching Award in 2014, and he is the author of the textbook An Introduction to Statistical Mechanics and Thermodynamics (2nd ed. 2020, Oxford University Press). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ralph-Johan Back**
Ralph-Johan Back:
Ralph-Johan Back is a Finnish computer scientist. Back originated the refinement calculus, an important approach to the formal development of programs using stepwise refinement, in his 1978 PhD thesis at the University of Helsinki, On the Correctness of Refinement Steps in Program Development. He has undertaken much subsequent research in this area. He has held positions at CWI Amsterdam, the Academy of Finland and the University of Tampere.
Ralph-Johan Back:
Since 1983, he has been Professor of Computer Science at the Åbo Akademi University in Turku. For 2002–2007, he was an Academy Professor at the Academy of Finland. He is Director of CREST (Center for Reliable Software Technology) at Åbo Akademi. Back is a member of Academia Europaea. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Streff syndrome**
Streff syndrome:
Streff syndrome is a vision condition primarily exhibited by children under periods of visual or emotional stress.
Presentation:
Frequently patients will have reduced stereopsis, a large accommodative lag on dynamic retinoscopy, and a reduced visual field (tubular or spiral field). Streff syndrome was first described in 1962 by an optometrist, Dr. John Streff, as non-malingering syndrome. In the same year, Dr. Streff and Dr. Richard Apell expanded the concept by adding early adaptive syndrome as a precursor to Streff syndrome. Dr. Streff believed the visual changes were induced by stress from reading. There is dispute over the taxonomy of functional vision defects. Some research indicates that Streff syndrome may be caused by a dysfunction in the magnocellular pathway of the retinal ganglion cells; these cells make up only 10% of the retinal nerve cells and register motion.
Diagnosis:
The diagnostic criteria for Streff syndrome are not well established, and the validity of this condition has not been recognized by the American Academy of Ophthalmology, the American Academy of Pediatric Ophthalmology, the American Academy of Optometry, or the American Academy of Pediatrics.
Treatment:
Most optometrists agree that Streff syndrome is a generalized reduction in visual performance that is not caused by structural damage; it is a condition involving visual distress, primarily of the accommodation system (Hans Selye described stress, distress, and eustress). It is most common in girls aged 8 to 14. Hand-held reading material is often positioned excessively close. Reading aloud shows signs of elevated pitch and stumbling over common words, and a history of homework avoidance and falling class performance is often present. If the patient is directed to read aloud and +0.50 lenses are then used, there is usually a dramatic improvement as observed by patient and parent. Abnormal results on color vision or visual field testing are not uncommon; the visual field often presents as a constricted 'tubular' field at multiple test distances. The poor visual performance is understood as distress, and treatment is usually to provide the patient with low-powered reading glasses. The "relaxing" nature of reading glasses is believed to reduce the near-vision stress and allow normal function, and the emotional effects of chronic near-vision stress are also reduced.
Treatment:
The "non-Malingering" name is a refutation that the patient is malingering. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**End-of-Transmission-Block character**
End-of-Transmission-Block character:
End-of-Transmission-Block (ETB) is a communications control character used to indicate the end of a block of data for communications purposes. ETB is used for segmenting data into blocks when the block structure is not necessarily related to the processing function.
In ASCII, ETB is code point 23 (0x17, or ^W in caret notation) in the C0 control code set. In EBCDIC, ETB is code point 0x26. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
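A quick illustration in Python (the cp037 EBCDIC codec and its standard IBM control-character mapping are assumptions here):

```python
ETB = chr(23)  # ASCII/Unicode code point 23 = 0x17
# Caret notation ^W: the control character is the letter's code minus 64
assert ETB == "\x17" == chr(ord("W") - 64)

# EBCDIC places ETB at 0x26 (repr shows b'&' because 0x26 is '&' in ASCII)
assert ETB.encode("cp037") == b"\x26"
print("ETB checks out")
```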
**Pine–cypress forest**
Pine–cypress forest:
Pine–cypress forest is a type of mixed conifer woodland in which at least one species of pine (genus Pinus) and one species of cypress (family Cupressaceae) are present. Such forests are noted in several parts of the world, but are particularly well studied in Japan and the United States.
Ecology:
A quality of these mixed conifer forests is the mutualistic relationship between pine and cypress trees. In Japanese pine–cypress forests, pine stumps have been found to help stimulate the growth and germination of cypress trees. Cypress trees are extremely sensitive to pH and prefer more acidic soils; decaying pine stumps have a lower pH than the surrounding soil, and this is believed to be the main factor behind the increased prevalence of cypress seedlings. Analysis of evapotranspiration in pine and cypress wetlands found that both tree types are sensitive to changes in ambient temperature, but pines are more sensitive to changes in humidity. This difference in vulnerabilities could contribute to overall forest resiliency.
Forest management:
Like many mixed forest types, human forest management can affect the structure of pine–cypress forests. A study based in Taiwan used computer modeling to determine the stand density index for pine–cypress forests. This index helps to measure interspecies relationships within forests, including species density, competition, and tree development, and it informs future management practices by maintaining a more current understanding of forest dynamics. Because both tree types can be very sensitive to changes in forest hydrology, additional management is necessary beyond density monitoring. Conscientious management of flooding and drainage has been shown to improve the health of both pine and cypress trees in a mixed ecosystem.
Global occurrences:
Japan. Pine–cypress forests can be found in much of central Japan. A heterogeneous landscape, consisting of pine–oak forests, timber plantations, and cypress groves, helps to maintain this forest structure.
United States: California. California occurrences of pine–cypress forest are typically along Pacific coastal headlands. Understory species in these California pine–cypress forests include salal and western poison oak.
Florida. Many of the Florida occurrences of pine–cypress forest are in swampy areas such as the Everglades. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alcohol dehydrogenase (cytochrome c)**
Alcohol dehydrogenase (cytochrome c):
Alcohol dehydrogenase (cytochrome c) (EC 1.1.2.8, type I quinoprotein alcohol dehydrogenase, quinoprotein ethanol dehydrogenase) is an enzyme with the systematic name alcohol:cytochrome c oxidoreductase. This enzyme catalyses the following chemical reaction: a primary alcohol + 2 ferricytochrome c ⇌ an aldehyde + 2 ferrocytochrome c + 2 H+. A periplasmic PQQ-containing quinoprotein is present in Pseudomonas and Rhodopseudomonas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electronic authentication**
Electronic authentication:
Electronic authentication is the process of establishing confidence in user identities electronically presented to an information system. Digital authentication, or e-authentication, may be used synonymously when referring to the authentication process that confirms or certifies a person's identity and works. When used in conjunction with an electronic signature, it can provide evidence of whether data received has been tampered with after being signed by its original sender. Electronic authentication can reduce the risk of fraud and identity theft by verifying that a person is who they say they are when performing transactions online. Various e-authentication methods can be used to authenticate a user's identity, ranging from a password to higher levels of security that utilize multifactor authentication (MFA). Depending on the level of security required, the user might need to prove his or her identity through the use of security tokens, challenge questions, or possession of a certificate from a third-party certificate authority that attests to their identity.
Overview:
The American National Institute of Standards and Technology (NIST) has developed a generic electronic authentication model that provides a basic framework for how the authentication process is accomplished regardless of jurisdiction or geographic region. According to this model, the enrollment process begins with an individual applying to a Credential Service Provider (CSP). The CSP must prove the applicant's identity before proceeding with the transaction. Once the applicant's identity has been confirmed by the CSP, he or she receives the status of "subscriber" and is given an authenticator, such as a token, and a credential, which may be in the form of a username.
Overview:
The CSP is responsible for managing the credential along with the subscriber's enrollment data for the life of the credential. The subscriber will be tasked with maintaining the authenticators. An example of this is when a user normally uses a specific computer to do their online banking. If he or she attempts to access their bank account from another computer, the authenticator will not be present. In order to gain access, the subscriber would need to verify their identity to the CSP, which might be in the form of answering a challenge question successfully before being given access.
Overview:
Use of electronic authentication in the medical field New inventions in medicine and novel developments in medical technology have been widely deployed and adopted in modern societies. As a consequence, the average human lifespan is much longer than before. Therefore, safely establishing and managing personal health records for each individual, over his or her lifetime and in electronic form, has gradually become a topic of interest for individual citizens and social welfare departments; a well-maintained health record helps doctors and hospitals learn the important and necessary medical conditions of a patient in time, before conducting any therapy.
History:
The need for authentication has been prevalent throughout history. In ancient times, people would identify each other through eye contact and physical appearance. The Sumerians in ancient Mesopotamia attested to the authenticity of their writings by using seals embellished with identifying symbols. As time moved on, the most common way to provide authentication would be the handwritten signature.
Authentication factors:
There are three generally accepted factors that are used to establish a digital identity for electronic authentication:
- Knowledge factor: something that the user knows, such as a password, answers to challenge questions, ID numbers or a PIN.
- Possession factor: something that the user has, such as a mobile phone, PC or token.
- Biometric factor: something that the user is, such as his or her fingerprints, eye scan or voice pattern.

Of the three factors, the biometric factor is the most convenient and convincing way to prove an individual's identity, but it is the most expensive to implement. Each factor has its weaknesses; hence, reliable and strong authentication depends on combining two or more factors. This is known as multi-factor authentication, of which two-factor authentication and two-step verification are subtypes.
Authentication factors:
Multi-factor authentication can still be vulnerable to attacks, including man-in-the-middle attacks and Trojan attacks.
Methods:
Token Generically, tokens are something the claimant possesses and controls that may be used to authenticate the claimant's identity. In e-authentication, the claimant authenticates to a system or application over a network. Therefore, a token used for e-authentication is a secret and must be protected. The token may, for example, be a cryptographic key that is protected by encrypting it under a password. An impostor must steal the encrypted key and learn the password to use the token.
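As a minimal sketch of the pattern just described – assuming the third-party Python `cryptography` package; none of this comes from a specific e-authentication product – a token key can be stored encrypted under a password-derived key, so an impostor needs both the stolen ciphertext and the password:

```python
import base64
import hashlib
import os

from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

def wrap_token_key(token_key: bytes, password: bytes) -> tuple[bytes, bytes]:
    """Encrypt a secret token key under a key derived from a password."""
    salt = os.urandom(16)
    kdf_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
    wrapped = Fernet(base64.urlsafe_b64encode(kdf_key)).encrypt(token_key)
    return salt, wrapped

def unwrap_token_key(salt: bytes, wrapped: bytes, password: bytes) -> bytes:
    """Recover the token key; fails unless the same password is supplied."""
    kdf_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
    return Fernet(base64.urlsafe_b64encode(kdf_key)).decrypt(wrapped)

salt, wrapped = wrap_token_key(os.urandom(32), b"correct horse battery staple")
assert unwrap_token_key(salt, wrapped, b"correct horse battery staple")
```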
Methods:
Passwords and PIN-based authentication Passwords and PINs are categorized as "something you know" methods. A combination of numbers, symbols, and mixed cases is considered stronger than an all-letter password. The adoption of Transport Layer Security (TLS) or Secure Socket Layer (SSL) features during information transmission also creates an encrypted channel for data exchange and further protects the information delivered. Currently, most security attacks target password-based authentication systems.
Methods:
Public-key authentication This type of authentication has two parts. One is a public key, the other is a private key. A public key is issued by a Certification Authority and is available to any user or server. A private key is known by the user only.
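A minimal sketch of this idea, again assuming the `cryptography` package (a raw Ed25519 key pair stands in here for a CA-issued certificate): the verifier challenges the claimant to sign a random value and checks the signature with the public key:

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Claimant side: the private key never leaves the user.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # published, e.g. in a certificate

# Verifier side: issue a fresh random challenge for the claimant to sign.
challenge = os.urandom(32)
signature = private_key.sign(challenge)  # computed by the claimant

try:
    public_key.verify(signature, challenge)  # raises InvalidSignature on failure
    print("user authenticated")
except InvalidSignature:
    print("authentication failed")
```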
Symmetric-key authentication The user shares a unique key with an authentication server. The user sends a randomly generated message (the challenge), encrypted with the shared secret key, to the authentication server; if the server can match the received message using its copy of the shared secret key, the user is authenticated.
When implemented together with the password authentication, this method also provides a possible solution for two-factor authentication systems.
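A minimal sketch of the challenge-response idea with a shared key, using only the Python standard library; note it uses an HMAC over a server-issued nonce, a common variant, rather than literal encryption of the challenge:

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # provisioned to both the user and the server

def client_response(key: bytes, challenge: bytes) -> bytes:
    """Prove possession of the shared key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server: send a fresh random challenge, then check the returned MAC.
challenge = os.urandom(16)
response = client_response(SHARED_KEY, challenge)  # computed on the client
expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
print("authenticated" if hmac.compare_digest(response, expected) else "rejected")
```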
SMS-based authentication The user receives a password by reading a message on his or her cell phone and types the password back to complete the authentication. Short Message Service (SMS) is very effective where cell phones are commonly adopted. SMS is also considered to resist man-in-the-middle (MITM) attacks, since the delivery of SMS does not involve the Internet.
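A minimal sketch of the server side of such a scheme, using the standard library; the delivery channel (a `send_sms` helper or similar) and the five-minute validity window are assumptions, not part of any particular standard:

```python
import secrets
import time

OTP_TTL_SECONDS = 300  # hypothetical validity window

def issue_otp() -> tuple[str, float]:
    """Generate a 6-digit one-time password and its expiry timestamp.
    The OTP would then be delivered out of band, e.g. send_sms(phone, otp)."""
    otp = f"{secrets.randbelow(1_000_000):06d}"
    return otp, time.time() + OTP_TTL_SECONDS

def verify_otp(submitted: str, issued: str, expires_at: float) -> bool:
    """Constant-time comparison, accepted only inside the validity window."""
    return time.time() < expires_at and secrets.compare_digest(submitted, issued)

otp, expires = issue_otp()
print(verify_otp(otp, otp, expires))  # True while the window is open
```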
Methods:
Biometric authentication Biometric authentication is the use of unique physical attributes and body measurements as the means for better identification and access control. Physical characteristics that are often used for authentication include fingerprints, voice recognition, face recognition, and iris scans, because all of these are unique to each individual. Traditionally, biometric authentication was tied to token-based identification systems, such as passports; it has since become one of the more secure identification approaches for user protection, as technological innovation provides a wide variety of behavioral and physical characteristics on which to base it.
Methods:
Digital identity authentication Digital identity authentication refers to the combined use of device, behavior, location and other data, including email address, account and credit card information, to authenticate online users in real time. For example, recent work has explored how to exploit browser fingerprinting as part of a multi-factor authentication scheme.
Electronic credentials:
Paper credentials are documents that attest to the identity or other attributes of an individual or entity called the subject of the credentials. Some common paper credentials include passports, birth certificates, driver's licenses, and employee identity cards. The credentials themselves are authenticated in a variety of ways: traditionally perhaps by a signature or a seal, special papers and inks, high quality engraving, and today by more complex mechanisms, such as holograms, that make the credentials recognizable and difficult to copy or forge. In some cases, simple possession of the credentials is sufficient to establish that the physical holder of the credentials is indeed the subject of the credentials. More commonly, the credentials contain biometric information such as the subject's description, a picture of the subject or the handwritten signature of the subject that can be used to authenticate that the holder of the credentials is indeed the subject of the credentials. When these paper credentials are presented in-person, authentication biometrics contained in those credentials can be checked to confirm that the physical holder of the credential is the subject.
Electronic credentials:
Electronic identity credentials bind a name and perhaps other attributes to a token. There are a variety of electronic credential types in use today, and new types of credentials are constantly being created (eID, electronic voter ID card, biometric passports, bank cards, etc.) At a minimum, credentials include identifying information that permits recovery of the records of the registration associated with the credentials and a name that is associated with the subscriber.
Verifiers:
In any authenticated on-line transaction, the verifier is the party that verifies that the claimant has possession and control of the token that verifies his or her identity. A claimant authenticates his or her identity to a verifier by the use of a token and an authentication protocol. This is called Proof of Possession (PoP). Many PoP protocols are designed so that a verifier, with no knowledge of the token before the authentication protocol run, learns nothing about the token from the run. The verifier and CSP may be the same entity, the verifier and relying party may be the same entity or they may all three be separate entities. It is undesirable for verifiers to learn shared secrets unless they are a part of the same entity as the CSP that registered the tokens. Where the verifier and the relying party are separate entities, the verifier must convey the result of the authentication protocol to the relying party. The object created by the verifier to convey this result is called an assertion.
Authentication schemes:
There are four types of authentication schemes: local authentication, centralized authentication, global centralized authentication, and global centralized authentication combined with a web application (portal).
Authentication schemes:
When using a local authentication scheme, the application retains the data that pertains to the user's credentials. This information is not usually shared with other applications. The onus is on the user to maintain and remember the types and number of credentials that are associated with the service in which they need to access. This is a high risk scheme because of the possibility that the storage area for passwords might become compromised.
Authentication schemes:
Using the central authentication scheme allows for each user to use the same credentials to access various services. Each application is different and must be designed with interfaces and the ability to interact with a central system to successfully provide authentication for the user. This allows the user to access important information and be able to access private keys that will allow him or her to electronically sign documents.
Authentication schemes:
Using a third party through a global centralized authentication scheme allows the user direct access to authentication services. This then allows the user to access the particular services they need.
The most secure scheme is the global centralized authentication and web application (portal). It is ideal for E-Government use because it allows a wide range of services. It uses a single authentication mechanism involving a minimum of two factors to allow access to required services and the ability to sign documents.
Authentication and digital signing working together:
Often, authentication and digital signing are applied in conjunction. In advanced electronic signatures, the signatory is authenticated and uniquely linked to the signature. In the case of a qualified electronic signature as defined in the eIDAS regulation, the signer's identity is even certified by a qualified trust service provider. This linking of signature and authentication firstly supports the probative value of the signature – commonly referred to as non-repudiation of origin. The protection of the message on the network level is called non-repudiation of emission. The authenticated sender and the message content are linked to each other: if a third party tries to change the message content, the signature loses validity.
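To illustrate that last point, a sketch (assuming the `cryptography` package; the messages are invented): verification fails as soon as a third party alters the signed content:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signer = Ed25519PrivateKey.generate()
message = b"Pay 100 EUR to Alice"
signature = signer.sign(message)

verifier_key = signer.public_key()
verifier_key.verify(signature, message)  # passes: content is intact

tampered = b"Pay 900 EUR to Mallory"  # a third party changes the content
try:
    verifier_key.verify(signature, tampered)
except InvalidSignature:
    print("signature lost validity: message was altered after signing")
```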
Risk assessment:
When developing electronic systems, there are some industry standards requiring United States agencies to ensure the transactions provide an appropriate level of assurance. Generally, agencies adopt the US Office of Management and Budget's (OMB) E-Authentication Guidance for Federal Agencies (M-04-04) as a guideline, which is published to help federal agencies provide secure electronic services that protect individual privacy. It asks agencies to check whether their transactions require e-authentication, and to determine a proper level of assurance. It established four levels of assurance:
- Assurance Level 1: Little or no confidence in the asserted identity's validity.
- Assurance Level 2: Some confidence in the asserted identity's validity.
- Assurance Level 3: High confidence in the asserted identity's validity.
- Assurance Level 4: Very high confidence in the asserted identity's validity.
Determining assurance levels The OMB proposes a five-step process to determine the appropriate assurance level for an application:
1. Conduct a risk assessment, which measures possible negative impacts.
2. Compare with the four assurance levels and decide which one suits this case.
3. Select technology according to the technical guidance issued by NIST.
4. Confirm the selected authentication process satisfies the requirements.
5. Reassess the system regularly and adjust it with changes.

The required level of authentication assurance is assessed through the factors below:
- Inconvenience, distress, or damage to standing or reputation;
- Financial loss or agency liability;
- Harm to agency programs or public interests;
- Unauthorized release of sensitive information;
- Personal safety; and/or
- Civil or criminal violations.
Risk assessment:
Determining technical requirements National Institute of Standards and Technology (NIST) guidance defines technical requirements for each of the four levels of assurance in the following areas: Tokens are used for proving identity. Passwords and symmetric cryptographic keys are private information that the verifier needs to protect. Asymmetric cryptographic keys have a private key (which only the subscriber knows) and a related public key.
Risk assessment:
Identity proofing, registration, and the delivery of credentials that bind an identity to a token. This process can be performed remotely.
Credentials, tokens, and authentication protocols can also be combined to identify that a claimant is in fact the claimed subscriber.
An assertion mechanism that involves either a digital signature of the claimant or is acquired directly by a trusted third party through a secure authentication protocol.
Guidelines and regulations:
Triggered by the growth of new cloud solutions and online transactions, person-to-machine and machine-to-machine identities play a significant role in identifying individuals and accessing information. According to the Office of Management and Budget in the U.S., more than $70 million was spent on identity management solutions in both 2013 and 2014. Governments use e-authentication systems to offer services and reduce the time people spend traveling to a government office. Services ranging from applying for visas to renewing driver's licenses can all be achieved in a more efficient and flexible way. Infrastructure to support e-authentication is regarded as an important component in successful e-government. Poor coordination and poor technical design might be major barriers to electronic authentication. In several countries, nationwide common e-authentication schemes have been established to ease the reuse of digital identities in different electronic services. Other policy initiatives have included the creation of frameworks for electronic authentication, in order to establish common levels of trust and possibly interoperability between different authentication schemes.
Guidelines and regulations:
United States E-authentication is a centerpiece of the United States government's effort to expand electronic government, or e-government, as a way of making government more effective and efficient and easier to access. The e-authentication service enables users to access government services online using log-in IDs (identity credentials) from other web sites that both the user and the government trust.
Guidelines and regulations:
E-authentication is a government-wide partnership that is supported by the agencies that comprise the Federal CIO Council. The United States General Services Administration (GSA) is the lead agency partner. E-authentication works through an association with a trusted credential issuer, making it necessary for the user to log into the issuer's site to obtain the authentication credentials. Those credentials, or the e-authentication ID, are then transferred to the supporting government web site, enabling authentication. The system was created in response to a December 16, 2003 memorandum, M-04-04, issued through the Office of Management and Budget. That memorandum updates the guidance issued in the Paperwork Elimination Act of 1998, 44 U.S.C. § 3504, and implements section 203 of the E-Government Act, 44 U.S.C. ch. 36.
Guidelines and regulations:
NIST provides guidelines for digital authentication standards and does away with most knowledge-based authentication methods. A stricter standard has been drafted for more complicated passwords that are at least 8 characters long, or passphrases that are at least 64 characters long.
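As a sketch of the drafted length rules just described (the actual NIST requirements contain further conditions not modeled here):

```python
def meets_drafted_length_rule(secret: str, passphrase: bool = False) -> bool:
    """Passwords must be at least 8 characters long; passphrases at
    least 64, per the drafted standard described above."""
    return len(secret) >= (64 if passphrase else 8)

print(meets_drafted_length_rule("hunter2"))                   # False: 7 characters
print(meets_drafted_length_rule("tr0ub4dor&3"))               # True: 11 characters
print(meets_drafted_length_rule("x" * 70, passphrase=True))   # True
```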
Europe In Europe, eIDAS provides guidelines to be used for electronic authentication with regard to electronic signatures and certificate services for website authentication. Once confirmed by the issuing Member State, other participating States are required to accept the user's electronic signature as valid for cross-border transactions.
Guidelines and regulations:
Under eIDAS, electronic identification refers to a material/immaterial unit that contains personal identification data to be used for authentication for an online service. Authentication is referred to as an electronic process that allows for the electronic identification of a natural or legal person. A trust service is an electronic service that is used to create, verify and validate electronic signatures, in addition to creating, verifying and validating certificates for website authentication.
Guidelines and regulations:
Article 8 of eIDAS allows for the authentication mechanism that is used by a natural or legal person to use electronic identification methods in confirming their identity to a relying party. Annex IV provides requirements for qualified certificates for website authentication.
Russia E-authentication is a centerpiece of the Russian government's effort to expand e-government, as a way of making government more effective and efficient and easier for the Russian people to access. The e-authentication service enables users to access government services online using log-in IDs (identity credentials) they already have from web sites that they and the government trust.
Other applications:
Apart from government services, e-authentication is also widely used in other technologies and industries. These new applications combine the features of authorizing identities in traditional databases with new technology to provide more secure and diverse uses of e-authentication. Some examples are described below.
Other applications:
Mobile authentication Mobile authentication is the verification of a user's identity through the use of a mobile device. It can be treated as an independent field, or it can be applied with other multifactor authentication schemes in the e-authentication field. For mobile authentication, there are five levels of application sensitivity, from Level 0 to Level 4. Level 0 is for public use over a mobile device and requires no identity authentication, while Level 4 has the most multi-step procedures to identify users. For either level, mobile authentication is relatively easy to process. First, users send a one-time password (OTP) through offline channels. Then, a server identifies the information and makes an adjustment in the database. Since only the user has access to the PIN code and can send information through their mobile device, there is a low risk of attacks.
Other applications:
E-commerce authentication In the early 1980s, electronic data interchange (EDI) systems were implemented, which were considered an early representative of e-commerce. Ensuring their security was not a significant issue at the time, since the systems were all constructed around closed networks. More recently, however, business-to-consumer transactions have transformed: remote transacting parties have forced the implementation of e-commerce authentication systems. Generally speaking, the approaches adopted in e-commerce authentication are basically the same as in e-authentication; the difference is that e-commerce authentication is a narrower field that focuses on the transactions between customers and suppliers. A simple example of e-commerce authentication is a client communicating with a merchant server via the Internet. The merchant server usually utilizes a web server to accept client requests, a database management system to manage data, and a payment gateway to provide online payment services.
Other applications:
Self-sovereign identity With self-sovereign identity (SSI), the individual identity holders fully create and control their credentials, while verifiers can authenticate the provided identities on a decentralized network.
Perspectives:
To keep up with the evolution of services in the digital world, there is a continued need for security mechanisms. While passwords will continue to be used, it is important to rely on authentication mechanisms, most importantly multifactor authentication. As the usage of e-signatures continues to expand significantly throughout the United States, the EU and the world, it is expected that regulations such as eIDAS will eventually be amended to reflect changing conditions, along with regulations in the United States. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Magnesium oxide wallboard**
Magnesium oxide wallboard:
Magnesium oxide, more commonly called magnesia, is a versatile mineral that, when used as part of a cement mixture and cast into thin cement panels under proper curing procedures and practices, can be used in residential and commercial building construction. Some versions are suitable for a wide range of general building uses and for applications that require fire resistance, mold and mildew control, and sound control, among many other benefits. Magnesia board has strength and resistance due to very strong bonds between magnesium and oxygen atoms that form magnesium oxide crystals (with the chemical formula MgO).
Magnesium oxide wallboard:
Magnesia boards are used in place of traditional gypsum drywall as wall and ceiling covering material and sheathing. It is also used in a number of other construction applications such as fascias, soffit, shaft-liner and area separation, wall sheathing, and as tile backing (backer board) or as substrates for coatings and insulated systems such as finish systems, EIFS, and some types of stucco.
Magnesium oxide wallboard:
Magnesia cement board for building construction is available in various sizes and thicknesses. It is not a paper-faced material. It generally comes in a light gray, white or beige color. Numerous versions and grades exist, including smooth-face, rough-texture, utility and versatile grades, as well as different densities and strengths for different applications and uses.
Magnesium oxide wallboard:
Presently, various magnesia cement boards are widely used in Asia as a primary construction material. Some versions were designated as 'official' specified construction materials of the 2008 Summer Olympics, and some versions are used extensively on the inside and outside of all the walls, for fireproofing beams, and as the sub-floor sheathing in one of the world's tallest buildings, Taipei 101, located in Taipei, Taiwan.
Magnesium oxide wallboard:
Magnesia cement is manufactured in a number of areas around the world, primarily near areas where magnesia based ore (periclase) deposits are mined. Major deposits are found in China, Europe, and Canada. Magnesia ore deposits in the US are negligible. Estimates put the use of magnesia board products at around 8 million ft² in Asia alone. It is gaining popularity in the US, particularly near coastal regions.
History:
Magnesia cement use in masonry construction is ancient. It was used primarily as a mortar component and stabilizer for soil bricks. Magnesia has also been identified in the Great Wall of China and other ancient landmarks. Roman cement is reported to have contained high levels of magnesia.
In the West, Portland cement replaced magnesia for masonry uses in the 20th century when energy was cheap (see energy efficiency) and mold infection was poorly understood.
However, some projects continued to use magnesia. New York City's Brooklyn Bridge base is made from locally mined cement, a mixture of calcium oxide and magnesia cement commonly called Rosendale cement, the only natural non-fired cement made in the US.
Magnesia cement boards were approved for construction use in the US around 2003.
Due to its fire resistance and safety ratings, New York and New Jersey were early adopters of magnesia cement board. Florida has adopted magnesia boards for mold/mildew resistance. It is hurricane and impact tested and approved in Miami-Dade County.
Located in Taipei, Taiwan, magnesia board can be found on all 101 stories of Taipei 101, currently the eighth tallest building in the world. Magnesia sheeting was used on the inside and outside of all the walls, fireproofing beams and as the sub-floor sheathing.
Purpose and use:
Magnesia board is used primarily as a wallboard alternative to conventional gypsum-based drywall and plywood. The magnesia boards can be scored and snapped, sawed, drilled, and fastened to wood or steel framing.
Magnesia boards are a good example of the advances made in construction materials to meet changes in building codes for safety and durability.
Applications:
- Interior wall and ceiling board
- Exterior wall and fencing board
- Exterior sheathing
- Trim materials
- Fascias
- Soffits
- Shaft-liner and area separation wall board
- Tile backing (backer board) and underlayment
- Substrates for coatings and insulated systems such as direct-applied finish systems, EIFS, SIPS, Portland-type stucco and synthetic stuccos
Advantages:
- Ratings and testing: fire-resistant (UL 055 and ASTM-tested and A-rated); water-resistant (freeze/thaw-tested for 36 months); mold/fungus/bug free (non-nutritious to mold, fungus or insects per ASTM G-21); impact-resistant (ASTM D-5628); NYC approved (MEA # 359-02-M); silica/asbestos free; STC-rated 53-54.
- Can be used in place of traditional drywall or cement boards; no special tools required.
- Hard, non-absorbent surface – using fibreglass backing – with no paper.
- Can be used in applications like cement-based siding, subject to using water-proof coating systems.
- Available in colors.
- Energy efficient – magnesite calcines at approximately 780 °C, compared to the over 1,400 °C required to form traditional Portland cement or calcium oxide, the starting material for the preparation of slaked lime or portlandite used in common mortar and plaster.
- Magnesia boards have been mentioned in articles about biologically friendly construction and risks of mold infection.
- Comparable in cost to cement board made from Portland cement, with numerous advantages over that material for wet applications.
Disadvantages:
Natural deposits of magnesium carbonate (Magnesite ore) occur in China and this is calcined to produce magnesium oxide. Local governments in China prohibit the export of raw materials needed to manufacture MgO elsewhere.
Little mining of magnesium-based minerals occurs in the United States or Europe, and such mining is not thought profitable other than for higher-value ceramic applications such as refractory brick preparations (so-called magnesia refractories). Most building projects involving low-cost MgO board will inevitably rely on Chinese or Indian materials.
In most cases, good quality magnesia board is more expensive than paperfaced gypsum drywall material.
Disadvantages:
Like all cement mixtures, magnesia cements and the related mixing recipes and equipment require strict controls over both the raw material going into the mixer and the curing process, with proper waiting time for setting and handling of the fresh and semi-fresh product. Many cheaper brands achieve high early strength using magnesium oxychloride cement technologies, which make the board more susceptible to water weakening and inconsistent material quality.
Disadvantages:
Several different producers exist, with big differences in their production and selling costs, which greatly affects the mix design and curing process. This makes each brand very different in potential uses. Even though the different brands may look and feel similar, caution must be used when selecting versions and brands for a specific use, since they are not all the same or usable in the same way. Boards tend to have quite unique installation requirements; each version of magnesia board needs to be installed according to the manufacturer's recommendations to avoid installation problems.
Disadvantages:
Most often the boards are produced using Sorel cement (magnesium oxychloride), resulting in a slightly hygroscopic product that can exhibit a problem called "crying boards" when applied in overly humid climates. Example: Dokk1.
The chloride in Sorel cement is relatively immobile, but in some cases can produce a corrosive environment for embedded fasteners and steel studs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Netilmicin**
Netilmicin:
Netilmicin (1-N-ethylsisomicin) is a semisynthetic aminoglycoside antibiotic, and a derivative of sisomicin, produced by Micromonospora inyoensis. Aminoglycoside antibiotics have the ability to kill a wide variety of bacteria. Netilmicin is not absorbed from the gut and is therefore only given by injection or infusion. It is only used in the treatment of serious infections particularly those resistant to gentamicin.
It was patented in 1973 and approved for medical use in 1981. It was approved for medical use in the UK in December 2019, for the treatment of external infections of the eye. It is on the World Health Organization's List of Essential Medicines.
Comparison with drugs:
According to the British National Formulary (BNF), netilmicin has similar activity to gentamicin, but less ototoxicity in those needing treatment for longer than 10 days. Netilmicin is active against a number of gentamicin-resistant Gram-negative bacteria but is less active against Pseudomonas aeruginosa than gentamicin or tobramycin.
However, according to the studies mentioned below, the above advantages are controversial: Netilmicin (Netromycin – Schering-Plough; Netspan – Cipla): In summary, netilmicin has not been demonstrated to have significant advantages over other aminoglycosides (gentamicin, tobramycin, amikacin), and it is more expensive; thus, its potential value is limited. Drug Intelligence & Clinical Pharmacy: Vol. 17, No. 2, pp. 83-91.
Once-daily gentamicin versus once-daily netilmicin in patients with serious infections—a randomized clinical trial: We conclude that with once-daily dosing no benefit of netilmicin over gentamicin regarding nephro- or ototoxicity could be demonstrated. Journal of Antimicrobial Chemotherapy (1994) 33, 823-835.
Ototoxicity and nephrotoxicity of gentamicin vs netilmicin in patients with serious infections. A randomized clinical trial: We conclude that with once-daily treatment no benefit of netilmicin over gentamicin regarding nephro- or ototoxicity could be demonstrated. Clin Otolaryngol Allied Sci. 1995 Apr;20(2):118-23.
Relative efficacy and toxicity of netilmicin and tobramycin in oncology patients: We conclude that aminoglycoside-associated ototoxicity was less severe and more often reversible with netilmicin than with tobramycin. Arch Intern Med. 1986 Dec;146(12):2329-34.
Daily single-dose aminoglycoside administration. Therapeutic and economic benefits: Animal studies have shown that dosing aminoglycosides once daily is more efficient and less nephrotoxic than the conventional multiple daily dosing regimens. Netilmicin and amikacin are the drugs most often used in clinical trials of once-daily dosing regimens. Ugeskrift for Læger. 1993 May 10;155(19):1436-41.
Comparison of Netilmicin with Gentamicin in the Therapy of Experimental Escherichia coli Meningitis: Because of its reduced toxicity and greater in vivo bactericidal activity, netilmicin may offer an advantage over gentamicin in the therapy of gram-negative bacillary meningitis. Antimicrob Agents Chemother. 1978 June; 13(6): 899-904.
A comparison of netilmicin and gentamicin in the treatment of pelvic infections: The microbacteria isolated by standard culture techniques before therapy revealed Neisseria gonorrhoeae in 69% and 51% of the netilmicin and gentamicin groups, respectively; anaerobic organisms were cultured in about 75% of each group. Obstetrics & Gynecology 1979;54:554-557.
Netilmicin: a review of toxicity in laboratory animals: Presently available data suggest that netilmicin offers distinct advantages over older aminoglycosides. Final conclusions must await prospective randomized double-blind trials in man. J Int Med Res. 1978;6(4):286-99.
Nonparallel nephrotoxicity dose-response curves of aminoglycosides: Nephrotoxicity comparisons of aminoglycosides in rats, utilizing large multiples of human doses, have indicated an advantage for netilmicin. However, no nephrotoxicity advantage of netilmicin has been demonstrated at the lower doses used in clinics. Antimicrob Agents Chemother. 1981 June; 19(6): 1024–1028.
Comparative ototoxicity of netilmicin, gentamicin, and tobramycin in cats: Under the conditions of this study, at least a twofold (vestibular) to fourfold (cochlear) relative safety margin for ototoxicity was established in favor of netilmicin over tobramycin and gentamicin. Toxicol Appl Pharmacol. 1985 Mar 15;77(3):479-89.
Comparison with drugs:
Comparison of Netilmicin and Gentamicin Pharmacokinetics in Humans: In a crossover study, single doses of netilmicin and gentamicin were administered intramuscularly, each at 1.0 and 2.5 mg/kg. No significant differences were observed between the two drugs in disposition half-life, rate of distribution and elimination, area under the serum concentration-time curve, urinary excretion, total body clearance, and renal clearance. Antimicrobial Agents and Chemotherapy, Feb. 1980, p. 184-187. Schering-Plough Research Division, Bloomfield, New Jersey 07003. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MIT OpenCourseWare**
MIT OpenCourseWare:
MIT OpenCourseWare (MIT OCW) is an initiative of the Massachusetts Institute of Technology (MIT) to publish all of the educational materials from its undergraduate- and graduate-level courses online, freely and openly available to anyone, anywhere. The project was announced on April 4, 2001, and uses the Creative Commons Attribution-NonCommercial-ShareAlike license. The program was originally funded by the William and Flora Hewlett Foundation, the Andrew W. Mellon Foundation, and MIT. MIT OpenCourseWare is supported by MIT, corporate underwriting, major gifts, and donations from site visitors. The initiative inspired a number of other institutions to make their course materials available as open educational resources. As of May 2018, over 2,400 courses were available online. While a few of these were limited to chronological reading lists and discussion topics, a majority provided homework problems and exams (often with solutions) and lecture notes. Some courses also included interactive web demonstrations in Java, complete textbooks written by MIT professors, and streaming video lectures.
MIT OpenCourseWare:
As of May 2018, 100 courses included complete video lectures. The videos were available in streaming mode, but could also be downloaded for viewing offline. All video and audio files were also available from YouTube, iTunes U and the Internet Archive.
Project:
MIT OpenCourseWare sits within MIT Open Learning at the Massachusetts Institute of Technology.
Project:
History The concept of MIT OpenCourseWare grew out of the MIT Council on Education Technology, which was charged by MIT provost Robert Brown in 1999 with determining how MIT should position itself in the distance-learning/e-learning environment. MIT OpenCourseWare was then initiated to provide a new model for the dissemination of knowledge and collaboration among scholars around the world, and contributes to the “shared intellectual commons” in academia, which fosters collaboration across MIT and among other scholars. The project was spearheaded by professors Dick K.P. Yue, Shigeru Miyagawa, Hal Abelson and other MIT faculty. The main challenge in implementing the MIT OCW initiative had not been faculty resistance, but rather the logistical challenges presented by determining ownership and obtaining publication permission for the massive amount of copyrighted items that are embedded in the course materials of MIT's faculty, in addition to the time and technical effort required to convert the educational materials to an online format. Copyright in MIT OpenCourseWare material remains with MIT, members of its faculty, or its students. In September 2002, the MIT OpenCourseWare proof-of-concept pilot site opened to the public, offering 32 courses. In September 2003, MIT OpenCourseWare published its 500th course, including some courses with complete streaming video lectures. By September 2004, 900 MIT courses were available online.
Project:
In 2005, MIT OpenCourseWare and other open educational resources projects formed the OpenCourseWare Consortium, which seeks to extend the reach and impact of open course materials, foster new open course materials and develop sustainable models for open course material publication.
In 2007, MIT OpenCourseWare introduced a site called Highlights for High School that indexes resources on the MIT OCW applicable to advanced high school study in biology, chemistry, calculus and physics in an effort to support US STEM education at the secondary school level.
Project:
In 2011, MIT OpenCourseWare introduced the first of fifteen OCW Scholar courses, which are designed specifically for the needs of independent learners. While still publications of course materials like the rest of the site content, these courses are more in-depth and the materials are presented in logical sequences that facilitate self-study. No interaction with other students is supported by the OCW site, but study groups on the collaborating project OpenStudy are available for some OCW Scholar courses. In 2012, Harvard and MIT launched edX, a massive open online course (MOOC) provider, to deliver online learning opportunities to the public. Between 2013 and 2019, some MIT OCW courses were delivered by the European MOOC platform Eliademy.
Project:
Technology MIT OCW was originally served by a custom content management system based on Microsoft's Content Management Server, which was replaced in mid-2010 with a Plone-based content management system. The publishing process is described by MIT as a "large-scale digital publishing infrastructure [that] consists of planning tools, a content management system (CMS), and the MIT OpenCourseWare content distribution infrastructure". Video content for the courses was originally primarily in RealMedia format. In 2008, OCW transitioned to using YouTube as the primary digital video streaming platform for the site, embedding YouTube video back into the OCW site. OCW video and audio files are also provided in full for offline downloads on iTunes U and the Internet Archive. In 2011, OCW introduced an iPhone app called LectureHall in partnership with Irynsoft.
Project:
Funding As of 2013, the annual cost of running MIT OCW was about $3.5 million. In 2011, "MIT's goal for the next decade [was] to increase our reach ten-fold" and to secure funding for this. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Short-chain acyl-CoA dehydrogenase**
Short-chain acyl-CoA dehydrogenase:
Short-chain acyl-CoA dehydrogenase (EC 1.3.8.1, butyryl-CoA dehydrogenase, butanoyl-CoA dehydrogenase, butyryl dehydrogenase, unsaturated acyl-CoA reductase, ethylene reductase, enoyl-coenzyme A reductase, unsaturated acyl coenzyme A reductase, butyryl coenzyme A dehydrogenase, short-chain acyl CoA dehydrogenase, short-chain acyl-coenzyme A dehydrogenase, 3-hydroxyacyl CoA reductase, butanoyl-CoA:(acceptor) 2,3-oxidoreductase, ACADS (gene)) is an enzyme with systematic name short-chain acyl-CoA:electron-transfer flavoprotein 2,3-oxidoreductase. This enzyme catalyses the following chemical reaction: a short-chain acyl-CoA + electron-transfer flavoprotein ⇌ a short-chain trans-2,3-dehydroacyl-CoA + reduced electron-transfer flavoprotein. This enzyme contains FAD as a prosthetic group. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ENQUIRE**
ENQUIRE:
ENQUIRE was a software project written in 1980 by Tim Berners-Lee at CERN, which was the predecessor to the World Wide Web. It was a simple hypertext program that had some of the same ideas as the Web and the Semantic Web but was different in several important ways.
According to Berners-Lee, the name was inspired by the title of an old how-to book, Enquire Within Upon Everything.
The conditions:
Around 1980, approximately 10,000 people were working at CERN with different hardware, software and individual requirements. Much work was done by email and file exchange. The scientists needed to keep track of different things, and different projects became involved with each other. Berners-Lee started a six-month contract at CERN on 23 June 1980, during which he developed ENQUIRE. The requirements for setting up a new system were compatibility with different networks, disk formats, data formats, and character encoding schemes, which made any attempt to transfer information between dissimilar systems a daunting and generally impractical task. The hypertext systems that preceded ENQUIRE, such as Memex and NLS, did not meet these requirements.
Differences to the World Wide Web:
ENQUIRE had pages called cards and hyperlinks within the cards. The links carried meaning: about a dozen relationship types described how a card related to the things, documents and groups described by other cards. The relationships between cards could be seen by everybody, explaining why a link existed and what would happen if a card was removed. Everybody was allowed to add new cards, but a new card always had to be linked to an existing card.
Differences to the World Wide Web:
ENQUIRE was closer to a modern wiki than to a web site:
- database, though a closed system (all of the data could be taken as a workable whole)
- bidirectional hyperlinks (in Wikipedia and MediaWiki, this is approximated by the What links here feature); this bidirectionality allows ideas, notes, etc. to link to each other without the author being aware of it, so that they (or, at least, their relationships) get a life of their own
- direct editing of the server (like wikis and CMS/blogs)
- ease of composing, particularly when it comes to hyperlinking

The World Wide Web was created to unify the different systems existing at CERN, such as ENQUIRE, CERNDOC, VMS Notes and Usenet.
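A minimal, hypothetical sketch of the card model described above (names invented; ENQUIRE itself was written in Pascal, not Python): every new card must attach to an existing card, and each link is recorded on both ends, which is what makes the hyperlinks bidirectional:

```python
class Card:
    """An ENQUIRE-style card whose typed links are bidirectional."""

    def __init__(self, title: str):
        self.title = title
        self.links_out: list[tuple[str, "Card"]] = []  # (relationship, target)
        self.links_in: list[tuple[str, "Card"]] = []

    def link(self, relationship: str, target: "Card") -> None:
        # Recording the link on both cards is what "What links here" approximates.
        self.links_out.append((relationship, target))
        target.links_in.append((relationship, self))

root = Card("Hypertext project")
module = Card("RPC module")
root.link("includes", module)  # a new card always hangs off an existing one
print([(rel, c.title) for rel, c in module.links_in])  # [('includes', 'Hypertext project')]
```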
Why ENQUIRE failed:
Berners-Lee came back to CERN in 1984 and used his own system intensively. He realized that most of the time spent coordinating the project went into keeping information up to date. He recognized that a system similar to ENQUIRE was needed, "but accessible to everybody." People needed to be able to create cards independently of others and to link to other cards without updating the linked card; this idea is the big difference from ENQUIRE and the cornerstone of the World Wide Web. Berners-Lee had not made ENQUIRE suitable for other people to use successfully, and other CERN divisions faced situations similar to that of his own division. Another problem was that external links, for example to existing databases, were not allowed, and that the system was not powerful enough to handle enough connections to the database. Further development stopped because Berners-Lee gave the ENQUIRE disc to Robert Cailliau, who had been working under Brian Carpenter before he left CERN. Carpenter suspects that the disc was reused for other purposes, since nobody was later available to do further work on ENQUIRE.
Technical:
The application ran on a 24×80 plain-text terminal.
The first version was able to hyperlink between files.
ENQUIRE was written in the Pascal programming language and implemented on a Norsk Data NORD-10 under SINTRAN III, and version 2 was later ported to MS-DOS and to VAX/VMS. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Grey import vehicle**
Grey import vehicle:
Grey import vehicles are new or used motor vehicles and motorcycles legally imported from another country through channels other than the maker's official distribution system or a third-party channel officially authorized by the manufacturer. The synonymous term parallel import is sometimes substituted.

Car makers frequently arbitrage markets, setting the price according to local market conditions, so the same vehicle will have different real prices in different territories. Grey import vehicles circumvent this profit-maximization strategy. Car makers and local distributors sometimes regard grey imports as a threat to their network of franchised dealerships, but independent distributors do not, since more cars of an odd brand bring in money from service and spare parts.

In order for the arbitrage to work, there must be some means to reduce, eliminate, or reverse whatever savings could be achieved by purchasing the car in the lower-priced territory. Examples of such barriers include regulations preventing import or requiring costly vehicle modifications. In some countries, such as Vietnam, the import of grey-market vehicles has largely been banned.
Overview:
Grey imports are generally used vehicles, although some are new, particularly in Europe where the European Union tacitly approves grey imports from other EU countries. In 1998, the European Commission fined Volkswagen for attempting to prevent prospective buyers from Germany and Austria from going to Italy to buy new VWs at lower pre-tax prices; the pre-tax price is lower in Italy, as in Denmark, due to higher taxes on cars. It is even possible for car buyers in the United Kingdom to buy right-hand drive cars in EU countries with right-hand traffic, where left-hand drive cars are the norm.

Japanese used vehicle exporting is a large global business, as rigorous road tests and high depreciation make such vehicles worth very little (in Japan) after six years, and strict environmental laws make vehicle disposal expensive. Consequently, it is profitable to export them to other countries with left-hand traffic, such as Australia, New Zealand, the Republic of Ireland, the United Kingdom, Malta, South Africa, Kenya, Uganda, Zambia, Mozambique, Malaysia, Bangladesh and Cyprus. Some have even been exported to countries such as Peru, Paraguay, Russia, Mongolia, Yemen, Burma, Canada, and the United Arab Emirates, despite the fact that these countries drive on the right. It is actually because of these vehicles' RHD configuration that many of them are sent to LHD countries in the first place, for use as mail delivery vehicles. Many Japanese-market Jeep Cherokees, for example, have found new use with rural mail carriers in the United States.

Thailand is the third largest exporter of brand new and used right-hand drive cars after Japan and Singapore, because of that country's high-volume production of diesel 4x4 vehicles such as the Toyota Hilux Vigo, Toyota Fortuner, Mitsubishi L200, Nissan Navara, Ford Ranger, Chevy Colorado, and others. The Toyota Vigo is the vehicle most exported by parallel exporters. Unlike Japanese and Singaporean exports, the majority of Thailand's grey exports are of new vehicles, and the market is dominated by two companies, one of which is known as Alpha Automobile Co., Ltd. These trucks are also exported to many countries, including Japan, because Japanese domestic makers no longer officially sell them there through authorized dealers.
Overview:
Similarly, there are exports of left hand drive (LHD) used cars from Germany to countries in Eastern Europe, some EU countries (less likely Spain, Portugal and Greece) and West Africa (especially Ghana and Nigeria, not Senegal and its surrounding countries). Some cars in the United States are sold only as export by insurance companies due to them having been stolen and recovered, or damaged in other ways.
By country:
United States Because the United States and Canada uniquely have not signed onto United Nations Economic Commission for Europe standards for automobile design (see World Forum for Harmonization of Vehicle Regulations), they use an anomalous set of motor vehicle safety and emission regulations.
By country:
For the U.S., these are developed and administered by the National Highway Traffic Safety Administration (NHTSA) and the Environmental Protection Agency (EPA). They differ significantly in detail from the international UN Regulations used throughout the rest of the world. From 1976 to 1988, individual Americans were actually able to obtain cars conforming to World Forum for Harmonization of Vehicle Regulations standards and "convert" them to vehicles compliant with U.S. regulations – this was known as the "grey market", in that it was a legal activity parallel to officially sanctioned manufacturer efforts. Vehicle manufacturers face considerable expense to type-certify a vehicle for U.S. sale – the amount is not widely publicized, but Automotive News cites a 2013 model vehicle for which this certification cost US$42 million. This cost particularly affects low-volume manufacturers and models, most notably the makers of high-end sports cars. However, larger companies such as Alfa Romeo and Peugeot have also cited the cost of "federalizing" their vehicle lineups as a disincentive to re-enter the U.S. market.
By country:
Pre-1968: Era of Freedom During the Second World War American servicemen stationed in Europe began to experience the benefits of the nimble British sports cars, and many shipped them home on their return. There were no legal restrictions to this behavior until 1967. Some owners even acted as sales reps for manufacturers who were happy to help, leading to official imports and the British sports car craze in North America. As with future waves of imported cars, this led the U.S. automakers to respond by introducing home grown models, such as the Nash-Healey, Ford Vega, Ford Thunderbird, and Chevrolet Corvette.
By country:
1968: Walled Garden Beginning in 1968, U.S. regulations surrounding vehicle importing became far harsher, and many vehicles, like the Mini, were immediately excluded from the market. NHTSA and EPA regulations post-1968 criminalize the possession of a vehicle not meeting U.S. standards. Exceptions exist for foreign nationals touring the U.S. in their own vehicle and for cars imported for Show and Display purposes.
By country:
Response of Market Demand Because of the unavailability of certain car models, demand for grey market vehicles arose during the Malaise era in the late 1970s. Importing them into the US involved modifying or adding certain equipment, such as headlamps, sidemarker lights, bumpers, and a catalytic converter, as required by the relevant regulations. The NHTSA and EPA would review the paperwork and then approve possession of the vehicle. It was also possible for these agencies to reject the application and order the automobile destroyed or re-exported. The grey market provided an alternative method for Americans to acquire desirable vehicles and still obtain certification. Tens of thousands of cars were imported this way each year during the 1980s.
By country:
The vast majority of these imports were by individuals importing just one car. Many otherwise unavailable vehicles entered the US via the grey market, like the Citroën CX, Range Rover Classic, Renault 5 Turbo, and Mercedes-Benz G-Class.
By country:
1988: Crackdown The grey market was successful enough that it ate significantly into the business of Mercedes-Benz in North America and their dealers. The corporation launched a successful multi-million-dollar congressional lobbying effort to stop private importation of vehicles not officially intended for the U.S. An organisation called AICA (Automotive Importers Compliance Association) was formed by importers in California, Florida, New York, Texas, and elsewhere to counter some of these actions by Mercedes lobbyists, but the Motor Vehicle Safety Compliance Act was passed in 1988, effectively ending private import of grey-market vehicles to the United States. There have been allegations of improper lobbying, but the issue has never been raised in court.
By country:
As a result of being practically banned, the grey market declined from 66,900 vehicles in 1985 to 300 vehicles in 1995. It is no longer possible to import a non-U.S. vehicle into the United States as a personal import, with four exceptions, none of which permits Americans to buy recent vehicles not officially available in the United States.
By country:
Registered Importer After 1988, a vehicle not originally built to U.S. specifications can, under certain, very limited circumstances be imported through a registered importer or independent commercial importer, who modifies the vehicle to comply with US equipment and safety regulations and then certifies it as compliant, at enormous expense. In practice, this avenue is nearly impossible, even for Bill Gates. Those who import nonconforming motor vehicles sometimes bring in more than one car at a time to spread the substantial cost of the necessary destructive testing, modification, and safety certification. Destructive crash testing is not always needed if the vehicle can be shown to be substantially similar to a model sold in the U.S.
By country:
Even Canadian-market vehicles may not meet these requirements. Since Canadian regulations are similar to those in the U.S., an individual can import a vehicle manufactured to Canadian motor vehicle safety standards if the original manufacturer issues a letter stating that the vehicle also conforms to U.S. motor vehicle standards. The decision to issue a compliance letter is solely at the discretion of the manufacturer, even if the vehicle is known to meet U.S. standards. Before issuing a compliance letter, most manufacturers request proof that the owner of the vehicle is a resident of Canada, and that the car was registered and used in Canada for a minimum period. This is done because the manufacturers maintain separate pricing structures for the U.S. and Canadian markets.
By country:
25 Year Rule In 1989, NHTSA granted vehicles over 25 years of age dispensation from the rules it administers, since these are presumed to be collector vehicles. However, there are two exceptions to the rule. One is California, where vehicle emissions requirements make it difficult to register a classic vehicle from overseas. (California Smog Check is mandated for automobiles 1976 and newer.) In 21 states, mini trucks (JDM market kei trucks) of any age can be legally imported and registered as a utility vehicle with on-road use and top speed restrictions varying by state, although states that allow mini trucks to be operated on public roads prohibit their operation on Interstate highways. In 2021, Maine began deregistering third-generation Mitsubishi Delica vans that were legally imported through the dispensation by classifying it as an off-road all-terrain vehicle. Rhode Island also deregistered legally imported Kei cars (including non-van models) in October 2021, citing a best practices guideline (non-legally binding) issued by the American Association of Motor Vehicle Administrators.
By country:
JDM Some Americans are interested in Japanese domestic market vehicles, like the Nissan Skyline. In 1999, a California company called Motorex had a number of Nissan Skyline R33 GTS25s crash-tested. They submitted their information to NHTSA and petitioned for 1990–1999 GT-Rs and GTSs to be declared eligible for import.
By country:
Many Skylines were subsequently imported through Motorex. This lasted until late 2005, when NHTSA learned not all 1990 through 1999 Skyline models would perform identically in crash testing. Motorex had submitted information for only the R33, but had asserted that the data applied to R32, R33, and R34 models. NHTSA determined that only 1996–1998 R33 models have been demonstrated as capable of being modified to meet the federal motor vehicle safety standards, and that only those models are eligible for import. In March 2006, Motorex ceased all imports and Motorex principal Hiroaki "Hiro" Nanahoshi was arrested and held on $1 million bail on financial, kidnapping, and assault charges.
By country:
Market demand signals from grey market Penetrating the market via the grey market first is a valid market entry strategy. The Lamborghini Countach was initially only available as a grey market vehicle (from 1976 to 1985), as were the Range Rover and the Mercedes-Benz G-Class. These automakers later made US models to meet the demand. This avenue of vehicle availability was increasingly successful, especially in cases where the US model of a vehicle was less powerful and/or less well equipped than versions available in other markets. For example, Mercedes-Benz chose to offer Americans only the sluggish 380SEL model in 1981, when some of them wanted the much faster 500SEL available in the rest of the world. BMW had the same issue with their 745i Turbo. Nissan developed the 2008 Nissan GT-R to meet this market demand in the United States. As with the earlier G-Wagen and 560SEL, Mercedes-Benz was also able to use the grey market to read market demand signals – with the Smart Fortwo, which was imported in this manner in 2004–2006, prior to its official U.S. release in 2007.
By country:
Canada Cars not originally manufactured to Canadian-market specifications may be legally imported once they are 15 or more years old. This has led to the import of many Japanese sports cars such as the Nissan Skyline. The only categorical exception to the 15-year rule is that many – but not all – vehicles manufactured to United States-market specifications can legally be imported into Canada under the compliance modification and inspection program administered by the Registrar of Imported Vehicles. Typically, modifications to meet Canadian standards include the installation of daytime running lights and tether anchors to permit secure attachment of infant car seats, along with documentation indicating that any repairs required in response to the original manufacturer's factory recalls are complete; passenger cars assembled on or after September 1, 2007 are also required to have an immobilization system that meets the CMVSS 114 standard. Labelling of the vehicle to indicate its imported status, to warn that the odometer is counting in miles (as made-for-Canada odometers have used kilometres since 1976) and to translate safety-related warning labels (such as airbag maintenance procedures) is typically also required. Speedometers in US and most Canadian vehicles indicate both miles per hour and km/h, either with dual calibration or with a single set of numbers that can be made to display miles or kilometres at the driver's option, so are usually left unmodified.
By country:
In March 2007, Transport Canada initiated proposed rulemaking to change the importation laws such that vehicles not originally manufactured to Canadian-market specifications would be eligible for import only once they are 25 years old, rather than the present 15-year cutoff rule. The main impetus behind this proposal is the significant influx of Japanese-market vehicles in Canada in recent years, particularly in Western provinces such as British Columbia due to geographical proximity to Asian ports of departure. BC's public auto insurance administrative body, Insurance Corporation of British Columbia, in 2007 released a report finding that right-hand drive vehicles are involved in 40% more crashes than left-hand drive vehicles in that province.
By country:
New Zealand In the 1980s, New Zealand eased import restrictions, and reduced import tariffs on cars. Consequently, large volumes of used cars from Japan appeared on the local market, at a time when most cars in New Zealand were locally assembled and expensive compared to other countries, with most used cars available being comparatively old. Local buyers now had a much wider choice of models, but despite specifications being higher than so-called "NZ New" cars, there were many problems with "clocking" or odometer fraud, with the odometer wound back to display a much lower mileage. Other problems include vehicles damaged in accidents in Japan. This is in contrast to those imported from Australia, for which the history of such vehicles, including write-offs, is readily available from insurance companies. However, the widespread availability of used Japanese imports prompted official importers to reduce the price of brand new cars, and in 1998, New Zealand became one of the few countries in the world to remove all import tariffs on motor vehicles. Grey market vehicles comprise a majority of cars in the national fleet. These secondhand imports have achieved 'normal' status and are used and serviced without comment throughout society. A huge industry servicing and supplying parts for these vehicles has developed. After years of trying to stop grey imports the car companies themselves have become involved, importing in competition with their own new models and providing owners with spare part and repair services. Russia and many African countries, albeit not South Africa where second-hand car imports are illegal, import large quantities of secondhand vehicles from Japan and Singapore.
By country:
Nevertheless, a great many used vehicles are imported, 94.6 percent of which come from Japan, most of which are Japanese makes. Most of the other makes are German, such as Audi, BMW, Mercedes-Benz, Porsche and Volkswagen. There are a smaller number of United States makes such as Chevrolet and Chrysler, which were built in right hand drive for the Japanese market. Although in heavy decline from 2005, used-vehicle import totals are higher than those of vehicles first registered in New Zealand. In 2006, 123,390 ex-overseas vehicles were registered, compared to 76,804 brand new vehicles. Used vehicles must, with some exceptions, be right-hand drive, and they must comply with recognised European, Australian, Japanese, or American emission and safety standards, or they are ineligible for import to New Zealand. In some cases a left-hand drive vehicle can be imported into New Zealand if it meets certain conditions or is a specialized vehicle. Left-hand drive vehicles 20 years or older normally do not have to meet any special requirements but must weigh no more than 3500 kg.
By country:
Ireland Japanese used car importing has been quite common in Ireland since the 1980s. The imported cars are cheaper than local used cars due to the very low value of used cars in Japan (and to an extent, used products in general), and because a much larger range of specifications is available on Japanese models compared to the very limited ranges sold locally – even in comparison to the UK, model ranges of Japanese cars can be very limited – mostly due to the high vehicle-registration tax and other taxes imposed on new cars sold in Ireland. For example, the Toyota Corollas sold in the late 1980s up until the late 1990s (E90 and E100 series) were only available in Ireland in one specification level, with few features and only the base 1.3 litre petrol and diesel engines. In Japan, however, 1.5 and 1.6 litre engines were also available, with around 6 different trim levels, options such as sunroofs, central locking and electric windows available on many specs as early as 1989, ABS and driver airbags optional since 1991, four-wheel drive, and performance GT models. Very basic saloons and diesel-engined models with automatic transmissions also appealed to taxi drivers.
By country:
In more recent years, Japanese imports have become less common as typical family cars, probably due to the great change in the Irish economy over the past 20 years – people generally have larger incomes now, and sales of new cars have soared. Importing from Japan has become more of a specialty market – importing sports models not originally available in Europe such as the Mitsubishi FTO, Toyota Corolla Levin/Toyota Sprinter Trueno, Toyota Starlet Glanza and Honda Integra has become quite popular, and sports cars like the Nissan Skyline GT-R, Toyota Supra and Mazda RX-7 are more easily available as imports. Also, small commercial kei car models such as the Daihatsu Midget II and Nissan S-Cargo are used by some businesses as advertising aids, as they are quite distinctive and eye-catching on the roads in Ireland. No modifications are required for Japanese imported cars to be registered and driven on the roads in Ireland. One disadvantage is that Japan uses a different FM radio band from everywhere else, so a band expander or a replacement stereo system is required to receive the full FM band used locally. Like all other cars used on public roads in Ireland, Japanese imports have to pass the National Car Test.
By country:
Other used imports sold in Ireland are from the UK, the most readily identifiable being those from General Motors, which badges its cars in the UK as Vauxhalls, not as Opels as in Ireland. As of 2007 the number of cars imported into the Republic of Ireland from both Northern Ireland and Great Britain is at record high levels, due to high new-car taxation in Ireland and the fact that UK cars tend to have higher specifications than Irish ones. This trend was highlighted by RTÉ in a consumer programme entitled "Highly Recommended".
By country:
United Kingdom In the United Kingdom, many people have chosen to buy new cars in other EU member states, where pre-tax prices are much lower than in the UK, and then import them into their own country, where they only pay the UK's rate of value added tax (VAT). This is especially the case in Northern Ireland, as pre-tax prices in the Republic of Ireland are kept low because of a vehicle registration tax levied on top of VAT. Other UK buyers can also request a model in RHD when ordering from a dealer in continental Europe for a small supplement. Motor dealers in the EU are compelled under EU competition law to supply right-hand drive models at the same price as LHD models should a buyer request it. Strictly speaking, such imports are known as parallel imports.
By country:
Warranties on new cars bought in an EU member state are valid throughout the EU, meaning that a UK resident who has bought a new car in another member state and then imports it into the UK will be covered by the same warranty. However, whereas UK warranties tend to be for three years, those in other EU countries may be only for one or two.
By country:
There are also some Japanese imported cars found in the UK, the most popular being the Mazda Eunos Roadster, Nissan Figaro and Mitsubishi Pajero, as well as performance cars such as Nissan Skylines, Mitsubishi FTOs and highly tuned Subaru Impreza and Toyota Supra variants that were never officially imported into the UK. These cars tend to be cheaper than official UK imports, but often have better Japanese domestic market specification levels by comparison. The range of Japanese vehicles in the UK continues to grow as UK customers see high-specification, low-mileage Japanese vehicles on the roads, and dealers import new models each month.
By country:
Importing a vehicle from outside of the EU may require the vehicle to be subjected to an IVA test if the vehicle is under ten years of age. This test is carried out by the government agency known as the DVSA. If the vehicle is left hand drive, it is fairly straightforward to submit an IVA test application and then present the vehicle to the local test station. The vehicle will likely need changes to lighting systems, e.g. amber rear indicators, the addition of a rear foglight, and correct UK headlights. If the vehicle being imported is right hand drive, then it must be a personal import, i.e. owned by the importer for at least 6 months, with the importer returning to live in the UK permanently having lived outside of the EU for at least 12 months.
By country:
Australia In Australia, the commercial import of used motor vehicles is significantly regulated and restricted by the Department of Infrastructure. The allowed imports are limited to what are called special and enthusiast vehicles (SEVS), or cars manufactured 25 years ago and older (a threshold introduced with the Road Vehicle Standards (RVS) legislation).
By country:
All vehicles imported into Australia must also be listed on the "Register of Approved Vehicles". The limitation on vehicles manufactured after 1 January 1989 was to ensure compliance with the regulations governing the safety of vehicles on Australian roads that came into force on that date. Until the present regulations entered force at the start of 2004, cars over 15 years old could be imported; they would need to pass a Vehicle Inspection in Australia (needed for registration transfer in many states anyway) and often required safety modifications to ensure that they met the regulations that would have been in force at the time of their manufacture. To bring a special or enthusiast vehicle into Australia, the importer must either apply to have the car added to the SEVS register, or import a car already listed on the register. Vehicles can also be imported to Australia under what is known as a "Type Approval"; this is generally used by remanufacturing companies who wish to import a vehicle that is commercially sold overseas and remanufactured for use in Australia. Examples of such vehicles are the Chevrolet Camaro, the RAM family of trucks, and the Ford F-Series.
By country:
Grey imports can pose considerable challenges, due to Australia using left-hand traffic, as opposed to the more common right-hand traffic.
By country:
This means that vehicles produced for compliance in a country that uses right-hand traffic need a number of modifications to be compliant with the Australian Design Rules; these modifications must comply with Vehicle Standards Bulletin 14, primarily the components that relate to suspension, steering, and lighting. Vehicles imported into Australia must be made compliant by a Registered Automotive Workshop for use on the roads in Australia. This is different from a regular workshop as detailed in the Vehicle Inspection in Australia article, where a regular automotive workshop can be authorised to conduct vehicle safety inspections in most states.
By country:
External Territories Australia's external territories include Christmas Island, the Cocos (Keeling) Islands and Norfolk Island.
An approval under the RVS legislation is not required to import a road vehicle into an external Australian territory.
However, an approval is required to import a road vehicle into Australia from an Australian external territory.
Russia In Russia, grey imports, both new and used, comprised, at certain points, up to 80% of the automobile fleet, because domestic production was unable to meet market demand in both quantity and quality, especially in the early to mid-1990s.
By country:
In western Russia, most imports were from Germany, while to the east of the Ural mountains, where the cost of delivery made both the German imports and domestically produced vehicles particularly unattractive, a thriving industry of importing used vehicles from Japan developed. Even though Russia is a left hand drive country, RHD vehicles are nevertheless legal there, provided that some adjustments (e.g. retuning the headlights) are made, but these are cheap and easily done, thus making the cheap and well-built Japanese cars, trucks and tractors (which proved sturdy enough to withstand the severe Russian climate, bad roads, often inadequate servicing and questionable quality of fuel/oil) particularly attractive to the customer. In the main import centers such as Vladivostok and Yuzhno-Sakhalinsk a large-scale service and aftermarket industry developed, including parts depots, auction houses, and logistic centers, all geared for servicing and supporting this relentless import drive. The authorities tried to fight this phenomenon to protect the domestic industry, but the effects have so far been mixed. While increasingly stringent technical requirements and stiffer import tariffs had some effect, they only managed to force the locals to import newer and more expensive vehicles, and to invent various quasi-legal ways to circumvent these regulations. Japanese manufacturers themselves have also stepped in, creating local assembly plants that produce new, left hand drive models to try to compete with the grey imports.
By country:
South Korean used-vehicle imports became highly prominent in Vladivostok as a way to keep the market supplied after sanctions led to the suspension of Japanese used-vehicle imports from 2014 onwards.
By country:
China The parallel market has existed to a limited extent in China for several years, especially in the port cities of Tianjin and Dalian. The Shanghai free trade zone has a 'car dealer' where new cars imported from other territories are sold. Due to price arbitrage, these vehicles can be significantly cheaper than the same vehicle available from the official distributor. For example, a Porsche Cayenne parallel imported from the United States can be priced significantly cheaper, sometimes by up to CNY 200,000 ($30,000), than the official version sold in Chinese markets due to heavy dealer markups.
By country:
Hong Kong Although Hong Kong is a Special Administrative Region of China, cars in Hong Kong are right hand drive, in contrast with mainland China. Quite a lot of used cars from Japan are registered in Hong Kong, including both Japanese makes and even European makes, since both Hong Kong and Japan use right hand drive. In addition to used cars from Japan, some new cars are also independently imported. In order to register a car in Hong Kong, the car must be right hand drive, less than seven years old, gasoline powered, and must meet Euro VIc emission and noise standards, with an E-mark for all glazing, lamps and safety belts, and an unleaded-fuel restrictor installed (if not already present). Cars over 20 years old can be imported as classic cars and do not have to meet Euro VIc emission standards. However, Hong Kong does not accept privacy windows. If a Japanese used car is fitted with privacy windows, it must be converted to AS-2 standard clear glass in order to be registered in Hong Kong.
By country:
Hong Kong does not have an import tax for cars; however, the first registration tax is high. The first registration tax rate is progressive, which means a higher-priced car falls into a higher tax bracket. Since used car values are mostly lower than those of new cars of the same model, a used car can enjoy a lower tax rate due to its depreciated value.
By country:
Philippines In the Philippines, the main source of import grey market vehicles, both passenger and commercial, is Japan. Second is Korea, third is the US via trans-shipments through Japan. Only one port, Port Irene, Cagayan, was open to grey market passenger vehicles between 2008 and 2014. Some dispute over the legality of importation of passenger vehicles arose and led to a Port Irene ban. Currently the Executive Order 418 has been upheld, thus banning the importation of used passenger vehicles into the Philippines.
By country:
Commercial vehicles and special purpose vehicles are not covered by EO 418 and thus are currently being imported into Subic and Poro Point, San Fernando. Subic sees roughly one cargo ship per week and has many licensed locators. A second port, Poro Point, receives, as of February 2014, one shipment per month, facilitated solely by Forerunner Multi-Resources as the licensed locator, with SASTRAD KK in Japan as the sole shipper.
By country:
There is a grey-area trade of used mini trucks from Japan flourishing in Cebu. Mini trucks are chopped in Japan, imported as "parts" and are re-assembled by welding chassis parts together. Cheap to buy, they are a staple of small businesses transporting goods.
By country:
Japan In Japan, although the laws against grey import products in general are strict, and domestic car makers and authorized dealers have to conform to vehicle dimension standards and various other regulations differing from those in Europe and the United States, the laws on grey-imported vehicles are very lax due to the absence of import tariffs, and there are some grey imported vehicles that were never officially sold in the Japanese domestic market. These range from small city cars like the Toyota Aygo and Smart ForTwo, to sports cars like the AC Cobra and Chevrolet Camaro, to commercial vehicles and pickup trucks like the Toyota Tundra. Most cars were imported from their country of origin, and a few were converted to RHD, though some models were later introduced by authorized dealers due to popular demand.
By country:
Some grey market importers have tried to import domestically-made export-only cars, such as the Subaru BRAT and Infiniti FX, back into Japan's grey market. In Japan, the term used to refer to this scene is called Gyakuyunyū (逆輸入, literally "Reverse import", commonly shortened to "reimport"), and this mainly applies to Japanese-branded vehicles. Also, due to NOx laws and some other differing regulations, pickup trucks like the Toyota Hilux are no longer officially distributed in the JDM, so importers continue to import foreign-built trucks into Japan.
By country:
The concept of Gyakuyunyu is referenced in the music album Gyakuyunyū: Kōwankyoku. In Japan, foreign cars sold locally have traditionally been LHD, which is regarded as exotic or a status symbol. This even applies to British brands (although cars for the British market have the steering wheel on the right), in part because many have been imported via the US. Many tollbooths in Japan have a special lane for LHD vehicles. However, some US manufacturers have made RHD models for the Japanese market (namely the Ford Taurus and Chevrolet Cavalier), though with limited success; and as continental European brands become more popular, the preference is increasingly for RHD models, many of which are re-exported to countries like New Zealand as grey imports, along with Japanese models.
By country:
Sweden In Sweden, the main source of grey market vehicles is via Germany, which has more liberal laws and better tax deals on new imported cars. Many used cars also come from Germany, which has a bigger domestic market and rigorous roadworthiness tests. There are no age restrictions on imported vehicles, as such.
By country:
Thailand In Thailand, most grey imports are expensive, rare, and/or sports cars. Most are right-hand drive cars from the United Kingdom, Japan, or Hong Kong, though some are brought in from the US (as left-hand drive cars). Mercedes-Benz Thailand, an authorised distributor, does exist, but many Mercedes-Benz cars are imported and sold by grey importers. As of 2011, Mercedes-Benz Thailand has a new policy of not providing warranty work to grey-market imports. The new Thai law prohibits the importation of used cars, effective December 10, 2019.
By country:
Trinidad and Tobago In Trinidad and Tobago, nearly all used imports come from Japan, with some vehicles coming from Thailand and Singapore. Before February 2016, cars under 6 years old could be legally imported. Since February 2016, no car can be imported and registered if over 4 years old. An import licence is required and most imports are through dealerships. No new importer dealerships are permitted.
By country:
Senegal Grey imports from France are generally the most common in Senegal. There are also grey imports from Spain, especially vehicles formerly owned by taxi companies, though some are brought in from Italy or even Belgium.
Slovakia Grey imports exist in Slovakia in some forms. As part of the EU, importing vehicles from member states is not restricted at all, although an imported vehicle has to be inspected for origin (VIN manipulation, possible legalization of stolen vehicles). The main source countries for imports are Germany, Austria, Italy, and the Czech Republic.
By country:
Imports from non-EU states are restricted by age. Personal vehicles are restricted to 8 years (from the date of first registration to the date of customs declaration), and vehicles intended for business use are restricted to 5 years. Vehicles imported from non-member states which are not approved for European roads have to pass an import inspection and be modified to meet the restrictions. For vehicles from the U.S. or Canada it is mandatory to change the headlights, or adjust them to the EU beam pattern, change red rear turn signals to orange ones, separate them from the brake wiring, and add fog lights. Importing cars with RHD has been allowed since 20 May 2018, but the headlights and fog lights have to be converted to the continental pattern.
By country:
Pakistan Grey imports in Pakistan do exist; however, they are limited to passenger cars no older than 3 years and 4x4 jeeps or off-road passenger vehicles no older than 5 years at the time of arrival at the port of entry. There can be significant savings on luxury cars in Pakistan, as both the official dealer channels and the used car market can be excessively priced.
By country:
The Federal Board of Revenue for Pakistan gives the importer three schemes under which a passenger vehicle can be imported into the country:
Personal Baggage scheme
Transfer of Residence scheme
Gift scheme
The Pakistan Customs duty rates for imports, provided that the vehicles (excluding jeeps) meet the age requirements for Pakistan, are shown below:
Up to 800 cc (other than Asian makes): US$6,600
From 801 cc to 1000 cc: US$5,500
From 1001 cc to 1300 cc: US$11,000
From 1301 cc to 1500 cc: US$15,400
From 1501 cc to 1600 cc: US$18,700
From 1601 cc to 1800 cc: US$23,100
In 2018, after years of appeals to the Pakistan Government, the Classic Car Club of Pakistan announced that the government had finally allowed the importation of classic vehicles. The Express Tribune of Pakistan reported on the subject: "...Earlier in the budget announcement for fiscal year 2018–19 at the end of April, the government had said that it would impose a flat duty of $5,000 on the import of vintage or classic cars and jeeps. A formal notification was issued by the FBR on July 3..." "...The federal government is pleased to exempt vintage or classic cars and jeeps meant for transport of persons on the import thereof from customs duty, regulatory duty, additional customs duty, federal excise duty, sales tax and withholding tax as are in excess of the cumulative amount of US dollars five thousand per unit," said the FBR notification.
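As a quick illustration of the fixed-duty brackets quoted above, here is a minimal lookup sketch. The bracket boundaries and US$ amounts are taken from the list above; the function name, the treatment of the sub-800 cc "other than Asian makes" caveat, and the out-of-range behaviour are illustrative assumptions, not FBR rules.

```python
DUTY_BRACKETS = [           # (max engine displacement in cc, fixed duty in US$)
    (800, 6600),            # up to 800 cc (other than Asian makes)
    (1000, 5500),
    (1300, 11000),
    (1500, 15400),
    (1600, 18700),
    (1800, 23100),
]

def import_duty_usd(engine_cc: int) -> int:
    # Walk the brackets in ascending order and return the first that fits.
    for max_cc, duty in DUTY_BRACKETS:
        if engine_cc <= max_cc:
            return duty
    raise ValueError("no fixed-duty bracket quoted above 1800 cc")

print(import_duty_usd(1498))  # -> 15400
```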
By country:
The duties have been exempted in exercise of the powers of the Customs Act 1969, Federal Excise Act 2005 and Sales Tax Act. However, it clarified that vintage or classic cars and jeeps mean old and used automotive vehicles, falling under PCT Code 87.03 of the First Schedule to the Customs Act 1969 (IV of 1969), manufactured prior to January 1, 1968."
Greece Grey imports do exist in Greece. However, they are rare and expensive, and are tightly restricted: small and medium vehicles must be new and not more than 15 days old; buses, vans, and heavy vehicles cannot be more than 25 days old; and vehicles have to be left hand drive before arrival in Greece. Vehicles from left hand traffic countries must be at least 3 years old, have to be converted to left hand drive, and must be inspected before arrival.
By country:
The majority of grey imported vehicles originate from Germany, the Netherlands and France, and this was the preferred way to purchase a used vehicle until 2020, when the Ministry of Energy and Environment announced environmental tax measures:
The import of vehicles that meet only Euro III (or lower) standards is prohibited.
For Euro IV vehicles, the importer has to pay a fee of €3,000.
For Euro V vehicles, the importer has to pay a fee of €1,000.
For Euro VI vehicles, the importer does not pay any environmental fees upon importing.
The only exception from the restrictions above is when an imported vehicle is over 30 years old and intended for historical use only.
By country:
Cambodia Used cars in Cambodia are mainly imported from Taiwan, the United States, Russia, Japan, South Korea and the GCC countries (mostly the UAE). Used vehicles from Germany and the Netherlands are rarely imported there, due to the lack of port cities and the high cost of delivery.
Singapore As of 2019, the Land Transport Authority mandates that vehicles imported into Singapore must be brand new or manufactured at most three years ago.
Armenia Used cars in Armenia are mainly imported from Japan, Singapore and the United States. There are also used cars from European countries, but they are mostly over 15 years old.
Kyrgyzstan In Kyrgyzstan, grey imports of used cars from Japan and the United States are generally the most common. Used cars from European countries are predominantly more than 15 years old, and mostly limited to German car brands.
Suriname Grey imports from Japan and Singapore are generally the most common in Suriname. LHD vehicles are also allowed on the road, but the importation of LHD vehicles (especially from the US) is prohibited. Used vehicles from the United Kingdom, the Republic of Ireland, Australia and New Zealand are rarely imported there.
Most of the privately owned buses are imported from Japan, since they are already RHD. Most state-owned buses, however, are from the US, and their doors have to be repositioned.
Myanmar In spite of the change to driving on the right, most passenger cars in the country today are RHD, being second-hand vehicles imported from Japan, Thailand, and Singapore. Now only left hand drive vehicles can be imported to Myanmar, the stated reason being that right hand drive cars are unsuitable for Myanmar's road traffic system.
By country:
Caribbean In many Caribbean islands where traffic drives on the left, such as the British Virgin Islands, U.S. Virgin Islands, the Cayman Islands, the Bahamas and Turks and Caicos Islands, most passenger cars are LHD, being imported from the United States or even Korea. Only government cars, buses and vehicles imported from Japan are RHD. The US Virgin Islands (originally a Danish protectorate which drove on the left, as did Denmark until the early 20th century) are particularly notorious for a high accident rate caused by American tourists from the mainland who are unfamiliar with driving on the left in their rental cars – the confusion from which is obviously compounded by using a LHD vehicle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dual oxidase 2**
Dual oxidase 2:
Dual oxidase 2, also known as DUOX2 or ThOX2 (for thyroid oxidase), is an enzyme that in humans is encoded by the DUOX2 gene. Dual oxidase is an enzyme that was first identified in the mammalian thyroid gland. In humans, two isoforms are found: hDUOX1 and hDUOX2 (this enzyme). The protein location is not exclusive to thyroid tissue; hDUOX1 is prominent in airway epithelial cells and hDUOX2 in the salivary glands and gastrointestinal tract.
Function:
Investigations into reactive oxygen species (ROS) in biological systems have, until recently, focused on characterization of phagocytic cell processes. It is now well accepted that production of such species is not restricted to phagocytic cells and can occur in eukaryotic non-phagocytic cell types via NADPH oxidase (NOX) or dual oxidase (DUOX). This new family of proteins, termed the NOX/DUOX family or NOX family of NADPH oxidases, consists of homologs to the catalytic moiety of phagocytic NADPH-oxidase, gp91phox. Members of the NOX/DUOX family have been found throughout eukaryotic species, including invertebrates, insects, nematodes, fungi, amoeba, algae, and plants (not found in prokaryotes). These enzymes clearly demonstrate regulated production of ROS as their sole function. Genetic analyses have implicated NOX/DUOX derived ROS in biological roles and pathological conditions including hypertension (NOX1), innate immunity (NOX2/DUOX), otoconia formation in the inner ear (NOX3) and thyroid hormone biosynthesis (DUOX1/2). DUOX2 is the isoform that generates H2O2 utilized by thyroid peroxidase (TPO) for the biosynthesis of thyroid hormones, supported by the discovery of congenital hypothyroidism resulting from an inactivating mutation in the DUOX2 gene. The family currently has seven members: NOX1, NOX2 (formerly known as gp91phox), NOX3, NOX4, NOX5, DUOX1 and DUOX2.
Function:
This protein is known as a dual oxidase because it has both a peroxidase homology domain and a gp91phox domain. DUOX enzymes are also implicated in the lung defence system, especially in cystic fibrosis. (Figure: schema of DUOX involvement in the human lung defence system.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**National Library of Medicine classification**
National Library of Medicine classification:
The National Library of Medicine (NLM) classification system is a library indexing system covering the fields of medicine and preclinical basic sciences. The NLM classification is patterned after the Library of Congress (LC) Classification system: alphabetical letters denote broad subject categories which are subdivided by numbers. For example, QW 279 would indicate a book on an aspect of microbiology or immunology.
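As a small illustration of how these letter codes map to broad subjects, here is a hedged sketch in Python. The dictionary copies the schedule headings from the outline given later in this article; the function name and the two-letter-then-one-letter lookup are illustrative assumptions, not an official NLM tool.

```python
# Illustrative only: map an NLM call number's letter code to its broad
# schedule heading, e.g. "QW 279" -> Microbiology & Immunology.

NLM_SCHEDULES = {
    "QS": "Human Anatomy", "QT": "Physiology", "QU": "Biochemistry",
    "QV": "Pharmacology", "QW": "Microbiology & Immunology",
    "QX": "Parasitology", "QY": "Clinical Pathology", "QZ": "Pathology",
    "W": "Health Professions", "WA": "Public Health",
    "WB": "Practice of Medicine", "WC": "Communicable Diseases",
    "WD": "Disorders of Systemic, Metabolic, or Environmental Origin, etc.",
    "WE": "Musculoskeletal System", "WF": "Respiratory System",
    "WG": "Cardiovascular System", "WH": "Hemic and Lymphatic Systems",
    "WI": "Digestive System", "WJ": "Urogenital System",
    "WK": "Endocrine System", "WL": "Nervous System", "WM": "Psychiatry",
    "WN": "Radiology. Diagnostic Imaging", "WO": "Surgery",
    "WP": "Gynecology", "WQ": "Obstetrics", "WR": "Dermatology",
    "WS": "Pediatrics", "WT": "Geriatrics. Chronic Disease",
    "WU": "Dentistry. Oral Surgery", "WV": "Otolaryngology",
    "WW": "Ophthalmology", "WX": "Hospitals & Other Health Facilities",
    "WY": "Nursing", "WZ": "History of Medicine",
}

def schedule_heading(call_number: str) -> str:
    code = call_number.split()[0].upper()
    # Try the two-letter schedule first, then the one-letter schedule (W).
    return NLM_SCHEDULES.get(code[:2], NLM_SCHEDULES.get(code[:1], "Unknown"))

print(schedule_heading("QW 279"))  # -> Microbiology & Immunology
print(schedule_heading("W 84"))    # -> Health Professions
```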
National Library of Medicine classification:
The one- or two-letter alphabetical codes in the NLM classification use a limited range of letters: only QS–QZ and W–WZ. This allows the NLM system to co-exist with the larger LC coding scheme as neither of these ranges are used in the LC system. There are, however, three pre-existing codes in the LC system which overlap with the NLM: Human Anatomy (QM), Microbiology (QR), and Medicine (R). To avoid further confusion, these three codes are not used in the NLM.
National Library of Medicine classification:
The headings for the individual schedules (letters or letter pairs) are given in brief form (e.g., QW - Microbiology and Immunology; WG - Cardiovascular System) and together they provide an outline of the subjects covered by the NLM classification. Headings are interpreted broadly and include the physiological system, the specialties connected with them, the regions of the body chiefly concerned and subordinate related fields. The NLM system is hierarchical, and within each schedule, division by organ usually has priority. Each main schedule, as well as some sub-sections, begins with a group of form numbers ranging generally from 1–49 which classify materials by publication type, e.g., dictionaries, atlases, laboratory manuals, etc.
National Library of Medicine classification:
The main schedules QS-QZ, W-WY, and WZ (excluding the range WZ 220–270) classify works published after 1913; the 19th century schedule is used for works published 1801–1913; and WZ 220-270 is used to provide century groupings for works published before 1801.
Overview of the NLM Classification categories:
Preclinical Sciences
QS Human Anatomy
QT Physiology
QU Biochemistry
QV Pharmacology
QW Microbiology & Immunology
QX Parasitology
QY Clinical Pathology
QZ Pathology
Medicine and Related Subjects
W Health Professions
WA Public Health
WB Practice of Medicine
WC Communicable Diseases
WD Disorders of Systemic, Metabolic, or Environmental Origin, etc.
Overview of the NLM Classification categories:
WE Musculoskeletal System
WF Respiratory System
WG Cardiovascular System
WH Hemic and Lymphatic Systems
WI Digestive System
WJ Urogenital System
WK Endocrine System
WL Nervous System
WM Psychiatry
WN Radiology. Diagnostic Imaging
WO Surgery
WP Gynecology
WQ Obstetrics
WR Dermatology
WS Pediatrics
WT Geriatrics. Chronic Disease
WU Dentistry. Oral Surgery
WV Otolaryngology
WW Ophthalmology
WX Hospitals & Other Health Facilities
WY Nursing
WZ History of Medicine
19th Century Schedule | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sitnikov problem**
Sitnikov problem:
The Sitnikov problem is a restricted version of the three-body problem, named after the Russian mathematician Kirill Alexandrovitch Sitnikov, that attempts to describe the movement of three celestial bodies due to their mutual gravitational attraction. A special case of the Sitnikov problem was first discovered by the American scientist William Duncan MacMillan in 1911, but the problem as it currently stands was not formulated until 1961 by Sitnikov.
Definition:
The system consists of two primary bodies with the same mass ($m_1 = m_2 = \tfrac{m}{2}$), which move in circular or elliptical Kepler orbits around their center of mass. The third body, which is substantially smaller than the primary bodies and whose mass can be set to zero ($m_3 = 0$), moves under the influence of the primary bodies in a plane that is perpendicular to the orbital plane of the primary bodies (see Figure 1). The origin of the system is at the focus of the primary bodies. A combined mass of the primary bodies $m = 1$, an orbital period of the bodies $2\pi$, and a radius of the orbit of the bodies $a = 1$ are used for this system. In addition, the gravitational constant is 1. Such a system is constructed so that the third body moves in only one dimension – along the z-axis.
Equation of motion:
In order to derive the equation of motion in the case of circular orbits for the primary bodies, use that the total energy $E$ is: $E = \frac{1}{2}\left(\frac{dz}{dt}\right)^2 - \frac{1}{r}.$ After differentiating with respect to time, the equation becomes: $\frac{d^2 z}{dt^2} = -\frac{z}{r^3}.$ This, according to Figure 1, is also true: $r^2 = a^2 + z^2 = 1 + z^2.$ Thus, the equation of motion is as follows: $\frac{d^2 z}{dt^2} = -\frac{z}{(1+z^2)^{3/2}},$ which describes an integrable system since it has one degree of freedom.
Equation of motion:
If on the other hand the primary bodies move in elliptical orbits, then the equations of motion are: $\frac{d^2 z}{dt^2} = -\frac{z}{\left(\rho(t)^2 + z^2\right)^{3/2}},$ where $\rho(t) = \rho(t + 2\pi)$ is the distance of either primary from their common center of mass. Now the system has one-and-a-half degrees of freedom and is known to be chaotic.
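For readers who want to experiment, the equation of motion above is straightforward to integrate numerically. A minimal sketch in the article's units (combined primary mass 1, orbital period $2\pi$, $G = 1$); the first-order Kepler approximation $\rho(t) \approx 1 - e\cos t$ for the elliptic case is an assumption made here for illustration, and $e = 0$ recovers the circular (integrable) case:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, e):
    z, vz = y
    rho = 1.0 - e * np.cos(t)                 # primary-to-barycentre distance
    return [vz, -z / (rho**2 + z**2) ** 1.5]  # d2z/dt2 = -z / (rho^2 + z^2)^(3/2)

# Third body released from rest at z = 0.5 above the orbital plane.
sol = solve_ivp(rhs, (0.0, 20 * np.pi), [0.5, 0.0], args=(0.1,),
                rtol=1e-10, atol=1e-10)
print(sol.y[0, -1])  # z after ten primary revolutions; sensitive to e and z0
```

For nonzero eccentricity, small changes in the initial height or in e produce markedly different trajectories, which is the chaotic behaviour the text describes.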
Significance:
Although it is nearly impossible in the real world to find or arrange three celestial bodies exactly as in the Sitnikov problem, the problem has nevertheless been widely and intensively studied for decades: although it is a simple case of the more general three-body problem, all the characteristics of a chaotic system can be found within it, making the Sitnikov problem ideal for general studies of effects in chaotic dynamical systems.
Literature:
K. A. Sitnikov: The existence of oscillatory motions in the three-body problems. In: Doklady Akademii Nauk SSSR, 133/1960, pp. 303–306, ISSN 0002-3264 (English translation in Soviet Physics. Doklady, 5/1960, pp. 647–650)
K. Wodnar: The original Sitnikov article – new insights. In: Celestial Mechanics and Dynamical Astronomy, 56/1993, pp. 99–101, ISSN 0923-2958
D. Hevia, F. Rañada: Chaos in the three-body problem: the Sitnikov case. In: European Journal of Physics, 17/1996, pp. 295–302, ISSN 0143-0807
Rudolf Dvorak, Florian Freistetter, J. Kurths: Chaos and Stability in Planetary Systems. Springer, 2005, ISBN 3540282084
J. Moser: Stable and Random Motion. Princeton Univ. Press, 1973, ISBN 978-0691089102 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Electric track vehicle system**
Electric track vehicle system:
An electric track vehicle system (ETV) is a conveyor system for light goods transport. The system uses independently driven vehicles traveling on a monorail track network, consisting of straight track elements, bends, curves and transfer units for changing the direction of travel. Vehicles in an ETV system transport payloads of up to 50 kg both vertically and horizontally in buildings and manufacturing plants. ETV systems were first put on the market in the 1960s by the German company Telelift. Later, additional companies such as Siemens and Thyssen entered this business.
Electric track vehicle system:
Initially, electric track vehicle systems were designed for document transport and mail distribution in office buildings and headquarters. Later, further applications were designed for hospitals, libraries, printing plants, retail stores and material handling in manufacturing plants.
Electric track vehicle system:
Mainly transported goods are:
Hospital: blood plasma, lab samples, pharmaceuticals, sterile utilities, patient records, x-rays and hospital consumables
Library: books, newspapers, journals, media
Headquarters, ministries, office buildings: documents, mail, parcels
Printing plants: printing plates
Retail stores: garments, shoes, jewelry, luxury goods
Manufacturing plants: component parts, assemblies, tools
Electric track vehicle systems operate horizontally and vertically with one and the same vehicle per transport job. Conveying without transfer allows gentle transport of sensitive goods. The vehicle destination is usually typed into a touchscreen terminal at the station. The vehicles operate at speeds of up to 1 m/s.
Electric track vehicle system:
The modular design of electric track vehicle systems allows widely spreading track networks. For example, the ETV system in the Bibliothèque nationale de France in Paris consists of 6.6 km of track, 151 stations and 300 vehicles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Solar eclipse of August 2, 2046**
Solar eclipse of August 2, 2046:
A total solar eclipse will occur on Thursday, August 2, 2046. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is greater than the Sun's, blocking all direct sunlight. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide.
Related eclipses:
Solar eclipses of 2044–2047 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit.
Related eclipses:
Saros 146 It is a part of Saros cycle 146, repeating every 18 years, 11 days, containing 76 events. The series started with a partial solar eclipse on September 19, 1541. It contains total eclipses from May 29, 1938, through October 7, 2154, hybrid eclipses from October 17, 2172, through November 20, 2226, and annular eclipses from December 1, 2244, through August 10, 2659. The series ends at member 76 as a partial eclipse on December 29, 2893. The longest duration of totality was 5 minutes, 21 seconds on June 30, 1992.
Related eclipses:
Inex series This eclipse is a part of the long period inex cycle, repeating at alternating nodes, every 358 synodic months (≈ 10,571.95 days, or 29 years minus 20 days). Their appearance and longitude are irregular due to a lack of synchronization with the anomalistic month (period of perigee). However, groupings of 3 inex cycles (≈ 87 years minus 2 months) come close (≈ 1,151.02 anomalistic months), so eclipses are similar in these groupings.
Related eclipses:
Tritos series This eclipse is a part of a tritos cycle, repeating at alternating nodes every 135 synodic months (≈ 3986.63 days, or 11 years minus 1 month). Their appearance and longitude are irregular due to a lack of synchronization with the anomalistic month (period of perigee), but groupings of 3 tritos cycles (≈ 33 years minus 3 months) come close (≈ 434.044 anomalistic months), so eclipses are similar in these groupings.
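Since the saros, inex, and tritos are each defined by a whole number of synodic months, the cycle lengths quoted in this section can be checked with a few lines of arithmetic. A short sketch, assuming the standard mean synodic month of 29.530589 days (a value not stated in this article); the saros (223 months) and metonic (235 months, described next) counts are standard:

```python
SYNODIC_MONTH = 29.530589  # assumed mean synodic month, in days

cycles = {
    "saros (223 synodic months)":   223 * SYNODIC_MONTH,  # ~6585.32 d = 18 y 11 d
    "inex (358 synodic months)":    358 * SYNODIC_MONTH,  # ~10571.95 d, as quoted
    "tritos (135 synodic months)":  135 * SYNODIC_MONTH,  # ~3986.63 d, as quoted
    "metonic (235 synodic months)": 235 * SYNODIC_MONTH,  # ~6939.69 d = 19 y
    "semester (6 synodic months)":  6 * SYNODIC_MONTH,    # ~177.18 d, i.e. 177 d 4 h
}

for name, days in cycles.items():
    print(f"{name}: {days:9.2f} days = {days / 365.2425:6.3f} years")
```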
Related eclipses:
Metonic series The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur on nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that, or every 3.8 years (1387.94 days). All eclipses in this table occur at the Moon's descending node. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Food energy**
Food energy:
Food energy is chemical energy that animals (including humans) derive from their food to sustain their metabolism, including their muscular activity. Most animals derive most of their energy from aerobic respiration, namely combining the carbohydrates, fats, and proteins with oxygen from air or dissolved in water. Other smaller components of the diet, such as organic acids, polyols, and ethanol (drinking alcohol) may contribute to the energy input. Some diet components that provide little or no food energy, such as water, minerals, vitamins, cholesterol, and fiber, may still be necessary to health and survival for other reasons. Some organisms instead use anaerobic respiration, which extracts energy from food by reactions that do not require oxygen.
Food energy:
The energy content of a given mass of food is usually expressed in the metric (SI) unit of energy, the joule (J), and its multiple the kilojoule (kJ); or in the traditional unit of heat energy, the calorie (cal). In nutritional contexts, the latter is often (especially in the US) the "large" variant of the unit, also written "Calorie" (with symbol Cal, both with capital "C") or "kilocalorie" (kcal), and equivalent to 4184 J or 4.184 kJ. Thus, for example, fats and ethanol have the greatest amount of food energy per unit mass, 37 and 29 kJ/g (9 and 7 kcal/g), respectively. Proteins and most carbohydrates have about 17 kJ/g (4 kcal/g), though there are differences between different kinds. For example, the values for glucose, sucrose, and starch are 15.57, 16.48 and 17.48 kilojoules per gram (3.72, 3.94 and 4.18 kcal/g) respectively. The differing energy density of foods (fat, alcohols, carbohydrates and proteins) lies mainly in their varying proportions of carbon, hydrogen, and oxygen atoms. Carbohydrates that are not easily absorbed, such as fibre, or lactose in lactose-intolerant individuals, contribute less food energy. Polyols (including sugar alcohols) and organic acids contribute 10 kJ/g (2.4 kcal/g) and 13 kJ/g (3.1 kcal/g) respectively. The energy content of a complex dish or meal can be approximated by adding the energy contents of its components.
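As a concrete illustration of this additive estimate, here is a minimal sketch using the per-gram factors quoted above; the function name and the hypothetical snack composition are illustrative assumptions, not label data.

```python
# Per-gram energy factors quoted in this article (kJ/g).
KJ_PER_GRAM = {"fat": 37, "ethanol": 29, "protein": 17,
               "carbohydrate": 17, "organic_acid": 13, "polyol": 10}

def food_energy_kj(grams_by_component):
    """Additive estimate: sum each component's mass times its energy factor."""
    return sum(KJ_PER_GRAM[c] * g for c, g in grams_by_component.items())

# A hypothetical 100 g snack: 20 g fat, 10 g protein, 50 g carbohydrate.
kj = food_energy_kj({"fat": 20, "protein": 10, "carbohydrate": 50})
print(f"{kj} kJ = {kj / 4.184:.0f} kcal")   # -> 1760 kJ = 421 kcal
```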
History and methods of measurement:
Direct calorimetry of combustion The first determinations of the energy content of food were made by burning a dried sample in a bomb calorimeter and measuring the temperature change in the water surrounding the apparatus, a method known as direct calorimetry.
History and methods of measurement:
The Atwater system However, the direct calorimetric method generally overestimates the actual energy that the body can obtain from the food, because it also counts the energy contents of dietary fiber and other indigestible components, and does not allow for partial absorption and/or incomplete metabolism of certain substances. For this reason, today the energy content of food is instead obtained indirectly, by using chemical analysis to determine the amount of each digestible dietary component (such as protein, carbohydrates, and fats), and adding the respective food energy contents, previously obtained by measurement of metabolic heat released by the body. In particular, the fibre content is excluded. This method is known as the Modified Atwater system, after Wilbur Atwater who pioneered these measurements in the late 19th century. The system was later improved by Annabel Merrill and Bernice Watt of the USDA, who derived a system whereby specific calorie conversion factors for different foods were proposed.
Dietary sources of energy:
The typical human diet consists chiefly of carbohydrates, fats, proteins, water, ethanol, and indigestible components such as bones, seeds, and fibre (mostly cellulose). Carbohydrates, fats, and proteins typically comprise ninety percent of the dry weight of food. Ruminants can extract food energy from the respiration of cellulose because of bacteria in their rumens that decompose it into digestible carbohydrates. Other minor components of the human diet that contribute to its energy content are organic acids such as citric and tartaric, and polyols such as glycerol, xylitol, inositol, and sorbitol. Some nutrients have regulatory roles affected by cell signaling, in addition to providing energy for the body. For example, leucine plays an important role in the regulation of protein metabolism and suppresses an individual's appetite. Small amounts of essential fatty acids, constituents of some fats that cannot be synthesized by the human body, are used (and necessary) for other biochemical processes.
Dietary sources of energy:
The approximate food energy contents of various human diet components, to be used in package labeling according to the EU regulations and UK regulations, are:
Fat: 37 kJ/g (9 kcal/g)
Ethanol (alcohol): 29 kJ/g (7 kcal/g)
Protein: 17 kJ/g (4 kcal/g)
Carbohydrate (except polyols): 17 kJ/g (4 kcal/g)
Organic acids: 13 kJ/g (3 kcal/g)
Polyols (1): 10 kJ/g (2.4 kcal/g)
Fibre (2): 8 kJ/g (2 kcal/g)
(1) Some polyols, like erythritol, are not digested and should be excluded from the count.
Dietary sources of energy:
(2) This entry exists in the EU regulations of 2008, but not in the UK regulations, according to which fibre shall not be counted. More detailed tables for specific foods have been published by many organizations; the United Nations Food and Agriculture Organization, for example, has published a similar table. Other components of the human diet are either noncaloric or are usually consumed in such small amounts that they can be neglected.
Energy usage in the human body:
The food energy actually obtained by respiration is used by the human body for a wide range of purposes, including basal metabolism of various organs and tissues, maintaining the internal body temperature, and exerting muscular force to maintain posture and produce motion. About 20% is used for brain metabolism.The conversion efficiency of energy from respiration into muscular (physical) power depends on the type of food and on the type of physical energy usage (e.g., which muscles are used, whether the muscle is used aerobically or anaerobically). In general, the efficiency of muscles is rather low: only 18 to 26% of the energy available from respiration is converted into mechanical energy. This low efficiency is the result of about 40% efficiency of generating ATP from the respiration of food, losses in converting energy from ATP into mechanical work inside the muscle, and mechanical losses inside the body. The latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). For an overall efficiency of 20%, one watt of mechanical power is equivalent to 18 kJ/h (4.3 kcal/h). For example, a manufacturer of rowing equipment shows calories released from "burning" food as four times the actual mechanical work, plus 1,300 kJ (300 kcal) per hour, which amounts to about 20% efficiency at 250 watts of mechanical output. It can take up to 20 hours of little physical output (e.g., walking) to "burn off" 17,000 kJ (4,000 kcal) more than a body would otherwise consume. For reference, each kilogram of body fat is roughly equivalent to 32,300 kilojoules of food energy (i.e., 3,500 kilocalories per pound or 7,700 kilocalories per kilogram).
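The watt-to-food-energy conversion quoted above is easy to verify. A short sketch, assuming the round 20% overall efficiency used in the text:

```python
# At 20% overall efficiency, 1 W of mechanical power corresponds to
# 5 W of food energy, i.e. 18 kJ/h (about 4.3 kcal/h), as quoted above.

EFFICIENCY = 0.20
mech_watts = 1.0
food_watts = mech_watts / EFFICIENCY        # 5 J of food energy per second
kj_per_hour = food_watts * 3600 / 1000      # 18.0 kJ/h
print(kj_per_hour, kj_per_hour / 4.184)     # -> 18.0 kJ/h, ~4.30 kcal/h
```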
Recommended daily intake:
Many countries and health organizations have published recommendations for healthy levels of daily intake of food energy. For example, the United States government estimates 8,400 and 10,900 kJ (2,000 and 2,600 kcal) needed for women and men, respectively, between ages 26 and 45, whose total physical activity is equivalent to walking around 2.5 to 5 km (1+1⁄2 to 3 mi) per day in addition to the activities of sedentary living. These estimates are for a "reference woman" who is 1.63 m (5 ft 4 in) tall and weighs 57 kg (126 lb) and a "reference man" who is 1.78 m (5 ft 10 in) tall and weighs 70 kg (154 lb). Because caloric requirements vary by height, activity, age, pregnancy status, and other factors, the USDA created the DRI Calculator for Healthcare Professionals in order to determine individual caloric needs. According to the Food and Agriculture Organization of the United Nations, the average minimum energy requirement per person per day is about 7,500 kJ (1,800 kcal). Older people and those with sedentary lifestyles require less energy; children and physically active people require more. Recognizing these factors, Australia's National Health and Medical Research Council recommends different daily energy intakes for each age and gender group. Notwithstanding, nutrition labels on Australian food products typically recommend the average daily energy intake of 8,800 kJ (2,100 kcal).
Recommended daily intake:
The minimum food energy intake is also higher in cold environments. Increased mental activity has been linked with moderately increased brain energy consumption.
Nutrition labels:
Many governments require food manufacturers to label the energy content of their products, to help consumers control their energy intake. To facilitate evaluation by consumers, food energy values (and other nutritional properties) in package labels or tables are often quoted for convenient amounts of the food, rather than per gram or kilogram; such as in "calories per serving" or "kcal per 100 g", or "kJ per package". The units vary depending on country: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Legendre wavelet**
Legendre wavelet:
In functional analysis, compactly supported wavelets derived from Legendre polynomials are termed Legendre wavelets or spherical harmonic wavelets. Legendre functions have widespread applications in which a spherical coordinate system is appropriate. As with many wavelets, there is no nice analytical formula for describing these harmonic spherical wavelets. The low-pass filter associated with the Legendre multiresolution analysis is a finite impulse response (FIR) filter.
Legendre wavelet:
Wavelets associated with FIR filters are commonly preferred in most applications. An extra appealing feature is that the Legendre filters are linear phase FIR (i.e. the multiresolution analysis is associated with linear phase filters). These wavelets have been implemented in MATLAB (wavelet toolbox). Although compactly supported, the legdN wavelets are not orthogonal (except for N = 1).
Legendre multiresolution filters:
Associated Legendre polynomials are the colatitudinal part of the spherical harmonics which are common to all separations of Laplace's equation in spherical polar coordinates. The radial part of the solution varies from one potential to another, but the harmonics are always the same and are a consequence of spherical symmetry. Spherical harmonics $P_n(z)$ are solutions of the Legendre second-order differential equation, $n$ integer: $(1-z^2)\,\frac{d^2 P_n(z)}{dz^2} - 2z\,\frac{dP_n(z)}{dz} + n(n+1)\,P_n(z) = 0.$
Legendre multiresolution filters:
$P_\nu(\cos\theta)$ polynomials can be used to define the smoothing filter $H(\omega)$ of a multiresolution analysis (MRA). Since the appropriate boundary conditions for an MRA are $|H(0)| = 1$ and $|H(\pi)| = 0$, the smoothing filter of an MRA can be defined so that the magnitude of the low-pass $|H(\omega)|$ can be associated with Legendre polynomials according to: $|H_\nu(\omega)| = \left|\frac{P_\nu(\cos(\omega/2))}{P_\nu(\cos 0)}\right|.$
Illustrative examples of filter transfer functions for a Legendre MRA are shown in figure 1, for $\nu = 1, 3, 5$.
Legendre multiresolution filters:
A low-pass behaviour is exhibited for the filter $H$, as expected. The number of zeroes within $-\pi < \omega < \pi$ is equal to the degree of the Legendre polynomial. Therefore, the roll-off of side-lobes with frequency is easily controlled by the parameter $\nu$. The low-pass filter transfer function is given by $|H_\nu(\omega)| = |P_\nu(\cos(\omega/2))|.$ The transfer function of the high-pass analysing filter $G_\nu(\omega)$ is chosen according to the quadrature mirror filter condition, yielding: $|G_\nu(\omega)| = |P_\nu(\sin(\omega/2))|.$ Indeed, $|G_\nu(0)| = 0$ and $|G_\nu(\pi)| = 1$, as expected.
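The boundary conditions $|H(0)| = 1$, $|H(\pi)| = 0$ and $|G(0)| = 0$, $|G(\pi)| = 1$ can be checked numerically for odd $\nu$. A minimal sketch, assuming the magnitude formulas above and using NumPy's Legendre module (the helper names are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def P(nu, x):
    """Evaluate the Legendre polynomial P_nu at x."""
    c = np.zeros(nu + 1)
    c[nu] = 1.0               # coefficient vector selecting P_nu
    return leg.legval(x, c)

def H_mag(omega, nu):         # |H_nu(omega)| = |P_nu(cos(omega/2))|
    return abs(P(nu, np.cos(omega / 2)))

def G_mag(omega, nu):         # |G_nu(omega)| = |P_nu(sin(omega/2))|
    return abs(P(nu, np.sin(omega / 2)))

for nu in (1, 3, 5):          # the odd orders shown in figure 1
    print(nu, H_mag(0.0, nu), H_mag(np.pi, nu), G_mag(0.0, nu), G_mag(np.pi, nu))
# Expected: |H(0)| = 1, |H(pi)| = 0, |G(0)| = 0, |G(pi)| = 1.
```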
Legendre multiresolution filter coefficients:
A suitable phase assignment is done so as to properly adjust the transfer function $H_\nu(\omega)$ to the form $H_\nu(\omega) = \frac{1}{\sqrt{2}}\sum_{k\in\mathbb{Z}} h_k^\nu e^{-j\omega k}.$ The filter coefficients $\{h_k\}_{k\in\mathbb{Z}}$ are given by: $h_k^\nu = -\frac{\sqrt{2}}{2^{2\nu}}\binom{2k}{k}\binom{2\nu-2k}{\nu-k},$ from which the symmetry $h_k^\nu = h_{\nu-k}^\nu$ follows. There are just $\nu+1$ non-zero filter coefficients in $H_\nu(\omega)$, so that the Legendre wavelets have compact support for every odd integer $\nu$. Table I – Smoothing Legendre FIR filter coefficients for $\nu = 1, 3, 5$ ($N$ is the wavelet order.) N.B. The minus sign can be suppressed.
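A short sketch computing these coefficients directly from the binomial formula above; it checks that $\nu = 1$ reproduces the (negated) Haar filter and that the coefficients sum to $-\sqrt{2}$, consistent with $|H(0)| = 1$ under the $1/\sqrt{2}$ normalization. The function name is illustrative.

```python
from math import comb, sqrt

def legendre_filter(nu):
    """h_k^nu = -(sqrt(2)/2**(2*nu)) * C(2k, k) * C(2*nu - 2k, nu - k), k = 0..nu."""
    return [-sqrt(2) / 2**(2 * nu) * comb(2 * k, k) * comb(2 * nu - 2 * k, nu - k)
            for k in range(nu + 1)]

print(legendre_filter(1))       # [-0.7071..., -0.7071...]: the (negated) Haar filter
print(legendre_filter(3))       # four coefficients, symmetric: h_k = h_{3-k}
print(sum(legendre_filter(5)))  # -sqrt(2), so |H(0)| = 1 under the 1/sqrt(2) scaling
```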
MATLAB implementation of Legendre wavelets:
Legendre wavelets can be easily loaded into the MATLAB wavelet toolbox. The m-files that allow computation of the Legendre wavelet transform, details and filters are available (freeware). The finite support width Legendre family is denoted by legd (short name). Wavelets: 'legdN'. The parameter N in the legdN family is found according to $2N = \nu + 1$ (the length of the MRA filters).
Legendre wavelets can be derived from the low-pass reconstruction filter by an iterative procedure (the cascade algorithm). The wavelet has compact support, and finite impulse response (FIR) MRA filters are used (table 1). The first wavelet of the Legendre family is exactly the well-known Haar wavelet. Figure 2 shows an emerging pattern that progressively looks like the wavelet's shape.
The Legendre wavelet shape can be visualised using the wavemenu command of MATLAB. Figure 3 shows the legd8 wavelet displayed using MATLAB. Legendre polynomials are also associated with window families.
Legendre wavelet packets:
Wavelet packet (WP) systems derived from Legendre wavelets can also be easily accomplished. Figure 5 illustrates the WP functions derived from legd2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Handgrip maneuver**
Handgrip maneuver:
The handgrip maneuver is performed by clenching one's fist forcefully for a sustained time until fatigued. Variations include squeezing an item such as a rolled up washcloth.
Physiological response:
The handgrip maneuver increases afterload by squeezing the arterioles and increasing total peripheral resistance.
Cardiology:
Since increasing afterload will prevent blood from flowing in a normal forward path, it will increase any murmurs that are due to backwards flowing blood.
Cardiology:
This includes aortic regurgitation (AR), mitral regurgitation (MR), and a ventricular septal defect (VSD). Mitral valve prolapse: the click and the murmur of mitral valve prolapse are delayed because left atrial volume also increases due to mitral regurgitation along with increased left ventricular volume. Murmurs that are due to forward-flowing blood, such as aortic stenosis and hypertrophic cardiomyopathy, decrease in intensity. The effect of reducing the intensity of forward-flowing murmurs is much more evident in aortic stenosis than in mitral stenosis. The reason for this is that there is a larger pressure gradient across the aortic valve. A complementary maneuver for differentiating disorders is the Valsalva maneuver, which decreases preload. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Inferior colliculus**
Inferior colliculus:
The inferior colliculus (IC) (Latin for lower hill) is the principal midbrain nucleus of the auditory pathway and receives input from several peripheral brainstem nuclei in the auditory pathway, as well as inputs from the auditory cortex. The inferior colliculus has three subdivisions: the central nucleus, a dorsal cortex by which it is surrounded, and an external cortex which is located laterally. Its bimodal neurons are implicated in auditory-somatosensory interaction, receiving projections from somatosensory nuclei. This multisensory integration may underlie a filtering of self-effected sounds from vocalization, chewing, or respiration activities. The inferior colliculi are part of the tectum of the midbrain, and together with the superior colliculi form the corpora quadrigemina. An inferior colliculus lies caudal/inferior to the ipsilateral superior colliculus, rostral/superior to the superior cerebellar peduncle and the trochlear nerve, and at the base of the projection of the medial geniculate nucleus and the lateral geniculate nucleus.
Subdivisions:
The inferior colliculus has three subdivisions – the central nucleus, the dorsal cortex by which it is surrounded, and an external cortex which is located laterally.
Relationship to auditory system:
The inferior colliculi of the midbrain are located just below the visual processing centers known as the superior colliculi. The inferior colliculus is the first place where vertically orienting data from the fusiform cells in the dorsal cochlear nucleus can finally synapse with horizontally orienting data. Sound location data thus becomes fully integrated by the inferior colliculus.
The inferior colliculi are large auditory nuclei on the right and left sides of the midbrain. Of the three subdivisions, the central nucleus of the IC (CNIC) is the principal way station for ascending auditory information in the IC.
Relationship to auditory system:
Input and output connections of IC:
The input connections to the inferior colliculus come from many brainstem nuclei. All nuclei except the contralateral ventral nucleus of the lateral lemniscus send projections to the central nucleus (CNIC) bilaterally. It has been shown that the great majority of auditory fibers ascending in the lateral lemniscus terminate in the CNIC. In addition, the IC receives inputs from the auditory cortex, the medial division of the medial geniculate body, the posterior limitans, suprapeduncular nucleus and subparafascicular intralaminar nuclei of the thalamus, the substantia nigra pars compacta lateralis, the dorsolateral periaqueductal gray, the nucleus of the brachium of the inferior colliculus (or inferior brachium) and deep layers of the superior colliculus. The inferior brachium carries auditory afferent fibers from the inferior colliculus of the mesencephalon to the medial geniculate nucleus. The inferior colliculus receives input from both the ipsilateral and contralateral cochlear nucleus, and thus from the corresponding ears. There is some lateralization: the dorsal projections (containing vertical data) project only to the contralateral inferior colliculus. The inferior colliculus contralateral to the ear from which it receives the most information then projects to its ipsilateral medial geniculate nucleus.
Relationship to auditory system:
The inferior colliculus also receives descending inputs from the auditory cortex and auditory thalamus (or medial geniculate nucleus). The medial geniculate body (MGB) is the output connection from the inferior colliculus and the last subcortical way station. The MGB is composed of ventral, dorsal, and medial divisions, which are relatively similar in humans and other mammals. The ventral division receives auditory signals from the central nucleus of the IC.
Relationship to auditory system:
Function of IC:
The majority of the ascending fibers from the lateral lemniscus project to the IC, which means major ascending auditory pathways converge here. The IC appears to serve as an integrative station and switchboard. It is involved in the integration and routing of multi-modal sensory perception, mainly the startle response and the vestibulo-ocular reflex. It is also responsive to specific amplitude modulation frequencies, which might be responsible for detection of pitch. In addition, spatial localization by binaural hearing is a related function of the IC.
Relationship to auditory system:
The inferior colliculus has a relatively high metabolism in the brain. The Conrad Simon Memorial Research Initiative measured the blood flow of the IC in the cat brain at 1.80 cc/g/min. For reference, the runner-up among the included measurements was the somatosensory cortex at 1.53. This indicates that the inferior colliculus is metabolically more active than many other parts of the brain. The hippocampus, normally considered to use a disproportionate amount of energy, was not measured or compared. Skottun et al. measured the interaural time difference sensitivity of single neurons in the inferior colliculus, and used these to predict behavioural performance. The predicted just-noticeable difference was comparable to that achieved by humans in behavioral tests. This suggested that by the level of the inferior colliculus, integration of information over multiple neurons is unnecessary (see population code).
Relationship to auditory system:
Axiomatically determined functional models of spectro-temporal receptive fields in inferior colliculus have been determined by Lindeberg and Friberg in terms of derivatives of Gaussian functions over the log-spectral domain and either Gaussian kernels over time in the case of non-causal time or first-order integrators (truncated exponential kernels) coupled in cascade in the case of truly time-causal operations, optionally in combination with local glissando transformations to account for variations in frequencies over time.
Relationship to auditory system:
The shapes of the receptive field functions in these models can be determined by necessity from structural properties of the environment combined with requirements about the internal structure of the auditory system to enable theoretically well-founded processing of sound signals at different temporal and log-spectral scales. Thereby, the receptive fields in inferior colliculus can be seen as well adapted to handling natural sound transformations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HyperLogLog**
HyperLogLog:
HyperLogLog is an algorithm for the count-distinct problem, approximating the number of distinct elements in a multiset. Calculating the exact cardinality of the distinct elements of a multiset requires an amount of memory proportional to the cardinality, which is impractical for very large data sets. Probabilistic cardinality estimators, such as the HyperLogLog algorithm, use significantly less memory than this, but can only approximate the cardinality. The HyperLogLog algorithm is able to estimate cardinalities greater than $10^9$ with a typical accuracy (standard error) of 2%, using 1.5 kB of memory. HyperLogLog is an extension of the earlier LogLog algorithm, itself deriving from the 1984 Flajolet–Martin algorithm.
Terminology:
In the original paper by Flajolet et al. and in related literature on the count-distinct problem, the term "cardinality" is used to mean the number of distinct elements in a data stream with repeated elements. However in the theory of multisets the term refers to the sum of multiplicities of each member of a multiset. This article chooses to use Flajolet's definition for consistency with the sources.
Algorithm:
The basis of the HyperLogLog algorithm is the observation that the cardinality of a multiset of uniformly distributed random numbers can be estimated by calculating the maximum number of leading zeros in the binary representation of each number in the set. If the maximum number of leading zeros observed is n, an estimate for the number of distinct elements in the set is $2^n$. In the HyperLogLog algorithm, a hash function is applied to each element in the original multiset to obtain a multiset of uniformly distributed random numbers with the same cardinality as the original multiset. The cardinality of this randomly distributed set can then be estimated using the observation above.
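As a quick illustration of this observation, the following Python sketch (the hash choice and names are illustrative, not from the source) hashes items to uniform 32-bit integers and returns the crude $2^n$ estimate; it exhibits the large variance that the rest of the algorithm is designed to reduce:

```python
import hashlib

def crude_estimate(items) -> int:
    """Raw Flajolet-Martin-style estimate: 2**(max leading zeros)."""
    max_lz = 0
    for item in items:
        # Hash each item to a uniform 32-bit integer.
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:4], "big")
        leading_zeros = 32 - h.bit_length()
        max_lz = max(max_lz, leading_zeros)
    return 2 ** max_lz

print(crude_estimate(f"user-{i}" for i in range(10_000)))  # right order of magnitude only
```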
Algorithm:
The simple estimate of cardinality obtained using the algorithm above has the disadvantage of a large variance. In the HyperLogLog algorithm, the variance is minimised by splitting the multiset into numerous subsets, calculating the maximum number of leading zeros in the numbers in each of these subsets, and using a harmonic mean to combine these estimates for each subset into an estimate of the cardinality of the whole set.
Operations:
The HyperLogLog has three main operations: add to add a new element to the set, count to obtain the cardinality of the set and merge to obtain the union of two sets. Some derived operations can be computed using the inclusion–exclusion principle like the cardinality of the intersection or the cardinality of the difference between two HyperLogLogs combining the merge and count operations.
Operations:
The data of the HyperLogLog is stored in an array M of m counters (or "registers") that are initialized to 0. Array M initialized from a multiset S is called HyperLogLog sketch of S.
Operations:
Add:
The add operation consists of computing the hash of the input data v with a hash function h, taking the first b bits (where b = log₂(m)), and adding 1 to them to obtain the address of the register to modify. With the remaining bits w, compute $\rho(w)$, which returns the position of the leftmost 1. The new value of the register is the maximum between the current value of the register and $\rho(w)$:

$M[j] := \max(M[j], \rho(w))$

Count:
The count algorithm consists in computing the harmonic mean of the m registers, and using a constant to derive an estimate $E$ of the count:

$Z = \left( \sum_{j=1}^{m} 2^{-M[j]} \right)^{-1}$

$\alpha_m = \left( m \int_0^\infty \left( \log_2 \frac{2+u}{1+u} \right)^m \, du \right)^{-1}$

$E = \alpha_m m^2 Z$

The intuition is that, n being the unknown cardinality of M, each subset $M_j$ will have n/m elements. Then $\max_{x \in M_j} \rho(x)$ should be close to $\log_2(n/m)$. The harmonic mean of 2 raised to these quantities is $mZ$, which should be near n/m. Thus, $m^2 Z$ should be approximately n.
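A minimal, self-contained Python sketch of these two operations follows (the register count m = 2¹⁴ and SHA-256 are illustrative choices, not prescribed by the source; the α_m value uses the large-m approximation given below):

```python
import hashlib

B = 14                             # b = log2(m) bits of the hash select a register
M = 1 << B                         # m = 16384 registers
ALPHA = 0.7213 / (1 + 1.079 / M)   # approximation of alpha_m for large m
registers = [0] * M

def add(item: str) -> None:
    x = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
    j = x >> (64 - B)                    # first b bits -> register address
    w = x & ((1 << (64 - B)) - 1)        # remaining bits
    rho = (64 - B) - w.bit_length() + 1  # position of the leftmost 1 in w
    registers[j] = max(registers[j], rho)

def count() -> float:
    z = 1.0 / sum(2.0 ** -r for r in registers)  # Z, the harmonic-mean term
    return ALPHA * M * M * z                     # E = alpha_m * m^2 * Z

for i in range(100_000):
    add(f"item-{i}")
print(round(count()))  # typically within ~1% of 100000
```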
Operations:
Finally, the constant $\alpha_m$ is introduced to correct a systematic multiplicative bias present in $m^2 Z$ due to hash collisions.
Practical considerations:
The constant $\alpha_m$ is not simple to calculate, and can be approximated with the formula

$\alpha_{16} = 0.673,\quad \alpha_{32} = 0.697,\quad \alpha_{64} = 0.709,\quad \alpha_m = \frac{0.7213}{1 + 1.079/m} \text{ for } m \ge 128.$
Operations:
The HyperLogLog technique, though, is biased for small cardinalities below a threshold of $\frac{5}{2}m$. The original paper proposes using a different algorithm for small cardinalities, known as Linear Counting. In the case where the estimate provided above is less than the threshold ($E < \frac{5}{2}m$), the alternative calculation can be used: let $V$ be the count of registers equal to 0.
Operations:
If $V = 0$, use the standard HyperLogLog estimator $E$ above.
Operations:
Otherwise, use Linear Counting:

$E^\star = m \log\left(\frac{m}{V}\right)$

Additionally, for very large cardinalities approaching the limit of the size of the registers ($E > \frac{2^{32}}{30}$ for 32-bit registers), the cardinality can be estimated with:

$E^\star = -2^{32} \log\left(1 - \frac{E}{2^{32}}\right)$

With the above corrections for lower and upper bounds, the error can be estimated as $\sigma = 1.04/\sqrt{m}$.

Merge:
The merge operation for two HLLs ($\mathit{hll}_1, \mathit{hll}_2$) consists in obtaining the maximum for each pair of registers j = 1..m:

$\mathit{hll}_{\mathrm{union}}[j] = \max(\mathit{hll}_1[j], \mathit{hll}_2[j])$
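Continuing the sketch above, these corrections and the merge are only a few lines of Python (again a hedged illustration; register layout as in the earlier snippet):

```python
import math

def corrected_count(registers, raw_e: float) -> float:
    """Apply the small-range (Linear Counting) and large-range corrections
    from the original paper, assuming a 32-bit hash for the upper bound."""
    m = len(registers)
    if raw_e <= 2.5 * m:
        v = registers.count(0)          # V: number of zero registers
        if v != 0:
            return m * math.log(m / v)  # Linear Counting
    if raw_e > 2 ** 32 / 30:
        return -(2 ** 32) * math.log(1 - raw_e / 2 ** 32)
    return raw_e

def merge(regs1, regs2):
    """Union sketch: register-wise maximum of two equal-sized sketches."""
    return [max(a, b) for a, b in zip(regs1, regs2)]
```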
Complexity:
To analyze the complexity, the data streaming (ε, δ) model is used, which analyzes the space necessary to get a 1 ± ε approximation with a fixed success probability 1 − δ. The relative error of HLL is $1.04/\sqrt{m}$, and it needs $O(\varepsilon^{-2} \log\log n + \log n)$ space, where n is the set cardinality and m is the number of registers (each usually less than one byte in size).
Complexity:
The add operation depends on the size of the output of the hash function. As this size is fixed, we can consider the running time for the add operation to be O(1). The count and merge operations depend on the number of registers m and have a theoretical cost of O(m). In some implementations (Redis) the number of registers is fixed, and the cost is considered to be O(1) in the documentation.
HLL++:
The HyperLogLog++ algorithm proposes several improvements to the HyperLogLog algorithm to reduce memory requirements and increase accuracy in some ranges of cardinalities: A 64-bit hash function is used instead of the 32 bits used in the original paper. This reduces hash collisions for large cardinalities, allowing the large-range correction to be removed.
Some bias is found for small cardinalities when switching from linear counting to the HLL counting. An empirical bias correction is proposed to mitigate the problem.
A sparse representation of the registers is proposed to reduce memory requirements for small cardinalities, which can be later transformed to a dense representation if the cardinality grows.
Streaming HLL:
When the data arrives in a single stream, the Historic Inverse Probability or martingale estimator significantly improves the accuracy of the HLL sketch and uses 36% less memory to achieve a given error level. This estimator is provably optimal for any duplicate insensitive approximate distinct counting sketch on a single stream.
The single stream scenario also leads to variants in the HLL sketch construction.
HLL-TailCut+ uses 45% less memory than the original HLL sketch but at the cost of being dependent on the data insertion order and not being able to merge sketches. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Coronal rain**
Coronal rain:
Coronal rain is a phenomenon that occurs in the Sun's corona when hot plasma cools and condenses in strong magnetic fields and falls to the photosphere. It is usually associated with active regions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NGD-4715**
NGD-4715:
NGD-4715 is a drug developed by Neurogen, which acts as a selective, non-peptide antagonist at the melanin concentrating hormone receptor MCH1. In animal models it has anxiolytic, antidepressant, and anorectic effects, and it has successfully passed Phase I clinical trials in humans. Neurogen was acquired by Ligand Pharmaceuticals in August 2009, and NGD-4715 was not listed among its key assets. All four laboratories were closed and sold, and no employees were retained. The structure of NGD-4715 has been confused with, for example, that of 1-(5-bromo-6-methoxypyridin-2-yl)-4-(3,4-dimethoxybenzyl)piperazine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**WorldBox**
WorldBox:
WorldBox is a sandbox game that was released in 2012 by Ukrainian indie game developer Maxim Karpenko. The game allows the use of different elements to create, change, and destroy virtual worlds.
Gameplay:
The game's main feature is the ability to create worlds, using godlike tools known as "God powers" provided in the game. These are divided into several groups: World Creation, Civilizations, Creatures, Nature and Disasters, Destruction Powers, and Other Powers. Conversely, the game also allows the destruction of worlds, ranging from explosives to natural disasters, such as earthquakes, volcanoes, and acid rain. Populations can also be reduced with hostile entities, illnesses, etc. Worlds can also, as of version 0.21, go through "ages", which can affect biomes and creatures, both negatively and positively.Some creatures are able to create civilizations (humans, orcs, elves and dwarves). Such civilizations can grow, declare war on each other, and suffer rebellions.Beginning with version 0.14, players are also able to customise the banners and symbols of kingdoms, along with the ability to control the traits of creatures, adding more content and depth to the gameplay. Beginning with version 0.21, kingdoms can form alliances, and kings can form clans.
Development:
Karpenko started working on the game in 2011, and published the first prototype on Flash the same year. In 2012, he released it on Newgrounds. The Newgrounds version is still available, but not playable since Adobe Flash support ended. He continued working on the game for several years, and released it on iOS in December 2018, with the Android version following in early February 2019. He continued to work on the game and released it for PC in October 2019. The Steam version came more than two years later, in December 2021. In early March 2022, he stated that he would delay the next content update for the game due to Russia's invasion of Ukraine. He released the next content update (0.14) two months later, in May 2022. According to his Twitter, he planned to release the next update in mid-December 2022. The update was later delayed and ultimately released four months later, on 12 March 2023, to Steam, and on Android and iOS between March 13 and 14.
Reception:
As of November 27, 2022, WorldBox has more than 14,000 reviews on Steam, with 94% of them positive, giving the game a "Very Positive" rating. Graham Smith of Rock Paper Shotgun wrote: "I'd probably had my fill of WorldBox after around 4 hours, but it was a happy four hours." Joseph Knoop of PC Gamer wrote: "It's funny how much WorldBox shares with big strategy games, despite not presenting an ultimate goal to the player, and almost always ending with a boredom-killing nuclear bomb. Watching the borders of a kingdom stretch, retract, and suddenly disappear tickles a part of my brain that really likes to be tickled. Considering WorldBox is about to become an Early Access game on Steam, I'm eager to see what other maniacal tools get added to the toybox."
2020 plagiarism scandal:
In November 2020, Karpenko reported that a shell company known as Stavrio LTD had copied WorldBox after he refused to let them buy it at a DevGAMM conference the previous year, and had attempted to trademark the name. This resulted in Karpenko attempting to get Google Play to take action against the company by starting the hashtag "#saveworldbox". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hwb**
Hwb:
Hwb is a website and collection of online tools provided to all schools in Wales by the Welsh Government. It was created in response to the 'Find it, Make it, Use it, Share it' report into Digital Learning in Wales.
Hwb provides access to tools such as Microsoft Office 365, Google Classroom, J2e, and Adobe Spark all free to students in Wales.
The main site contains over 88,000 bilingual resources that were transferred from NGfL Cymru. In addition, teachers and learners with accounts can sign in and access a range of other online tools and resources. Included in this is a school-specific Learning Platform (Hwb+). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Elston–Stewart algorithm**
Elston–Stewart algorithm:
The Elston–Stewart algorithm is an algorithm for computing the likelihood of observed data on a pedigree, assuming a general model under which specific genetic segregation, linkage, and association models can be tested. It is due to Robert Elston and John Stewart. It can handle relatively large pedigrees provided they are (almost) outbred. When used for linkage analysis its computation time is exponential in the number of markers, in contrast to the Lander–Green algorithm, whose computation time is exponential in the number of pedigree members. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
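To make the likelihood computation concrete, here is a hedged Python sketch for the smallest possible pedigree, a parent–parent–child trio at one biallelic locus: the likelihood sums founder genotype priors, Mendelian transmission probabilities, and penetrances over all genotype configurations, and Elston–Stewart peeling amounts to performing such sums child-first so that large pedigrees stay tractable. All numbers (allele frequency, penetrances) are hypothetical illustrations, not from the source:

```python
# Genotypes coded as the number of copies of the variant allele: 0, 1, 2.
P_ALLELE = 0.01                        # hypothetical variant allele frequency
PRIOR = [(1 - P_ALLELE) ** 2,          # Hardy-Weinberg founder priors
         2 * P_ALLELE * (1 - P_ALLELE),
         P_ALLELE ** 2]
PENETRANCE = [0.01, 0.90, 0.90]        # hypothetical dominant model: P(affected | g)

def transmission(gc: int, gm: int, gf: int) -> float:
    """P(child genotype gc | mother gm, father gf) under Mendelian segregation.
    Each parent transmits the variant allele with probability g/2."""
    pm, pf = gm / 2, gf / 2
    probs = [(1 - pm) * (1 - pf),              # child inherits 0 copies
             pm * (1 - pf) + (1 - pm) * pf,    # 1 copy
             pm * pf]                          # 2 copies
    return probs[gc]

def trio_likelihood(mother_affected, father_affected, child_affected) -> float:
    def pen(g, affected):
        return PENETRANCE[g] if affected else 1 - PENETRANCE[g]
    total = 0.0
    for gm in range(3):
        for gf in range(3):
            # "Peel" the child: sum out its genotype given the parents.
            child_term = sum(transmission(gc, gm, gf) * pen(gc, child_affected)
                             for gc in range(3))
            total += (PRIOR[gm] * pen(gm, mother_affected) *
                      PRIOR[gf] * pen(gf, father_affected) * child_term)
    return total

print(trio_likelihood(False, True, True))   # likelihood of one observed trio
```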
**Spark gap**
Spark gap:
A spark gap consists of an arrangement of two conducting electrodes separated by a gap usually filled with a gas such as air, designed to allow an electric spark to pass between the conductors. When the potential difference between the conductors exceeds the breakdown voltage of the gas within the gap, a spark forms, ionizing the gas and drastically reducing its electrical resistance. An electric current then flows until the path of ionized gas is broken or the current reduces below a minimum value called the "holding current". This usually happens when the voltage drops, but in some cases occurs when the heated gas rises, stretching out and then breaking the filament of ionized gas. Usually, the action of ionizing the gas is violent and disruptive, often leading to sound (ranging from a snap for a spark plug to thunder for a lightning discharge), light, and heat.
Spark gap:
Spark gaps were used historically in early electrical equipment, such as spark gap radio transmitters, electrostatic machines, and X-ray machines. Their most widespread use today is in spark plugs to ignite the fuel in internal combustion engines, but they are also used in lightning arresters and other devices to protect electrical equipment from high-voltage transients.
Breakdown voltage:
For air, the breakdown strength is about 30 kV/cm at sea level.
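As a back-of-the-envelope use of this figure (a uniform-field approximation only; small or non-uniform gaps deviate, as made precise by Paschen's law, discussed later in this article), the spark-over voltage of a gap is roughly the field strength times the gap length:

```python
E_BREAKDOWN_KV_PER_CM = 30.0   # approximate dielectric strength of air at sea level

def sparkover_voltage_kv(gap_cm: float) -> float:
    """Rough uniform-field estimate: V ~ E_breakdown * gap."""
    return E_BREAKDOWN_KV_PER_CM * gap_cm

print(sparkover_voltage_kv(0.1))  # ~3 kV across a 1 mm gap
```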
Spark visibility:
The light emitted by a spark does not come from the current of electrons itself, but from the material medium fluorescing in response to collisions from the electrons. When electrons collide with molecules of air in the gap, they excite their orbital electrons to higher energy levels. When these excited electrons fall back to their original energy levels, they emit energy as light. It is impossible for a visible spark to form in a vacuum. Without intervening matter capable of electromagnetic transitions, the spark will be invisible (see vacuum arc).
Applications:
Spark gaps are essential to the functioning of a number of electronic devices.
Applications:
Ignition devices:
A spark plug uses a spark gap to initiate combustion. The heat of the ionization trail, but more importantly UV radiation and hot free electrons (both cause the formation of reactive free radicals), ignite a fuel-air mixture inside an internal combustion engine, or a burner in a furnace, oven, or stove. The more UV radiation is produced and successfully spread into the combustion chamber, the further the combustion process proceeds. The Space Shuttle Main Engine's hydrogen-oxygen propellant mixture was ignited with a spark igniter.
Applications:
Protective devices:
Spark gaps are frequently used to prevent voltage surges from damaging equipment. Spark gaps are used in high-voltage switches, large power transformers, in power plants and electrical substations. Such switches are constructed with a large, remote-operated switching blade with a hinge as one contact and two leaf springs holding the other end as the second contact. If the blade is opened, a spark may keep the connection between blade and spring conducting. The spark ionizes the air, which becomes conductive and allows an arc to form, which sustains ionization and hence conduction. A Jacob's ladder on top of the switch will cause the arc to rise and eventually extinguish. One might also find small Jacob's ladders mounted on top of ceramic insulators of high-voltage pylons. These are sometimes called horn gaps. If a spark should ever manage to jump over the insulator and give rise to an arc, it will be extinguished.
Applications:
Smaller spark gaps are often used to protect sensitive electrical or electronic equipment from high-voltage surges. In sophisticated versions of these devices (called gas tube arresters), a small spark gap breaks down during an abnormal voltage surge, safely shunting the surge to ground and thereby protecting the equipment. These devices are commonly used for telephone lines as they enter a building; the spark gaps help protect the building and internal telephone circuits from the effects of lightning strikes. Less sophisticated (and much less expensive) spark gaps are made using modified ceramic capacitors; in these devices, the spark gap is simply an air gap sawn between the two lead wires that connect the capacitor to the circuit. A voltage surge causes a spark that jumps from lead wire to lead wire across the gap left by the sawing process. These low-cost devices are often used to prevent damaging arcs between the elements of the electron gun(s) within a cathode ray tube (CRT). Small spark gaps are very common in telephone switchboards, as the long phone cables are very susceptible to induced surges from lightning strikes. Larger spark gaps are used to protect power lines.
Applications:
Spark gaps are sometimes implemented on printed circuit boards in electronics products using two closely spaced exposed PCB traces. This is an effectively zero-cost method of adding crude over-voltage protection to electronics products. Transils and trisils are the solid-state alternatives to spark gaps for lower-power applications. Neon bulbs are also used for this purpose.
High-speed photography:
A triggered spark gap in an air-gap flash is used to produce photographic light flashes in the sub-microsecond domain.
Applications:
Radio transmitters:
A spark radiates energy throughout the electromagnetic spectrum. Nowadays, this is usually regarded as illegal radio frequency interference and is suppressed, but in the early days of radio communications (1880–1920), this was the means by which radio signals were transmitted, in the unmodulated spark-gap transmitter. Many radio spark gaps include cooling devices, such as the rotary gap and heat sinks, since the spark gap becomes quite hot under continuous use at high power.
Applications:
Sphere gap for voltage measurement:
A calibrated spherical spark gap will break down at a highly repeatable voltage, when corrected for air pressure, humidity and temperature. A gap between two spheres can provide a voltage measurement without any electronics or voltage dividers, to an accuracy of about 3%. A spark gap can be used to measure high voltage AC, DC, or pulses, but for very short pulses, an ultraviolet light source or radioactive source may be put on one of the terminals to provide a source of electrons.
Applications:
Power-switching devices:
Spark gaps may be used as electrical switches because they have two states with significantly different electrical resistance. Resistance between the electrodes may be as high as 10¹² ohms when the electrodes are separated by gas or vacuum, which means that little current flows even when a high voltage exists between the electrodes. Resistance drops as low as 10⁻³ ohms when the electrodes are connected by plasma, which means that power dissipation is low even at high current. This combination of properties has led to the use of spark gaps as electrical switches in pulsed power applications where energy is stored at high voltage in a capacitor and then discharged at high current. Examples include pulsed lasers, railguns, Marx generators, fusion, ultrastrong pulsed magnetic field research, and nuclear bomb triggering.
Applications:
When a spark gap consists of only two electrodes separated by gas, the transition between the non-conducting and conducting states is governed by Paschen's law. At typical pressure and electrode distance combinations, Paschen's law says that Townsend discharge will fill the gap between the electrodes with conductive plasma whenever the ratio of the electric field strength to the pressure exceeds a constant value determined by the composition of the gas. The speed with which pressure can be reduced is limited by choked flow, while increasing the electric field in a capacitor discharge circuit is limited by the capacitance in the circuit and the current available for charging the capacitance. These limitations on the speed with which discharge may be initiated mean that spark gaps with two electrodes typically have high jitter.
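The Paschen relation in this paragraph has a standard textbook closed form, V_b = Bpd / (ln(Apd) − ln(ln(1 + 1/γ))), where A and B are gas-dependent constants and γ is the cathode's secondary-emission coefficient. The Python sketch below uses commonly quoted approximate values for air (they vary by source, so treat them as illustrative):

```python
import math

def paschen_breakdown_voltage(p_torr: float, d_cm: float,
                              A: float = 15.0,     # 1/(cm*Torr), approx. for air
                              B: float = 365.0,    # V/(cm*Torr), approx. for air
                              gamma: float = 0.01  # secondary-emission coefficient
                              ) -> float:
    """Breakdown voltage of a uniform-field gap per Paschen's law."""
    pd = p_torr * d_cm
    denom = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma))
    if denom <= 0:
        return float("inf")  # left of the Paschen minimum: no breakdown at this pd
    return B * pd / denom

# A 1 cm gap at atmospheric pressure (760 Torr): roughly 30-40 kV,
# consistent with the ~30 kV/cm rule of thumb quoted earlier.
print(paschen_breakdown_voltage(760, 1.0))
```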
Applications:
Triggered spark gaps are a class of devices with some additional means of triggering to achieve low jitter. Most commonly, this is a third electrode, as in a trigatron. The voltage of the trigger electrode can be changed quickly because the capacitance between it and the other electrodes is small. In a triggered spark gap, gas pressure is optimized to minimize jitter while also avoiding unintentional triggering. Triggered spark gaps are made in permanently sealed versions with limited voltage range and in user-pressurized versions with voltage range proportional to the available pressure range. Triggered spark gaps share many similarities with other gas-filled tubes such as thyratrons, krytrons, ignitrons, and crossatrons.
Applications:
Triggered vacuum gaps, or sprytrons, resemble triggered spark gaps both in appearance and construction but rely on a different operating principle. A triggered vacuum gap consists of three electrodes in an airtight glass or ceramic envelope that has been evacuated. This means that, unlike a triggered spark gap, a triggered vacuum gap operates in the parameter space to the left of the Paschen minimum where breakdown is promoted by increasing pressure. Current between the electrodes is limited to a small value by field emission in the non-conducting state. Breakdown is initiated by rapidly evaporating material from a trigger electrode or an adjacent resistive coating. Once the vacuum arc is initiated, a triggered vacuum gap is filled with conductive plasma as in any other spark gap. A triggered vacuum gap has a larger operating voltage range than a sealed triggered spark gap because Paschen curves are much steeper to the left of the Paschen minimum than at higher pressures. Triggered vacuum gaps are also rad hard because in the non-conducting state they do not contain any gas that could be ionized by radiation.
Applications:
Insect control:
Spark gaps are also used in insect zappers. The two electrodes are implemented as metal lattices placed slightly too far apart for a spark to jump at the applied voltage. When an insect ventures between the electrodes, its conductive body reduces the gap distance, and a spark discharge occurs, electrocuting and burning the insect.
In this use the spark gap mechanism is often used in conjunction with a bait, such as a light, to attract the insect into the spark gap. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aquanaut**
Aquanaut:
An aquanaut is any person who remains underwater, breathing at the ambient pressure for long enough for the concentration of the inert components of the breathing gas dissolved in the body tissues to reach equilibrium, in a state known as saturation. Usually this is done in an underwater habitat on the seafloor for a period equal to or greater than 24 continuous hours without returning to the surface. The term is often restricted to scientists and academics, though there were a group of military aquanauts during the SEALAB program. Commercial divers in similar circumstances are referred to as saturation divers. An aquanaut is distinct from a submariner, in that a submariner is confined to a moving underwater vehicle such as a submarine that holds the water pressure out. Aquanaut derives from the Latin word aqua ("water") plus the Greek nautes ("sailor"), by analogy to the similar construction "astronaut".
Aquanaut:
The first human aquanaut was Robert Sténuit, who spent 24 hours on board a tiny one-man cylinder at 200 feet (61 m) in September 1962 off Villefranche-sur-Mer on the French Riviera. Military aquanauts include Robert Sheats, author Robin Cook, and astronauts Scott Carpenter and Alan Shepard. Civilian aquanaut Berry L. Cannon died of carbon dioxide poisoning during the U.S. Navy's SEALAB III project.
Aquanaut:
Scientific aquanauts include Sylvia Earle, Jonathan Helfgott, Joseph B. MacInnis, Dick Rutkowski, Phil Nuytten, and about 700 others, including the crew members (many of them astronauts) of NASA's NEEMO missions at the Aquarius underwater laboratory.
Russian military program:
A unit of the Russian navy has developed an aquanaut program that has deployed divers more than 300 meters deep. An ocean vessel, based in Vladivostok, has been developed that is specialized for submarine and other deep-sea rescue and is equipped with a diving complex and a 120-seat deep-sea diving craft.
Accidental aquanaut:
A Nigerian ship's cook, Harrison Odjegba Okene, survived for 60 hours in a sunken tugboat, Jascon-4, that capsized on 26 May 2013 in heavy seas while it was stabilising an oil tanker at a Chevron platform in the Gulf of Guinea in the Atlantic Ocean, about 32 km (20 mi) off the Nigerian coast.
Accidental aquanaut:
The boat came to rest upside down on the sea bottom, at a depth of 30 m (98 ft). Eleven crew members died, but in total darkness, Okene felt his way into the engineer's office, a space 1.2 m (3 ft 11 in) in height that contained enough air to keep him alive. There, he fabricated a platform from a mattress and other materials, which kept the upper part of his body above water and helped reduce heat loss. Three days after the accident, Okene was discovered by South African divers, Nicolaas van Heerden, Darryl Oosthuizen and Andre Erasmus, employed to investigate the scene and recover the bodies. The rescuing divers fitted Okene with a diving helmet so he could breathe while being transferred into a closed diving bell and returned to the surface for decompression from saturation. Okene lost consciousness during the transfer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**System of systems engineering**
System of systems engineering:
System of systems engineering (SoSE) is a set of developing processes, tools, and methods for designing, re-designing and deploying solutions to system-of-systems challenges.
Overview:
System of Systems Engineering (SoSE) methodology is heavily used in U.S. Department of Defense applications, but is increasingly being applied to non-defense problems such as architectural design problems in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration, Industry 4.0, and many other system-of-systems application domains. SoSE is more than systems engineering of monolithic, complex systems because design for system-of-systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations across multiple levels and domains. Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements.
Overview:
System-of-Systems Engineering and Systems Engineering are related but different fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize a network of various interacting legacy and new systems brought together to satisfy multiple objectives of the program. SoSE should enable decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time; thus, an effective SoSE methodology should prepare decision-makers to design informed architectural solutions for system-of-systems problems.
Overview:
Due to varied methodology and domains of applications in existing literature, there does not exist a single unified consensus for processes involved in System-of-Systems Engineering. One of the proposed SoSE frameworks, by Dr. Daniel A. DeLaurentis, recommends a three-phase method where a SoS problem is defined (understood), abstracted, modeled and analyzed for behavioral patterns. More information on this method and other proposed methods can be found in the listed SoSE focused organizations and SoSE literature in the subsequent sections. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Philips Nino**
Philips Nino:
The Philips Nino is a so-called Palm-size PC, a predecessor to the Pocket PC platform. It was a PDA-style device with a stylus-operated touch screen. The Nino 200 and Nino 300 models had a monochrome screen, while the Nino 500 had a color display. The Nino featured voice-control software and Tegic T9 text input. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Advance Concrete**
Advance Concrete:
Advance Concrete is a computer-aided design (CAD) software application, originally developed by GRAITEC and later an Autodesk product, used for modeling and detailing reinforced concrete structures. Advance Concrete is used in the structural / civil engineering and drafting fields.
Advance Concrete was discontinued by Autodesk on January 31, 2017, with Revit as the suggested replacement.
Features:
Advance Concrete is specifically designed for engineers and structural draftsmen looking for comprehensive and easy to use software, including its own graphics engine allowing it to run with or without AutoCAD.
The application can be used for modeling and detailing different types of concrete structures, such as buildings, precast concrete elements, and also for civil engineering designs.
Advance Concrete uses AutoCAD technology: ObjectARX. This technology provides users with professional objects (beams, columns, slabs, bars, frames, stirrup bars) integrated into AutoCAD and on which most basic AutoCAD functions can be applied (stretch, shorten, copy, move). The Advance Concrete native file is the DWG file.
The main functions of Advance Concrete concern: 2D / 3D modeling of concrete structures; Automated drawings, with tools for automatic creation of sections, elevations, foundations, isometric views, etc.; Advanced reinforcement, with automatic creation and update functionalities and also with manual input tools; Creation of paper drawings; Automatic bill of materials.
Features:
Multi-user modeling: users can securely and simultaneously work on the same project through a shared database that stores the model data. The program provides a working environment for creating 3D structural models from which drawings are created. The 3D model is created using Advance Concrete specific objects (structural elements, openings, rebars, etc.) and stored in a drawing (in DWG format). Once a model is complete, Advance Concrete creates all structural and reinforcement drawings using a large selection of tools for view creation, dimensions, interactive annotations, symbols, markings and automatic layout functions. Advance Concrete provides functionalities for automatic drawing updates based on model modifications.
Features:
Specific functionalities:
One of the features of Advance Concrete is the "dynamic reinforcement" technology for the rapid reinforcement of concrete elements, taking into account their context (type, parameters, connection with other elements).
Features:
The user creates a so-called dynamic reinforcement solution that integrates the reinforcement cage elements and properties: geometric information, local rules and standards for steel grade, reinforcement bar placement, concrete cover, etc. The reinforcement solutions can be used later for elements that have different sizes. The reinforcement elements adjust to the new dimensions and are taken into account at reinforcement drawing creation and lists.
Features:
The user can save the reinforcement solution in an external file, which can be exchanged, downloaded, reused in any other projects etc.
Software compatibility:
The application is compatible with the following operating systems and Autodesk platforms: Windows Vista, Windows XP Pro, and Windows 7 (32- and 64-bit); AutoCAD 2007, 2008, 2009 (32- and 64-bit), and 2010 (32- and 64-bit); ADT 2007; AutoCAD Architecture 2008, 2009, 2010, and 2011.
Software interoperability:
Advance Concrete integrates GRAITEC's "GTC" (GRAITEC Transfer Center) technology, a data synchronization technology that allows: importing / exporting data to other GRAITEC software and standard formats (IFC 2x3); several Advance Concrete users working simultaneously on the same project and synchronizing their models; and synchronizing in Advance Concrete the modifications made by engineers in other GRAITEC software applications (e.g., section changes, addition of structural elements, etc.). GTC is the GRAITEC solution for CAD/Design software interoperability and integration, specialized in creating and handling a BIM (Building Information Model). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Goaltender mask**
Goaltender mask:
A goaltender mask, commonly referred to as a goalie mask, is a mask worn by goaltenders in a variety of sports to protect the head and face from injury from the ball or puck, as they constantly face incoming shots on goal. Some sports requiring their use include ice hockey, lacrosse, inline hockey, field hockey, rink hockey, ringette, bandy, rinkball, broomball, and floorball. This article deals chiefly with the goal masks used in ice hockey.
Goaltender mask:
In ice hockey it is sometimes simply referred to as a hockey mask. In some cases the facemask must meet safety specifications designed for use in a specific sport such as ringette. Some recent changes have also occurred in bandy though not without controversy. This article deals chiefly with masks designed for ice hockey goaltenders.
Goaltender mask:
Jacques Plante was the first ice hockey goaltender to create and use a practical mask in 1959. Plante's mask was a piece of fiberglass that was contoured to his face. This mask later evolved into a helmet-cage combination, and single piece full fiberglass mask. Today, the full fiberglass mask with the birdcage facial protector (known as a "combo mask") is the more popular option, because it is safer and offers better visibility.
Goaltender mask:
Since the invention of the fiberglass ice hockey mask, professional goaltenders no longer play without a mask, considering it is now a mandatory piece of equipment. The last goaltender to play without a mask was Andy Brown, who played his last NHL game in 1974. He later moved to the Indianapolis Racers of the World Hockey Association and played without a mask until his retirement in 1977.
History:
The first recorded case of an ice hockey goaltender using a mask was in February 1927, when a metal fencing mask was donned by Queen's University netminder Elizabeth Graham, mainly to protect her teeth. In 1930, the first crude leather model of the mask (actually an American football "nose-guard") was worn by Clint Benedict to protect his broken nose. After recovering from the injury, he abandoned the mask, never wearing one again in his career.
History:
At the 1936 Winter Olympics, Japanese ice hockey goaltender Teiji Honma wore a crude mask, similar to the one worn by baseball catchers. The mask was made of leather, and had a wire cage that protected the face, as well as Honma's large circular glasses.
History:
Jacques Plante:
It was not until 1959 that a goaltender wore a mask full-time. On November 1, 1959, in the first period of a game between the Montreal Canadiens and New York Rangers of the National Hockey League (NHL) at Madison Square Garden, Canadiens goaltender Jacques Plante was struck in the face by a shot from Andy Bathgate. Plante had previously worn his mask in practice, but head coach Toe Blake refused to allow him to wear it in a game, fearing it would inhibit his vision. After being stitched up, Plante gave Blake an ultimatum, refusing to go back out onto the ice without the mask. Blake obliged, not wanting to forfeit the game, since NHL teams did not have back-up goaltenders at the time. Montreal won the game 3–1 and went on an 18-game unbeaten streak that ran through November. In preparation for the playoffs, Plante was asked by Blake to remove the mask for a game on March 8, a 3–0 loss. Plante donned the mask the next night, and for the remainder of his career. When he introduced the mask into the NHL, many questioned his dedication and bravery; in response, Plante compared playing without a mask to skydiving without a parachute, a gesture he considered stupidity rather than bravery. Although Plante faced some laughter, the face-hugging fiberglass goaltender mask soon became the standard; by late 1969, only a few NHL goaltenders went without one.
Types:
Face-hugging:
The face-hugging fiberglass mask, the type first worn by Jacques Plante, is a longtime symbol of ice hockey, as typified by the famous painting At the Crease by Ken Danby. The goaltender mask evolved further from the original face-hugging fiberglass mask designed by Plante. Although this mask does not seem very protective now, at the time it was, given the style of game that was played. Gerry Cheevers's face-hugging mask for the Boston Bruins was among the first to be "decorated" in a custom manner: then-Bruins trainer John "Frosty" Forristall painted a fake stitch on the mask, as a joke, where Cheevers had been struck by an errant puck. Cheevers adopted the "stitch mask" as his own, and went on to set an NHL record (which still stands) of 32 straight wins during the Bruins' 1971–72 season. While this style of mask is no longer used by hockey leagues, it has remained famous because of its use in popular culture. Perhaps the best-known example is the character Jason Voorhees from the Friday the 13th horror film franchise. Casey Jones from the Teenage Mutant Ninja Turtles franchise also wears a stylized version of the mask, as did D-Roc the Executioner, the late guitarist of the heavy metal band Body Count. Similarly, the members of Hollywood Undead are always seen wearing signature masks based on this design. In the film Heat, the protagonists wear face-hugging hockey masks as part of their disguise during a heist, as do the characters in the video game Grand Theft Auto: Vice City, during a mission heavily inspired by the heist from the film.
Types:
Helmet-cage combination:
In the 1970s, a helmet-cage combination was popularized by Vladislav Tretiak. Like the original fiberglass design, the helmet-cage combination has been criticized for not providing adequate facial/cranial protection. Dan Cloutier switched from this type of mask to the more popular full fiberglass mask, citing safety reasons, upon the advice of the Los Angeles Kings. Dominik Hašek used this type of mask. Rick DiPietro, last with the New York Islanders in 2013, was one of the last NHL goaltenders to use this type of mask. Following Clint Malarchuk's life-threatening injury in 1989, more goaltender masks have adopted a plastic extension to guard the neck, usually hanging loose for more maneuverability. On March 4, 2014, Tim Thomas took the ice for the Florida Panthers wearing an old Cooper helmet painted dark blue with a modern Bauer cage and white Itech neck guard attached. During the game, the cage broke from a slapshot and Thomas returned with a red Mage-style helmet with a similar Bauer cage. Goaltenders at lower levels of hockey (such as high-school, college or recreational leagues) who choose this design cite reasons such as the plastic helmet being lighter than the fiberglass or composite materials used in other designs, and the helmet having a wider opening than a traditional mask, for a less claustrophobic feeling and better sight of the puck.
Types:
Fiberglass/cage combination (combo mask):
In the late 1970s, a second type of goaltender mask, consisting of a fiberglass mask with a wire cage covering a cut-out area in the middle, was developed by Dave Dryden and Greg Harrison. The fiberglass portion can also be made of carbon fiber, or a fiberglass and kevlar mix. Gilles Meloche and Chico Resch were among the first NHL adopters of the combo mask, in the early 1980s. More modern versions of this type of mask are designed to better withstand the impact of a hockey puck at higher speeds, and are used at all levels of organized ice hockey. This type of mask is considered safer than the other types, since it disperses the impact of the puck better than the helmet-cage combination, and it is the most common type used by goaltenders today. Former goaltender Tim Thomas of the Boston Bruins wore a newer one-piece style called a Sportmask Mage RS, which is made like the newer fiberglass mask but resembles the helmet/cage combination. The combo mask was approved for Canadian minor hockey in 1989. Amateur versions have only square or rectangular openings between the bars, as cats-eye bars are banned in minor hockey.
Types:
Cats-eye bars:
Brian Heaton, designer of the Cooper Canada HM30 cage and the HM40 for forward players, inspired the basis for all cats-eye bars (a.k.a. "cateye" cages) in use by goaltenders today. Cats-eye bars are banned in all minor hockey governed by Hockey Canada, unless they feature additional bars to reduce the size of the openings.
Tactical play:
The advent of the goaltender mask changed the way goaltenders play, allowing them to make more saves on their knees without fear of serious head or facial injuries. Before the advent of the mask, most goaltenders stayed standing as much as possible. In the modern era, a goaltender is likely to suffer temporary discomfort instead of serious concussions and lacerations; however, a mask does not eliminate all potential risk of injury, and goaltenders have been concussed by a shot hitting the head. Some goaltenders, such as Dominik Hašek and Henrik Lundqvist, have used their heads intentionally to stop shots. Lundqvist said that his reason for this is to not obstruct his vision by placing his catching glove in front of his mask to stop the shot.
Mask decoration:
With available surface area provided by fiberglass masks, goaltenders find it fashionable to give their mask distinctive decorations. This tradition started with the earliest masks, notably by the aforementioned, now-retired Boston Bruins goaltender Gerry Cheevers, who was known for drawing stitches on his mask whenever it got hit. These stitches represented where Cheevers would have been cut had he not been wearing his mask. Modern-day masks also offer this ability, and goaltenders are well-identified with their helmet design, often transferring the motif into their new team's colours when traded or signed to a new team (for example, Patrick Lalime's Marvin the Martian theme, Félix Potvin's cat theme, Curtis Joseph's Cujo theme, Ed Belfour's eagle theme, Martin Brodeur's Devils theme, Peter Budaj's Ned Flanders theme, Cam Talbot's Ghostbusters theme or John Gibson's Arcade game theme).
Other uses:
Other sports:
In recent years, baseball catchers have begun wearing facemasks similar in style to goaltender masks. Charlie O'Brien was the first to use a hockey-style catcher's mask in a Major League Baseball game, in 1996, while playing for the Toronto Blue Jays. Goaltender masks are commonly worn by box lacrosse, ringette, rinkball, floorball and field hockey goaltenders at both youth and professional levels. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**H-TCP**
H-TCP:
H-TCP is an implementation of TCP with an optimized congestion control algorithm for high-speed networks with high latency (LFN: long fat networks). It was created by researchers at the Hamilton Institute in Ireland.
H-TCP is an optional module in Linux since kernel version 2.6, and has been implemented for FreeBSD 7.
Principles of operation:
H-TCP is a loss-based algorithm, using additive-increase/multiplicative-decrease (AIMD) to control TCP's congestion window. It is one of many TCP congestion avoidance algorithms which seeks to increase the aggressiveness of TCP on high bandwidth-delay product (BDP) paths, while maintaining "TCP friendliness" for small BDP paths. H-TCP increases its aggressiveness (in particular, the rate of additive increase) as the time since the previous loss increases. This avoids the problem encountered by HSTCP and BIC TCP of making flows more aggressive if their windows are already large. Thus, new flows can be expected to converge to fairness faster under HTCP than HSTCP and BIC TCP.
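A rough Python sketch of this idea follows. It uses the quadratic increase function commonly quoted for H-TCP, α(Δ) = 1 + 10(Δ − Δ_L) + ((Δ − Δ_L)/2)², with the low-speed threshold Δ_L defaulting to 1 second, and an adaptive backoff factor β clamped to [0.5, 0.8]; treat the constants and structure as an illustration of the mechanism rather than a normative specification:

```python
def htcp_alpha(delta: float, delta_l: float = 1.0) -> float:
    """Additive-increase rate as a function of time since the last loss.
    Behaves like standard TCP (alpha = 1) until delta_l seconds have passed,
    then grows, making long-running loss-free flows more aggressive."""
    if delta <= delta_l:
        return 1.0
    t = delta - delta_l
    return 1.0 + 10.0 * t + (t / 2.0) ** 2

def htcp_beta(rtt_min: float, rtt_max: float) -> float:
    """Multiplicative-decrease factor, adapted from the observed RTT ratio
    and clamped so backoff stays between 0.5 (Reno-like) and 0.8."""
    return min(max(rtt_min / rtt_max, 0.5), 0.8)

# Per ACK:  cwnd += htcp_alpha(now - last_loss_time) / cwnd
# On loss:  cwnd *= htcp_beta(rtt_min, rtt_max); last_loss_time = now
```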
Strengths and weaknesses:
A side effect of increasing the rate of increase as the time since the last packet loss increases, is that flows which happen not to lose a packet when other flows do, can then take an unfair portion of the bandwidth. Techniques to overcome this are currently in the research phase.
The Linux implementation of H-TCP also has an option for avoiding "RTT unfairness", which occurs in TCP Reno, but is a particular problem for most high speed variants of TCP (although not FAST TCP).
Name:
The algorithm was initially introduced as H-TCP, without mention of what the 'H' stands for. However, it is often called "Hamilton TCP", for the Hamilton Institute where it was created. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sturge–Weber syndrome**
Sturge–Weber syndrome:
Sturge–Weber syndrome, sometimes referred to as encephalotrigeminal angiomatosis, is a rare congenital neurological and skin disorder. It is one of the phakomatoses and is often associated with port-wine stains of the face, glaucoma, seizures, intellectual disability, and ipsilateral leptomeningeal angioma (cerebral malformations and tumors). Sturge–Weber syndrome can be classified into three different types. Type 1 includes facial and leptomeningeal angiomas as well as the possibility of glaucoma or choroidal lesions. Normally, only one side of the brain is affected. This type is the most common. Type 2 involvement includes a facial angioma (port wine stain) with a possibility of glaucoma developing. There is no evidence of brain involvement. Symptoms can show at any time beyond the initial diagnosis of the facial angioma. The symptoms can include glaucoma, cerebral blood flow abnormalities and headaches. More research is needed on this type of Sturge–Weber syndrome. Type 3 has leptomeningeal angioma involvement exclusively. The facial angioma is absent and glaucoma rarely occurs. This type is only diagnosed via brain scan. Sturge–Weber is an embryonal developmental anomaly resulting from errors in mesodermal and ectodermal development. Unlike other neurocutaneous disorders (phakomatoses), Sturge–Weber occurs sporadically (i.e., does not have a hereditary cause). It is caused by a mosaic, somatic activating mutation occurring in the GNAQ gene. Imaging findings may include tram track calcifications on CT, pial angiomatosis, and hemicerebral atrophy.
Signs and symptoms:
Sturge–Weber syndrome is usually manifested at birth by a port-wine stain on the forehead and upper eyelid of one side of the face, or the whole face. The birthmark can vary in color from light pink to deep purple and is caused by an overabundance of capillaries around the ophthalmic branch of the trigeminal nerve, just under the surface of the face. There is also malformation of blood vessels in the pia mater overlying the brain on the same side of the head as the birthmark. This causes calcification of tissue and loss of nerve cells in the cerebral cortex. Neurological signs include seizures that begin in infancy and may worsen with age. Convulsions usually happen on the side of the body opposite the birthmark, and vary in severity. There may also be muscle weakness on the side of the body opposite the birthmark. Some children will have developmental delays and cognitive delays; about 50% will have glaucoma (optic neuropathy often associated with increased intraocular pressure), which can be present at birth or develop later. Glaucoma can be expressed as leukocoria, which should suggest further evaluation for retinoblastoma. Increased pressure within the eye can cause the eyeball to enlarge and bulge out of its socket (buphthalmos). Sturge–Weber syndrome rarely affects other body organs.
Cause:
The blood vessel formations associated with SWS start in the fetal stage. Around the sixth week of development, a network of nerves develops around the area that will become a baby's head. Normally, this network goes away in the ninth week of development. In babies with SWS due to mutation of gene GNAQ, this network of nerves doesn't go away. This reduces the amount of oxygen and blood flowing to the brain, which can affect brain tissue development.
Diagnosis:
CT and MRI are most often used to identify intracranial abnormalities. When a child is born with a facial cutaneous vascular malformation covering a portion of the upper or the lower eyelids, imaging should be performed to screen for intracranial leptomeningeal angiomatosis. The haemangioma present on the surface of the brain is in the vast majority of cases on the same side as the birth mark and gradually results in calcification of the underlying brain and atrophy of the affected region.
Treatment:
Treatment for Sturge–Weber syndrome is symptomatic. Laser treatment may be used to lighten or remove the birthmark. Anticonvulsant medications may be used to control seizures. Doctors recommend early monitoring for glaucoma, and surgery may be performed on more serious cases. When one side of the brain is affected and anticonvulsants prove ineffective, the standard treatment is neurosurgery to remove or disconnect the affected part of the brain (hemispherectomy). Physical therapy should be considered for infants and children with muscle weakness. Educational therapy is often prescribed for those with intellectual disability or developmental delays, but there is no complete treatment for the delays.
Treatment:
Brain surgery involving removing the portion of the brain that is affected by the disorder can be successful in controlling the seizures so that the patient has only a few seizures that are much less intense than pre-surgery. Surgeons may also opt to "switch off" the affected side of the brain. Latanoprost (Xalatan), a prostaglandin, may significantly reduce IOP (intraocular pressure) in patients with glaucoma associated with Sturge–Weber syndrome. Latanoprost is commercially formulated as an aqueous solution in a concentration of 0.005%, preserved with 0.02% benzalkonium chloride (BAC). The recommended dosage of latanoprost is one drop daily in the evening, which permits better diurnal IOP control than does morning instillation. Its effect is independent of ethnicity, gender or age, and it has few to no side effects. Contraindications include a history of cystic macular edema (CME), epiretinal membrane formation, vitreous loss during cataract surgery, history of macular edema associated with branch retinal vein occlusion, history of anterior uveitis, and diabetes mellitus. It is also wise to advise patients that unilateral treatment can result in heterochromia or hypertrichosis that may become cosmetically objectionable.
Prognosis:
Although it is possible for the birthmark and atrophy in the cerebral cortex to be present without symptoms, most infants will develop convulsive seizures during their first year of life. There is a greater likelihood of intellectual impairment when seizures are resistant to treatment. Studies do not support the widely held belief that seizure frequency early in life in patients who have SWS is a prognostic indicator.
Epidemiology:
It occurs in approximately 1 in 50,000 newborns.
Eponym:
It is named for William Allen Sturge and Frederick Parkes Weber.
Society and culture:
The Sturge-Weber Foundation's (The SWF) international mission is to improve the quality of life and care for people with Sturge–Weber syndrome and associated port wine birthmark conditions. It supports affected individuals and their families with education, advocacy, and research to promote effective management and awareness. The SWF was founded by Kirk and Karen Ball, who began searching for answers after their daughter was diagnosed with Sturge–Weber syndrome at birth. The SWF was incorporated in the US in 1987 as an international 501(c)(3) non-profit organization. In 1992, the mission was expanded to include individuals with capillary vascular birthmarks, Klippel–Trenaunay (KT) and port wine birthmarks. The Hemispherectomy Foundation was formed in 2008 to assist families with children who have Sturge–Weber syndrome and other conditions that require hemispherectomy. The Brain Recovery Project was formed in 2011 to fund research and establish rehabilitation protocols to help children who have had hemispherectomy surgery reach their full potential. Sturge Weber UK (SWUK), formerly Sturge-Weber Foundation UK, is a volunteer-run registered charity formed in 1990. The charity exists to support those affected by Sturge Weber syndrome, promote research into the condition and raise awareness of the condition amongst both public and professionals. The charity was instrumental in setting up a specialist Sturge Weber clinic at Great Ormond Street Hospital. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Enterovirus D**
Enterovirus D:
Enterovirus D is a species of enterovirus which causes disease in humans. Five subtypes have been identified to date:
Enterovirus 68: causes respiratory disease, and is associated with acute flaccid paralysis (AFP) – a disease similar to polio.
Enterovirus 70: causes outbreaks of acute hemorrhagic conjunctivitis.
Enterovirus 94: has been associated with a single case of AFP.
Enterovirus 111: has been associated with a single case of AFP, and has been found in primate feces.
Enterovirus 120: has only been found in non-human primate feces.
Similarities with Rhinovirus:
Enterovirus D has many serotypes, some of which closely resemble other viral species; for example, Human rhinovirus (HRV) 87 was reclassified as a strain of EV-D68 (Enterovirus D serotype 68). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ostrich leather**
Ostrich leather:
Ostrich leather is the result of tanning skins taken from African ostriches farmed for their feathers, skin and meat. The leather is distinctive for its pattern of bumps or vacant quill follicles, ranged across a smooth field in varying densities. It requires an intricate, specialised and expensive production process, which makes the leather costly. Although the first commercial farming began in South Africa in 1850, the industry collapsed after World War I and the drop in demand for feathers for fashionable hats and military uniforms. Other products were marketed, with each success battered by world events and droughts; today, ostrich skin is globally available and seen as a luxury item in high-end demand.
Ostrich leather:
Leather came late in the story of ostrich farming, but after a tannery was set up onsite it went on to make an impact in European haute couture and, in the US, in cowboy boots, becoming widespread during the 1970s. Demand peaked in the 1980s. Availability was artificially limited when ostrich leather was subject to a cartel monopoly through trade sanctions and single export and distribution channels until the end of apartheid in 1993. After that, and under pressure from other factors, the South African government began to export stock, allowing other countries to have their own ranches. Although wider production resulted in competition and lower prices, Klein Karoo Group remains the leading global producer.
Ostrich leather:
There were estimated to be just under 500,000 commercially bred ostriches in the world in 2003, with around 350,000 of these in South Africa. Ostrich leather is regarded as an exotic leather product alongside crocodile, snake, lizard, camel and emu, among others. Ostrich skins are the largest in terms of volumes traded in the global exotic skins market.
The premium strain of ostrich is the "African Black", which originated on the ranches of South Africa through various forms of selective breeding.
General description:
Ostrich leather is distinct in its appearance and is characterized by raised points that are localized to the center of the hide. The portion with these bumps is called the "crown". It is actually the back of the ostrich where the animal's neck meets its body. The bumps are quill follicles where a feather grew. On the left and right side of the diamond shaped crown the skin is quite smooth. Only about a third of the whole skin has quill bumps. Since the crown is the most sought-after portion and since it constitutes such a small area of the skin, "full quill" ostrich products are considerably more expensive when compared to bovine leather. This, along with the fact that it is one of the strongest commercial leathers, leads ostrich leather to be seen as a luxury item.
History:
Feathers:
The commercial farming of ostriches first began in the 1850s when pioneering farmers located in Oudtshoorn, South Africa, saw great economic potential in harvesting ostrich feathers. Horse-drawn carriages made large, dramatic hats fashionable. Ostrich feathers are some of the most intricate and grandiose in the world, so it only made sense to use them in this new rage. During this period of the late 19th and early 20th century, South African ostrich farmers made a fortune. Henry Ford began to mass-produce the automobile, which made large stylish hats for women virtually obsolete. The onset of World War I shut down the ostrich feather industry. The same barons who had been making a fortune soon found themselves on the verge of poverty.
History:
Klein Karoo cooperative:
Over the next 50 years the entire industry bottomed out and maintained a minimal presence in the world. This status quo would not last, however. In 1945 the Klein Karoo region near Oudtshoorn set up a cooperative of farmers and speculators ("KKLK") who would work together to rebuild the ostrich industry. Eventually the local demand for ostrich meat grew to a point where an abattoir was needed. In 1963/64 the world's first ostrich abattoir was erected in Klein Karoo by the KKLK to supply dried and fresh ostrich meat locally. The marketing of ostrich skin started in 1969/1970 when a leather tannery was built near the abattoir. Prior to this, very little was known about the tanning process of ostrich skin. Most likely, ostrich skins were sent from the abattoir to tanneries in England and then sold to fashion houses. It appears that a group of South African entrepreneurs set out earlier in the 1960s in search of ways to tan ostrich skin. "I will give anything to see ostrich skins used", said Gerhard Olivier. With Hannes Louw, Jurgens Schoeman and the tanner Johan Wilken, he traveled abroad for the first time in search of people who could tan ostrich leather. Arnold and Dianne de Jager, founders of a tannery in London, offered to train a tanner for Klein Karoo, and in 1970, the first ostrich skin tannery opened there.
History:
Cartel:
Ostrich leather was instantly popular in high fashion throughout Europe and in the United States, where it was used for cowboy boots. Notably, during the 1980s, demand was extremely high in the United States. During this period, apartheid and other political turmoil caused some countries, the United States included, to put pressure on South Africa in the form of trade sanctions. The ostrich leather purveyors, brothers John G. Mahler and Wilfred Mahler of Dallas, Texas, were the only importers of ostrich skin for many years. Just like the single-channel KKLK, who had a virtual monopoly on the exportation of the only viable ostrich skins, the Mahlers were able to control not only prices but also who got skins in the United States and how many they were allowed. The Mahlers' control was so absolute that some bootmakers would be reprimanded by John if they sold his skins to other bootmakers. The arrangement has been compared to the DeBeers diamond cartel.
History:
In 1993 apartheid in South Africa ended. This, combined with several droughts in Klein Karoo that crippled the ostrich industry, forced the South African government to permit the exportation of ostrich stock. This allowed neighboring countries and the US to import and begin raising ostriches. It effectively ended the strong monopoly enjoyed by South Africa and the KKLK. It also ended the Mahlers' monopoly in the United States. More suppliers began to open up in the US and, with fewer trade restrictions, they were able to supply ostrich leather at lower prices. There are several ostrich ranches and tanneries in the US, but with a 150-year head start Klein Karoo Group is still considered the industry leader, and South Africa the centre of ostrich leather production.
The leather:
Like other exotic skins, ostrich is a small-area skin compared to bovine and horse hides, and is ranked by the follicles per area, since they thin out further away from the neck. It is consequently processed in a more particular way to preserve the largest possible area for processing and treatments, and is put through more than 30 stages related to this. Per unit it is comparable to goat and sheep skins, and the range of equipment used is about the same. For this reason, the smaller units, smaller batches, subsequently longer tanning times and the skills required increase the cost, elevating it to a luxury product. Chemical measurements must be precise to avoid mistakes and waste and to produce a beautiful, finished skin. Skins are tagged, production takes time and quality standards are high. Lime is added, removed and reapplied over days of processing, and expert clipping prevents skins from tangling with each other. Pigmentation is bleached out of the skin, and there is tanning and more tanning, with mould prevention along the way. A well-finished hide necessarily receives high-quality colours and finishing dyes to industry and market standards.
The leather:
Size:
The average size of a prepared ostrich skin which can be used with success in most applications is around 16 square feet (1.5 m2). The size and thickness of the skin, as well as follicle development, are influenced by the birds' maturity of development at the time of slaughter. This varies depending on production methods.
The leather:
Processing:
The production process falls into three stages: raw, crust, and finished product. At the first stage, the raw materials are pre-soaked, then cleaned, fleshed and picked over several times, trimmed, weighed and chrome-tanned using chromium sulfate and other chromium salts; and so on, involving 15 steps, the last being the "wet blue" product, where it is wrung and set out. The second, or crust, stage involves 10 steps, with side trimming and finer cutting, dyeing and drying to produce "crust" leather. This is when it is measured, after drying. The third stage of "finishing" ostrich leather begins with conditioning to soften it, staking, and other applications, making eleven steps, including grading, measuring and packing.
The leather:
Because it is expensive to carry out all three stages, countries that produce ostrich skins on a smaller scale export them at the "raw" and "crust" stages. South Africa is an important processor of finished skins for the main leather manufacturers in Japan. Other African countries engaged in ostrich skin processing are Zimbabwe, Namibia and Botswana; Botswana markets directly to South African tanneries. South African tanneries receive about 200,000 skins a year from the ostrich abattoirs of the region, and around 15,000 skins from elsewhere in the world. South African tanneries export around 90% of their finished leather to manufacturers in Europe and East Asia, where it is made into gloves, wallets, hand bags, shoes, luggage, upholstery and sports goods. The remaining 10% goes to South African manufacturers of the same range of items.
The leather:
Grading:
Ostrich skins, like crocodile skins, are graded by the centimetre, as they are sold in small measurements. Grading is required to set producer payment as well as finished leather prices.
1. Finished Skins: Tannery for buyer
2. Graded Crust: Tannery to pay farmer or buyer purchasing crust skins
3. Graded Green: Tannery to pay farmer or trader
4. Graded Green: Trader to pay farmer
Note: 1 and 2 use the same standards.
The leather:
Scars and blemishes currently form the basis for grading with further penalties for poorly developed follicles and skins deemed too small. Definitions of acceptable follicle size and style are vague and often simply a subjective opinion of the tanner or buyer. A defect can be such things as a hole, scratch, loose scab, a healed wound or bacterial damage. The World Ostrich Association has a document with full definitions of defects for each grade.
The leather:
The World Ostrich Association also has a document entitled "Factors Influencing Skin Quality".
Uses:
Traditionally, fashion has driven the demand for ostrich leather. Fashion houses have successfully used ostrich leather in handbags for many years. Most designer brands have at least one purse made with ostrich leather. Footwear is another way in which designers showcase the material. By far the most widely used application is ostrich leather boots. Just about every bootmaker uses ostrich, and the demand for ostrich boots is higher than for any other ostrich leather product. Belts are another major accessory that utilizes ostrich leather; most ostrich boots are purchased with a matching ostrich belt. There are other uses for ostrich leather, notably shoes, wallets and jackets. Handbags and jackets are the highest priced due to the sheer amount of leather used.
Uses:
Designer handbags in ostrich leather are popular, as many luxury designers such as Prada, Hermès, Smythson, Bottega Veneta and Gucci continue to make fashion statements with their wares made of exotic skins. Louis Vuitton has also popularized the use of ostrich skins, especially as part of their runway collections. In 2006, the classic Louis Vuitton Keepall 50, special-ordered in ostrich skin, was $10,800. Although such items are expensive, people seek them out because they are simply beautiful and different. It is quite easy to recognize an ostrich handbag.
Uses:
Different geographic regions have different demands for ostrich leather. For instance, Japan has an especially strong market for ladies' handbags while the southern United States has many consumers of ostrich boots.
Uses:
Extended applications:
Ostrich leather has also made a name for itself within the street and skate cultures, as it has been featured on several skate shoes, most notably the Nike Dunk Low Pro SB "Ostrich." Aside from fashion designers, the automotive industry is a heavy user of ostrich leather. Car seats, dashboards, motorcycle seats, and door panels can all be covered or accented with ostrich leather. Most after-market car and motorcycle shops can alter seats by applying ostrich leather as seat inserts. Many luxury car manufacturers offer ostrich leather seats from the factory. This practice is especially popular in European countries.
Uses:
During 2008–2010, ostrich leather started to be used for sporting footwear. Well-known Mexican soccer brands such as Pirma adopted a new range of ostrich leather soccer boots, launched during the 2010 FIFA World Cup in South Africa. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dwell time (transportation)**
Dwell time (transportation):
In transportation, dwell time or terminal dwell time refers to the time a vehicle such as a public transit bus or train spends at a scheduled stop without moving. Typically, this time is spent boarding or deboarding passengers, but it may also be spent waiting for traffic ahead to clear, trying to merge into parallel traffic, or idling in order to get back on schedule. Dwell time is one common measure of efficiency in public transport, with shorter dwell times being universally desirable.
Rail systems:
Dwell times are particularly important for rail systems: headways increase where dwell times are high, and a reduction in dwell time can often result in a reduced headway.
Rail systems:
Passengers who want to board and alight from a train need time to do so. Almost always, passengers disembark first, and then passengers waiting on the platform board. A variety of different factors determine how long this takes, including the size of the doors on the train, the number of passengers waiting to board, or the step height from the platform to the floor of the vehicle. Metro rail systems attempt to solve the problem of long dwell times by designing rolling stock with large numbers of doors. Another solution is to increase the width of doors, but that is often ineffective as there are other bottlenecks within the rail vehicle, such as stairs, or a large number of other passengers not boarding or alighting. The structure of the rail station can also have an effect on dwell times. Narrow platforms, structural elements in front of doors, or generally poor access in and out of the station can all have an effect on dwell times. Passengers need to wait within the train for others to move away so that they may alight. Older stations, especially those constructed before World War I, are often quite constrained in space, and passenger flow rates can be very poor. One solution to the problem of long dwell times, particularly at busy stations, is to design stations with platforms on both sides of the train. This is called the Spanish Solution.
Rail systems:
Dwell times for rail services to airports can be very long. Passengers are carrying luggage, and this makes boarding and alighting take much longer. Airport rail links have become popular since the 1970s and many new airports are constructed with high speed rail connections. Specialised trains with locations to store luggage can help reduce dwell times, but on metro rail systems, passengers with luggage can be crowded in with all other passengers.
Causes of increased dwell times:
The main predictor of dwell times varies widely by mode, time, and line. However, dwell times are usually affected mostly by the number of passengers needing to board and alight from a vehicle. Density imbalance along the platform and between vehicles is mainly due to human and motivational factors (minimising distance and time on arrival). In the case of bus transit in particular, one cause of major delays at stops is passengers using a wheelchair lift. Often, the driver will also be required to secure the passenger in addition to operating the ramp or lift.
Causes of increased dwell times:
Subway overcrowding in New York City has resulted in increased dwell times and travel delays, especially after 2014.
Methods of minimizing dwell times:
Make the vehicle entrance level and flush with the platform, eliminating the need for special wheelchair access apparatus/procedures.
Expedite or eliminate fare payment at the point of entry to the vehicle. Fares may be paid before the arrival of the vehicle as is done in rapid transit systems.
Board passengers through multiple doors (also called "all-door boarding").
Distribute passengers evenly along the platform and between vehicles. Agent- and motivation-based simulation can help to establish better architectural and behavioral (nudging) recommendations by testing a variety of hypothetical scenarios and effects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Manganese(III) acetate**
Manganese(III) acetate:
Manganese(III) acetate describes a family of materials with the approximate formula Mn(O2CCH3)3. These materials are brown solids that are soluble in acetic acid and water. They are used in organic synthesis as oxidizing agents.
Structure:
Although manganese(III) triacetate has not been reported, salts of basic manganese(III) acetate are well characterized. Basic manganese acetate adopts a structure reminiscent of those of basic chromium acetate and basic iron acetate. The formula is [Mn3O(O2CCH3)6Ln]X, where L is a ligand and X is an anion. The salt [Mn3O(O2CCH3)6]O2CCH3·HO2CCH3 has been confirmed by X-ray crystallography.
Preparation:
It is usually used as the dihydrate, although the anhydrous form is also used in some situations. The dihydrate is prepared by combining potassium permanganate and manganese(II) acetate in acetic acid. Addition of acetic anhydride to the reaction produces the anhydrous form. It is also synthesized by an electrochemical method starting from Mn(OAc)2.
Use in organic synthesis:
Manganese triacetate has been used as a one-electron oxidant. It can oxidize alkenes via addition of acetic acid to form lactones.
Use in organic synthesis:
This process is thought to proceed via the formation of a •CH2CO2H radical intermediate, which then reacts with the alkene, followed by additional oxidation steps and finally ring closure. When the alkene is not symmetric, the major product depends on the nature of the alkene, and is consistent with initial formation of the more stable radical (among the two carbons of the alkene) followed by ring closure onto the more stable conformation of the intermediate. When reacted with enones, the carbon on the other side of the carbonyl reacts rather than the alkene portion, leading to α'-acetoxy enones. In this process, the carbon next to the carbonyl is oxidized by the manganese, followed by transfer of acetate from the manganese to it.
Use in organic synthesis:
It can similarly oxidize β-ketoesters at the α carbon, and this intermediate can react with various other structures, including halides and alkenes (see: manganese-mediated coupling reactions). One extension of this idea is the cyclization of the ketoester portion of the molecule with an alkene elsewhere in the same structure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Line doubler**
Line doubler:
A line doubler is a device or algorithm used to deinterlace video signals prior to display on a progressive scan display.
Line doubler:
The main function of a deinterlacer is to take interlaced video, which consists of 60 fields per second for an NTSC analogue video signal or 50 fields per second for a PAL signal (two fields per frame), and create a progressive scan output. Cathode ray tube (CRT) based displays (both direct-view and projection) are capable of directly displaying both interlaced and progressive video, and therefore the line-doubling process is an optional step to enhance picture quality. Other types of displays are fixed-pixel displays, including LCD displays, plasma displays, DLP projectors, and OLED displays; these are not scanned from the top left to the bottom right corner and generally cannot accept an interlaced signal directly, so they require some kind of deinterlacing. Often, this is built in to the display and transparent to the user. Progressive scan DVD players also feature a deinterlacer.
Line doubler:
Line doubling is a literal way to deinterlace an interlaced signal, although the method used may differ. Typically the use of the term "line doubler" refers to a simple repeat of a scanline so that the lines in a field match the lines of a frame. However, this produces a "bobbing" effect and has led to this method of deinterlacing being referred to as "bob deinterlacing". An iteration on bob deinterlacing is to average adjacent scanlines of two frames which can produce a smoother, although blurrier, image. This technique is referred to as "blend deinterlacing".
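As a concrete illustration of the two approaches just described, here is a minimal sketch; the field/frame representation and function names are assumptions for illustration, not taken from any particular product:

```cpp
// Sketch of "bob" and "blend"-style line doubling for a single field.
// A field holds every other scanline of a frame; each function
// synthesizes the missing lines to produce a full progressive frame.
#include <cstdint>
#include <vector>

using Scanline = std::vector<uint8_t>;           // one row of luma samples

// Bob: repeat each field line, doubling the line count.
std::vector<Scanline> bobDouble(const std::vector<Scanline>& field) {
    std::vector<Scanline> frame;
    for (const Scanline& line : field) {
        frame.push_back(line);                   // original scanline
        frame.push_back(line);                   // repeated scanline
    }
    return frame;
}

// Blend-style variant: synthesize each missing line as the average of
// adjacent field lines, trading the "bob" artifact for vertical blur.
std::vector<Scanline> blendDouble(const std::vector<Scanline>& field) {
    std::vector<Scanline> frame;
    for (size_t y = 0; y < field.size(); ++y) {
        frame.push_back(field[y]);
        const Scanline& next = field[y + 1 < field.size() ? y + 1 : y];
        Scanline mid(field[y].size());
        for (size_t x = 0; x < mid.size(); ++x)
            mid[x] = static_cast<uint8_t>((field[y][x] + next[x]) / 2);
        frame.push_back(mid);
    }
    return frame;
}
```

A motion-adaptive doubler of the kind described next would choose between such outputs per region, based on a motion estimate.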
Line doubler:
Some line doublers are capable of using the former technique in moving areas and the latter in static areas (to avoid the "bob" effect), which improves overall sharpness.
Even if a line doubler employs the merging method, it cannot be considered an inverse telecine device if a frame rate of 60p, rather than the original 24p, is obtained. From this point of view, some hyped progressive scan technologies (including Pioneer's PureCinema Progressive Scan) bearing an inverse telecine insignia are overstated.
Line doubler:
Line doublers have been replaced recently by video scalers which incorporate 3:2 pulldown removal and the ability to scale the image to the various screen resolutions used on modern projectors and displays. However, line doublers such as the Open Source Scan Converter have been developed to convert signals from older video game consoles and have found popularity among retro gaming enthusiasts due to their minimal contribution to input lag. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Deck lock**
Deck lock:
Deck lock is one of several systems for automatically securing rotorcraft on the helicopter decks of small ships. A deck lock system was in use by the Royal Navy with its Westland Lynx aircraft, and presently with its AgustaWestland AW159 Wildcat helicopters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MindRDR**
MindRDR:
MindRDR is a Google Glass app created by This Place, a user experience agency based in London, Seattle and Tokyo. MindRDR connects a Neurosky MindWave Mobile portable EEG monitor to Google Glass and uses the EEG signal to control functionality on Google Glass.
MindRDR:
The MindRDR app can use the EEG signals to take a photo using the Google Glass camera, and then can share the picture to Twitter or Facebook. The MindWave EEG sensor measures brainwaves and the MindRDR app interprets these brain waves as an input signal for activation of hardware on Google Glass; with high levels of concentration being used as a positive user gesture, and relaxation as a negative user gesture.
User interface:
Within the app, users are presented with a live feed of the viewfinder with a graphical overlay: a vertical line with a scale at the side and options to exit the app or take a photo at the top and bottom. The vertical line moves up and down the screen depending on the signal from the EEG monitor, with higher levels of brain waves associated with concentration moving the bar towards the top, and higher levels of brain waves associated with relaxation moving the bar towards the bottom. Once concentration levels get high enough, the camera captures a picture and the image remains on screen, with the bar moving to allow the user to share to Twitter with a positive response, or discard the photo with a negative response.
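A minimal sketch of this threshold logic follows; the threshold constants and function names are invented for illustration, and the 0-100 attention scale follows NeuroSky's commonly described eSense metric rather than anything published by This Place:

```cpp
// Sketch of the concentration-threshold logic described above.
#include <cstdio>

enum class Action { None, TakePhoto, Share, Discard };

// Map one EEG attention reading (assumed 0-100) onto a UI action.
Action step(int attention, bool photoTaken) {
    const int kConcentrated = 80;  // bar near the top of the overlay
    const int kRelaxed = 20;       // bar near the bottom of the overlay
    if (!photoTaken)
        return attention >= kConcentrated ? Action::TakePhoto : Action::None;
    if (attention >= kConcentrated) return Action::Share;    // positive response
    if (attention <= kRelaxed)      return Action::Discard;  // negative response
    return Action::None;
}

int main() {
    // A concentrated reading before a photo is taken triggers the camera.
    std::printf("%d\n", step(85, false) == Action::TakePhoto);
}
```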
Possible uses:
The creators of MindRDR suggest that this tool could be used to help people with severe physical disabilities, such as locked-in syndrome and quadriplegia, interact with the digital world; however, no research has yet been published to show its efficacy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Indolelactate dehydrogenase**
Indolelactate dehydrogenase:
In enzymology, an indolelactate dehydrogenase (EC 1.1.1.110) is an enzyme that catalyzes the chemical reaction:
(indol-3-yl)lactate + NAD+ ⇌ (indol-3-yl)pyruvate + NADH + H+
Thus, the two substrates of this enzyme are (indol-3-yl)lactate and NAD+, whereas its 3 products are (indol-3-yl)pyruvate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (indol-3-yl)lactate:NAD+ oxidoreductase. This enzyme is also called indolelactate:NAD+ oxidoreductase. This enzyme participates in tryptophan metabolism. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Halfsies**
Halfsies:
Halfsies was a breakfast cereal manufactured by Quaker Oats from 1979 through 1984. It was the result of the so-called "sugar backlash" in which the amount of sugar in children's breakfast cereals became an issue. Its premise was that it contained half the sugar of regular breakfast cereals, and that it was half-corn and half-rice. The cereal nuggets were shaped as half a normal cereal ring, like the letter C.
Halfsies:
It was essentially Cap'n Crunch (another Quaker Oats product) with half the sugar and a slightly different texture. The marketing campaign involved the fictional King of The Land of Half, who presided over The Land of Half. The box featured Half Land, where houses, cars, food, etc. were all cut in half.
Halfsies was discontinued due to poor sales. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Snuba**
Snuba:
Snuba is a form of surface-supplied diving that uses an underwater breathing system developed by Snuba International. The origin of the word "Snuba" may be a portmanteau of "snorkel" and "scuba", as it bridges the gap between the two. Alternatively, some have identified the term as an acronym for "Surface Nexus Underwater Breathing Apparatus", though this may have been ascribed retroactively to fit the portmanteau. The swimmer uses swimfins, a diving mask, weights, and diving regulator as in scuba diving.
Snuba:
Instead of coming from tanks strapped to the diver's back, air is supplied from long hoses connected to compressed air cylinders contained in a specially designed flotation device at the surface.
Snuba often serves as a form of introductory diving, in the presence of a professionally trained guide, but requires no scuba certification.
Popularity:
The snuba system was devised in 1989 by California diver Michael Stafford. It was developed and patented in 1990 by Snuba International, based in Diamond Springs, California, who own the trademark and licensed it as a touring program. Snuba diving is a popular guided touring activity in tropical tourist locations such as Hawaii, Thailand, the Caribbean, and Mexico. Snuba is also popular because no certification or prior diving experience is required. Participants only need to be at least 8 years of age, have basic swimming ability, have no known medical disqualification, and be comfortable in the water. Its popularity as a first-timer's experience can be attributed to several factors: The participant tows the raft on the surface via a lightweight harness connected to an air line. This gives participants the secure knowledge that they cannot descend too deep and allows them to choose the depth at which they feel most comfortable while being able to control their depth, descent, and ascent rates. By utilizing the hose as a guide, combined with wearing soft weights to achieve neutral buoyancy, participants are able to descend anywhere from just under the surface to 6 metres (20 ft) deep.
Popularity:
The user is able to hold on to the raft at the surface by a rope that runs the length of the raft on both sides. This also allows the user to hold on to the raft while becoming comfortable breathing before beginning to descend. Being connected to the raft also provides users with a feeling of safety, comfort, and gives them the option to hold on to the raft when they return to the surface.
Popularity:
Compared to scuba, snuba divers wear minimal gear. Each diver is equipped with a mask, fins, weight belt, harness, and regulator. The harness holds the regulator and the air line in place, allowing the diver to swim unencumbered beneath the surface. Full scuba gear, which includes a buoyancy compensator, weights, and cylinder, can weigh in excess of 27 kilograms (60 lb), but this is not strictly comparable, as it would usually include a wetsuit for thermal protection.
Popularity:
Although scuba equipment is nearly weightless underwater, out of the water the weight becomes a significant factor for weak or unfit individuals. Unlike a scuba diver using a buoyancy compensator, the snuba diver is not provided with an emergency buoyancy system. This means that in an emergency, the snuba diver must reach the surface unaided. On the other hand, a correctly weighted snuba diver, with no compressible dive suit will be neutrally buoyant at all depths, has a hose and harness to prevent sinking, can pull on the hose to surface which is less effort than swimming, and has a raft with a grab-rope to hold on to at the surface. The buoyancy compensator is a complex piece of equipment requiring significant skill to use safely.
Disadvantages:
In a strong current, wave action, or breeze, the combination of underwater hose and surface raft can pull quite hard on a diver. Therefore, snuba is best used in areas where wind, waves, and current are negligible. Since all snuba use is provided through licensed snuba operators, the possibility of being subjected to strong current, high waves, or high wind is low. However, it is beneficial if an employee of the snuba operator remains on the surface to monitor conditions.
Risk and liability:
Since the depth of a snuba dive is limited by the length of the hose to about 6 metres (20 ft), decompression sickness is unlikely to be a problem. However, as the snuba diver is breathing compressed air, there is still a risk of injury or death due to barotrauma, which is a more severe hazard at shallow depths if divers ascend as little as a few feet without venting the expanding air from their lungs. This is easily avoided by breathing normally and continuously while ascending, provided that the diver is medically fit to dive. This point is thoroughly covered in snuba pre-dive briefings, and monitored by the dive guide throughout the dive by watching for the continual release of bubbles from each diver. It is not clear how such monitoring is intended to help, unless the dive guide is within immediate reach of the diver. The risk of pulmonary barotrauma is greatest during an emergency ascent, if the diver uses up all the air or loses their grip on the mouthpiece, panics, and ascends while holding their breath. This is one of the more common causes of fatalities in inexperienced scuba divers, even when trained and certified. The equipment does not provide the diver with any means of monitoring the amount of gas remaining in the cylinder.
Risk and liability:
According to the snuba website, since starting operation in 1989, more than 5 million dives have been conducted without injury or fatality. Nonetheless, there has been at least one fatality of a snuba diver, which occurred in April 2014. The cause of death was not reported, so it is unknown whether the death was related specifically to the use of snuba or to other causes. There is a snuba liability release form that releases the operators and developers of the snuba system from any liability or responsibility for damage, injury, or death due to neglect, system failure, or any other reason. It requires the diver to assert that they are not aware of any medical reason why they should not dive, or have been cleared to dive by a physician. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Darlington transistor**
Darlington transistor:
In electronics, a multi-transistor configuration called the Darlington configuration (commonly called a Darlington pair) is a circuit consisting of two bipolar transistors with the emitter of one transistor connected to the base of the other, such that the current amplified by the first transistor is amplified further by the second one. The collectors of both transistors are connected together. This configuration has a much higher current gain than each transistor taken separately. It acts like and is often packaged as a single transistor. It was invented in 1953 by Sidney Darlington.
Behavior:
A Darlington pair behaves like a single transistor, meaning it has one base, collector, and emitter. It typically creates a high current gain (approximately the product of the gains of the two transistors, because their β values multiply together). A general relation between the compound current gain and the individual gains is given by:
βDarlington = β1·β2 + β1 + β2
If β1 and β2 are high enough (hundreds), this relation can be approximated with:
βDarlington ≈ β1·β2
Advantages:
A typical Darlington transistor has a current gain of 1000 or more, so that only a small base current is needed to make the pair switch on much higher switched currents. Another advantage involves providing a very high input impedance for the circuit, which also translates into an equal decrease in output impedance.
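As a worked example with illustrative values: if β1 = β2 = 100, the exact relation gives βDarlington = 100·100 + 100 + 100 = 10,200, while the approximation β1·β2 gives 10,000, within about 2% of the exact value.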
Behavior:
The ease of creating this circuit also provides an advantage. It can be simply made with two separate NPN (or PNP) transistors, and is also available in a variety of single packages.
Behavior:
Disadvantages:
One drawback is an approximate doubling of the base–emitter voltage. Since there are two junctions between the base and emitter of the Darlington transistor, the equivalent base–emitter voltage is the sum of both base–emitter voltages:
VBE = VBE1 + VBE2 ≈ 2VBE1
For silicon-based technology, where each VBEi is about 0.65 V when the device is operating in the active or saturated region, the necessary base–emitter voltage of the pair is 1.3 V.
Behavior:
Another drawback of the Darlington pair is its increased "saturation" voltage. The output transistor is not allowed to saturate (i.e. its base–collector junction must remain reverse-biased) because the first transistor, when saturated, establishes full (100%) parallel negative feedback between the collector and the base of the second transistor. Since the collector–emitter voltage of the pair is equal to the sum of the second transistor's own base–emitter voltage and the collector–emitter voltage of the first transistor, both positive quantities in normal operation, it always exceeds the base–emitter voltage (in symbols, VCE2 = VCE1 + VBE2 > VBE2, so VC2 > VB2 always). Thus the "saturation" voltage of a Darlington transistor is one VBE (about 0.65 V in silicon) higher than the saturation voltage of a single transistor, which is typically 0.1–0.2 V in silicon. For equal collector currents, this drawback translates to an increase in the dissipated power for the Darlington transistor over a single transistor. The increased low output level can cause trouble when TTL logic circuits are driven.
Behavior:
Another problem is a reduction in switching speed or response, because the first transistor cannot actively inhibit the base current of the second one, making the device slow to switch off. To alleviate this, the second transistor often has a resistor of a few hundred ohms connected between its base and emitter terminals. This resistor provides a low-impedance discharge path for the charge accumulated on the base-emitter junction, allowing a faster transistor turn-off.
Behavior:
The Darlington pair has more phase shift at high frequencies than a single transistor and hence can more easily become unstable with negative feedback (i.e., systems that use this configuration can have poor performance due to the extra transistor delay).
Packaging:
Darlington pairs are available as integrated packages or can be made from two discrete transistors; Q1, the left-hand transistor in the diagram, can be a low power type, but normally Q2 (on the right) will need to be high power. The maximum collector current IC(max) of the pair is that of Q2. A typical integrated power device is the 2N6282, which includes a switch-off resistor and has a current gain of 2400 at IC=10 A.
Packaging:
Integrated devices can take less space than two individual transistors because they can use a shared collector. Integrated Darlington pairs come packaged singly in transistor-like packages or as an array of devices (usually eight) in an integrated circuit.
Darlington triplet:
A third transistor can be added to a Darlington pair to give even higher current gain, making a Darlington triplet. The emitter of the second transistor in the pair is connected to the base of the third, as the emitter of first transistor is connected to the base of the second, and the collectors of all three transistors are connected together. This gives current gain approximately equal to the product of the gains of the three transistors. However the increased current gain often does not justify the sensitivity and saturation current problems, so this circuit is seldom used.
Applications:
Darlington pairs are often used in the push-pull output stages of the power audio amplifiers that drive most sound systems. In a fully symmetrical push-pull circuit two Darlington pairs are connected as emitter followers driving the output from the positive and negative supply: an NPN Darlington pair connected to the positive rail providing current for positive excursions of the output, and a PNP Darlington pair connected to the negative rail providing current for negative excursions. Before good quality PNP power transistors were available, the quasi-symmetrical push-pull circuit was used, in which only the two transistors connected to the positive supply rail were an NPN Darlington pair, and the pair from the negative rail were two more NPN transistors connected as common-emitter amplifiers.
Applications:
Safety:
A Darlington pair can be sensitive enough to respond to the current passed by skin contact even at safe voltages. Thus it can form the input stage of a touch-sensitive switch.
Amplification:
Darlington transistors can be used in high-current circuits, such as the LM1084 voltage regulator. Other high-current applications include those involving computer control of motors or relays, where the current is amplified from a safe low level of the computer output line to the amount needed by the connected device. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**False sharing**
False sharing:
In computer science, false sharing is a performance-degrading usage pattern that can arise in systems with distributed, coherent caches, at the granularity of the smallest resource block managed by the caching mechanism. When a system participant attempts to periodically access data that is not being altered by another party, but that data shares a cache block with data that is being altered, the caching protocol may force the first participant to reload the whole cache block despite a lack of logical necessity. The caching system is unaware of activity within this block and forces the first participant to bear the caching system overhead required by true shared access of a resource.
Multiprocessor CPU caches:
By far the most common usage of this term is in modern multiprocessor CPU caches, where memory is cached in lines of some small power of two word size (e.g., 64 aligned, contiguous bytes). If two processors operate on independent data in the same memory address region storable in a single line, the cache coherency mechanisms in the system may force the whole line across the bus or interconnect with every data write, forcing memory stalls in addition to wasting system bandwidth. In some cases, the elimination of false sharing can result in order-of-magnitude performance improvements. False sharing is an inherent artifact of automatically synchronized cache protocols and can also exist in environments such as distributed file systems or databases, but current prevalence is limited to RAM caches.
Example:
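The benchmark's listing is not preserved in this copy. A minimal C++ sketch along the lines described below might look like the following; the 64-byte line size, iteration count, and all names are illustrative assumptions:

```cpp
// Sketch of a false-sharing microbenchmark (a reconstruction, not the
// original listing). Every thread atomically increments its own byte,
// but all bytes live on one shared cache line.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

constexpr long kIters = 1'000'000;      // increments per thread (illustrative)

struct alignas(64) SharedLine {         // 64-byte cache line assumed
    std::atomic<uint8_t> bytes[64];
};

int main() {
    SharedLine line{};
    const unsigned hw = std::thread::hardware_concurrency();
    for (unsigned n = 1; n <= hw; ++n) {
        std::vector<std::thread> workers;
        const auto start = std::chrono::steady_clock::now();
        for (unsigned t = 0; t < n; ++t)
            workers.emplace_back([&line, t] {
                for (long i = 0; i < kIters; ++i)
                    line.bytes[t].fetch_add(1, std::memory_order_relaxed);
            });
        for (auto& w : workers) w.join();
        const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                            std::chrono::steady_clock::now() - start).count();
        // Report the approximate time per increment for this thread count.
        std::printf("%u: %lld\n", n, static_cast<long long>(ns / kIters));
    }
}
```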
This code shows the effect of false sharing. It creates an increasing number of threads, from one thread to the number of physical threads in the system. Each thread atomically increments one byte of a cache line that, as a whole, is shared among all threads. The higher the level of contention between threads, the longer each increment takes. These are the results on a Zen1 system with eight cores and sixteen threads (the value after each thread count is the approximate time per increment in nanoseconds):
1: 6
2: 22
3: 33
4: 44
5: 71
6: 76
7: 102
8: 118
9: 131
10: 142
11: 159
12: 189
13: 209
14: 229
15: 248
16: 262
As the results show, on the system in question it can take up to a quarter of a microsecond to complete an increment operation on the shared cache line, which corresponds to approximately 1,000 clock cycles on this CPU.
Mitigation:
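As a concrete illustration of the padding approach discussed in the paragraph below, here is a minimal C++ sketch; the 64-byte line size and all names are illustrative assumptions (C++17's std::hardware_destructive_interference_size, in <new>, offers a portable hint for the line size):

```cpp
// Padding sketch: give each thread's counter its own cache line so that
// one thread's writes no longer invalidate another thread's cached copy.
#include <atomic>
#include <cstdint>

struct alignas(64) PaddedCounter {
    std::atomic<uint64_t> value{0};
    // alignas(64) rounds sizeof(PaddedCounter) up to 64, so adjacent
    // array elements land on distinct cache lines.
};

PaddedCounter counters[16];  // one counter per thread; no false sharing
```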
There are ways of mitigating the effects of false sharing. For instance, false sharing in CPU caches can be prevented by reordering variables or by adding padding (unused bytes) between variables. However, some of these program changes may increase the size of the objects, leading to higher memory use. Compile-time data transformations can also mitigate false sharing, although some of these transformations may not always be allowed. For instance, the draft C++23 standard mandates that data members must be laid out so that later members have higher addresses. There are tools for detecting false sharing, and there are also systems that both detect and repair false sharing in executing programs. However, these systems incur some execution overhead. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Treponematosis**
Treponematosis:
Treponematosis is a term used to individually describe any of the diseases caused by four members of the bacterial genus Treponema. The four diseases are collectively referred to as treponematoses:
Syphilis (Treponema pallidum pallidum)
Yaws (Treponema pallidum pertenue)
Bejel (Treponema pallidum endemicum)
Pinta (Treponema carateum)
Traditional laboratory tests cannot distinguish the treponematoses. However, sequence differences among the T. pallidum subspecies have been identified. Molecular approaches involving PCR to identify these sequences are being developed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Automation technician**
Automation technician:
Automation technicians repair and maintain the computer-controlled systems and robotic devices used within industrial and commercial facilities to reduce human intervention and maximize efficiency. Their duties require knowledge of electronics, mechanics and computers. Automation technicians perform routine diagnostic checks on automated systems, monitor automated systems, isolate problems and perform repairs. If a problem occurs, the technician needs to be able to troubleshoot the issue and determine if the problem is mechanical, electrical or from the computer systems controlling the process. Once the issue has been diagnosed, the technician must repair or replace any necessary components, such as a sensor or electrical wiring. In addition to troubleshooting, Automation technicians design and service control systems ranging from electromechanical devices and systems to high-speed robotics and programmable logic controllers (PLCs). These types of systems include robotic assembly devices, conveyors, batch mixers, electrical distribution systems, and building automation systems. These machines and systems are often found within industrial and manufacturing plants, such as food processing facilities. Alternate job titles include field technician, bench technician, robotics technician, PLC technician, production support technician and maintenance technician.
Education and training:
Automation technician programs integrate computer programming with mechanics, electronics and process controls. They also commonly include coursework in hydraulics, pneumatics, programmable logic controllers, electrical circuits, electrical machinery and human-machine interfaces. Typical courses include math, communications, circuits, digital devices and electrical controls. Other courses include robotics, automation, electrical motor controls, programmable logic controllers, and computer-aided design. Good math and analytic skills are essential to understand automated systems and isolate problems. In addition to programming, automation technicians are expected to become proficient with various instruments and hand tools for troubleshooting, such as electrical multimeters, signal analyzers, and frequency counters.
Education and training:
Employers generally prefer applicants who have completed an automation technician certificate or associate degree. These programs can be completed at colleges and universities in either an in-class or online format. Some colleges, such as George Brown College, offer an online automation technician program that uses simulation software, LogixSim, to complete automation lab projects and assignments.Other relevant credentials to become an automation technician include mechatronics, robotics, and PLCs. Up-to-date credentials and certifications can enhance employment opportunities and keep technicians current with the latest technological developments. In addition to colleges and universities, other organizations and companies also offer credential programs in automation, including equipment manufacturers such as Rockwell and professional associations, such as the Electronics Technicians Association, Robotics Industries Association and the Manufacturing Skill Standards Council.
Career prospects:
Career opportunities for automation technicians include a wide range of manufacturing and service industries such as automotive, pharmaceutical, power distribution, food processing, mining, and transportation. Other career prospects include areas as machine assembly, troubleshooting and testing, systems integration, application support, maintenance, component testing and assembly, automation programming, robot maintenance and programming, technical sales and services.
Career prospects:
Typical job-related activities may involve:
assembly
installation
maintenance
testing
troubleshooting
repair
upgrading of associated automation equipment and systems
Experienced automation technicians with advanced training may become specialists or troubleshooters who help other technicians diagnose difficult problems, or work with engineers in designing equipment and developing maintenance procedures. Automation technicians with leadership ability also may eventually become maintenance supervisors or service managers. Due to the highly specialized skills and knowledge required, there are many opportunities available to automation technicians in the service sector where there is a great demand for contract and sub-contract work with smaller manufacturing and distribution companies. Some experienced automation technicians open their own design, installation and maintenance companies. They can also become wholesalers or retailers of automation equipment, including inside and outside sales of automation equipment and systems. Because of their familiarity with control systems and equipment, automation technicians are particularly well qualified to become manufacturers' sales representatives. Other related opportunities include customer service, quality-control, quality-assurance and consulting. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sather**
Sather:
Sather is an object-oriented programming language. It originated circa 1990 at the International Computer Science Institute (ICSI) at the University of California, Berkeley, developed by an international team led by Steve Omohundro. It supports garbage collection and generics by subtypes.
Originally, it was based on Eiffel, but it has diverged, and now includes several functional programming features.
The name is inspired by Eiffel; the Sather Tower is a recognizable landmark at Berkeley, named after Jane Krom Sather, the widow of Peder Sather, who donated large sums to the foundation of the university.
Sather also takes inspiration from other programming languages and paradigms: iterators, design by contract, abstract classes, multiple inheritance, anonymous functions, operator overloading, contravariant type system.
Sather:
The original Berkeley implementation (last stable version 1.1 was released in 1995, no longer maintained) has been adopted by the Free Software Foundation therefore becoming GNU Sather. Last stable GNU version (1.2.3) was released in July 2007 and the software is currently not maintained. There were several other variants: Sather-K from the University of Karlsruhe; Sather-W from the University of Waikato (implementation of Sather version 1.3); Peter Naulls' port of ICSI Sather 1.1 to RISC OS; and pSather, a parallel version of ICSI Sather addressing non-uniform memory access multiprocessor architectures but presenting a shared memory model to the programmer.
Sather:
The former ICSI Sather compiler (now GNU Sather) is implemented as a compiler to C, i.e., the compiler does not output object or machine code, but takes Sather source code and generates C source code as an intermediate language. Optimizing is left to the C compiler.
The GNU Sather compiler, written in Sather itself, is dual licensed under the GNU GPL & LGPL.
Hello World:
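The listing itself is missing from this copy; the canonical Sather hello-world that the following remarks describe is commonly given as (a reconstruction, not a verified excerpt):

```
class MAIN is
   main is
      #OUT + "Hello World\n";
   end;
end;
```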
A few remarks: Class names are ALL CAPS; this is not only a convention but it's enforced by the compiler.
The method called main is the entry point for execution. It may belong to any class, but if this is different from MAIN, it must be specified as a compiler option.
# is the constructor symbol: It calls the create method of the class whose name follows the operator. In this example, it's used for instantiating the OUT class, which is the class for the standard output.
The + operator has been overloaded by the class to append the string passed as argument to the stream.
Operators such as + are syntactic sugar for conventionally named method calls: a + b stands for a.plus(b). The usual arithmetic precedence conventions are used to resolve the calling order of methods in complex formulae.
Example of iterators:
This program prints numbers from 1 to 10.
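The program itself is not preserved in this copy; a minimal reconstruction consistent with the description, using the predefined upto! iterator, might be:

```
class MAIN is
   main is
      loop
         -- upto! yields 1, 2, ..., 10; the loop ends when it quits.
         #OUT + 1.upto!(10) + "\n";
      end;
   end;
end;
```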
Example of iterators:
The loop ... end construct is the preferred means of defining loops, although while and repeat-until are also available. Within the construct, one or more iterators may be used. Iterator names always end with an exclamation mark. (This convention is enforced by the compiler.) upto! is a method of the INT class accepting one once argument, meaning its value won't change as the iterator yields. upto! could be implemented in the INT class with code similar to the following one.
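The implementation referred to did not survive in this copy; a sketch (a reconstruction, not the actual library source) using Sather's yield and quit iterator statements might look like:

```
upto!(once m: SAME): SAME is
   i: SAME := self;
   loop
      if i > m then quit end;  -- terminate the enclosing loop
      yield i;                 -- return i and suspend until the next step
      i := i + 1;
   end;
end;
```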
Example of iterators:
Type information for variables is denoted by the postfix syntax variable:CLASS. The type can often be inferred and thus the typing information is optional, as in anInteger::=1. SAME is a pseudo-class referring to the current class. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PF-592,379**
PF-592,379:
PF-592,379 is a drug developed by Pfizer which acts as a potent, selective and orally active agonist for the dopamine D3 receptor, and which was under development as a potential medication for the treatment of female sexual dysfunction and male erectile dysfunction. Unlike some other less selective D3 agonists, a research study showed that PF-592,379 has little abuse potential in animal studies, and so it was selected for further development and potentially human clinical trials. Development has since been discontinued. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Endocannabinoid reuptake inhibitor**
Endocannabinoid reuptake inhibitor:
Endocannabinoid reuptake inhibitors (eCBRIs), also called cannabinoid reuptake inhibitors (CBRIs), are drugs which limit the reabsorption of endocannabinoid neurotransmitters by the releasing neuron.
Pharmacology:
How endocannabinoids are transported through the cell membrane and cytoplasm to their respective degradation enzymes has been rigorously debated for nearly two decades, and a putative endocannabinoid membrane transporter was proposed. However, as lipophilic molecules, endocannabinoids readily pass through the lipid bilayer without assistance; they are more likely to need a chaperone through the cytoplasm to the endoplasmic reticulum, where the enzyme FAAH is located. More recently, fatty acid-binding proteins (FABPs) and heat shock proteins (Hsp70s) have been described and verified as such chaperones, and inhibitors of these proteins have been synthesized. Inhibiting endocannabinoid reuptake raises the amount of these neurotransmitters available in the synaptic cleft and therefore increases neurotransmission. The resulting stimulation of endocannabinoid-system functions in humans includes suppression of pain perception (analgesia), increased appetite, mood elevation, and inhibition of short-term memory.
Examples of eCBRIs:
AM404 (an active metabolite of paracetamol), AM1172, LY-2183240, O-2093, OMDM-2, UCM-707, VDM-11, guineensine, WOBE437, and RX-055. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Trench drain**
Trench drain:
A trench drain (also channel drain, line drain, slot drain, linear drain or strip drain) is a specific type of floor drain containing a dominant trough- or channel-shaped body. It is used for the rapid evacuation of surface water or for the containment of utility lines or chemical spills. Employing a solid cover or grating that is flush with the adjoining surface, this drain is commonly made of in-situ concrete and may utilize polymer- or metal-based liners or a channel former to aid in channel crafting and slope formation. The drain is characterized by its long length and narrow width; its cross-section is a function of the maximum flow volume anticipated from the surrounding surface. Channels can range from 1 inch (25 mm) to 2 feet (60 cm) in width, with depths that can reach 4 feet (120 cm).
Trench drain:
Trench drains are commonly confused with French drains, which consist of a perforated pipe buried in a gravel bed and are used to evacuate ground water. A slot drain, also often wrongly conflated with the trench drain, consists of a drainage pipe with a thin neck (or slot) that opens at the ground surface with an opening sufficient to drain storm water.
Types:
There are four common types of trench drain, distinguished by forming or installation method: cast-in-place, pre-cast concrete, liner systems, and former systems. Newer stainless steel drains, more commonly called "channel drains", are available for residential and commercial shower installations.
Types:
Cast in place: This is the original standard for trench drain systems. Here, a concrete trench is formed in the ground using wooden forms, reinforcing bar and manual labor. It involves several steps: building a wooden mold that will ultimately form the channel for the trench drain; attaching a set of metal frames to the top edge of the form (which will hold the trench grating); suspending the form inside the trough, flush with the surface elevation, so that there is space (6 inches or more) below and to each side of the form; attaching the drainage pipes to the suspended form; filling the trench with concrete (surrounding the form base and sides) and finishing the concrete flush with the metal frame; and, after drying, removing the wooden form, cleaning the pipe inverts and placing the grates in the frame. This installation method is by far the most labor-intensive. In addition, material cost is a function of the width of the grate used in the trench.
Types:
There are three main reasons why this is labor-intensive: positioning the outlet pipe in the floor, levelling the trench grate in the floor, and integrating the waterproofing. If the outlet is not exactly in the centre, the channel must be customized, increasing cost and construction time.
Types:
This can be overcome by the use of modular trench drain systems. These allow the drain pipe to be connected anywhere along the trench, giving the builder and plumber more freedom in placing the services and reducing construction time and costs, particularly in high-rise situations where moving plumbing services can be nearly impossible. A curbless or hobless entrance for special-needs access is also more readily achieved with a modular system: levelling the shower floor to the bathroom floor is inherently problematic with non-modular systems, which allow no on-site adjustment.
Types:
Waterproofing is possibly the most critical aspect of the bathroom when integrating a channel/trench drain. There are two ways to approach the waterproofing issue: the first is a proprietary method in which the waterproofing and trench drain come as a kit; the second is to keep the waterproofing and trench drain separate.
Types:
The first method has the benefit of an all-in-one solution, but its downsides are cost and the system's limited range of applications, generally only bathrooms. The second method is generally safest, as the waterproofing is completed using established, proven methods that most building contractors are familiar with. A further benefit is that when the waterproofing is a separate item, any method of waterproofing (cementitious, bituminized, fibreglass, paint-on, sweat-on, lead-lined, copper pan, etc.) can be utilized. This allows a modular system to be used in many applications such as pools, balconies, thresholds, pedestrian areas and public areas; essentially any wet area where a waterproofing system is required, whether on ground or above (suspended slab).
Types:
Stainless steel trench drains can also be installed in a home's shower and in commercial locations like hospital rooms, changing rooms and operating rooms.
Types:
Former systems: This installation method is the logical successor to the cast-in-place method. The former system gives a cast-in-place product without the hassle of making the form. Rather than wood, the forms are made of lightweight expanded polystyrene (EPS) or cardboard. The forms attach to a prefabricated frame-and-grate system that can then be easily set in the trough and aligned for the pouring of concrete. Like the cast-in-place method, the form is removed after the concrete has dried. The real savings with the former method are in the time required for making and setting the form; this efficiency speeds up installation and thus reduces labor costs.
Types:
One downside to the former system is the waste generated by the disposal of the EPS and cardboard.
Types:
Pre-cast concrete: Pre-cast trench drains are made in a factory that specializes in making concrete shapes. The channel pieces range in width and length; larger channels require heavy equipment to move, though most channels can be picked up and moved easily by hand. The channels are formed in large metal forms that usually have a pre-determined channel width, depth and slope. As in the cast-in-place method, a metal frame is attached to the form, and the concrete is poured and finished in a factory setting. The advantage of the pre-cast trench drain is again time savings, especially at the job site. Pre-cast trench drains made of a polymer concrete are also sturdier and more reliable than cast-in-place trenches. Once a trough is dug, the pre-cast trench sections can be installed and quickly put into service by several methods. In the patty method, clumps of concrete are placed at each trench drain channel joint, and the channels are levelled and set on them. Other installation methods involve clipping rebar through an installation device used with the channel. Concrete must then be poured around the trench drain so that the load can be transferred from the channel and grate to the surrounding areas.
Types:
Pre-cast trench drains generally come in 4-inch widths but range from 1-3/4-inch slots to 2-inch-wide channels with grates, up to much larger sizes through custom trench drain divisions. A homeowner could consider a pre-cast trench for a landscaping project, as many pre-cast trench drain systems are manufactured specifically for the residential market. A person can generally go to a pre-caster or a distributor and buy 50 feet of trench drain out of the yard; the material for pre-cast trench drains can cost more than for cast-in-place systems, but the savings in installation, maintenance and longevity heavily outweigh those costs.
Types:
Liner systems: The popular trend in trench drains is the liner system (linear, as in line drain). Made from materials such as polymer concrete, fiberglass, structural plastic and steel, liner systems are the channel and grate components that are assembled in the trench and around which concrete is poured to form a drain system. By themselves, these liner systems do not have the strength and integrity to hold up under the physical requirements of the drain; a concrete (or asphalt) drain body is required to encase the channel, giving it the compressive strength and rigidity to withstand the traffic load it was designed to handle.
Construction:
Airports: The use of trench drains in construction began when the British Airports Authority commissioned a company called Gatic. Airports needed a form of trench drainage with fewer movable parts and less tendency to collapse under heavy traffic than traditional drainage gratings. Gatic designed and engineered the first stainless steel slot drain, which was installed first at Stansted Airport in the United Kingdom and subsequently specified at Britain's most famous airport, Heathrow. These airports continue to use stainless steel slot drainage both airside and landside for surface water drainage requirements.
Construction:
Industrial, development and urban landscapes: Following the use of slot drainage at airports, manufacturers began engineering slot drainage for other types of construction project, including ports, docks, industrial areas, motorways, roads, car parks and urban developments. Variations of the original slot drain product in differing sizes and load ratings now exist, manufactured by Gatic and a handful of other manufacturers, most notably Halfords and Aco technologies.
Construction:
Standards: The American Society of Mechanical Engineers (ASME) publishes the following standard: ASME A112.6.3, Floor and Trench Drains. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |