**Ether**
Ether:
In organic chemistry, ethers are a class of compounds that contain an ether group: an oxygen atom connected to two alkyl or aryl groups. They have the general formula R−O−R′, where R and R′ represent the alkyl or aryl groups. Ethers can be further classified into two varieties: if the alkyl or aryl groups are the same on both sides of the oxygen atom, the ether is simple or symmetrical, whereas if they differ, the ether is mixed or unsymmetrical. A typical example of the first group is the solvent and anaesthetic diethyl ether, commonly referred to simply as "ether" (CH3−CH2−O−CH2−CH3). Ethers are common in organic chemistry and even more prevalent in biochemistry, as they are common linkages in carbohydrates and lignin.
Structure and bonding:
Ethers feature bent C–O–C linkages. In dimethyl ether, the bond angle is 111° and C–O distances are 141 pm. The barrier to rotation about the C–O bonds is low. The bonding of oxygen in ethers, alcohols, and water is similar. In the language of valence bond theory, the hybridization at oxygen is sp3.
Oxygen is more electronegative than carbon; thus, the alpha hydrogens of ethers are more acidic than those of simple hydrocarbons. They are far less acidic than alpha hydrogens of carbonyl groups (such as in ketones or aldehydes), however.
Ethers can be symmetrical of the type ROR or unsymmetrical of the type ROR'. Examples of the former are dimethyl ether, diethyl ether, and dipropyl ether. Illustrative unsymmetrical ethers are anisole (methoxybenzene) and dimethoxyethane.
Vinyl and acetylenic ethers:
Vinyl and acetylenic ethers are far less common than alkyl or aryl ethers. Vinyl ethers, often called enol ethers, are important intermediates in organic synthesis. Acetylenic ethers are especially rare; di-tert-butoxyacetylene is the most common example of this rare class of compounds.
Nomenclature:
In the IUPAC nomenclature system, ethers are named using the general formula "alkoxyalkane"; for example, CH3–CH2–O–CH3 is methoxyethane. If the ether is part of a more complex molecule, it is described as an alkoxy substituent, so –OCH3 would be considered a "methoxy-" group. The simpler alkyl group is written in front, so CH3–O–CH2CH3 is named by combining methoxy (CH3O–) with ethane (–CH2CH3) as methoxyethane.
Trivial names:
IUPAC rules are often not followed for simple ethers. The trivial names for simple ethers (i.e., those with no or few other functional groups) are a composite of the two substituents followed by "ether": for example, ethyl methyl ether (CH3OC2H5) and diphenyl ether (C6H5OC6H5). As with other organic compounds, very common ethers acquired names before rules for nomenclature were formalized. Diethyl ether is simply called ether, but was once called sweet oil of vitriol. Methyl phenyl ether is anisole, because it was originally found in aniseed. The aromatic ethers include furans. Acetals (α-alkoxy ethers, R–CH(–OR)–O–R) are another class of ethers with characteristic properties.
Polyethers:
Polyethers are generally polymers containing ether linkages in their main chain. The term polyol generally refers to polyether polyols with one or more functional end-groups such as a hydroxyl group. The term "oxide" or other terms are used for high-molar-mass polymers when end-groups no longer affect polymer properties.
Crown ethers are cyclic polyethers. Some toxins produced by dinoflagellates such as brevetoxin and ciguatoxin are extremely large and are known as cyclic or ladder polyethers.
The phenyl ether polymers are a class of aromatic polyethers containing aromatic cycles in their main chain: polyphenyl ether (PPE) and poly(p-phenylene oxide) (PPO).
Related compounds:
Many classes of compounds with C–O–C linkages are not considered ethers: esters (R–C(=O)–O–R′), hemiacetals (R–CH(–OH)–O–R′), and carboxylic acid anhydrides (RC(=O)–O–C(=O)R′).
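This distinction between true ethers and look-alike C–O–C compounds is easy to make mechanical. The sketch below is not from the original text; it assumes the open-source RDKit toolkit and uses illustrative molecules. The SMARTS pattern matches an oxygen bonded to two carbons while excluding carbonyl carbons, so esters and anhydrides are not flagged:

```python
# Minimal sketch: flag ether C-O-C linkages, excluding esters/anhydrides.
# Assumes RDKit is installed (pip install rdkit); examples are illustrative.
from rdkit import Chem

# Oxygen with exactly two neighbours, both carbon, neither a carbonyl carbon.
ETHER_PATTERN = Chem.MolFromSmarts("[OD2]([#6;!$([CX3]=O)])[#6;!$([CX3]=O)]")

examples = {
    "diethyl ether":         "CCOCC",
    "anisole":               "COc1ccccc1",
    "tetrahydrofuran":       "C1CCOC1",
    "ethyl acetate (ester)": "CCOC(C)=O",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name:24s} ether linkage: {mol.HasSubstructMatch(ETHER_PATTERN)}")
# Expected: True for the first three, False for the ester.
```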
Physical properties:
Ethers have boiling points similar to those of the analogous alkanes; diethyl ether (bp 34.6 °C), for example, boils close to pentane (bp 36 °C). Simple ethers are generally colorless.
Reactions:
The C–O bonds that comprise simple ethers are strong. They are unreactive toward all but the strongest bases. Although generally of low chemical reactivity, they are more reactive than alkanes. Specialized ethers such as epoxides, ketals, and acetals are unrepresentative classes of ethers and are discussed in separate articles. Important reactions are listed below.
Cleavage:
Although ethers resist hydrolysis, they are cleaved by hydrobromic acid and hydroiodic acid. Hydrogen chloride cleaves ethers only slowly. Methyl ethers typically afford methyl halides:
ROCH3 + HBr → CH3Br + ROH
These reactions proceed via onium intermediates, i.e. [RO(H)CH3]+Br−.
Some ethers undergo rapid cleavage with boron tribromide (even aluminium chloride is used in some cases) to give the alkyl bromide. Depending on the substituents, some ethers can be cleaved with a variety of reagents, e.g. strong base.
Peroxide formation:
When stored in the presence of air or oxygen, ethers tend to form explosive peroxides, such as diethyl ether hydroperoxide. The reaction is accelerated by light, metal catalysts, and aldehydes. In addition to avoiding storage conditions likely to form peroxides, it is recommended, when an ether is used as a solvent, not to distill it to dryness, as any peroxides that may have formed, being less volatile than the original ether, will become concentrated in the last few drops of liquid. The presence of peroxides in old samples of ethers may be detected by shaking them with a freshly prepared solution of ferrous sulfate followed by addition of KSCN; the appearance of a blood-red color indicates the presence of peroxides. The dangerous properties of ether peroxides are the reason that diethyl ether and other peroxide-forming ethers like tetrahydrofuran (THF) or ethylene glycol dimethyl ether (1,2-dimethoxyethane) are avoided in industrial processes.
Lewis bases:
Ethers serve as Lewis bases. For instance, diethyl ether forms a complex with boron trifluoride, i.e. diethyl etherate (BF3·OEt2). Ethers also coordinate to the Mg center in Grignard reagents. Tetrahydrofuran is more basic than acyclic ethers. It forms complexes with many metal halides.
Alpha-halogenation:
This reactivity is similar to the tendency of ethers with alpha hydrogen atoms to form peroxides. Reaction with chlorine produces alpha-chloroethers.
Synthesis:
Ethers can be prepared by numerous routes. In general, alkyl ethers form more readily than aryl ethers, with the latter species often requiring metal catalysts. The synthesis of diethyl ether by a reaction between ethanol and sulfuric acid has been known since the 13th century.
Dehydration of alcohols:
The dehydration of alcohols affords ethers at high temperature:
2 R–OH → R–O–R + H2O
This direct nucleophilic substitution reaction requires elevated temperatures (about 125 °C). The reaction is catalyzed by acids, usually sulfuric acid. The method is effective for generating symmetrical ethers, but not unsymmetrical ethers, since either OH can be protonated, which would give a mixture of products. Diethyl ether is produced from ethanol by this method. Cyclic ethers are readily generated by this approach. Elimination reactions compete with dehydration of the alcohol:
R–CH2–CH2(OH) → R–CH=CH2 + H2O
The dehydration route often requires conditions incompatible with delicate molecules. Several milder methods exist to produce ethers.
Williamson ether synthesis:
Nucleophilic displacement of alkyl halides by alkoxides affords ethers:
R–ONa + R′–X → R–O–R′ + NaX
This reaction is called the Williamson ether synthesis. It involves treatment of a parent alcohol with a strong base to form the alkoxide, followed by addition of an appropriate aliphatic compound bearing a suitable leaving group (R–X). Suitable leaving groups (X) include iodide, bromide, or sulfonates. This method usually does not work well for aryl halides (e.g. bromobenzene; see Ullmann condensation below). Likewise, this method gives the best yields only for primary halides. Secondary and tertiary halides are prone to undergo E2 elimination on exposure to the basic alkoxide anion used in the reaction, owing to steric hindrance from the large alkyl groups.
In a related reaction, alkyl halides undergo nucleophilic displacement by phenoxides. The aryl group cannot be introduced via the R–X component, but phenols can be used in place of the alcohol while the alkyl halide is retained. Since phenols are acidic, they readily react with a strong base like sodium hydroxide to form phenoxide ions. The phenoxide ion then substitutes the –X group in the alkyl halide, forming an ether with an aryl group attached to it in a reaction with an SN2 mechanism:
C6H5OH + OH− → C6H5–O− + H2O
C6H5–O− + R–X → C6H5OR
Ullmann condensation:
The Ullmann condensation is similar to the Williamson method except that the substrate is an aryl halide. Such reactions generally require a catalyst, such as copper.
Electrophilic addition of alcohols to alkenes:
Alcohols add to electrophilically activated alkenes:
R2C=CR2 + R–OH → R2CH–C(–O–R)–R2
Acid catalysis is required for this reaction. Often, mercury trifluoroacetate (Hg(OCOCF3)2) is used as a catalyst for the reaction, generating an ether with Markovnikov regiochemistry. Using similar reactions, tetrahydropyranyl ethers are used as protective groups for alcohols.
Preparation of epoxides:
Epoxides are typically prepared by oxidation of alkenes. The most important epoxide in terms of industrial scale is ethylene oxide, which is produced by oxidation of ethylene with oxygen. Other epoxides are produced by one of two routes: by the oxidation of alkenes with a peroxyacid such as m-CPBA, or by the base-mediated intramolecular nucleophilic substitution of a halohydrin.
**Noren**
Noren:
Noren (暖簾) are traditional Japanese fabric dividers hung between rooms, on walls, in doorways, or in windows. They usually have one or more vertical slits cut from the bottom to nearly the top of the fabric, allowing for easier passage or viewing. Noren are rectangular and come in many different materials, sizes, colours, and patterns.
Homes:
Noren were originally used to protect a house from wind, dust, and rain, as well as to keep a house warm on cold days and to provide shade on hot summer days. They can also be used for decorative purposes or for dividing a room into two separate spaces.
Businesses:
Exterior noren are traditionally used by shops and restaurants as a means of protection from sun, wind, and dust, and for displaying a shop's name or logo. Names are often Japanese characters, especially kanji, but may be mon emblems, Japanese rebus monograms, or abstract designs. Noren designs are generally traditional to complement their association with traditional establishments, but modern designs also exist. Interior noren are often used to separate dining areas from kitchens or other preparation areas, which also prevents smoke or smells from escaping.
Because a noren often features the shop name or logo, the word in Japanese may also refer to a company's brand value. Most notably, in Japanese accounting, the word noren is used to describe the goodwill of a company after an acquisition. Sentō (commercial bathhouses) also place noren across their entrances with the kanji yu (湯, lit. "hot water") or the corresponding hiragana ゆ, typically blue in color for men and red for women. Noren are also hung in the front entrance to a shop to signify that the establishment is open for business, and they are always taken down at the end of the business day.
**Trivers–Willard hypothesis**
Trivers–Willard hypothesis:
In evolutionary biology and evolutionary psychology, the Trivers–Willard hypothesis, formally proposed by Robert Trivers and Dan Willard in 1973, suggests that female mammals adjust the sex ratio of offspring in response to maternal condition, so as to maximize their reproductive success (fitness). For example, it may predict greater parental investment in males by parents in "good conditions" and greater investment in females by parents in "poor conditions" (relative to parents in good conditions). The reasoning for this prediction is as follows: assume that parents have information on the sex of their offspring and can influence their survival differentially. While selection pressures exist to maintain a 1:1 sex ratio, evolution will favor local deviations from it if one sex is likely to yield a greater reproductive payoff than usual.
Trivers and Willard also identified a circumstance in which reproducing individuals might experience deviations from expected offspring reproductive value—namely, varying maternal condition. In polygynous species, males may mate with multiple females, and low-condition males will achieve fewer or no matings. Parents in relatively good condition would then be under selection for mutations causing production and investment in sons (rather than daughters), because of the increased chance of mating experienced by these good-condition sons. Mating with multiple females conveys a large reproductive benefit, whereas daughters could translate their condition into only smaller benefits. An opposite prediction holds for poor-condition parents—selection will favor production and investment in daughters, so long as daughters are likely to be mated, while sons in poor condition are likely to be out-competed by other males and end up with zero mates (i.e., those sons will be a reproductive dead end).
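The core of this argument can be stated as a one-line comparison. The notation below is a sketch of our own, not taken from the 1973 paper: let w_s(c) and w_d(c) denote the expected reproductive success of a son or a daughter born to a mother in condition c.

```latex
% Illustrative formalization (notation ours, not from Trivers & Willard 1973).
% Under polygyny, sons are assumed to gain more from maternal condition:
%   w_s'(c) > w_d'(c) > 0.
\text{Bias investment toward sons} \iff w_s(c) > w_d(c).
% If the two curves cross at a single condition c*, the predicted policy is
%   sons for c > c*, daughters for c < c*.
% In sex-role-reversed (polyandrous) species the steepness assumption flips,
% reversing both predictions, as described below.
```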
The hypothesis was used to explain why, for example, red deer mothers would produce more sons when they are in good condition, and more daughters when in poor condition. In polyandrous species, where some females mate with multiple males (and others get no matings) and males mate with one or a few females (i.e., "sex-role reversed" species), these predictions from the Trivers–Willard hypothesis are reversed: parents in good condition will invest in daughters in order to have a daughter that can out-compete other females to attract multiple males, whereas parents in poor condition will avoid investing in daughters who are likely to get out-competed, and will instead invest in sons in order to gain at least some grandchildren.
"Condition" can be assessed in multiple ways, including body size, parasite loads, or dominance, which has also been shown in macaques (Macaca sylvanus) to affect the sex of offspring, with dominant females giving birth to more sons and non-dominant females giving birth to more daughters. Consequently, high-ranking females give birth to a higher proportion of males than those who are low-ranking.
In their original paper, Trivers and Willard were unaware of a biochemical mechanism which could result in biased sex ratios. One possible explanation is that a high level of circulating glucose in the mother's bloodstream favors the survival of male blastocysts. This conclusion is based on the observed male-skewed survival rates (to expanded blastocyst stages) when bovine blastocysts were exposed to heightened levels of glucose. As blood glucose levels are highly correlated with access to high-quality food, they may serve as a proxy for maternal condition.
Humans:
The Trivers–Willard hypothesis has been applied to resource differences among individuals in a society as well as to resource differences among societies. Investigations in humans pose a number of practical and methodological difficulties, but whilst a 2007 review of previous research found that empirical evidence for the hypothesis was mixed, the author noted that it received greater support from better-designed studies. One such example cited was a 1997 analysis of Hungarian Romani – a low-status group with a preference for females, who "had a female-biased sex ratio at birth, were more likely to abort a child after having had one or more daughters, nursed their daughters longer, and sent their daughters to school for longer".
**Theatre for development**
Theatre for development:
Theatre for development (TfD) is a type of community-based or interactive theatre practice that aims to promote civic dialogue and engagement. Theatre for development can be a kind of participatory theatre that encourages improvisation and allows audience members to take roles in the performance, or it can be fully scripted and staged, with the audience simply observing. Many productions are a blend of the two. The Theatre of the Oppressed, an influential collection of theatrical forms developed by Augusto Boal in the 1970s, aims to create dialogue and interaction between audience and performer as a means of promoting social and political change.
Hundreds, if not thousands, of organizations and initiatives have used theatre as a development tool: for education or propaganda, as therapy, as a participatory tool, or as an exploratory tool in development.
Definitions and aims:
Theatre for development can be seen as a progression from less interactive theatre forms to a more dialogical process, where theatre is practiced with the people or by the people as a way of empowering communities, listening to their concerns, and then encouraging them to voice and solve their own problems. For Kabaso Sydney (2013), as reflected in Theatre for Development in Zambia, the term describes "modes of theatre whose objective is to disseminate messages, or to conscientize communities about their objective social political situation" (1993:48). Penina Mlama, referring to the enterprise as "popular theatre", summarizes its aims as follows: "…it aims to make the people not only aware of but also active participants in the development process by expressing their viewpoints and acting to better their conditions. Popular theatre is intended to empower the common man with a critical consciousness crucial to the struggle against the forces responsible for his poverty." (1991:67) Theatre for development may encompass any of the following live performance types: spoken-word drama or comedy; music, singing and/or dance; movement without sound (mime); participatory or improvisational techniques; and/or the Living Newspaper.
Subject matter:
Theatre for development typically endeavors to build awareness about critical topics within a political or developmental context, often using an agitprop style. Especially in oppressive regimes, it may not be safe or possible to perform overtly political plays. Apart from political issues, common topics are non-formal education, hygiene, disposal of sewage, environment, women's rights, child abuse, prostitution, street children, health education, HIV/AIDS, literacy, etc.
Techniques:
Developmental theatre can utilize one or more of the following techniques or forms.
Forum theatre:
Forum theatre, one of the interactive theatrical forms developed by Augusto Boal as part of his Theatre of the Oppressed, begins with the performance of a short scene. Often it is a scene in which a character is being oppressed in some way (for example, a typically chauvinist man mistreating a woman or a factory owner mistreating an employee). Audience members are then encouraged not only to imagine changing the situation of oppression but to actually practice that change, by coming on stage as "spect-actors" to replace the protagonist and act out an intervention to "break the oppression". Through this process, the participant is also able to realize and experience the challenges of achieving the improvements they suggested. The actors who welcome the spectator volunteering onto the stage play against the spectator's attempts to intervene and change the story, offering a strong resistance so that the difficulties in making any change are also acknowledged. By becoming part of the scene, participants dive into the situation performed, which makes the whole topic feel more real for the person who came in to change the situation. The technique provides an alternative process of problem solving, where creativity is asked for and different approaches are tried. Forum theatre functions as "a rehearsal for reality", as Augusto Boal called it.
Street theatre:
Theatrical forms such as invisible theatre or image theatre can be performed in public spaces to be witnessed by passersby. Invisible theatre is intended to be indistinguishable from real-life, unstaged situations, so as to provoke thought or raise awareness among observing members of the public. Invisible theatre in the streets has the advantage of potentially reaching audiences who would never attend a workshop or watch a play.
Collaboration with community members:
It is very important for the actors and organisers of a performance or TfD project to get to know the community and the problems its people face. The play that is going to be performed and worked with therefore has to be developed with local people who know the cultural behaviours and social problems of the community. Moreover, it is very helpful to have local authority figures and opinion leaders, whom the community listens to and trusts, on the team of a TfD project. In this way it is even possible to take advantage of locals' knowledge of the best dates for performances, or even to advertise the ongoing TfD performance.
Documentary theatre:
Documentary theatre uses accounts from documentary material such as articles, interviews, and public transcripts in order to create a performance that reflects upon specific events or movements in history. This type of theatre can be used to educate the audience on the subject matter, and it is often used to prompt conversation about potentially sensitive topics or topics that do not receive extensive coverage in the media or popular culture.
Examples of documentary theatre:
The Laramie Project: Playwrights conducted interviews with the people of the town of Laramie and drew on news clippings and journal entries of the townspeople in order to incorporate them into the script.
Gloria a Life: The producers of the play recreated archival interviews and TV appearances of Gloria Steinem and brought them to life on stage through the actor playing Steinem.
One-Third of a Nation: This play incorporates accounts from the crisis surrounding The New York Housing Department using the technique of the Living Newspaper.
Talkbacks:
In a talkback, members of the cast, crew, and/or the creative team of a production remain after a performance to host a conversation. Talkbacks are most often utilized in conjunction with any of the other techniques listed above. They exist not only so the audience can ask questions and engage with the art and artists, but also so that the audience can share their perspective on the work that was performed.
Theatre for young audiences:
Efforts are also made to create impactful social and developmental theatre for younger audiences. Plays, musicals, and other performances can be created to specifically show to younger age demographics in order to teach them about topics they do not usually learn about or that may not be prevalent in their lives. The Yellow Boat, produced by Childsplay, comes with a learning guide for educators to open up conversations about HIV/AIDS and hemophilia.
Talkbacks are also utilized in order to get younger audiences to engage and come up with their own questions about any performance or the themes and issues that were mentioned in the show. They allow the audience to analyze the topics themselves and explore the real world topics that are being taught. This invites the younger demographic to become part of the solution and development in their community.
Sources:
"Theatre and Development", a list of various TfD initiatives, compiled by KIT (Royal Tropical Institute), Amsterdam Sloman, Annie (2011): "Using Participatory Theatre in International Community Development", Community Development Journal.
Amnesty International AI (2005): "Ben ni walen: Mobilising for human rights using participatory theatre".
Epskamp, Kees (2006): Theatre for Development: An Introduction to Context, Applications & Training. London: Zed Books.
McCarthy, J. (2004): Enacting Participatory Development: Theatre-based Techniques. Cambridge University Press. The bibliography cites 22 books devoted specifically to art and theatre as tools for development, and an additional 16 books on specific techniques.
Boon, R. and Plastow, J. (2004): Theatre and Empowerment: Community Drama on the World Stage. University of Leeds. Case studies of TfD from around the world.
Hurd, L. (2004): Theater as a Means of Moral Education and Socialization in the Development of Nauvoo, Illinois, 1839–1845. Thesis, California State University, Dominguez Hills.
Online Discussion Group: Art4Development.
Mda, Zakes (1993): When People Play People: Development Communication Through Theatre. Johannesburg: Witwatersrand University Press.
ActNow Theatre for Social Change, Australia.
Collective Encounters: Theatre for Social Change, UK.
Communication for Development Network, UK.
Creative Social Change, UK.
CTO - Center of Theatre of the Oppressed, Brazil.
International Theatre of the Oppressed Organisation.
KURINGA - a space for Theatre of the Oppressed in Berlin, Germany.
Jana Sanskriti - Center for Theatre of the Oppressed, India.
CCDC - Centre for Community Dialogue and Change, Bengaluru, India, promoting Theatre of the Oppressed.
Act Out - Theatre for Transformation, Perth, Australia.
Chisiza, Zindaba: "The problem with Theatre for Development in contemporary Malawi", Leeds African Studies Bulletin, 78 (2016–17).
**Hyperextension (exercise)**
Hyperextension (exercise):
Hyperextension is movement in which extension at a joint is carried beyond its normal range of motion. A back extension is an exercise that works the lower back as well as the mid and upper back, specifically the erector spinae. There are two erector spinae muscles, one on either side of the spine, running along its entire length; each is formed of three smaller muscles: spinalis, longissimus, and iliocostalis. The name "hyperextensions" is used for back extension exercises performed on a hyperextension bench in a fitness gym. The name is a misnomer, however, because the aim is only to extend the spine within its normal range, not beyond it. When the back is extended from the flexed position, the head and neck stay in a neutral position at the end range. In fact, back extension beyond the normal range of motion has been found to be detrimental for the exerciser: hyperextension during deadlifts has been found to lead to lumbar disc pathologies and muscular spasms.
Equipment used:
Without any equipment: The exercise may be performed on the ground by lying prone with arms overhead and lifting the arms, upper torso, and legs as far as possible. Here gravity provides the resistance used to strengthen the back extensor muscles.
Using a Roman chair: A Roman chair stabilizes the legs up to the hip joints while low back extension is performed. To perform the exercise, the torso above the hip joints is flexed forwards and down towards the floor; to complete it, the back muscles (erector spinae) are contracted to raise the torso until the whole body forms a straight line from head to heels. The exercise can be made more challenging by hugging a weight to the chest. Lighter weights should be used at first to prevent straining the back muscles through over-exertion; a beginner should hold the weight lower and gradually bring it higher to feel more resistance. The exercise should be performed slowly, without extending the back beyond its normal range of motion, as doing so may lead to a low back hyperextension injury.
Using a hyperextension bench: There are two varieties of hyperextension bench, depending on the angle at which they support the lower body: the 45-degree and the 90-degree bench. The 90-degree hyperextension bench is the Roman chair discussed above; on it the body lies horizontally, allowing a full back range of motion. On the 45-degree bench, by comparison, the exerciser is almost standing, and extension is possible only through a partial range of motion. On both versions, the arms are folded in front of the body, or the hands are placed on the back of the head with the elbows pointing to the sides, while the exercise is performed.
Using a reverse hyperextension machine: This machine is used to strengthen not only the erector spinae but also the gluteus maximus and part of the hamstrings (biceps femoris). When back extension is performed on this machine, the range of motion at the hip has been found to be relatively greater, while the accompanying stresses at the hip and back are relatively less.
**Extraocular implant**
Extraocular implant:
An extraocular implant (also known as eyeball jewelry) is a cosmetic implant involving a tiny piece of decorative jewelry which is implanted within the superficial, interpalpebral conjunctiva or sclera of the human eye.
History and culture:
Eyeball jewelry was developed first in the Netherlands as a radical new form of body modification in 2002. It was first designed at the Netherlands Institute for Innovative Ocular Surgery and is marketed there under the name JewelEye. The procedure is completely legal in the Netherlands, as long as it is performed by a licensed ophthalmologist under sterile conditions. In Canada, multiple provinces, including Ontario and Saskatchewan, have passed laws banning eyeball jewelry and scleral tattooing due to potential health risks.
Procedure:
Unlike subdermal implants and other new body modification procedures, the extraocular implant is currently only being performed in a medical clinic environment. The procedure is relatively quick, but it does require that both eyes be immobilized with anesthetic drops, and that the layers of the eyeball where the implant is situated must be separated by the injection of liquid. As very few people have undergone this procedure, and it is relatively new, the long term health effects are currently unknown.
However, the website of the Netherlands Institute for Innovative Ocular Surgery states that the implant does not interfere with ocular functions, i.e. visual performance and mobility. Additionally, patient satisfaction remains high, and no side effects of the treatment have been noticed at follow-up of more than one year.
Jewelry:
Currently, the only supplier of jewelry for this implant is Hippocratech b.v., a company in Rotterdam, Netherlands. The implant is manufactured from a platinum alloy and is available in several basic shapes, including the Euro sign, heart, musical note, clover or star shapes, with other shapes custom made by the company upon request. The size of the jewellery is about 1/8" (3 mm) across.
**Skin dimple**
Skin dimple:
Skin dimples (also known as "skin fossae") are deep cutaneous depressions that are seen most commonly on the cheeks or chin, occurring in a familial pattern suggestive of autosomal dominant inheritance.
**Nefiracetam**
Nefiracetam:
Nefiracetam is a nootropic drug of the racetam family. Preliminary research suggests that it may possess certain antidementia properties in rats.
Effects:
Nefiracetam's cytoprotective actions are mediated by enhancement of GABAergic, cholinergic, and monoaminergic neuronal systems. Preliminary studies suggest that it improves apathy and motivation in post-stroke patients. It may also exhibit antiamnesic effects in Alzheimer's-type and cerebrovascular-type dementia. In addition, research in animal models suggests antiamnesic effects against a number of memory-impairing substances, including ethanol, chlordiazepoxide, scopolamine, bicuculline, picrotoxin, and cycloheximide.
Pharmacology:
Unlike other racetams, nefiracetam shows high affinity for the GABAA receptor (IC50 = 8.5 nM), where it is presumed to be an agonist. It was able to potently inhibit 80% of muscimol binding to the GABAA receptor, although it failed to displace the remaining 20% of specific muscimol binding. Nefiracetam is able to reverse the amnesia caused by the GABAA receptor antagonists picrotoxin and bicuculline in mice, although it failed to prevent seizures induced by these drugs.
Concerns:
Studies of long-term consumption of nefiracetam in humans and primates have shown it to have no toxicity. However, animals which metabolize nefiracetam differently from humans and primates are at risk for renal and testicular toxicity. Dogs especially are particularly sensitive, which has been shown to be caused by a specific metabolite, M-18. Higher doses than those in dogs were needed to cause testicular toxicity in rats, although no toxicity was seen in monkeys. Additionally, there has been no evidence of toxicity during clinical trials.
**Tarashikomi**
Tarashikomi:
Tarashikomi (meaning "dripping in") is a Japanese painting technique, in which a second layer of paint is applied before the first layer is dry. This effect creates a dripping form for fine details such as ripples in water or flower petals on a tree.
Tarashikomi:
Japanese paintings in the past were usually done on paper (or silk) with watercolors. The paintings in the tombs of Kyushu, painted on the tomb walls between the fifth and seventh centuries AD, are some of the earliest Japanese art. Silk and paper came from China; in the seventh century they were used primarily for writing, but during the eighth century they began to be used for art. Silk was most common for hanging scroll paintings, while paper was used for calligraphy on handscrolls. Nikawa (animal glue), made from cowhide or other animal skins, was used in paint.
Hon'ami Kōetsu:
Hon'ami Kōetsu (1558–1637) was inspired by the Heian period, a model of art from the distant past. Such works were popular among the samurai, who tried to evoke the past without losing the beauty of the Heian period. Masters of different artistic media and schools inspired other artists, who created their own styles of art or schools. Hon'ami inspired Tawaraya Sōtatsu, who is noted for his tarashikomi technique; Tawaraya inspired Ogata Kōrin, who consolidated the Rinpa school with his brother Kenzan. The tarashikomi technique is part of the Rinpa style of decorative arts.
Tawaraya Sōtatsu:
Tawaraya and Hon'ami created a new decorative-painting school, which later influenced Ogata Kōrin. Tawaraya made a living by selling his decorated scrolls, screens and fans from his shop (eya), and is known for his tarashikomi paintings on fans and screens. Tawaraya's depth of style is reminiscent of realistic Chinese painting, but freer. His new style of painting was seen mainly in his paintings on screens; examples of his tarashikomi works are Flowers and Grasses of the Four Seasons and Lotus and Waterfowl. His handscroll entitled Kitano Tenjin engi is known for its tarashikomi rendering of clouds and the puffs surrounding them.
Tawaraya's school (1624–1644) painted many folded screens, which were functional as well as beautiful; they could be set up and put away easily, allowing people to enjoy them seasonally, separately or for a special occasion. Themes were common, inspired by tales or poems of other artists. The screens were not meant to remain in a corner, like wall art in modern Western houses. Sometimes a single object was repeated on the screen, causing the images to apparently move across the screens. The screens were arranged to fold in on each other; the motion of folding enhanced the movement of the panels, often giving images more dimension.
Tawaraya's paintings were referred to as the "Tawaraya style". Several of his paintings may be seen on fans and scrolls, the best-known being images from The Tale of Genji. Ogata's paintings also employed this style, but are simpler. Although Tawaraya preceded Ogata, Ogata's new style would come to bear the name Rinpa (rin from the end of Kōrin's name and pa from the Japanese word for "school"). There were some differences between Tawaraya Sōtatsu's works and Ogata Kōrin's style: the new Rinpa style used sharper contours and lines and increased the amount of color used in paintings (especially on screens).
Rinpa school:
Rinpa was a style of decorative painting. It was common to add silver or gold leaf to paintings for effect; the metallic look gave the background a sheen, which gave the painted objects on top a layered appearance. In addition, this gave the paintings more solidity, so that screens would be less permeable. Japanese artists painted on screens using paints built up in different layers. Silk was the usual surface; with its open weave, an artist could paint on both sides of the screen (which made the screen more durable). This durability is what made tarashikomi possible, giving screens (and other artwork) a detailed look. Tarashikomi could add details (such as leaves or flowers on a tree) that stand out vibrantly against the background. The dripped paint layers made buds on a tree shine and moss glow against shadowed bark; the technique not only strengthened the screen, it imparted a three-dimensional quality.
Buddhist painters are best known for these techniques; the ukiyo ("floating world") pictures are an example. These pictures were popular among the middle class during the Edo period. Outside the Edo limits, the Floating World became a popular place of escape and pleasure from the strict Tokugawa shogunate. When the water was high, the Floating World existed on raised wooden planks; when the waters receded, the people gathered on the banks. Carefree activity was found there, providing interesting material for artists. Working people could escape, for a time, their everyday world. With no family or obligations, one could relax.
Ogata Kōrin:
Tarashikomi was enhanced by Ogata Kōrin (1658–1716). Ogata's original name was Ichinojo Koretomi; he changed it to escape from debt. He had four children with different women, and was known for frivolity; however, Ogata became one of Japan's master Rinpa painters. Some of his early works were paintings on fans which he made for the empress dowager. After 1709, Ogata began dedicating himself to the Rinpa style. He made many screen paintings, such as Irises (from The Tales of Ise). Irises is based on the part of the tale in which a traveler composes a poem after seeing a pond with beautiful Japanese irises (although Ogata omits the poet, bridge, and pond). The flowers are used across six screens.
Another example of Ogata's tarashikomi screens is Hakurakuten, which demonstrates Tawaraya's influence. The pool of water in which the bridge sits is colored with a second pigment, added while the first coat of paint was still wet.
Ogata's best-known screen was Red and White Plum Blossoms (1712–1713), a pair of screens with two trees, attractive separately but beautiful when unified. The silver stream swirls between the two trees on a golden background. Points of red and white color highlight the leaves and fruit on the plum trees; they break up the colors by allowing them to bleed while partially wet. The twigs, stalks and tree trunks are detailed by the tarashikomi technique. The imagery looks random, but is not (characteristic of the Rinpa school). Another Rinpa master who used the tarashikomi technique was Sakai Hōitsu (1761–1828); his scroll Night View of the Arched Bridge at the Sumiyoshi Shrine uses the style to blur the effects of his painting. Artistic styles have been passed down for generations, producing their own masters; these styles were passed on, and students would create other styles. Although Ogata is credited with creating Rinpa, Tawaraya developed tarashikomi (without which the Rinpa school would be quite different).
**Baunscheidtism**
Baunscheidtism:
Baunscheidtism is a form of alternative medicine created in the 19th century. The practice, a form of homeopathy, is named for its founder Carl Baunscheidt (1809–1873), a German mechanic and inventor.
The legitimacy of baunscheidtism as an effective medical practice was questioned by at least 1880, when a Melbourne practitioner named Samuel Fischer lost a lawsuit he brought against a patient who failed to pay him, based on the objection that Fischer (a bootmaker) was not a qualified medical practitioner.
Lebenswecker:
The lebenswecker ("life awakener") or "artificial leech" was a medical device invented by Baunscheidt to pierce the skin with many fine needles. Billed as being able to cure myriad illnesses, the lebenswecker was used on skin treated with a toxic oil. The resulting inflammation was alleged to draw the body's attention away from the patient's illness, thus effecting a cure. The diseases that could allegedly be cured with the lebenswecker included whooping cough, baldness, toothaches, and mental disorders. The device's popularity was great enough to support a market for counterfeit copies of the lebenswecker produced by Baunscheidt.
**Glucocorticoids in hippocampal development**
Glucocorticoids in hippocampal development:
The hippocampus is an area of the brain integral to learning and memory. Removal of this structure can result in the inability to form new memories (i.e. anterograde amnesia) as most famously demonstrated in a patient referred to as HM. The unique morphology of the hippocampus can be appreciated without the use of special stains and this distinct circuitry has helped further the understanding of neuronal signal potentiation. The following will provide an introduction to hippocampal development with particular focus on the role of glucocorticoid signaling.
Hippocampal Development:
Overview:
The hippocampus arises from the medial telencephalon. In lower mammals, the hippocampus is located dorsally. Considerable expansion of the cerebral cortex in higher mammals (e.g. humans) displaces the hippocampus ventrally, where it protrudes inferiorly into the lateral ventricles.
Principal cells:
Neural progenitors that become hippocampal principal neurons (pyramidal and granular cells) arise from the ventricular zone of the lateral ventricle. In contrast to the neural proliferation that leads to cortical formation, hippocampal precursors are produced directly in the ventricular zone, because there is no subventricular zone or outer subventricular zone adjacent to the hippocampus. Pyramidal CA1 and CA3 precursor cells therefore do not have to migrate far to reach their final destinations in the CA3 and CA1 cell body layers. These cells populate the hippocampus early in development and can be morphologically distinguished from one another in the embryo by 4 months. Granular cells populate the hippocampus slightly after pyramidal cell migration; they have a farther distance to travel and follow along the pyramidal cells before entering the hilus. Granular cell precursors that will populate the dentate gyrus proliferate locally in the hilus. This area, also known as the subgranular zone, retains a portion of neurogenic precursors in the adult.
Role of reelin:
As in the cortex, it is believed that reelin plays an important role in the layering of hippocampal neurons through inhibition of migration. Reelin knockout mice lack a single, distinct pyramidal cell body layer due to excess migration. Unexpectedly, these mice show reduced migration into the dentate gyrus. The mechanism involves disruption of the radial glial scaffolding.
Glucocorticoid Signaling:
Overview:
Cortisol is the primary glucocorticoid produced in humans (equivalent to rodent corticosterone). This steroid hormone is both synthesized and released from the adrenal cortex in response to physical or emotional stress. Additionally, basal serum levels of cortisol display circadian variation. Cortisol receptors are located throughout the body and are involved in a variety of processes, including inflammation and lung maturation.
Adult hippocampus:
The adult hippocampus is highly enriched in type I (mineralocorticoid, MR) and type II (glucocorticoid, GR) corticosteroid receptors. Despite the receptor names, cortisol has roughly ten times greater affinity for MRs than for GRs. At basal glucocorticoid levels, most MRs are already activated; therefore, increasing concentrations of cortisol will preferentially activate GRs. The role these receptors play in cognition is discussed elsewhere.
Developing hippocampus:
Despite the high level of receptor expression, the physiological role of glucocorticoid signaling in the developing hippocampus is not well defined. Animal studies have shown that fetal exposure to elevated levels of glucocorticoids (either by direct injection of a corticosterone mimetic or by stressing the mother) has adverse outcomes. In addition to having reduced birth weights, stressed rat pups have a decreased ability to regulate the hypothalamic-pituitary-adrenal axis. The hippocampus provides negative feedback to this loop, and stressed pups have less sensitive glucocorticoid signaling, resulting in elevated basal levels of glucocorticoids and an exaggerated response during stress. As adults, these rats can have impaired cognitive function. Understanding the role of glucocorticoid exposure is important because mothers at risk of preterm delivery are commonly given dexamethasone, a GR agonist, to accelerate fetal lung development and reduce morbidity associated with prematurity. These animal studies have found that postnatal care given to prenatally stressed animals can reverse the adverse effects of glucocorticoid signaling. More research is needed to understand the role of glucocorticoids in the context of human hippocampal development.
**Vocal resonation**
Vocal resonation:
Vocal resonance may be defined as "the process by which the basic product of phonation is enhanced in timbre and/or intensity by the air-filled cavities through which it passes on its way to the outside air." Throughout the vocal literature, various terms related to resonation are used, including amplification, filtering, enrichment, enlargement, improvement, intensification, and prolongation. Acoustic authorities would question many of these terms from a strictly scientific perspective. However, the main point a singer or speaker should draw from these terms is that the result of resonation is to make a better sound, or at least one suited to a certain esthetic and practical domain.
Human resonating chambers:
The voice, like all acoustic instruments such as the guitar, trumpet, piano, or violin, has its own special chambers for resonating the tone. Once the tone is produced by the vibrating vocal cords, it vibrates in and through the open resonating ducts and chambers. Since the vocal tract is often associated with different regions of the body, different resonance chambers might be referred to as: chest, mouth, nose/"mask", or head.
In a more symbolic or perceptual sense, rather than a physical one, the various terms applied can represent vocal "colors" on a continuous scale, from dark (chest) resonance to bright (head-nasal) resonance. We may call this spectrum a resonance track. In the lower range, the chest/dark color predominates; in the middle range, mouth-nasal resonance is dominant; and in the higher range, head-nasal resonance (bright color) predominates. The objective of using such images, for several teachers and coaches, is to achieve command of all the "colors of the spectrum". That, ultimately, may allow a greater scope of emotional expression. The emotional content of the lyric or phrase suggests the color and volume of the tone and is the personal choice of the artist.
Head resonance should not be confused with head register or falsetto. It is used primarily for softer singing in either register throughout the range.
Mouth resonance is used for a conversational vocal color in singing and, in combination with nasal resonance, it creates forward placement or mask resonance.
Chest resonance adds richer, darker, and deeper tone coloring for a sense of power, warmth, and sensuality. It creates a feeling of depth and drama in the voice.
Nasal resonance (mask resonance) is present at all times in a well-produced tone, except perhaps in a pure head tone or at very soft volume. Nasal resonance is bright and edgy and is used in combination with mouth resonance to create forward placement (mask resonance). In an overall sense, it adds overtones that give clarity and projection to the voice. Some singers are recognized by their pronounced nasal quality, whereas others are noted for their deep, dark, and chesty sound, and still others for their breathy or heady sound, and so on. In part, such individuality depends on the structure of the singer's vocal instrument, that is, the inherent shape and size of the vocal cords and of the vocal tract.
The quality or color of a voice also depends on the singer's ability to develop and use various resonances by controlling the shape and size of the chambers through which the sound flows. It has been demonstrated electrographically in the form of "voice-prints" that, like fingerprints, no two voices are exactly alike.
Sympathetic and forced vibration:
In a technical sense, resonance is a relationship that exists between two bodies vibrating at the same frequency or a multiple thereof. In other words, the vibrations emanating from one body cause the other body to start vibrating in tune with it. A resonator may be defined as a secondary vibrator which is set into motion by the main vibrator and which adds its own characteristics to the generated sound waves.
There are two kinds of resonance: sympathetic resonance (or free resonance) and forced resonance (or conductive resonance). The essential difference between the two types is what causes the resonator to start vibrating. In sympathetic resonance there is no need for direct physical contact between the two bodies. The resonator starts functioning because it receives vibrations through the air and responds to them sympathetically, as long as the resonator's natural frequencies of vibration coincide with the exciting oscillations. In forced resonance the resonator starts vibrating because it is in physical contact with a vibrating body, which "forces" the resonator to replicate its oscillations.
Both types of resonance are at work in the human voice during speaking and singing. Much of the vibration felt by singers while singing is a result of forced resonance. The waves originated by the airflow modulated by the vibrating vocal folds travel along the bones, cartilages, and muscles of the neck, head, and upper chest, causing them to vibrate by forced resonance. There is little evidence that these vibrations, sensed by tactile nerves, make any significant contribution to the external sound. These same forced vibrations, however, may serve as sensation guides for the singer, regardless of their effect on the external sound. These sensations may provide evidence to the singer that their vocal folds are forming strong primary vibrations which are being carried from them to the head and chest. Thus these vibratory sensations can supply sensory feedback about the efficiency of the whole phonatory process to the singer.
In contrast, the sound a person hears from a singer is a product of sympathetic resonance. Air vibrations generated at the level of the vocal folds in the larynx propagate through the vocal tract (e.g. the ducts and cavities of the airways). In other words, the voice's resultant glottal wave is filtered by the vocal tract: a phenomenon of sympathetic resonance. The vocal resonator is not a sounding board comparable with that of stringed instruments. Rather, it is a column of air traveling through the vocal tract, with a shape that is not only complex but highly variable. Vennard says: "Thus it may vibrate as a whole or in any of its parts. It should not be too hard to think of it as vibrating several ways at once. Indeed most vibrators do this, otherwise we would not have timbre, which consists of several frequencies of different intensities sounding together. Air is fully as capable of this as any other medium; indeed, the sounds of many diverse instruments are carried to the ear by the same air, are funnelled into the same tiny channel, and can still be heard as one sound or as sounds from the individual sources, depending upon the manner in which we give attention."
Factors affecting resonators:
There are a number of factors which determine the resonance characteristics of a resonator: size, shape, type of opening, composition and thickness of the walls, surface, and combination with other resonators. The quality of a sound can be appreciably changed by rather small variations in these conditioning factors.
In general, the larger a resonator is, the lower the frequency it will respond to; the greater the volume of air, the lower its pitch. But the pitch will also be affected by the shape of the resonator and by the size of the opening and the amount of lip or neck the resonator has. A conical resonator, such as a megaphone, tends to amplify all pitches indiscriminately. A cylindrical resonator is affected primarily by the length of the tube through which the sound wave travels. A spherical resonator is affected by the amount of opening it has and by whether or not that opening has a lip.
Three factors relating to the walls of a resonator affect how it functions: the material it is made of, the thickness of its walls, and the type of surface it has. The resonance characteristics of a musical instrument obviously vary with different materials, and the amount of material used has some effect. Of special importance to singing is the relationship of the surface of a resonator to its tonal characteristics. Resonators can be highly selective, meaning that they respond to only one frequency (or multiples of it), or they can be universal, meaning that they respond to a broad range of frequencies. In general, the harder the surface of the resonator, the more selective it will be; the softer the surface, the more universal it will become. "[A] hard resonator will respond only when the vibrator contains an overtone that is exactly in tune with the resonator, while a soft resonator permits a wide range of fundamentals to pass through un-dampened but adds its own frequency as an overtone, harmonic or inharmonic as the case may be." Hardness carried to the extreme results in a penetrating tone with a few very strong high partials; softness carried to the extreme results in a mushy, non-directional tone of little character. Between these two extremes lies a whole gamut of tonal possibilities.
The final factor is the effect of joining two or more resonators together. In general, the resonant frequency of each is lowered in different proportions according to their capacities, their orifices, and so forth. The rules governing combined resonators apply to the human voice, for the throat, mouth and sometimes the nose all function in this manner.
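For a single cavity with a necked opening, these size-and-opening effects are captured by the textbook Helmholtz resonator formula, offered here as an orienting idealization rather than a model taken from the vocal literature cited:

```latex
f_0 = \frac{c}{2\pi}\sqrt{\frac{A}{V\,L_{\mathrm{eff}}}}
% c: speed of sound; A: area of the opening; V: cavity volume;
% L_eff: effective neck length. A larger volume, a smaller opening, or a
% longer neck all lower the resonant frequency, consistent with the
% statements above about size, opening, and neck.
```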
The vocal resonators in detail:
There are seven areas that may be listed as possible vocal resonators. In sequence from the lowest within the body to the highest, these areas are the chest, the tracheal tree, the larynx itself, the pharynx, the oral cavity, the nasal cavity, and the sinuses.
The vocal resonators in detail:
The chest The chest is not an effective resonator, despite numerous voice books and teachers referring to “chest resonance”. Although strong vibratory sensations may be experienced in the upper chest, it can make no significant contribution to the resonance system of the voice, simply by virtue of its structure and location. The chest connects mostly to the subglottal structures of the airways, the lungs and lower trachea (i.e. below the vocal folds). These structures have a high degree of vibrational absorption, with little or no acoustical capacity to reflect sound waves back toward the larynx.
The vocal resonators in detail:
The tracheal tree The tracheal tree makes no significant contribution to the resonance system except for a negative effect around its resonant frequency. The trachea and the bronchial tubes combine to form an inverted Y-shaped structure known as the tracheal tree. It lies just below the larynx, and, unlike the interior of the lungs, has a definite tubular shape and comparatively hard surfaces. The response of the tracheal tree is the same for all pitches except for its own resonant frequency. When this resonant frequency is reached, the response of the subglottic tube is to act as an acoustical impedance or interference which tends to upset the phonatory function of the larynx. Research has placed the resonant frequency of the subglottal system or tracheal tree around the E-flat above "middle C" for both men and women, varying somewhat with the size of the individual.
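As a quick arithmetic check on that cited pitch, under the assumption of standard equal temperament with A4 = 440 Hz (the research itself does not state a tuning reference):

```python
# Equal-tempered pitch: f(m) = 440 * 2**((m - 69) / 12), m = MIDI note number.
def midi_to_hz(m, a4=440.0):
    return a4 * 2 ** ((m - 69) / 12)

# The E-flat above middle C (E-flat 4) is MIDI note 63.
print(f"E-flat 4 = {midi_to_hz(63):.1f} Hz")  # about 311.1 Hz
```

So the subglottal resonance referred to sits at roughly 311 Hz, varying somewhat with the size of the individual.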
The vocal resonators in detail:
The larynx Due to its small size, the larynx acts as a resonator only for high frequencies. Research indicates that one of the desirable attributes of good vocal tone is a prominent overtone lying between 2800 and 3200 hertz, with male voices nearer the lower limit and female voices nearer the upper. This attribute is identified as brilliance, or more frequently as ring or the singer's formant, as fully described by Sundberg. There are several areas in or adjacent to the larynx which might resonate such a high pitch. Among them are the collar of the larynx, the ventricles of Morgagni, the vallecula, and the pyriform sinuses. The larynx is not under conscious control, but whatever produces "ring" can be encouraged indirectly by awareness on the part of the student and the teacher of the sounds which contain it.
The vocal resonators in detail:
The pharynx The pharynx is the most important resonator by virtue of its position, size, and degree of adjustability. It is the first cavity of any size through which the product of the laryngeal vibrator passes; the other supraglottal cavities have to accept whatever the pharynx passes on to them. Greene states: "The supraglottic resonators being in the main muscular and moveable structures must be voluntarily controlled to produce conditions of optimal resonance either by varying degrees of tension in their walls, or by alterations in the size of their orifices and cavities during the articulatory movements." The oral cavity The oral cavity is the second most effective resonator.
The vocal resonators in detail:
The nasal cavity The nasal cavity is the third most effective resonator.
The vocal resonators in detail:
The sinuses In spite of being traditionally referred to as resonators by many singers and teachers, the sinuses consist of small closed air pockets, not acoustically connected to the vocal tract, and with no proven role in voice resonance. One could argue that the surface of the head and the deeper nerves close to the sinuses may detect passive vibrations produced as the voice is generated and transmitted through the vocal tract. These sensations might help preserve the image of the sinuses as effective resonators.
**Spanaway Lake High School**
Spanaway Lake High School:
Spanaway Lake High School is a high school in Spanaway, Washington, for grade levels nine through 12.
History:
Spanaway Lake High School underwent a major remodel which included the students being taught at Liberty Junior High for the 2009–2010 school year. The school was reopened in fall 2010.
The Spanaway Lake wrestling team was co-champion of the 2001 wrestling season. Coach Greeley led the Sentinels to three consecutive top-four finishes, including the school's first state title in 2001 and a runner-up finish in 2003. The wrestling room of the newly remodeled school was named in Paul Greeley's honor.
Academics:
In 2007, six percent of 10th graders met standard on the Washington Assessment of Student Learning (WASL) writing test and seven percent met standard on the WASL reading test. The overall writing scores have increased 28 percentage points in five years. SLHS still needs to improve in the areas of math and science. Among 10th graders, 33 percent met the standard in math and 23 percent met the standard in science. The school is offering a greater selection of math courses to its students, as well as providing increased training to teachers to help them reach struggling students in math and science.
Academics:
SLHS made adequate yearly progress (AYP) in 79% of student categories. The school did not meet the standard in math among several sub-groups and reading among special education students. Each year individual schools and the school district must "raise the bar" in gradual increments so that by 2014, 100 percent of students achieve proficiency in each subject.
Notable alumni:
Jerry Cantrell, lead singer and guitarist of Alice in Chains
Jacob Castro, soccer player and goalkeeper for Seattle Sounders FC
Jo Koy, comedian and actor (attended)
**Armour**
Armour:
Armour (Commonwealth English) or armor (American English; see spelling differences) is a covering used to protect an object, individual, or vehicle from physical injury or damage, especially from direct-contact weapons or projectiles during combat, or from a potentially dangerous environment or activity (e.g. cycling, construction sites, etc.). Personal armour is used to protect soldiers and war animals. Vehicle armour is used on warships, armoured fighting vehicles, and some combat aircraft, mostly ground-attack aircraft.
Armour:
A second use of the term armour describes armoured forces, armoured weapons, and their role in combat. After the development of armoured warfare, tanks and mechanised infantry and their combat formations came to be referred to collectively as "armour".
Etymology:
The word "armour" began to appear in the Middle Ages as a derivative of Old French. It is dated from 1297 as a "mail, defensive covering worn in combat". The word originates from the Old French armure, itself derived from the Latin armatura meaning "arms and/or equipment", with the root armare meaning "arms or gear".
Personal:
Armour has been used throughout recorded history. It has been made from a variety of materials, beginning with the use of leathers or fabrics as protection and evolving through chain mail and metal plate into today's modern composites. For much of military history the manufacture of metal personal armour has dominated the technology and employment of armour.
Personal:
Armour drove the development of many important technologies of the Ancient World, including wood lamination, mining, metal refining, vehicle manufacture, leather processing, and later decorative metal working. Its production was influential in the industrial revolution, and furthered commercial development of metallurgy and engineering. Armour was the single most influential factor in the development of firearms, which in turn revolutionised warfare.
Personal:
History Significant factors in the development of armour include the economic and technological necessities of its production. For instance, plate armour first appeared in Medieval Europe when water-powered trip hammers made the formation of plates faster and cheaper. At times the development of armour has paralleled the development of increasingly effective weaponry on the battlefield, with armourers seeking to create better protection without sacrificing mobility.
Personal:
Well-known armour types in European history include the lorica hamata, lorica squamata, and lorica segmentata of the Roman legions, the mail hauberk of the early medieval age, the full steel plate harness worn by later medieval and renaissance knights, and the breast and back plates worn by heavy cavalry in several European countries until the first year of World War I (1914–15). The samurai warriors of feudal Japan utilised many types of armour for hundreds of years, up to the 19th century.
Personal:
Early Cuirasses and helmets were manufactured in Japan as early as the 4th century. Tankō, worn by foot soldiers, and keikō, worn by horsemen, were both pre-samurai types of early Japanese armour constructed from iron plates connected together by leather thongs. Japanese lamellar armour (keiko) passed through Korea and reached Japan around the 5th century. These early Japanese lamellar armours took the form of a sleeveless jacket, leggings and a helmet.

Armour did not always cover all of the body; sometimes no more than a helmet and leg plates were worn. The rest of the body was generally protected by means of a large shield. Examples of armies equipping their troops in this fashion were the Aztecs (13th to 15th century CE).

In East Asia many types of armour were commonly used at different times by various cultures, including scale armour, lamellar armour, laminar armour, plated mail, mail, plate armour, and brigandine. Around the dynastic Tang, Song, and early Ming periods, cuirasses and plates (mingguangjia) were also used, with more elaborate versions for officers in war. The Chinese of that time used partial plates for "important" body parts instead of covering the whole body, since too much plate armour hinders martial arts movement. The other body parts were covered in cloth, leather, lamellar, or mountain-pattern armour. In pre-Qin dynasty times, leather armour was made from the hides of various animals, including more exotic ones such as the rhinoceros.
Personal:
Mail, sometimes called "chainmail", made of interlocking iron rings, is believed to have first appeared some time after 300 BC. Its invention is credited to the Celts; the Romans are thought to have adopted their design.

Gradually, small additional plates or discs of iron were added to the mail to protect vulnerable areas. Hardened leather and splinted construction were used for arm and leg pieces. The coat of plates was developed, an armour made of large plates sewn inside a textile or leather coat.
Personal:
13th to 18th century Europe Early plate armour in Italy, and elsewhere in the 13th–15th centuries, was made of iron. Iron armour could be carburised or case hardened to give a surface of harder steel. Plate armour became cheaper than mail by the 15th century, as it required much less labour, and labour had become much more expensive after the Black Death, though it did require larger furnaces to produce larger blooms. Mail continued to be used to protect those joints which could not be adequately protected by plate, such as the armpit, the crook of the elbow and the groin. Another advantage of plate was that a lance rest could be fitted to the breast plate.

The small skull cap evolved into a bigger true helmet, the bascinet, as it was lengthened downward to protect the back of the neck and the sides of the head. Additionally, several new forms of fully enclosed helmets were introduced in the late 14th century.
Personal:
Probably the most recognised style of armour in the world became the plate armour associated with the knights of the European Late Middle Ages, which continued in use into the early 17th century in all European countries.
By 1400, the full harness of plate armour had been developed in armouries of Lombardy. Heavy cavalry dominated the battlefield for centuries in part because of their armour.
Personal:
In the early 15th century, advances in weaponry allowed infantry to defeat armoured knights on the battlefield. The quality of the metal used in armour deteriorated as armies became bigger and armour was made thicker, necessitating the breeding of larger cavalry horses. Whereas in the 14th–15th centuries armour seldom weighed more than 15 kg, by the late 16th century it weighed 25 kg. The increasing weight and thickness of late 16th-century armour gave it substantial resistance to the firearms of the day.
Personal:
In the early years of low-velocity firearms, full suits of armour, or breast plates, actually stopped bullets fired from a modest distance. Crossbow bolts, if still in use, would seldom penetrate good plate, nor would any bullet unless fired from close range. In effect, rather than making plate armour obsolete, the use of firearms stimulated the development of plate armour into its later stages. For most of that period, it allowed horsemen to fight while being the targets of defending arquebusiers without being easily killed. Full suits of armour were actually worn by generals and princely commanders right up to the second decade of the 18th century. It was the only way they could be mounted and survey the overall battlefield in safety from distant musket fire.
Personal:
The horse was afforded protection from lances and infantry weapons by steel plate barding. This gave the horse protection and enhanced the visual impression of a mounted knight. Late in the era, elaborate barding was used in parade armour.
Later Gradually, starting in the mid-16th century, one plate element after another was discarded to save weight for foot soldiers.
Personal:
Back and breast plates continued to be used throughout the 18th century and through Napoleonic times, in many European heavy cavalry units, until the early 20th century. From their introduction, muskets could pierce plate armour, so cavalry had to be far more mindful of enemy fire. In Japan, armour continued to be used until the late 19th century; the last major fighting in which armour was used occurred in 1868. Samurai armour had one last short-lived use in 1877 during the Satsuma Rebellion.

Though the age of the knight was over, armour continued to be used in many capacities. Soldiers in the American Civil War bought iron and steel vests from peddlers (both sides had considered but rejected body armour for standard issue). The effectiveness of the vests varied widely; some successfully deflected bullets and saved lives, but others were poorly made and resulted in tragedy for the soldiers. In any case the vests were abandoned by many soldiers due to their weight on long marches, as well as the stigma of cowardice they drew from fellow troops.

At the start of World War I, thousands of French Cuirassiers rode out to engage the German cavalry. By that period, the shiny metallic cuirass was covered in dark paint, and a canvas wrap covered their elaborate Napoleonic-style helmets, to keep sunlight reflecting off the surfaces from alerting the enemy to their location. Their armour was only meant for protection against edged weapons such as bayonets, sabres, and lances. Cavalry had to be wary of repeating rifles, machine guns, and artillery, unlike the foot soldiers, who at least had a trench to give them some protection.
Personal:
Present Today, ballistic vests, also known as flak jackets, made of ballistic cloth (e.g. Kevlar, Dyneema, Twaron, Spectra) and ceramic or metal plates, are common among police forces, security staff, corrections officers and some branches of the military.
Personal:
The US Army has adopted Interceptor body armour, which uses Enhanced Small Arms Protective Inserts (ESAPIs) in the chest, sides, and back of the armour. Each plate is rated to stop a range of ammunition, including three hits from a 7.62×51 NATO AP round at a range of 10 m (33 ft). Dragon Skin is another ballistic vest, tested with mixed results. As of 2019 it had been deemed too heavy, expensive, and unreliable in comparison to more traditional plates; its protection is also considered outdated compared to the modern US IOTV armour, and even in testing it was deemed a downgrade from the IBA.
Personal:
The British Armed Forces also have their own armour, known as Osprey. It is rated to the same general equivalent standard as the US counterpart, the Improved Outer Tactical Vest, and now the Soldier Plate Carrier System and Modular Tactical Vest.
The Russian Armed Forces also have armour, known as the 6B43, all the way to 6B45, depending on variant. Their armour runs on the GOST system, which, due to regional conditions, has resulted in a technically higher protective level overall.
Vehicle:
The first modern production technology for armour plating was used by navies in the construction of the ironclad warship, reaching its pinnacle of development with the battleship. The first tanks were produced during World War I. Aerial armour has been used to protect pilots and aircraft systems since the First World War.
Vehicle:
In modern ground forces' usage, the meaning of armour has expanded to include the role of troops in combat. After the evolution of armoured warfare, mechanised infantry were mounted in armoured fighting vehicles and replaced light infantry in many situations. In modern armoured warfare, armoured units equipped with tanks and infantry fighting vehicles serve the historic role of heavy cavalry, light cavalry, and dragoons, and belong to the armoured branch of warfare.
Vehicle:
History Ships The first ironclad battleship, with iron armour over a wooden hull, Gloire, was launched by the French Navy in 1859, prompting the British Royal Navy to build a counter. The following year they launched HMS Warrior, which was twice the size and had iron armour over an iron hull. After the first battle between two ironclads took place in 1862 during the American Civil War, it became clear that the ironclad had replaced the unarmoured line-of-battle ship as the most powerful warship afloat.

Ironclads were designed for several roles, including as high seas battleships, coastal defence ships, and long-range cruisers. The rapid evolution of warship design in the late 19th century transformed the ironclad from a wooden-hulled vessel which carried sails to supplement its steam engines into the steel-built, turreted battleships and cruisers familiar in the 20th century. This change was pushed forward by the development of heavier naval guns (the ironclads of the 1880s carried some of the heaviest guns ever mounted at sea), more sophisticated steam engines, and advances in metallurgy which made steel shipbuilding possible.
Vehicle:
The rapid pace of change in the ironclad period meant that many ships were obsolete as soon as they were complete, and that naval tactics were in a state of flux. Many ironclads were built to make use of the ram or the torpedo, which a number of naval designers considered the crucial weapons of naval combat. There is no clear end to the ironclad period, but towards the end of the 1890s the term ironclad dropped out of use. New ships were increasingly constructed to a standard pattern and designated battleships or armoured cruisers.
Vehicle:
Trains Armoured trains saw use during the 19th century and early 20th century in the American Civil War (1861–1865), the Franco-Prussian War (1870–1871), the First and Second Boer Wars (1880–81 and 1899–1902), the Polish–Soviet War (1919–1921), the First (1914–1918) and Second World Wars (1939–1945) and the First Indochina War (1946–1954). The most intensive use of armoured trains was during the Russian Civil War (1918–1920).
Vehicle:
Armoured fighting vehicles Ancient siege engines were usually protected by wooden armour, often covered with wet hides or thin metal to prevent being easily burned.
Medieval war wagons were horse-drawn wagons that were similarly armoured. These contained guns or crossbowmen that could fire through gun-slits.
Vehicle:
The first modern armoured fighting vehicles were armoured cars, developed circa 1900. These started as ordinary wheeled motor-cars protected by iron shields, typically mounting a machine gun.

During the First World War, the stalemate of trench warfare on the Western Front spurred the development of the tank. It was envisioned as an armoured machine that could advance under fire from enemy rifles and machine guns, and respond with its own heavy guns. It used caterpillar tracks to cross ground broken up by shellfire and trenches.
Vehicle:
Aircraft With the development of effective anti-aircraft artillery in the period before the Second World War, military pilots, once the "knights of the air" during the First World War, became far more vulnerable to ground fire. As a response, armour plating was added to aircraft to protect aircrew and vulnerable areas such as engines and fuel tanks. Self-sealing fuel tanks functioned like armour in that they added protection but also increased weight and cost.
Vehicle:
Present Tank armour has progressed from the Second World War armour forms, now incorporating not only harder composites, but also reactive armour designed to defeat shaped charges. As a result of this, the main battle tank (MBT) conceived in the Cold War era can survive multiple rocket-propelled grenade strikes with minimal effect on the crew or the operation of the vehicle. The light tanks that were the last descendants of the light cavalry during the Second World War have almost completely disappeared from the world's militaries due to increased lethality of the weapons available to the vehicle-mounted infantry.
Vehicle:
The armoured personnel carrier (APC) was devised during the First World War. It allows the safe and rapid movement of infantry in a combat zone, minimising casualties and maximising mobility. APCs are fundamentally different from the previously used armoured half-tracks in that they offer a higher level of protection from artillery burst fragments, and greater mobility in more terrain types. The basic APC design was substantially expanded to an infantry fighting vehicle (IFV) when properties of an APC and a light tank were combined in one vehicle.
Vehicle:
Naval armour has fundamentally changed from the Second World War doctrine of thicker plating to defend against shells, bombs and torpedoes. Passive defence naval armour is limited to kevlar or steel (either single layer or as spaced armour) protecting particularly vital areas from the effects of nearby impacts. Since ships cannot carry enough armour to completely protect against anti-ship missiles, they depend more on defensive weapons destroying incoming missiles, or causing them to miss by confusing their guidance systems with electronic warfare.
Vehicle:
Although the role of the ground attack aircraft significantly diminished after the Korean War, it re-emerged during the Vietnam War, and in recognition of this, the US Air Force authorised the design and production of what became the A-10, a dedicated anti-armour and ground-attack aircraft that first saw action in the Gulf War.
Vehicle:
High-voltage transformer fire barriers are often required to defeat ballistics from small arms as well as projectiles from transformer bushings and lightning arresters, which form part of large electrical transformers, per NFPA 850. Such fire barriers may be designed to function inherently as armour, or may be passive fire protection materials augmented by armour. In the latter case, care must be taken to ensure that the armour's reaction to fire does not compromise the fire barrier, since the assembly must defeat explosions and projectiles while also resisting fire, and both functions must be provided simultaneously. The two functions must therefore be fire-tested together to provide realistic evidence of fitness for purpose.
Vehicle:
Combat drones use little to no vehicular armour, as they are not manned; this keeps them lightweight and small in size.
Animal armour:
Horse armour Body armour for war horses has been used since at least 2000 BC. Cloth, leather, and metal protection covered cavalry horses in ancient civilisations, including ancient Egypt, Assyria, Persia, and Rome. Some formed heavy cavalry units of armoured horses and riders used to attack infantry and mounted archers. Armour for horses is called barding (also spelled bard or barb) especially when used by European knights.
Animal armour:
During the late Middle Ages, as armour protection for knights became more effective, their mounts became targets. This vulnerability was exploited by the Scots at the Battle of Bannockburn in the 14th century, when horses were killed by the infantry, and by the English at the Battle of Crécy in the same century, where longbowmen shot horses and the dismounted French knights were then killed by heavy infantry. Barding developed as a response to such events.
Animal armour:
Examples of armour for horses could be found as far back as classical antiquity. Cataphracts, with scale armour for both rider and horse, are believed by many historians to have influenced the later European knights, via contact with the Byzantine Empire.

Surviving period examples of barding are rare; however, complete sets are on display at the Philadelphia Museum of Art, the Wallace Collection in London, the Royal Armouries in Leeds, and the Metropolitan Museum of Art in New York. Horse armour could be made in whole or in part of cuir bouilli (hardened leather), but surviving examples of this are especially rare.
Animal armour:
Elephant armour War elephants were first used in ancient times without armour, but armour was introduced because elephants injured by enemy weapons would often flee the battlefield. Elephant armour was often made from hardened leather, which was fitted onto an individual elephant while moist, then dried to create a hardened shell. Alternatively, metal armour pieces were sometimes sewn into heavy cloth. Later lamellar armour (small overlapping metal plates) was introduced. Full plate armour was not typically used due to its expense and the danger of the animal overheating.
**VTuber**
VTuber:
A VTuber (Japanese: ブイチューバー, Hepburn: BuiChūbā), or virtual YouTuber (バーチャルユーチューバー, bācharu YūChūbā), is an online entertainer who uses a virtual avatar generated using computer graphics. Real-time motion capture software or technology is often, but not always, used to capture movement. The digital trend originated in Japan in the mid-2010s, and has become an international online phenomenon in the 2020s. A majority of VTubers are English- and Japanese-speaking YouTubers or live streamers who use avatar designs. By 2020, there were more than 10,000 active VTubers. Although the term is an allusion to the video platform YouTube, VTubers also use websites such as Niconico, Twitch, Facebook, Twitter, and Bilibili.
VTuber:
The first entertainer to use the phrase "virtual YouTuber", Kizuna AI, began creating content on YouTube in late 2016. Her popularity sparked a VTuber trend in Japan, and spurred the establishment of specialized agencies to promote them, including major ones such as Hololive Production, Nijisanji, and VShojo. Fan translations and foreign-language VTubers have marked a rise in the trend's international popularity. Virtual YouTubers have appeared in domestic advertising campaigns, and have broken livestream-related world records.
Overview:
Virtual YouTubers (although more commonly referred to as VTubers) are online entertainers who are typically YouTubers or live streamers. They use avatars created with programs such as Live2D, portraying characters designed by online artists. VTubers are not bound by physical limitations, and many of them engage in activities that are unconstrained by their real-world identity. Some VTubers, particularly those from marginalized communities, choose to use avatars to reflect their online identity for personal comfort and safety reasons. Transgender VTubers may use their avatars as a means to better reflect their preferred presentation to their audience.

VTubers often portray themselves as kayfabe characters, not unlike professional wrestling; Mace, a WWE wrestler who himself began streaming on Twitch as a VTuber in 2021, remarked that the two professions were "literally the same thing". VTubers are associated with Japanese popular culture and aesthetics, such as anime and manga, and moe anthropomorphism with human or non-human traits. Some VTubers use anthropomorphic avatars: non-human characters such as animals.
Overview:
Technology A VTuber's avatar is typically animated using a webcam and software which capture the streamer's motions, expressions, and mouth movements, and map them to a two- or three-dimensional model. Both free and paid programs have been developed for loading models and performing motion capture, some capable of being used without a webcam (albeit with pre-determined animations), and some also supporting virtual reality hardware or hand-tracking devices such as the Leap Motion Controller. Some programs use iPhone smartphones (particularly those that include Face ID) as an external webcam, using their infrared-illuminated sensor for more precise motion capture.

The proprietary animation software Live2D is typically used to rig two-dimensional models constructed from drawn textures, while programs such as VRoid Studio can be used to create three-dimensional models. Commissioned models can cost as much as US$2,000 depending on their level of detail. By contrast, some VTubers, colloquially known as "PNGTubers" (in reference to the PNG image format), use static sprites rather than a rigged model.

Alternative open-source software has been introduced, such as Inochi2D, which aims to serve the same purpose as Live2D in an open-source manner. Such software is not backwards compatible with the Live2D standard, owing to Live2D's restrictive licensing. There is also open-source 3D VTubing software, such as the Virtual Puppet Project (vpuppr), that is compatible with formerly proprietary 3D VTubing software thanks to the open-source nature of the model format.

More recently, some VTubers have introduced artificial intelligence into the design of their characters using AI-generated art, and have even integrated AI into their core personality, gameplay, and chat interaction.
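A minimal sketch of the capture half of that pipeline, using OpenCV and MediaPipe as one possible (assumed) toolchain; the specific landmark indices and the single "mouth openness" parameter are illustrative choices of this sketch, not a documented VTuber standard:

```python
import cv2                      # webcam capture
import mediapipe as mp          # face landmark tracking

# Track one face and reduce its landmarks to a single avatar parameter
# ("mouth openness") of the kind a 2D/3D model rig would consume.
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)

def mouth_openness(face_landmarks):
    # Indices 13 and 14 are commonly used as the inner upper/lower lip in
    # MediaPipe's face mesh (an assumption of this sketch); the distance is
    # measured in normalized image coordinates.
    upper, lower = face_landmarks.landmark[13], face_landmarks.landmark[14]
    return abs(lower.y - upper.y)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        openness = mouth_openness(results.multi_face_landmarks[0])
        # A real rig (Live2D, VRoid, ...) would map this value onto a model
        # parameter each frame instead of printing it.
        print(f"mouth openness: {openness:.3f}")
cap.release()
```

A full VTuber application repeats this for dozens of parameters (eye blink, gaze, head rotation) and feeds them to the rendering engine in real time.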
Overview:
Agencies and commercialization Major VTubers are often employed by talent agencies, with business models influenced by those used by Japanese idol agencies. Streamers are employed by an agency to portray characters developed by the company, which are then commercialized via merchandising and other promotional appearances, as well as traditional revenue streams such as monetization of their videos and viewer donations. The use of the term "graduation" to refer to a streamer retiring their character and/or leaving an agency is also a holdover from the idol industry.
History:
Predecessors On February 12, 2010, visual novel maker Nitroplus began uploading videos to its YouTube channel featuring an animated 3D version of its mascot Super Sonico, who would usually talk to the audience about herself or about releases related to the company. On June 13, 2011, UK-based Japanese vlogger Ami Yamato uploaded her first video, which featured an animated, virtual avatar speaking to the camera. In 2012, Japanese company Weathernews Inc. debuted a Vocaloid-styled character called Weatheroid Type A Airi on SOLiVE24, a 24-hour weather live stream on Nico Nico Douga, on YouTube and their website. In 2014, Airi got her own solo program every Thursday and began live broadcasting with motion capture.

In 2014 the FaceRig indie software launched on Indiegogo as an EU crowdfunding project, and later that year it was released on Steam, becoming the first software suite that enabled live avatars at home via face motion capture; it soon came into active use on streaming websites and YouTube. A Live2D software module enabling 2D avatars was added one year later, in 2015, in collaboration with Live2D, Inc.
History:
Breakout In late 2016, Kizuna AI, the first VTuber to achieve breakout popularity, made her debut on YouTube. She was the first to coin and use the term "virtual YouTuber". Created by digital production company Activ8 and voice-acted by Nozomi Kasuga, Kizuna AI created a sense of "real intimacy" with fans, as she was responsive to their questions. Within ten months, she had over two million subscribers and later became a culture ambassador of the Japan National Tourism Organization. Kizuna AI's popularity can be attributed to the oversaturation of traditional webcam YouTubers and to aspects of her character that the audience would not expect. For example, despite having a friendly appearance, Kizuna AI often swears in her videos when she gets frustrated while playing a game.
History:
The VTuber trend Kizuna AI's sudden popularity sparked a VTuber trend. Between May and mid-July 2018, the number of active VTubers increased from 2,000 to 4,000. Kaguya Luna and Mirai Akari followed Kizuna as the second and third most popular VTubers, with 750,000 and 625,000 subscribers respectively. Nekomiya Hinata and Siro, two other early VTubers, each gained followings of 500,000 in six months.

In the beginning of 2018, Anycolor Inc. (then known as Ichikara) founded the VTuber agency Nijisanji. Nijisanji helped popularise the use of Live2D models over the prior focus on 3D models, as well as the shift towards livestreaming instead of the edited videos and clips that had been the standard for VTubers like Kizuna AI. Cover Corporation, a company that was originally developing augmented and virtual reality software, shifted its focus to VTubers by establishing Hololive.

After their initial success in Japan, the trend began to expand internationally via its appeal to the anime and manga fandom. Agencies like Hololive and Nijisanji created branches in China, South Korea, Indonesia, and India, as well as English-language branches targeting a global audience. Meanwhile, independent VTubers began to appear in many countries, from Japan to the United States. In July 2018, VTubers had a collective subscriber count of 12.7 million, and more than 720 million total views. By January 2020, there were over 10,000 VTubers.

The COVID-19 pandemic led to an overall increase in viewership of video game live streaming in 2020, which helped contribute to the growth of VTubers into a mainstream phenomenon. Google searches for VTuber-related content increased over 2020, leading up to the September 2020 launch of Hololive's English branch. In August 2020, seven of the ten largest Super Chat earners of all time on YouTube were VTubers, including Hololive member Kiryu Coco at number one, who by that time had earned approximately ¥85 million (approximately US$800,000 in 2020). VTubers accounted for 38% of YouTube's 300 most profitable channels, with a total revenue of US$26,229,911 (roughly half of which was viewer donations).

At the same time, the popularity of VTubers continued to rise on Twitch, home to several notable English-speaking VTubers such as VShojo members Projekt Melody and Ironmouse. Pokimane also experimented with avatar-based streams using a model commissioned from a VTuber artist.

In September 2020, Anycolor created an "Aggressive Acts and Slander Countermeasure Team" to offer counselling to victims of harassment and take legal measures against perpetrators, specifically targeting the online harassment plaguing the Japanese entertainment industry. This announcement came in the wake of the retirement of Hololive VTuber Mano Aloe after only two weeks of activity, due to online harassment.

YouTube's 2020 Culture and Trends report highlighted VTubers as one of the notable trends of that year, with 1.5 billion views per month by October. On March 30, 2021, Kizuna AI was chosen as one of Asia's top 60 influencers. In May 2021, Twitch added a VTuber tag for streams as part of a wider expansion of its tag system.
History:
In July 2021, Gawr Gura, a member of Hololive's first English branch, overtook Kizuna AI as the most-subscribed VTuber on YouTube. Cover's CEO Motoaki "Yagoo" Tanigo was selected as one of Japan's Top 20 Entrepreneurs by Forbes Japan in its January 2022 issue. The following month, in the midst of a subathon event, Ironmouse accumulated the largest number of active paid subscriptions of any streamer on the platform at that point in time, although still behind an overall record previously set by Ludwig. According to data provided by parent company Amazon, VTubing content on Twitch grew by 467% in 2021 compared with a year earlier.
Use in marketing:
Due to their popularity, companies and organizations have used virtual YouTubers as a method of advertising or bringing attention to a product or service. When SoftBank announced the release of the iPhone XS and XS Max in 2018, Kizuna AI appeared at the event and promoted the products on her channel.

In August 2018, Wright Flyer Live Entertainment released a mobile application allowing VTubers to live stream videos while monetizing them and connecting with their viewers. In a news conference in Tokyo, the head of Wright Flyer Live Entertainment stated, "just increasing the number [of VTubers] is not that effective. We want them to keep on doing their activities. [To do that], gaining fans and monetization are essential. So, we are providing a platform to support that". This followed Wright Flyer Live Entertainment's parent company Gree, Inc.'s ¥10 billion ($89.35 million) investment in VTubers, as well as a ¥10 billion sales target by 2020.

On June 24, 2019, VTuber Kaguya Luna, in collaboration with Nissin Foods to advertise its Yakisoba UFO noodles, held a live stream with a smartphone attached to a helium balloon. By the end of the stream, the smartphone reached an altitude of 30 kilometres (19 mi) above sea level and was noted by Guinness World Records as the live stream recorded at the highest altitude, breaking the previous record of 18.42 kilometres (11.45 mi).

Some organizations and companies have employed their own VTuber characters as mascots within marketing. These include the government of Japan's Ibaraki Prefecture (which developed the character of Ibaraki Hiyori), the streaming service Netflix (which developed the character N-ko to appear in videos promoting its anime content), Sega (which planned to have in-character streams with Sonic the Hedgehog and his Japanese voice actor Jun'ichi Kanemaru), and anime streaming service Crunchyroll (which launched a YouTube channel for its mascot Crunchyroll-Hime in October 2021). The Fukuoka SoftBank Hawks baseball team has two VTuber mascots, named Takamine Umi (also known as Hawk Kannon Sea) and Aritaka Hina, both unveiled in 2020. They have their own YouTube channel and their own Twitter accounts. Occasionally, they make appearances on the Fukuoka PayPay Dome's videoboard.

In 2021, Hololive English member Gawr Gura made a cameo appearance in an anime-themed ad by American fast food chain Taco Bell, which premiered to coincide with the 2020 Summer Olympics in Tokyo. Good Smile Company began producing Nendoroids of Kizuna AI in 2018, with a full push for various Japanese and international VTuber PVC statues since the 2020s.

In November 2020, Japanese VTubers Ayapan and Jajami were invited by the Brazilian embassy in Tokyo to present their content made for the Brazilian public and to explain how VTubing works; they met Ambassador Eduardo Paes Saboia, in the first contact between VTubers and a Brazilian authority.
**Bioserenity**
Bioserenity:
BioSerenity is a medtech company created in 2014 that develops ambulatory medical devices to help diagnose and monitor patients with chronic diseases such as epilepsy. The devices combine medical sensors, smart clothing, a smartphone app for patient-reported outcomes, and a web platform that performs data analysis with medical artificial intelligence to detect digital biomarkers. The company initially focused on neurology, a domain in which it reported contributing to the diagnosis of 30,000 patients per year. It now also operates in sleep disorders and cardiology. BioSerenity reports that it provides pharmaceutical companies with solutions for companion diagnostics.
Company history:
BioSerenity was founded in 2014 by Pierre-Yves Frouin. The company was initially hosted in the ICM Institute (Institut du Cerveau et de la Moelle épinière) in Paris, France.
Company history:
Fund raising
June 8, 2015: The company raised a $4 million seed round with Kurma Partners and IdInvest Partners.
September 20, 2017: The company raised a $17 million series A round with LBO France, IdInvest Partners and BPI France.
June 18, 2019: The company raised a $70 million series B round with Dassault Systèmes, IdInvest Partners, LBO France and BPI France.

Acquisitions In 2019, BioSerenity announced the acquisition of the American company SleepMed and reported working with over 200 hospitals.
Company history:
In 2020, Bioserenity was one of five French manufacturers (with Savoy, BB Distrib, Celluloses de Brocéliande, and Chargeurs) working on the production of sanitary equipment, including FFP2 masks, at the request of the French government. In 2021, the Neuronaute was reportedly used by approximately 30,000 patients per year.
Awards:
BioSerenity is one of the Disrupt 100.
BioSerenity joined the Next40.
BioSerenity was selected by Microsoft and AstraZeneca for their AI Factory for Health initiative.
BioSerenity was accelerated in Stanford University's StartX program.
**Piano pedagogy**
Piano pedagogy:
Piano pedagogy is the study of the teaching of piano playing. Whereas the professional field of music education pertains to the teaching of music in school classrooms or group settings, piano pedagogy focuses on the teaching of musical skills to individual piano students. This is often done via private or semiprivate instruction, commonly referred to as piano lessons. The practitioners of piano pedagogy are called piano pedagogues, or simply, piano teachers.
Professional training:
The range of professionalism among teachers of piano is undoubtedly wide. "Competent instruction is not always assured by the number of years one has taken lessons", warned piano pedagogue and writer of numerous pedagogical books, James Bastien. The factors which affect the professional quality of a piano teacher include one's competence in musical performance, knowledge of musical genres, music history and theory, piano repertoire, experience in teaching, ability to adapt one's teaching method to students of different personalities and learning styles, education level, and so on.
Professional training:
Musicians without degrees in piano pedagogy In the United States, piano lessons may be offered by teachers without higher education specifically focused in piano performance or piano pedagogy. Some teachers may hold degrees in another discipline in music, such as music education or another performance area (voice, orchestral instrument, etc.). Other teachers, without higher education in music, may have studied piano playing independently or have been self-taught.
Professional training:
Undergraduate and graduate studies in piano pedagogy The field of piano pedagogy may be studied through academic programs culminating in the attainment of a bachelor's, master's, or doctoral degree at music colleges or conservatories. The undergraduate level may require many years of prior piano studies and previous teaching experience as prerequisites for application. At the graduate level, many schools require applicants to have some teaching experience and at least a bachelor of music or equivalent experience in piano performance and/or pedagogy.

Although virtually all piano pedagogy programs include a significant performance requirement, the pedagogy major may be distinct from the performance major at some schools. Some performance majors may have the option to take courses in the teaching of piano, but not all do.
Professional training:
Professional organizations in the United States Many piano teachers hold memberships in professional organizations, to maintain their commitment to pedagogy and to network with peers and others in music. These organizations often offer teachers' workshops, conferences, mentorship programs, publications on piano pedagogy, and opportunities for scholarships, competitions, and performances for the students of members. Some prominent organizations in the United States include:
American Council of Piano Performers (ACPP)
Music Teachers National Association (MTNA)
National Federation of Music Clubs
National Guild of Piano Teachers
Piano Teachers Congress of New York

Professional organizations in Canada The main organization that offers certificates and a testing curriculum in Canada is the Royal Conservatory of Music. There are three levels in its certificate program: elementary, intermediate and advanced. The elementary pedagogy certificate enables teachers to teach beginners up to grade two piano, while the intermediate certificate allows teachers to teach up to grade 6 piano. The advanced piano pedagogy credential is known as the "ARCT" (Associate of the Royal Conservatory of Toronto), which enables teachers to teach up to grade 10. A number of theory and history examinations accompany each certificate program and must be completed. There is also a Piano Teachers Federation based in Vancouver, British Columbia.
Notable piano pedagogues in history:
Johann Nepomuk Hummel (Austria, 1778–1837)
Carl Czerny (Austria, 1791–1857)
Carl Philipp Emanuel Bach (Germany, 1714–1788)
Maria Szymanowska (Poland, 1789–1831)
Frédéric Chopin (Poland, 1810–1849)
Theodor Leschetizky (Poland, 1830–1915)
Franz Liszt (Hungary, 1811–1886)
Tobias Matthay (England, 1858–1945)
Nadia Boulanger (France, 1887–1979)
Heinrich Neuhaus (Russia, 1888–1964)
Dimitri Bashkirov (Russia, 1931–)
Leila Fletcher (Canada, 1899–1988), Ontario, Mayfair Montgomery Publishing
Neil A. Kjos (US, 1931–2009), Illinois, known for the James Bastien books
Abby Whiteside (US, 1881–1956)
Dorothy Taubman (US, 1917–2013)
Isidor Philipp (France, 1863–1958)
Harold Bradley (Canada, 1906–1984)
Frances Clark (US, 1905–1998)
Stefan Ammer (Germany, 1942–)
Ilana Vered (Israel, 1943–)
Peter Arnold (United Kingdom)
Graham Fitch (United Kingdom)
Topics of study:
Piano pedagogy involves the study and teaching of the motor, intellectual, problem-solving, and artistic skills involved in playing the piano effectively. Citing the influence of Zoltán Kodály, Carl Orff, and Émile Jaques-Dalcroze, Dr. Faina Bryanskaya, a Russian-American piano pedagogue at the Longy School of Music, advocates a holistic approach that integrates as many aspects of music-making as possible at once, arguing that this results in the most effective piano teaching.
Topics of study:
Ear training Dr. Bryanskaya argues that the foremost task for piano teachers at the beginning of a student's study is the introduction of a habit of listening to quality performances of "descriptive and strikingly expressive music", as a means for "sensitizing [the student] to the meaning of music".
Rhythm Teaching rhythm is important for the student to be able to learn a piece accurately, and also to confidently perform a practiced piece. Developing an internal metronome plays a significant role when teaching rhythm. Teachers may encourage students to count out loud when practicing, or practice with a metronome to develop a steady internal beat.
Topics of study:
Notation Learning to read music is a critical skill for most pianists. There are generally three approaches to teaching students to read music, although combined approaches are increasingly common. The "Middle C Method", a "single note identification" method, was the most commonly taught method through the 20th century. It was introduced by W.S.B. Mathews in 1892 but popularized by Thompson's Modern Course for Piano (1936). "Middle C" teaches positions relative to the middle C; in other "single note identification" methods, other notes might be used.
Topics of study:
The "intervocalic method", developed by Frances Clark with her Time to Begin (1955) curriculum, teaches recognition of patterns, and adds "landmark notes".
The "multi-key method", developed by Robert Pace and published in 1954, teaches students all major and minor keys fairly quickly.
Topics of study:
Technique Good piano playing technique involves the simultaneous understanding in both the mind and the body of the relationships between the elements of music theory, recognition of musical patterns in notation and at the fingertips, the physical landscape of the entire range of the keyboard, finger dexterity and independence, and a wide range of touch and tone production for a variety of emotional expressions. Skills in all of these areas are typically nurtured and developed for the sake of expressing oneself more effectively and naturally through the sound of the piano, so that the elements of technique will sound alive with musicality.
Topics of study:
Improvisation Modern piano lessons tend to emphasize learning notation, and may neglect developing the creative spirit and sensitive ears which lead to expressive music-making. Studies point to the need for using multiple approaches in learning musical skills which engage both sides of the brain—the analytical and the intuitive—for students to master all aspects of playing. Therefore, teaching improvisation skills may help students take ownership of the expressive quality of the music they make, and to keep music learning and practicing alive and interesting. One way to do so is to make up stories full of different emotions through improvising, in order to reinforce music theory concepts already introduced and to develop a wide range of touch and tone production.
Topics of study:
Sight reading Sight reading heavily depends on the student's ability to understand rhythm and recognize musical patterns. Teaching sight reading can include teaching students to recognize intervals, scale passage patterns, and individual notes, and to internalize rhythm. Strong knowledge of the different major and minor key signatures can also help students anticipate the accidentals they should expect when sight reading.
Topics of study:
Memorization Memorization is useful for performing a piece confidently. It gives the student the ability and freedom to experience the music in all its intricacy, as opposed to focusing on the technicalities of notes and rhythm. Memorization comes easily to some students and harder to others. The most common memorization technique is muscle memory. However, reliance on muscle memory alone can hinder students if they have not made a cognitive connection with every note they play, and it leaves room for many memory slips. To build a strong foundation of memorization, students should be able to visualize everything that they play, and be able to start from any passage.
Topics of study:
Effective memorization results from the "combination of visual, kinaesthetic, aural and analytical skills".
Topics of study:
Repertoire Well-known keyboard works written with special attention to pedagogical purposes include:
Notebook for Anna Magdalena Bach (1725) by family and friends of J.S. Bach
Klavierbüchlein für Wilhelm Friedemann Bach, Little Preludes and Fugues, Inventions and Sinfonias, and the Well-Tempered Clavier by J.S. Bach
Sonatinas by Muzio Clementi
Album For the Young, Op. 68 (1848) by Robert Schumann
Album For the Young, Op. 39 (1878) by Pyotr Ilyich Tchaikovsky
Music for Children, Op. 65 (1935) by Sergei Prokofiev
Pieces by Igor Stravinsky, Dmitri Kabalevsky and Aram Khatchaturian
Mikrokosmos, Sz. 107, BB 105 (1926–39) by Béla Bartók
Venues offering instruction in piano playing:
The teaching of piano playing most often takes place in the form of weekly private lessons, in which a student and a teacher have one-on-one meetings. Instruction may sometimes be offered semi-privately (one teacher meeting with a small group of two or more students) or in classes of larger groups, at other intervals of time. Piano lessons are offered in a variety of settings, including the following:
Studios of independent piano teachers
Piano and music stores
Community music schools
Continuing education programs
Preparatory divisions of music colleges or conservatories
Music colleges or conservatories
Online distance-learning courses
In-home/mobile music schools that travel to students' homes
**Balancing selection**
Balancing selection:
Balancing selection refers to a number of selective processes by which multiple alleles (different versions of a gene) are actively maintained in the gene pool of a population at frequencies larger than expected from genetic drift alone. Balancing selection is rare compared to purifying selection. It can occur by various mechanisms, in particular when the heterozygotes for the alleles under consideration have a higher fitness than the homozygotes. In this way genetic polymorphism is conserved.

Evidence for balancing selection can be found in the number of alleles in a population which are maintained above mutation-rate frequencies. All modern research has shown that this significant genetic variation is ubiquitous in panmictic populations.
Balancing selection:
There are several mechanisms (which are not exclusive within any given population) by which balancing selection works to maintain polymorphism. The two major and most studied are heterozygote advantage and frequency-dependent selection.
Mechanisms:
Heterozygote advantage In heterozygote advantage, or heterotic balancing selection, an individual who is heterozygous at a particular gene locus has greater fitness than a homozygous individual. Polymorphisms maintained by this mechanism are balanced polymorphisms. Due to the unexpectedly high frequencies of heterozygotes and their elevated fitness, heterozygote advantage may also be called "overdominance" in some literature. A well-studied case is that of sickle cell anemia in humans, a hereditary disease that damages red blood cells. Sickle cell anemia is caused by the inheritance of an allele (HgbS) of the hemoglobin gene from both parents. In such individuals, the hemoglobin in red blood cells is extremely sensitive to oxygen deprivation, which results in shorter life expectancy. A person who inherits the sickle cell gene from one parent and a normal hemoglobin allele (HgbA) from the other has a normal life expectancy. However, these heterozygote individuals, known as carriers of the sickle cell trait, may suffer problems from time to time.
Mechanisms:
The heterozygote is resistant to the malarial parasite which kills a large number of people each year. This is an example of balancing selection between the fierce selection against homozygous sickle-cell sufferers and the selection against the standard HgbA homozygotes by malaria. The heterozygote has a permanent advantage (a higher fitness) wherever malaria exists. Maintenance of the HgbS allele through positive selection is supported by significant evidence that heterozygotes have decreased fitness in regions where malaria is not prevalent. In Suriname, for example, the allele is maintained in the gene pools of descendants of African slaves, as Suriname suffers from perennial malaria outbreaks. Curacao, however, which also has a significant population of individuals descended from African slaves, lacks the presence of widespread malaria, and therefore also lacks the selective pressure to maintain the HgbS allele. In Curacao, the HgbS allele has decreased in frequency over the past 300 years, and will eventually be lost from the gene pool due to heterozygote disadvantage.
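A minimal deterministic sketch of this balance (the fitness values below are illustrative assumptions, not measured ones): with genotype fitnesses w(AA) = 1 − s, w(AS) = 1 and w(SS) = 1 − t, standard one-locus theory predicts a stable equilibrium frequency of s/(s + t) for the S allele.

```python
# One-locus selection recurrence for heterozygote advantage. Fitness values
# are illustrative assumptions: w(AA) = 1 - s (malaria-susceptible homozygote),
# w(AS) = 1 (resistant carrier), w(SS) = 1 - t (sickle-cell homozygote).
def next_q(q, s=0.1, t=0.8):
    p = 1 - q                                   # frequency of HgbA
    w_aa, w_as, w_ss = 1 - s, 1.0, 1 - t
    w_bar = p*p*w_aa + 2*p*q*w_as + q*q*w_ss    # mean fitness
    return (q*q*w_ss + p*q*w_as) / w_bar        # q after one generation

q = 0.01                                        # start with HgbS rare
for _ in range(500):
    q = next_q(q)
print(f"simulated equilibrium: {q:.4f}; predicted s/(s+t) = {0.1/0.9:.4f}")
```

Both numbers come out near 0.111: the deleterious allele is held at an intermediate frequency as long as malaria penalizes the HgbA homozygote, and it is eliminated once s falls to zero, matching the Curacao observation.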
Mechanisms:
Frequency-dependent selection Frequency-dependent selection occurs when the fitness of a phenotype is dependent on its frequency relative to other phenotypes in a given population. In positive frequency-dependent selection the fitness of a phenotype increases as it becomes more common. In negative frequency-dependent selection the fitness of a phenotype decreases as it becomes more common. For example, in prey switching, rare morphs of prey are actually fitter due to predators concentrating on the more frequent morphs. As predation drives the demographic frequencies of the common morph down, the once rare morph becomes the more common one. Thus, the morph at an advantage now becomes the morph at a disadvantage. This may lead to boom and bust cycles of prey morphs. Host-parasite interactions may also drive negative frequency-dependent selection, in alignment with the Red Queen hypothesis. For example, parasitism of the freshwater New Zealand snail (Potamopyrgus antipodarum) by the trematode Microphallus sp. results in decreasing frequencies of the most commonly hosted genotypes across several generations. The more common a genotype became in a generation, the more vulnerable to parasitism by Microphallus sp. it became. Note that in these examples no one phenotypic morph or genotype is entirely extinguished from the population, nor is any one morph or genotype selected for fixation. Thus, polymorphism is maintained by negative frequency-dependent selection.
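A toy model of negative frequency-dependent selection (not taken from the studies above): two prey morphs whose fitnesses fall linearly as they become common, as when predators concentrate on the frequent form.

```python
# Two prey morphs with fitness w_i = 1 - k * f_i: the commoner a morph is,
# the lower its fitness. Neither morph is lost; frequencies settle at the
# point where the two fitnesses are equal.
def step(f1, k=0.5):
    f2 = 1 - f1
    w1, w2 = 1 - k * f1, 1 - k * f2
    return f1 * w1 / (f1 * w1 + f2 * w2)   # morph 1 frequency after selection

f1 = 0.95                                  # morph 1 starts near fixation
for _ in range(60):
    f1 = step(f1)
print(f"morph 1 frequency after 60 generations: {f1:.3f}")  # approaches 0.500
```

The symmetric parameters pull the system to 50:50; asymmetric fitness functions would give a different, but still polymorphic, equilibrium.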
Mechanisms:
Fitness varies in time and space The fitness of a genotype may vary greatly between larval and adult stages, or between parts of a habitat range. Variation over time, unlike variation over space, is not in itself enough to maintain multiple types, because in general the type with the highest geometric mean fitness will take over, but there are a number of mechanisms that make stable coexistence possible.
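A quick computation illustrates why the geometric mean matters here. In the hedged sketch below (Python; both fitness schedules are invented for illustration), a boom-and-bust type has the higher arithmetic mean fitness across good and bad years, yet still loses out over many generations:

```python
# Geometric vs. arithmetic mean fitness over alternating good/bad years
# (fitness values are illustrative, not from any cited study).
from statistics import geometric_mean, mean

boom_bust = [1.5, 0.5]   # specialist: thrives one year, crashes the next
steady    = [1.0, 0.9]   # generalist: modest but consistent

print(mean(boom_bust), geometric_mean(boom_bust))  # 1.0, ~0.866 -> shrinks long-term
print(mean(steady), geometric_mean(steady))        # 0.95, ~0.949 -> takes over
```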
More complex examples:
Species in their natural habitat are often far more complex than the typical textbook examples.
Grove snail The grove snail, Cepaea nemoralis, is famous for the rich polymorphism of its shell. The system is controlled by a series of multiple alleles. Unbanded is the top dominant trait, and the forms of banding are controlled by modifier genes (see epistasis).
More complex examples:
In England the snail is regularly preyed upon by the song thrush Turdus philomelos, which breaks them open on thrush anvils (large stones). Here fragments accumulate, permitting researchers to analyse the snails taken. The thrushes hunt by sight, and capture selectively those forms which match the habitat least well. Snail colonies are found in woodland, hedgerows and grassland, and the predation determines the proportion of phenotypes (morphs) found in each colony.
More complex examples:
A second kind of selection also operates on the snail, whereby certain heterozygotes have a physiological advantage over the homozygotes. Thirdly, apostatic selection is likely, with the birds preferentially taking the most common morph. This is the 'search pattern' effect, where a predominantly visual predator persists in targeting the morph which gave a good result, even though other morphs are available.
More complex examples:
The polymorphism survives in almost all habitats, though the proportions of morphs vary considerably. The alleles controlling the polymorphism form a supergene with linkage so close as to be nearly absolute. This control saves the population from a high proportion of undesirable recombinants.
More complex examples:
In this species predation by birds appears to be the main (but not the only) selective force driving the polymorphism. The snails live on heterogeneous backgrounds, and thrushes are adept at detecting poor matches. The inheritance of physiological and cryptic diversity is also preserved by heterozygote advantage in the supergene. Recent work has included the effect of shell colour on thermoregulation, and a wider selection of possible genetic influences is also considered.
More complex examples:
Chromosome polymorphism in Drosophila In the 1930s Theodosius Dobzhansky and his co-workers collected Drosophila pseudoobscura and D. persimilis from wild populations in California and neighbouring states. Using Painter's technique, they studied the polytene chromosomes and discovered that all the wild populations were polymorphic for chromosomal inversions. All the flies look alike whatever inversions they carry, so this is an example of a cryptic polymorphism. Evidence accumulated to show that natural selection was responsible: observed frequencies of heterozygotes for inversions of the third chromosome were often much higher than expected under the null assumption. If no form had an advantage, genotype frequencies in a sample should conform to the Hardy–Weinberg proportions p² + 2pq + q² = 1, where 2pq is the expected frequency of heterozygotes (see Hardy–Weinberg equilibrium).
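The null expectation in that test is just the Hardy–Weinberg proportions. A small sketch (Python; the genotype counts are made up for illustration, not Dobzhansky's data) computes the expected heterozygote count from a sample and compares it with the observed one:

```python
# Expected Hardy-Weinberg genotype counts from observed counts of the
# two homozygotes and the heterozygote (illustrative numbers).
def hardy_weinberg_expected(n_AA, n_Aa, n_aa):
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # frequency of one arrangement
    q = 1 - p
    return {"AA": p * p * n, "Aa": 2 * p * q * n, "aa": q * q * n}

observed = (30, 60, 10)
print(hardy_weinberg_expected(*observed))
# {'AA': 36.0, 'Aa': 48.0, 'aa': 16.0} -- the observed 60 heterozygotes
# exceed the expected 48, the signature of heterozygote advantage.
```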
More complex examples:
Using a method invented by L'Héritier and Teissier, Dobzhansky bred populations in population cages, which enabled feeding, breeding and sampling whilst preventing escape. This had the benefit of eliminating migration as a possible explanation of the results. Stocks containing inversions at a known initial frequency can be maintained in controlled conditions. It was found that the various chromosome types do not fluctuate at random, as they would if selectively neutral, but adjust to certain frequencies at which they become stabilised.
More complex examples:
Different proportions of chromosome morphs were found in different areas. There is, for example, a polymorph-ratio cline in D. robusta along an 18-mile (29 km) transect near Gatlinburg, TN, passing from 1,000 feet (300 m) to 4,000 feet (1,200 m). Also, the same areas sampled at different times of year yielded significant differences in the proportions of forms. This indicates a regular cycle of changes which adjust the population to the seasonal conditions. For these results, selection is by far the most likely explanation.
More complex examples:
Lastly, morphs cannot be maintained at the high levels found simply by mutation, nor is drift a possible explanation when population numbers are high. By 1951 Dobzhansky was persuaded that the chromosome morphs were being maintained in the population by the selective advantage of the heterozygotes, as with most polymorphisms.
**Taussig–Bing syndrome**
Taussig–Bing syndrome:
Taussig–Bing syndrome is a cyanotic congenital heart defect in which the patient has both double outlet right ventricle (DORV) and subpulmonic ventricular septal defect (VSD). In DORV, instead of the normal situation where blood from the left ventricle (LV) flows out to the aorta and blood from the right ventricle (RV) flows out to the pulmonary artery, both aorta and pulmonary artery are connected to the RV, and the only path for blood from the LV is across the VSD. When the VSD is subpulmonic (sitting just below the pulmonary artery), the LV blood then flows preferentially to the pulmonary artery. Then the RV blood, by default, flows mainly to the aorta.
Taussig–Bing syndrome:
The clinical manifestations of a Taussig–Bing anomaly, therefore, are much like those of dextro-transposition of the great arteries (but the surgical repair is different). It can also be corrected surgically with the arterial switch operation (ASO).
It is managed with the Rastelli procedure. It is named after Helen B. Taussig and Richard Bing, who first described it in 1949.
**Pulmonary artery sling**
Pulmonary artery sling:
Pulmonary artery sling is a rare condition in which the blood vessels between the heart and the lungs have formed incorrectly before birth. It is a type of cardiovascular condition called a vascular ring. The main treatment is surgery.
Symptoms and signs:
Symptoms include cyanosis, dyspnoea and apnoeic spells. Rarely the condition is asymptomatic and is detected incidentally in adults.
Cause:
In pulmonary artery sling, the left pulmonary artery anomalously originates from a normally positioned right pulmonary artery. The left pulmonary artery arises anterior to the right main bronchus near its origin from the trachea, courses between the trachea and the esophagus and enters the left hilum.
Treatment:
It almost always requires surgical intervention. The surgery is usually open heart surgery with an incision through the sternum.
History:
The first known case of pulmonary artery sling was diagnosed and surgically repaired by Willis J. Potts at Lurie Children's Hospital in 1953.
**Intrinsic hyperpolarizability**
Intrinsic hyperpolarizability:
Intrinsic hyperpolarizability, in physics, is a scale-invariant quantity that can be used to compare molecules of different sizes. It is defined as the hyperpolarizability divided by the Kuzyk limit. Because the quantity is scale invariant, it is independent of the energy scale and the number of electrons of the molecule being evaluated for its nonlinear optical response, and it can therefore be used to compare molecules of different shapes and sizes.
Intrinsic hyperpolarizability:
The intrinsic hyperpolarizability can be used as a figure of merit for comparing molecules for their usefulness in electro-optic applications.
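As a hedged sketch of how the normalization works (Python): the function below uses the commonly quoted two-level form of the Kuzyk limit, β_max = 3^(1/4)·(eħ/√m)^3·N^(3/2)/E10^(7/2), with N the number of contributing electrons and E10 the first excitation energy; treat the exact prefactor as an assumption to check against the primary literature.

```python
# Intrinsic hyperpolarizability = beta / beta_max (scale-invariant).
# The beta_max formula below is the commonly quoted Kuzyk limit; verify
# the prefactor against the original papers before relying on it.
from math import sqrt

E = 1.602176634e-19      # elementary charge, C
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def kuzyk_limit(n_electrons, e10_joules):
    return 3 ** 0.25 * (E * HBAR / sqrt(M_E)) ** 3 \
           * n_electrons ** 1.5 / e10_joules ** 3.5

def intrinsic_beta(beta, n_electrons, e10_joules):
    return beta / kuzyk_limit(n_electrons, e10_joules)  # dimensionless, <= 1 in theory
```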
**Multiphonic**
Multiphonic:
A multiphonic is an extended technique on a monophonic musical instrument (one that generally produces only one note at a time) in which several notes are produced at once. This includes wind, reed, and brass instruments, as well as the human voice. Multiphonic-like sounds on string instruments, both bowed and hammered, have also been called multiphonics, for lack of better terminology and scarcity of research.
Multiphonic:
Multiphonics on wind instruments are primarily a 20th-century technique, though the brass technique of singing while playing has been known since the 18th century and used by composers such as Carl Maria von Weber. Commonly, no more than four notes will be produced at once, though for some chords on some instruments it is possible to get several more.
Technique:
Woodwind instruments On woodwind instruments—e.g., saxophone, clarinet, oboe, bassoon, flute, and recorder—multiphonics can be produced either with new fingerings, by using different embouchures, or voicing the throat with conventional fingerings. There have been numerous fingering guides published for the woodwind player to achieve harmonics. Multiphonics on reed instruments can also be produced in the manners described below for brass instruments.
Technique:
It is said to be impossible to recreate exactly the conditions from one player to the next, due to minute differences in instruments, reeds, and embouchure. This, however, is not entirely true: while a given multiphonic will depend on room temperature and similar factors, multiphonics essentially sound the same because of their harmonic structure. A multiphonic fingering that works for one player may not work for that same player on a different instrument, for a different player on the same instrument, or even after switching reeds. This is often the result of slightly different construction of two instruments from different makers.
Technique:
Brass instruments In brass instruments, the most common method of producing multiphonics is by simultaneously playing the instrument and singing into it. When the sung note has a different frequency than the played note (preferably within the harmonic series of the played note), several new notes that are the sums and differences of the frequencies of the sung note and the played note are produced, leading to the popular term trumpet/trombone/horn growl. This technique is also called "horn chords". The tone sung does not necessarily have to be in the played tone's harmonic series, but the effect is more audible if it is. The tone quality of brass multiphonics is influenced strongly by the voice of the player.
Technique:
Another method is referred to as "lip multiphonics", in which a brass player alters the airflow to blow between partials in the harmonic series of the slide position or valve combination. The outcome is just as stable as any other multiphonic and well structured. When the sounding frequencies add to or subtract from each other, combination tones appear and a common fundamental can be recreated. For example, partials at A 440 and A 220 produce a 660 Hz sum tone and a 220 Hz difference tone, all of which belong to the harmonic series on 220 Hz, reinforcing that fundamental.
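The arithmetic of those combination tones is simple enough to check directly; a small sketch (Python, frequencies in Hz, using the 440/220 pair from the example above):

```python
# Sum and difference ("combination") tones for two sounded frequencies.
def combination_tones(f1, f2):
    return {"sum": f1 + f2, "difference": abs(f1 - f2)}

print(combination_tones(440.0, 220.0))
# {'sum': 660.0, 'difference': 220.0} -- 220, 440 and 660 Hz are all
# members of the harmonic series on 220 Hz, reinforcing that fundamental.
```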
Technique:
A third method, known as 'split tones' or double buzz, produces multiphonics when players make their lips vibrate at different speeds against each other. The most common result is a perfect interval, but the range of intervals produced can vary broadly.
Technique:
String instruments String instruments can also produce multiphonic tones when strings are bowed or hammered (as in piano multiphonics) between the harmonic nodes. This works best on larger instruments like double bass and cello. Another technique involves the rotational oscillation mode of the string, which might be twisted to adjust the rotational tension. Other multiphonic extended techniques used are prepared piano, prepared guitar and 3rd bridge.
Technique:
Vocal multiphonics The technique of producing multiphonics with the voice is called overtone singing (typically with secondary resonant structure) or throat singing (typically with additional tones from throat trills).
There is another technique done in whistling, where whistlers hum in their throats while whistling with the front parts of their mouths. This is well known for achieving a spacey "ring modulation" sound (e.g. by Jim Carrey in The Truman Show). All three vibrations—whistle, voice and throat trill—can be combined also.
How multiphonics work:
In general, when playing a wind instrument, the tone that comes out consists of the fundamental—the pitch usually identified as the note being played—as well as pitches with frequencies that are integer multiples of the frequency of the fundamental. (Only pure sine wave tones lack these overtones.) Normally, only the fundamental pitch is perceived as being played.
By controlling the air flow through the instrument and the shape of the column (by changing fingering or valve position), a player may produce two distinct tones not part of the same harmonic series.
Notation:
Multiphonics may be notated in a score in a variety of ways. When exact pitches are specified, one method of notation is simply to indicate a chord, leaving the performer to figure out what techniques are necessary to achieve it. Common in woodwind music is to specify a particular fingering underneath the required note; as different fingerings produce different qualities of sound, a composer who is concerned about the precise effect created may wish to do this. (The same fingering can produce different results on instruments from different manufacturers, due to variations in construction.) Approximate pitches may be specified by wavy lines or in cluster notation to designate acceptable ranges of sound. There is, however, a wide range of notation used to designate multiphonics, with several individual composers preferring notations not in common use. Piano multiphonic notation can include, among other factors, the numbers of sounding partials or fingering distances on the string. Such notations have been developed in recent studies by C. J. Walter and J. Vesikkala.
Use in literature:
The earliest real uses of multiphonics in the literature are of the brass "horn chord" style. Carl Maria von Weber used this technique in horn compositions, leading up to his well-known Concertino for horn and orchestra of 1815.
Use in literature:
Woodwind multiphonics and brass lip multiphonics did not make appearances in classical music until the 20th century, with pioneering compositions such as Luciano Berio's Sequenzas for solo wind instruments and Proporzioni for solo flute by Franco Evangelisti using them extensively in 1958. Multiphonics are widely used today in contemporary classical music. The technique was used in jazz as early as the 1920s by Adrian Rollini on his bass saxophone. It was then largely forgotten until Illinois Jacquet revived it in the 1940s. Multiphonics were also widely used by John Coltrane and by jazz flautist Jeremy Steig.
**Kuwahara filter**
Kuwahara filter:
The Kuwahara filter is a non-linear smoothing filter used in image processing for adaptive noise reduction. Most filters that are used for image smoothing are linear low-pass filters that effectively reduce noise but also blur out the edges. However the Kuwahara filter is able to apply smoothing on the image while preserving the edges. It is named after Michiyoshi Kuwahara, Ph.D., who worked at Kyoto and Osaka Sangyo Universities in Japan, developing early medical imaging of dynamic heart muscle in the 1970s and 80s.
The Kuwahara operator:
Suppose that I(x,y) is a grey scale image and that we take a square window of size 2a+1 centered around a point (x,y) in the image. This square can be divided into four smaller square regions Q1, …, Q4, each of size (a+1)×(a+1):

Q1(x,y) = [x, x+a] × [y, y+a]
Q2(x,y) = [x−a, x] × [y, y+a]
Q3(x,y) = [x−a, x] × [y−a, y]
Q4(x,y) = [x, x+a] × [y−a, y]

where × is the cartesian product. Pixels located on the borders between two regions belong to both regions, so there is a slight overlap between subregions.
The Kuwahara operator:
The arithmetic mean mi(x,y) and standard deviation σi(x,y) of the four regions centered around a pixel (x,y) are calculated and used to determine the value of the central pixel. The output of the Kuwahara filter Φ(x,y) for any point (x,y) is then given by Φ(x,y) = mi(x,y), where i = arg minj σj(x,y). This means that the central pixel takes the mean value of the region that is most homogeneous. The location of the pixel in relation to an edge plays a great role in determining which region will have the smallest standard deviation. If, for example, the pixel is located on the dark side of an edge, it will most probably take the mean value of the dark region. On the other hand, should the pixel be on the lighter side of an edge, it will most probably take a light value. In the event that the pixel is located on the edge itself, it will take the value of the smoother, least textured region. The fact that the filter takes into account the homogeneity of the regions ensures that it preserves edges, while using the mean creates the blurring effect.
The Kuwahara operator:
Similarly to the median filter the Kuwahara filter uses a sliding window approach to access every pixel in the image. The size of the window is chosen in advance and may vary depending on the desired level of blur in the final image. Bigger windows typically result in the creation of more abstract images whereas small windows produce images that retain their detail. Typically windows are chosen to be square with sides that have an odd number of pixels for symmetry. However, there are variations of the Kuwahara filter that use rectangular windows. Additionally, the subregions do not need to overlap or have the same size as long as they cover all of the window.
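A compact reference implementation helps pin the procedure down. The sketch below (Python with NumPy; grayscale only, borders left unfiltered, square window of side 2a+1 with the four overlapping (a+1)×(a+1) quadrants defined earlier) is a direct, unoptimized translation of the description above:

```python
# Minimal grayscale Kuwahara filter: for each interior pixel, take the
# mean of the quadrant with the smallest standard deviation.
import numpy as np

def kuwahara(img, a=2):
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(a, h - a):
        for x in range(a, w - a):
            quads = [img[y - a:y + 1, x - a:x + 1],   # quadrant including top-left
                     img[y - a:y + 1, x:x + a + 1],   # top-right
                     img[y:y + a + 1, x - a:x + 1],   # bottom-left
                     img[y:y + a + 1, x:x + a + 1]]   # bottom-right
            stds = [q.std() for q in quads]
            out[y, x] = quads[int(np.argmin(stds))].mean()
    return out
```

Production implementations vectorize this with box filters rather than looping per pixel; the loop form is shown only because it mirrors the definition directly.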
Color images:
For color images, the filter should not be performed by applying the filter to each RGB channel separately and then recombining the three filtered color channels to form the filtered RGB image. The main problem with that is that the quadrants will have different standard deviations for each of the channels. For example, the upper left quadrant may have the lowest standard deviation in the red channel, but the lower right quadrant may have the lowest standard deviation in the green channel. This situation would result in the color of the central pixel being determined by different regions, which might result in color artifacts or blurrier edges.
Color images:
To overcome this problem, for color images a slightly modified Kuwahara filter must be used. The image is first converted into another color space, the HSV color model. The modified filter then operates on only the "brightness" channel, the Value coordinate in the HSV model. The variance of the "brightness" of each quadrant is calculated to determine the quadrant from which the final filtered color should be taken. The filter then produces an output for each channel corresponding to the mean of that channel from the quadrant that had the lowest standard deviation in "brightness". This ensures that only one region determines the RGB values of the central pixel.
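A hedged per-pixel sketch of that selection rule (Python with NumPy; it assumes the four RGB quadrant blocks and their matching V-channel blocks have already been sliced out, e.g. as in the grayscale code above):

```python
# HSV-guided color Kuwahara step: the V (brightness) channel alone picks
# the quadrant; all three RGB channels are then averaged over it.
import numpy as np

def color_kuwahara_pixel(rgb_quads, v_quads):
    """rgb_quads: four (k, k, 3) RGB blocks; v_quads: four (k, k) V blocks."""
    i = int(np.argmin([v.std() for v in v_quads]))   # most homogeneous in V
    return rgb_quads[i].reshape(-1, 3).mean(axis=0)  # one region sets R, G and B
```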
Color images:
ImageMagick uses a similar approach, but using the Rec. 709 Luma as the brightness metric.
Applications:
Originally the Kuwahara filter was proposed for use in processing RI-angiocardiographic images of the cardiovascular system. The fact that edges are preserved when smoothing makes it especially useful for feature extraction and segmentation, and explains why it is used in medical imaging. The Kuwahara filter, however, also finds many applications in artistic imaging and fine-art photography due to its ability to remove textures and sharpen the edges of photographs. The level of abstraction helps create a desirable painting-like effect in artistic photographs, especially in the case of the colored-image version of the filter. These applications have seen great success and have encouraged similar research in the field of image processing for the arts.
Applications:
Although the vast majority of applications have been in the field of image processing, there have been cases that use modifications of the Kuwahara filter for machine learning tasks such as clustering. The Kuwahara filter has been implemented in CVIPtools.
Drawbacks and restrictions:
The Kuwahara filter despite its capabilities in edge preservation has certain drawbacks.
Drawbacks and restrictions:
At first glance it is noticeable that the Kuwahara filter does not take into account the case where two regions have equal standard deviations. This is not often the case in real images, since it is rather hard to find two regions with exactly the same standard deviation due to the noise that is always present. In cases where two regions have similar standard deviations, the value of the center pixel could be decided at random by the noise in these regions. Again, this would not be a problem if the regions had the same mean. However, it is not unusual for regions of very different means to have the same standard deviation. This makes the Kuwahara filter susceptible to noise. Different ways have been proposed for dealing with this issue, one of which is to set the value of the center pixel to (m1 + m2)/2 in cases where the standard deviations of the two regions do not differ by more than a certain threshold D.

The Kuwahara filter is also known to create block artifacts in images, especially in regions that are highly textured. These blocks disrupt the smoothness of the image and are considered to have a negative effect on its aesthetics. This phenomenon occurs due to the division of the window into square regions. A way to overcome this effect is to use windows that are not rectangular (e.g., circular windows) and to separate them into more non-rectangular regions. There have also been approaches where the filter adapts its window depending on the input image.
Extensions of the Kuwahara filter:
The success of the Kuwahara filter has spurred the development of edge-enhancing smoothing filters. Several variations have been proposed for similar use, most of which attempt to deal with the drawbacks of the original Kuwahara filter.
Extensions of the Kuwahara filter:
The "Generalized Kuwahara filter" proposed by P. Bakker considers several windows that contain a fixed pixel. Each window is then assigned an estimate and a confidence value. The value of the fixed pixel then takes the value of the estimate of the window with the highest confidence. This filter is not characterized by the same ambiguity in the presence of noise and manages to eliminate the block artifacts.
Extensions of the Kuwahara filter:
The "Mean of Least Variance"(MLV) filter, proposed by M.A. Schulze also produces edge-enhancing smoothing results in images. Similarly to the Kuwahara filter it assumes a window of size 2d−1×2d−1 but instead of searching amongst four subregions of size d×d for the one with minimum variance it searches amongst all possible d×d subregions. This means the central pixel of the window will be assigned the mean of the one subregion out of a possible d2 that has the smallest variance.
Extensions of the Kuwahara filter:
The "Adaptative Kuwahara filter", proposed by K. Bartyzel, is a combination of the anisotropic Kuwahara filter and the adaptative median filter. In comparison with the standard Kuwahara filter, both the objects and the edges retain a better quality. As opposed to the standard Kuwahara filter, the window size is changing, depending on the local properties of the image. For each of the four basic areas surrounding a pixel, the mean and variance are calculated. Then, the window size of each of the four basic areas is increased by 1. If the variance of a new window is smaller than before the resizing of the filter window, then the mean and variance of the basic area will take the newly calculated values. The window size continues to be increased until the new variance is greater than the previous one, or the maximum allowable window size is reached. The variance of the four areas are then compared, and the value of the output pixel is the average value of the basic area for which the variance was the smallest.
Extensions of the Kuwahara filter:
A more recent attempt at edge-enhancing smoothing was proposed by J. E. Kyprianidis. The filter's output is a weighted sum of the local averages, with more weight given to the averages of more homogeneous regions.
**American Academy of Periodontology**
American Academy of Periodontology:
The American Academy of Periodontology (AAP) is the non-profit membership association for periodontists, dental professionals specializing in the prevention, diagnosis, and treatment of diseases affecting the gums and supporting structures of the teeth, and in the placement and maintenance of dental implants.
American Academy of Periodontology:
The AAP was founded in 1914 by Drs. Gillette Hayden and Grace Rogers Spalding. In 1916 Gillette Hayden served as its first female president. The AAP currently has 7,500 members including periodontists and general dentists from all 50 states and around the world. The AAP also publishes the Journal of Periodontology, a monthly scholarly journal. The mission of the AAP is to champion member success and professional partnerships for optimal patient health and quality of life. Periodontics is one of nine dental specialties recognized by the American Dental Association. Additionally, the AAP aims to educate the public about the link between periodontal disease and systemic diseases and advocates for periodontal science, research, and clinical advances. Membership in the AAP is open to all licensed dentists and offers important professional benefits and services.
**Aller Formation**
Aller Formation:
The Aller Formation is a geologic formation in Germany. It preserves fossils dating back to the Permian period.
**Nanogeoscience**
Nanogeoscience:
Nanogeoscience is the study of nanoscale phenomena related to geological systems. Predominantly, this is investigated by studying environmental nanoparticles between 1–100 nanometers in size. Other applicable fields of study include studying materials with at least one dimension restricted to the nanoscale (e.g. thin films, confined fluids) and the transfer of energy, electrons, protons, and matter across environmental interfaces.
The atmosphere:
As more dust enters the atmosphere due to the consequences of human activity (from direct effects, such as clearing of land and desertification, versus indirect effects, such as global warming), it becomes more important to understand the effects of mineral dust on the gaseous composition of the atmosphere, cloud formation conditions, and global-mean radiative forcing (i.e., heating or cooling effects).
The ocean:
Oceanographers generally study particles that measure 0.2 micrometres and larger, which means a lot of nanoscale particles are not examined, particularly with respect to formation mechanisms.
The soils:
Water–rock–bacteria nanoscience Although by no means fully developed, nearly all aspects (both geo- and bio-processes) of weathering, soil, and water–rock interaction science are inexorably linked to nanoscience. Within the Earth's near-surface, materials that are broken down, as well as materials that are produced, are often in the nanoscale regime. Further, as organic molecules, simple and complex, as well as bacteria and all flora and fauna in soils and rocks interact with the mineral components present, nanodimensions and nanoscale processes are the order of the day.

Metal transport nanoscience On land, researchers study how nanosized minerals capture toxins such as arsenic, copper, and lead from the soil. Facilitating this process, called soil remediation, is a tricky business.

Nanogeoscience is in a relatively early stage of development. Future directions of nanoscience in the geosciences will include determining the identity, distribution, and unusual chemical properties of nanosized particles and/or films in the oceans, on the continents, and in the atmosphere, and how they drive Earth processes in unexpected ways. Further, nanotechnology will be key to developing the next generation of Earth and environmental sensing systems.
Size-dependent stability and reactivity of nanoparticles:
Nanogeoscience deals with structures, properties and behaviors of nanoparticles in soils, aquatic systems and atmospheres. One of the key features of nanoparticles is the size-dependence of nanoparticle stability and reactivity. This arises from the large specific surface area and differences in surface atomic structure of nanoparticles at small particle sizes. In general, the free energy of nanoparticles is inversely proportional to their particle size. For materials that can adopt two or more structures, size-dependent free energy may result in phase stability crossover at certain sizes. Free energy reduction drives crystal growth (atom-by-atom or by oriented attachment), which may again drive the phase transformation due to the change of the relative phase stability at increasing sizes. These processes impact the surface reactivity and mobility of nanoparticles in natural systems.
Size-dependent stability and reactivity of nanoparticles:
Well-identified size-dependent phenomena of nanoparticles include: Phase stability reversal of bulk (macroscopic) particles at small sizes. Usually, a less stable bulk-phase at low temperature (and/or low pressure) becomes more stable than the bulk-stable phase as the particle size decreases below a certain critical size. For instance, bulk anatase (TiO2) is metastable with respect to bulk rutile (TiO2). However, in air, anatase becomes more stable than rutile at particle sizes below 14 nm. Similarly, below 1293 K, wurtzite (ZnS) is less stable than sphalerite (ZnS). In vacuum, wurtzite becomes more stable than sphalerite when the particle size is less than 7 nm at 300 K. At very small particle sizes, the addition of water to the surface of ZnS nanoparticles can induce a change in nanoparticle structure and surface-surface interactions can drive a reversible structural transformation upon aggregation/disaggregation. Other examples of size-dependent phase stability include systems of Al2O3, ZrO2, C, CdS, BaTiO3, Fe2O3, Cr2O3, Mn2O3, Nb2O3, Y2O3, and Au-Sb.
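A toy thermodynamic model shows how such crossovers arise. In the sketch below (Python; the functional form G(r) = G_bulk + 3γVm/r for spherical particles is a standard textbook approximation, and the numbers are illustrative assumptions, not measured values for TiO2 or ZnS):

```python
# Size-dependent molar free energy of a spherical particle and the radius
# at which two phases swap stability (all numbers are illustrative).
def molar_free_energy(G_bulk, gamma, Vm, r):
    """G_bulk in J/mol, gamma in J/m^2, Vm in m^3/mol, r in m."""
    return G_bulk + 3 * gamma * Vm / r

def crossover_radius(dG_bulk, d_gamma, Vm):
    """Radius below which the bulk-metastable, lower-surface-energy phase wins."""
    return 3 * d_gamma * Vm / dG_bulk

# e.g. bulk penalty 3 kJ/mol, surface-energy advantage 0.5 J/m^2,
# molar volume 2e-5 m^3/mol:
print(crossover_radius(3e3, 0.5, 2e-5))   # 1e-08 m, i.e. a ~10 nm crossover
```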
Size-dependent stability and reactivity of nanoparticles:
Phase transformation kinetics is size-dependent and transformations usually occur at low temperatures (less than several hundred degrees). Under such conditions, rates of surface nucleation and bulk nucleation are low due to their high activation energies. Thus, phase transformation occurs predominantly via interface nucleation that depends on contact between nanoparticles. As a consequence, the transformation rate is particle number (size)-dependent and it proceeds faster in densely packed (or highly aggregated) than in loosely packed nanoparticles. Complex concurrent phase transformation and particle coarsening often occur in nanoparticles.
Size-dependent stability and reactivity of nanoparticles:
Size-dependent adsorption on nanoparticles and oxidation of nanominerals. These size-dependent properties highlight the importance of particle size in nanoparticle stability and reactivity.
**Stellar population**
Stellar population:
In 1944, Walter Baade categorized groups of stars within the Milky Way into stellar populations.
Stellar population:
In the abstract of the article by Baade, he recognizes that Jan Oort originally conceived this type of classification in 1926. Baade observed that bluer stars were strongly associated with the spiral arms, and yellow stars dominated near the central galactic bulge and within globular star clusters. Two main divisions were defined as population I and population II, with another newer, hypothetical division called population III added in 1978.
Stellar population:
Among the population types, significant differences were found with their individual observed stellar spectra. These were later shown to be very important and were possibly related to star formation, observed kinematics, stellar age, and even galaxy evolution in both spiral and elliptical galaxies. These three simple population classes usefully divided stars by their chemical composition or metallicity. By definition, each population group shows the trend where decreasing metal content indicates increasing age of stars. Hence, the first stars in the universe (very low metal content) were deemed population III, old stars (low metallicity) as population II, and recent stars (high metallicity) as population I. The Sun is considered population I, a recent star with a relatively high 1.4% metallicity. Note that astrophysics nomenclature considers any element heavier than helium to be a "metal", including chemical non-metals such as oxygen.
Stellar development:
Observation of stellar spectra has revealed that stars older than the Sun have fewer heavy elements compared with the Sun. This immediately suggests that metallicity has evolved through the generations of stars by the process of stellar nucleosynthesis.
Stellar development:
Formation of the first stars Under current cosmological models, all matter created in the Big Bang was mostly hydrogen (75%) and helium (25%), with only a very tiny fraction consisting of other light elements such as lithium and beryllium. When the universe had cooled sufficiently, the first stars were born as population III stars, without any contaminating heavier metals. This is postulated to have affected their structure so that their stellar masses became hundreds of times more than that of the Sun. In turn, these massive stars also evolved very quickly, and their nucleosynthetic processes created the first 26 elements (up to iron in the periodic table). Many theoretical stellar models show that most high-mass population III stars rapidly exhausted their fuel and likely exploded in extremely energetic pair-instability supernovae. Those explosions would have thoroughly dispersed their material, ejecting metals into the interstellar medium (ISM), to be incorporated into the later generations of stars. Their destruction suggests that no galactic high-mass population III stars should be observable. However, some population III stars might be seen in high-redshift galaxies whose light originated during the earlier history of the universe. Scientists have found evidence of an extremely small ultra metal-poor star, slightly smaller than the Sun, found in a binary system of the spiral arms in the Milky Way. The discovery opens up the possibility of observing even older stars. Stars too massive to produce pair-instability supernovae would have likely collapsed into black holes through a process known as photodisintegration. Here some matter may have escaped during this process in the form of relativistic jets, and this could have distributed the first metals into the universe.
Stellar development:
Formation of the observed stars The oldest stars observed thus far, known as population II, have very low metallicities; as subsequent generations of stars were born, they became more metal-enriched, as the gaseous clouds from which they formed received the metal-rich dust manufactured by previous generations of stars from population III.
As those population II stars died, they returned metal-enriched material to the interstellar medium via planetary nebulae and supernovae, enriching further the nebulae, out of which the newer stars formed. These youngest stars, including the Sun, therefore have the highest metal content, and are known as population I stars.
Chemical classification by Baade:
Population I stars Population I, or metal-rich, stars are young stars with the highest metallicity out of all three populations and are more commonly found in the spiral arms of the Milky Way galaxy. The Sun is an example of a metal-rich star and is considered an intermediate population I star, while the sun-like μ Arae is much richer in metals. Population I stars usually have regular elliptical orbits about the Galactic Center, with a low relative velocity. It was earlier hypothesized that the high metallicity of population I stars makes them more likely to possess planetary systems than the other two populations, because planets, particularly terrestrial planets, are thought to be formed by the accretion of metals. However, observations of the Kepler Space Telescope data have found smaller planets around stars with a range of metallicities, while only larger, potential gas giant planets are concentrated around stars with relatively higher metallicity – a finding that has implications for theories of gas-giant formation. Between the intermediate population I and the population II stars comes the intermediate disc population.
Chemical classification by Baade:
Population II stars Population II, or metal-poor, stars are those with relatively little of the elements heavier than helium. These objects were formed during an earlier time of the universe. Intermediate population II stars are common in the bulge near the centre of the Milky Way, whereas population II stars found in the galactic halo are older and thus more metal-deficient. Globular clusters also contain high numbers of population II stars. A characteristic of population II stars is that despite their lower overall metallicity, they often have a higher ratio of "alpha elements" (elements produced by the alpha process, like oxygen and neon) relative to iron (Fe) as compared with population I stars; current theory suggests that this is the result of type II supernovas being more important contributors to the interstellar medium at the time of their formation, whereas type Ia supernova metal-enrichment came at a later stage in the universe's development. Scientists have targeted these oldest stars in several different surveys, including the HK objective-prism survey of Timothy C. Beers et al. and the Hamburg-ESO survey of Norbert Christlieb et al., originally started for faint quasars. Thus far, they have uncovered and studied in detail about ten ultra-metal-poor (UMP) stars (such as Sneden's Star, Cayrel's Star, BD +17° 3248) and three of the oldest stars known to date: HE 0107-5240, HE 1327-2326 and HE 1523-0901. Caffau's star was identified as the most metal-poor star yet when it was found in 2012 using Sloan Digital Sky Survey data. However, in February 2014 the discovery of an even lower-metallicity star was announced, SMSS J031300.36-670839.3, located with the aid of SkyMapper astronomical survey data. Less extreme in their metal deficiency, but nearer and brighter and hence longer known, are HD 122563 (a red giant) and HD 140283 (a subgiant).
Chemical classification by Baade:
Population III stars Population III stars are a hypothetical population of extremely massive, luminous and hot stars with virtually no "metals", except possibly for intermixing ejecta from other nearby, early population III supernovae. The term was first introduced by Neville J. Woolf in 1965. Such stars are likely to have existed in the very early universe (i.e., at high redshift) and may have started the production of chemical elements heavier than hydrogen, which are needed for the later formation of planets and life as we know it. The existence of population III stars is inferred from physical cosmology, but they have not yet been observed directly. Indirect evidence for their existence has been found in a gravitationally lensed galaxy in a very distant part of the universe. Their existence may account for the fact that heavy elements – which could not have been created in the Big Bang – are observed in quasar emission spectra. They are also thought to be components of faint blue galaxies. These stars likely triggered the universe's period of reionization, a major phase transition of the hydrogen gas composing most of the interstellar medium. Observations of the galaxy UDFy-38135539 suggest that it may have played a role in this reionization process. The European Southern Observatory discovered a bright pocket of early population stars in the very bright galaxy Cosmos Redshift 7 from the reionization period around 800 million years after the Big Bang, at z = 6.60. The rest of the galaxy has some later, redder population II stars. Some theories hold that there were two generations of population III stars.
Chemical classification by Baade:
Current theory is divided on whether the first stars were very massive or not. One possibility is that these stars were much larger than current stars: several hundred solar masses, and possibly up to 1,000 solar masses. Such stars would be very short-lived and last only 2–5 million years. Such large stars may have been possible due to the lack of heavy elements and a much warmer interstellar medium from the Big Bang. Conversely, theories proposed in 2009 and 2011 suggest that the first star groups might have consisted of a massive star surrounded by several smaller stars. The smaller stars, if they remained in the birth cluster, would accumulate more gas and could not survive to the present day, but a 2017 study concluded that if a star of 0.8 solar masses (M☉) or less was ejected from its birth cluster before it accumulated more mass, it could survive to the present day, possibly even in our Milky Way galaxy. Analysis of data on extremely low-metallicity population II stars such as HE 0107-5240, which are thought to contain the metals produced by population III stars, suggests that these metal-free stars had masses of 20–130 solar masses. On the other hand, analysis of globular clusters associated with elliptical galaxies suggests pair-instability supernovae, which are typically associated with very massive stars, were responsible for their metallic composition. This also explains why there have been no low-mass stars with zero metallicity observed, although models have been constructed for smaller population III stars. Clusters containing zero-metallicity red dwarfs or brown dwarfs (possibly created by pair-instability supernovae) have been proposed as dark matter candidates, but searches for these types of MACHOs through gravitational microlensing have produced negative results. Detection of population III stars is a goal of NASA's James Webb Space Telescope. New spectroscopic surveys, such as SEGUE or SDSS-II, may also locate population III stars. On 8 December 2022, astronomers reported the possible detection of Population III stars.
**Lean Hog**
Lean Hog:
Lean Hog is a type of hog (pork) futures contract that can be used to hedge and to speculate on pork prices in the US.
Lean Hog:
Lean Hog futures and options are traded on the Chicago Mercantile Exchange (CME), which introduced Lean Hog futures contracts in 1966. The contracts are for 40,000 pounds of Lean Hogs and call for cash settlement based on the CME Lean Hog Index, which is a two-day weighted average of cash markets. The minimum tick size for the contract is $0.00025 per pound (0.025 cents), with each tick valued at $10 USD. Trades on the contract are subject to price limits of $0.0375 per pound above or below the previous day's contract settlement price, with the exception that there are no daily price limits in the expiring-month contract during the last two trading days.

Lean Hog futures prices are widely used by U.S. pork producers as reference prices in marketing contracts for selling their hogs. Usage of marketing contracts tied to pork futures prices is correlated, and tends to increase, with the size of the producer. In addition, hog producers often trade pork futures contracts directly as part of a risk management program.

Lean Hog futures prices are also part of both the Bloomberg Commodity Index and the S&P GSCI commodity index, which are benchmark indices widely followed in financial markets by traders and institutional investors. Their weighting in these commodity indices gives Lean Hog futures prices non-trivial influence on returns on a wide range of investment funds and portfolios. Conversely, traders and investors have become non-trivial participants in the market for Lean Hog futures.

Lean hog futures contracts are often grouped together with feeder cattle and live cattle futures contracts as livestock futures contracts. These commodities share many fundamental demand and supply risks, such as long feeding periods, weather, feed prices, and consumer sentiment toward meat consumption, which makes grouping them together useful for commercial discussions about both the commodities and their futures contracts. Commodity indices have followed this practice and grouped these futures contracts together in livestock futures contract categories.
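The contract arithmetic implied by those specifications is easy to verify; a short sketch (Python; prices in dollars per pound, with the entry and exit prices invented for illustration):

```python
# Lean Hog contract arithmetic: 40,000 lb per contract, $0.00025/lb tick.
CONTRACT_LBS = 40_000
TICK = 0.00025                         # dollars per pound

def long_pnl(entry_price, exit_price, contracts=1):
    """Dollar P&L of a long position; prices in $/lb."""
    return (exit_price - entry_price) * CONTRACT_LBS * contracts

print(TICK * CONTRACT_LBS)             # 10.0 -> each tick is worth $10
print(long_pnl(0.8500, 0.8625))        # 500.0 for a 1.25-cent/lb move
```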
**Stair climbing**
Stair climbing:
Stair climbing is the climbing of a flight of stairs. It is often described as a "low-impact" exercise, particularly for people who have recently started trying to get in shape. A common exhortation in health pop culture is "Take the stairs, not the elevator".
Energy expenditure:
In one study based on mean oxygen uptake and heart rate, researchers estimated that ascending a 15 cm (5.9 inches) step expends 0.46 kJ (0.11 kcal) for the average person, and descending a step expends 0.21 kJ (0.05 kcal). The study concluded that stair-climbing met the minimum requirements for cardiorespiratory benefits, and considered stair-climbing suitable for promotion of physical activity.
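Those per-step figures turn into whole-stairwell estimates with simple multiplication; a sketch (Python; the 160-step stairwell is an invented example, while the per-step values are the study's averages quoted above):

```python
# Energy estimate from the quoted per-step averages (15 cm steps).
KJ_UP, KJ_DOWN = 0.46, 0.21         # kJ per step, ascending / descending
KJ_PER_KCAL = 4.184

def round_trip_kj(steps):
    return steps * (KJ_UP + KJ_DOWN)

print(round_trip_kj(160))                 # 107.2 kJ for 160 steps up and back down
print(round_trip_kj(160) / KJ_PER_KCAL)   # ~25.6 kcal
```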
Competitive sport:
Stair climbing has developed into the organized sport tower running. Every year several stair climbing races are held around the world with the competitors running up the stairs of some of the world's tallest buildings and towers (e.g., the Empire State Building, Gran Hotel Bali), or on outside stairs such as the Niesenbahn Stairway. World class athletes from the running and cycling worlds regularly compete in such events. Some have specialized exclusively in stair climbing races. Prizes, awards, and other accolades are given for the top performers by gender and age group. Stair climbing is one of the most grueling of sports, requiring competitors to move their entire body weight vertically, as well as horizontally.
Competitive sport:
The results of more than 160 races on all continents are evaluated each year for the Towerrunning World Cup. The most important, about 18 so-called "Masters Races", have a predefined factor of 1.5 to 2.5, whereas all other races are given 0.4, 0.7 or 1 depending on the class and internationality of the participants. The 2010 World Cup winners were Melissa Moon (NZL) and Thomas Dold (GER). The 2011 winners were Dold (3rd time) and Cristina Bonacina (ITA). The World Cup Final 2012 will be hosted on December 8 in Bogota (COL).
Competitive sport:
An annual competition, 'Girnar Arohan Spardha', is held in Junagadh, India, and involves a race to climb and descend the steps of the Girnar mountain.
ESPN8 The Ocho has a televised event called "Slippery Stairs".
Infants and safe stair descent:
Falling down a flight of stairs, or even just a couple of steps, is very common during infants' first exposure to stair descent. Infants are more likely to fall down stairs than any other age group. In the United States, approximately 73,000 children between the ages of 6 months and 2 years were reported injured on stairs or steps in 2009. Stair descent involves perceptual, cognitive and motor abilities. It relies heavily on visual information to enable balance and accuracy. Seeing obstacles ahead helps stair descent, but for infants, keeping their heavy head balanced enough to look down at both their feet and the objective makes the process very difficult (Hurlke, 1998). Not seeing the task ahead causes confusion and disrupts concentration.
Infants and safe stair descent:
Infants tend to adopt one of several strategies closely associated with stair descent: Scooting: where the infant sits on the step and thrusts forward using their bottom to land on the next step.
Backing: where the infant turns around (to counter the motion of climbing), and slowly lowers one foot at a time to descend to the lower step. Backing distributes the weight evenly on all four limbs, but means that the child cannot see what it is doing.
Walking: where a child descends in an upright position facing the bottom of the staircase, lowering one foot at a time to the next step. Some limited norms for stair climbing motor milestones have been established, but the process had historically been viewed like any other motor milestone - as a universal skill acquired through development.
Infants and safe stair descent:
One study looked at the typical age of onset for stair ascent and descent and compared them to other developmental milestones. It also looked at the stair climbing strategies that infants use. The study covered 732 infants and included parental assessment and documentation of motor skill achievements, along with in-depth interviews with parents about the strategies involved and child assessment using a laboratory stair apparatus. The results showed that children younger than 9 months of age were unable to go up or down stairs at all, or were only able to go up. By around 13 months, most infants could go upstairs and about half could ascend and descend stairs. Infants typically learned to descend stairs after they had already learned to ascend, with only about 12% achieving both stair-climbing skills at the same time. On average in this study, infants learned to crawl and cruise before learning to ascend stairs independently. Infants were able to climb up the stairs before they could walk, but walking tended to come before independent stair descent. While most of the infants had prior stair experience, the presence or absence of stairs in the home did not influence the onsets of crawling, cruising or stair descent. However, lack of exposure to stairs resulted in a significant time-lag between first learning to ascend and to descend. Differences in housing types created a so-called 'suburban advantage' (i.e. houses with stairs versus flats/apartments without).
Infants and safe stair descent:
Sliding backwards feet first is the safest approach to descending stairs because the midline of the body stays close to the staircase, providing an even weight distribution on all four limbs. This might explain why it is exceptionally difficult for older people to descend stairs, because their midline is so far away due to longer arms and legs. Other research suggests that infants' descent strategies may be related to their cognitive abilities. This is why most parents teach their children to back down stairs: even though it is the most cognitively difficult descent strategy, it is also the safest.
Records:
On 28 September 2014, Christian Riedl climbed Tower 185 in Frankfurt, Germany 71 times in 12 hours for a total of 43,128 ft (13.14 km).
From 5–6 October 2007, Kurt Hess climbed Esterli Tower in Switzerland 413 times in less than 24 hours for a total of 60,974 ft (18.585 km).
**Uncapping**
Uncapping:
Uncapping, in the context of cable modems, refers to a number of activities performed to alter an Internet service provider's modem settings. It is sometimes done for the sake of bandwidth (i.e. by buying a 512kbit/s access modem and then altering it to 10Mbit/s), pluggable interfaces (as by using more than one public ID), or any configurable options a DOCSIS modem can offer. However, uncapping may be considered an illegal activity, such as theft of service.
Methods:
There are several methods used to uncap a cable modem, by hardware, software, tricks, alterations, and modifications.
Methods:
One of the most popular modifications is used on Motorola modems (such as the SB3100, SB4100, and SB4200 models); by spoofing the Internet service provider's TFTP server, the modem is made to accept a different configuration file than the one provided by the TFTP server. This configuration file tells the modem the download and upload caps it should enforce. An example of spoofing would be to edit the configuration file, which requires a DOCSIS config editor, or to replace it with one obtained from a faster modem (e.g. through a Gnutella network). An alternate method employs DHCPforce: by flooding a modem with faked DHCP packets (which contain the configuration filename, TFTP server IP, etc.), one can convince the modem to accept any desired configuration file, even one from one's own server (provided the server is routed, of course).
Methods:
Another, more advanced method is to attach a TTL serial adapter to the modem's RS-232 console port and gain direct access to the modem's console, making it download new firmware that can then be configured via a simple web interface. Examples include SIGMA, a firmware add-on that expands the features of the underlying firmware, among others.
**Rhenium disulfide**
Rhenium disulfide:
Rhenium disulfide is an inorganic compound of rhenium and sulfur with the formula ReS2. It has a layered structure where atoms are strongly bonded within each layer. The layers are held together by weak Van der Waals bonds, and can be easily peeled off from the bulk material.
Production:
ReS2 is found in nature as the mineral rheniite. It can be synthesized by the reaction between rhenium and sulfur at 1000 °C, or by the decomposition of rhenium(VII) sulfide at 1100 °C:

Re + 2 S → ReS2
Re2S7 → 2 ReS2 + 3 S

Nanostructured ReS2 can usually be obtained through mechanical exfoliation, chemical vapor deposition (CVD), and chemical and liquid exfoliation. Larger crystals can be grown with the assistance of a liquid carbonate flux at high pressure. It is widely used in electronic and optoelectronic devices, energy storage, and photocatalytic and electrocatalytic reactions.
Properties:
It is a two-dimensional (2D) group VII transition metal dichalcogenide (TMD). ReS2 was isolated down to monolayers, which are only one unit cell in thickness, for the first time in 2014. These monolayers show layer-independent electrical, optical, and vibrational properties much different from those of other TMDs.
Structure:
Bulk ReS2 has a layered structure and a platelet-like habit. Different crystal structures have been proposed for ReS2 based on single-crystal X-ray diffraction studies. While all authors agree that the lattice is triclinic, the reported cell parameters and atomic arrangements differ slightly. The earliest work describes ReS2 in a triclinic unit cell (sp. gr. P 1¯, a = 0.6455 nm, b = 0.6362 nm, c = 0.6401 nm, α = 105.04°, β = 91.60°, γ = 118.97°) as a distorted variant of the CdCl2 prototype (1T structure, trigonal space group R 3¯ m). In comparison with the ideal octahedral coordination of the metal atoms in CdCl2, the Re atoms in ReS2 are displaced from the centers of the surrounding S6 octahedra and form Re4 clusters that are linked into chains in the b direction. A later study proposed a more accurate description of the crystal structure. It reports a different triclinic cell (sp. gr. P 1¯, a = 0.6352 nm, b = 0.6446 nm, c = 1.2779 nm, α = 91.51°, β = 105.17°, γ = 118.97°) with the c parameter doubled and with a and b, α and β swapped. There are two layers in this unit cell, related by symmetry centers, and the chains of clusters run along the a axis. Each layer forms parallelogram-shaped connected clusters with Re–Re distances of ca. 0.27–0.28 nm within the cluster and ca. 0.29 nm between clusters. One more structure description of ReS2 has been published, in yet another triclinic cell (sp. gr. P 1¯, a = 0.6417 nm, b = 0.6510 nm, c = 0.6461 nm, α = 121.10°, β = 88.38°, γ = 106.47°) where only one layer is present and the centers of symmetry lie in the Re layer. The current consensus is that this latter work might have overlooked the doubling of the c parameter captured in the preceding study.
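One quick consistency check on those three cells is their volume, via the standard triclinic formula V = abc·√(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ). The sketch below (Python; cell parameters copied from the studies quoted above) shows the doubled-c cell enclosing roughly twice the volume of the single-layer cell:

```python
# Triclinic cell volumes for the reported ReS2 unit cells (nm, degrees).
from math import cos, radians, sqrt

def triclinic_volume(a, b, c, alpha, beta, gamma):
    ca, cb, cg = (cos(radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * sqrt(1 - ca*ca - cb*cb - cg*cg + 2*ca*cb*cg)

print(triclinic_volume(0.6455, 0.6362, 0.6401, 105.04, 91.60, 118.97))  # ~0.218 nm^3, one layer
print(triclinic_volume(0.6352, 0.6446, 1.2779, 91.51, 105.17, 118.97))  # ~0.435 nm^3, two layers
```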
**Abelian 2-group**
Abelian 2-group:
In mathematics, an Abelian 2-group is a higher-dimensional analogue of an Abelian group, in the sense of higher algebra, originally introduced by Alexander Grothendieck while studying abstract structures surrounding Abelian varieties and Picard groups. More concretely, an Abelian 2-group is a groupoid A with a bifunctor +: A×A → A which acts formally like the addition of an Abelian group. Namely, the bifunctor + has a notion of commutativity, associativity, and an identity structure. Although this seems like a rather lofty and abstract structure, there are several very concrete examples of Abelian 2-groups. In fact, some of them provide prototypes for more complex examples of higher algebraic structures, such as Abelian n-groups.
Definition:
An Abelian 2-group is a groupoid A with a bifunctor +: A × A → A and natural transformations τ: X + Y ⇒ Y + X and σ: (X + Y) + Z ⇒ X + (Y + Z) which satisfy a host of axioms ensuring these transformations behave similarly to commutativity (τ) and associativity (σ) for an Abelian group. One of the motivating examples of such a category comes from the Picard category of line bundles on a scheme (see below).
Examples:
Picard category For a scheme or variety X, there is an Abelian 2-group Pic(X) whose objects are line bundles L and whose morphisms are isomorphisms of line bundles. Notice that for a given line bundle L, Aut(L) ≅ OX∗, since the only automorphisms of a line bundle are given by non-vanishing functions on X. The additive structure + is given by the tensor product ⊗ on line bundles. This makes it clearer why there should be natural transformations instead of equalities of functors: for example, we only have an isomorphism of line bundles L ⊗ L′ ≅ L′ ⊗ L, not a direct equality. This isomorphism is independent of the line bundles chosen and is functorial, hence it gives the natural transformation τ: (− ⊗ −) ⇒ (− ⊗ −) switching the components. Associativity similarly follows from the associativity of tensor products of line bundles.
Examples:
Two-term chain complexes Another source of Picard categories is two-term chain complexes of Abelian groups A−1 →d A0, which have a canonical groupoid structure associated to them. We can take the set of objects to be the abelian group A0 and the set of arrows to be A−1 ⊕ A0. The source morphism s of an arrow (a−1, a0) is the projection s(a−1, a0) = a0, and the target morphism t is t(a−1, a0) = d(a−1) + a0. Notice this definition implies that the automorphism group of any object a0 is Ker(d). If we repeat this construction for sheaves of abelian groups over a site X (or a topological space), we get a sheaf of Abelian 2-groups. One might conjecture that this construction yields all such categories, but this is not the case: the construction must be generalized to spectra to give a precise classification.
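To make this construction concrete, here is a minimal computational sketch (not from the source; the choice A−1 = A0 = Z/8 with d given by doubling is purely illustrative) of the objects, arrows, and automorphism groups of the groupoid attached to a two-term complex:

```python
# A sketch of the groupoid attached to a two-term chain complex d: A_-1 -> A_0,
# under the illustrative assumption A_-1 = A_0 = Z/8 with d = doubling.
N = 8
def d(x):
    return (2 * x) % N

# Objects are elements of A_0; an arrow is a pair (a_m1, a0) in A_-1 (+) A_0.
def source(a_m1, a0):
    return a0

def target(a_m1, a0):
    return (d(a_m1) + a0) % N

# Automorphisms of an object a0 are arrows with source = target, i.e. Ker(d).
ker_d = [x for x in range(N) if d(x) == 0]
print(ker_d)  # [0, 4]: every object has automorphism group isomorphic to Z/2
```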
Examples:
Example of an Abelian 2-group in algebraic geometry One example is the cotangent complex of a local complete intersection scheme X, which is given by the two-term complex LX∙ = [i∗I/I2 → i∗ΩY] for an embedding i: X → Y. There is a direct categorical interpretation of this Abelian 2-group from deformation theory using the Exalcomm category. Note that, in addition to using a two-term chain complex, one could instead consider a longer chain complex of abelian groups and construct an Abelian n-group (or infinity-group).
Examples:
Abelian 2-group of morphisms For a pair of Abelian 2-groups A,A′ there is an associated Abelian 2-group of morphisms Hom (A,A′) whose objects are given by functors between these two categories, and the arrows are given by natural transformations. Moreover, the bifunctor +′ on A′ induces a bifunctor structure on this groupoid, giving it an Abelian 2-group structure.
Classifying abelian 2-groups:
In order to classify Abelian 2-groups, strict Picard categories built from two-term chain complexes are not enough. One approach uses stable homotopy theory: spectra which have only two non-trivial homotopy groups. When studying an arbitrary Picard category, it becomes clear that there is additional data needed to classify the structure of the category; it is given by the Postnikov invariant.
Classifying abelian 2-groups:
Postnikov invariant For an Abelian 2-group A and a fixed object x ∈ Ob(A), the isomorphism of the functors x + (−) and (−) + x given by the commutativity arrow τ: x + x ⇒ x + x gives an element of the automorphism group AutA(x) which squares to 1, hence is contained in some Z/2. The group AutA(x) is sometimes suggestively written π1(A). We can call this element ε; this invariant induces a morphism from the isomorphism classes of objects in A, denoted π0(A), to π1(A), which corresponds to the Postnikov invariant. In particular, every Picard category given as a two-term chain complex has ε = 0, because such categories correspond under the Dold–Kan correspondence to simplicial abelian groups with topological realizations given by the product of Eilenberg–MacLane spaces K(H−1(A∙), 1) × K(H0(A∙), 0). For example, if we have a Picard category with π1(A) = Z/2 and π0(A) = Z, there is no chain complex of Abelian groups giving these homology groups, since Z/2 can only arise from a projection such as Z →·2 Z → Z/2. Instead, this Picard category can be understood as a categorical realization of the truncated sphere spectrum τ≤1 S, whose only two non-trivial homotopy groups are in degrees 0 and 1.
**Division (mathematics)**
Division (mathematics):
Division is one of the four basic operations of arithmetic. The other operations are addition, subtraction, and multiplication. What is being divided is called the dividend, which is divided by the divisor, and the result is called the quotient. At an elementary level the division of two natural numbers is, among other possible interpretations, the process of calculating the number of times one number is contained within another. This number of times need not be an integer. For example, if 20 apples are divided evenly between 4 people, everyone receives 5 apples.
Division (mathematics):
The division with remainder or Euclidean division of two natural numbers provides an integer quotient, which is the number of times the second number is completely contained in the first number, and a remainder, which is the part of the first number that remains, when in the course of computing the quotient, no further full chunk of the size of the second number can be allocated. For example, if 21 apples are divided between 4 people, everyone receives 5 apples again, and 1 apple remains.
Division (mathematics):
For division to always yield one number rather than an integer quotient plus a remainder, the natural numbers must be extended to rational numbers or real numbers. In these enlarged number systems, division is the inverse operation to multiplication, that is, a = c / b means a × b = c, as long as b is not zero. If b = 0, then this is a division by zero, which is not defined. In the 21-apples example, everyone would receive 5 apples and a quarter of an apple, thus avoiding any leftover.
Division (mathematics):
Both forms of division appear in various algebraic structures, different ways of defining mathematical structure. Those in which a Euclidean division (with remainder) is defined are called Euclidean domains and include polynomial rings in one indeterminate (which define multiplication and addition over single-variabled formulas). Those in which a division (with a single result) by all nonzero elements is defined are called fields and division rings. In a ring the elements by which division is always possible are called the units (for example, 1 and −1 in the ring of integers). Another generalization of division to algebraic structures is the quotient group, in which the result of "division" is a group rather than a number.
Introduction:
The simplest way of viewing division is in terms of quotition and partition: from the quotition perspective, 20 / 5 means the number of 5s that must be added to get 20. In terms of partition, 20 / 5 means the size of each of 5 parts into which a set of size 20 is divided. For example, 20 apples divide into five groups of four apples, meaning that "twenty divided by five is equal to four". This is denoted as 20 / 5 = 4 (or, in fraction form, 20/5 = 4). In the example, 20 is the dividend, 5 is the divisor, and 4 is the quotient.
Introduction:
Unlike the other basic operations, when dividing natural numbers the divisor sometimes does not go evenly into the dividend, leaving a remainder; for example, 10 / 3 leaves a remainder of 1, as 10 is not a multiple of 3. Sometimes this remainder is added to the quotient as a fractional part, so 10 / 3 is equal to 3+1/3 or 3.33..., but in the context of integer division, where numbers have no fractional part, the remainder is kept separately (or exceptionally, discarded or rounded). When the remainder is kept as a fraction, it leads to a rational number. The set of all rational numbers is created by extending the integers with all possible results of divisions of integers.
Introduction:
Unlike multiplication and addition, division is not commutative, meaning that a / b is not always equal to b / a. Division is also not, in general, associative, meaning that when dividing multiple times, the order of division can change the result. For example, (24 / 6) / 2 = 2, but 24 / (6 / 2) = 8 (where the use of parentheses indicates that the operations inside parentheses are performed before the operations outside parentheses).
Introduction:
Division is traditionally considered as left-associative. That is, if there are multiple divisions in a row, the order of calculation goes from left to right: a / b / c = (a / b) / c = a / (b × c) ≠ a / (b / c) = (a × c) / b.
Division is right-distributive over addition and subtraction, in the sense that (a ± b) / c = (a / c) ± (b / c).
This is the same as for multiplication, where (a + b) × c = a × c + b × c. However, division is not left-distributive: a / (b + c) ≠ (a / b) + (a / c) = (ac + ab) / (bc).
For example, 12 / (2 + 4) = 12 / 6 = 2, but 12 / 2 + 12 / 4 = 6 + 3 = 9.
This is unlike the case in multiplication, which is both left-distributive and right-distributive, and thus distributive.
Notation:
Division is often shown in algebra and science by placing the dividend over the divisor with a horizontal line, also called a fraction bar, between them. For example, "a divided by b" written this way can be read out loud as "divide a by b" or "a over b". A way to express division all on one line is to write the dividend (or numerator), then a slash, then the divisor (or denominator), as follows: a/b. This is the usual way of specifying division in most computer programming languages, since it can easily be typed as a simple sequence of ASCII characters. (It is also the only notation used for quotient objects in abstract algebra.) Some mathematical software, such as MATLAB and GNU Octave, allows the operands to be written in the reverse order by using the backslash as the division operator: b∖a. A typographical variation halfway between these two forms uses a solidus (fraction slash) but elevates the dividend and lowers the divisor: a⁄b. Any of these forms can be used to display a fraction. A fraction is a division expression where both dividend and divisor are integers (typically called the numerator and denominator), and there is no implication that the division must be evaluated further. A second way to show division is to use the division sign (÷, also known as the obelus, though the term has additional meanings), common in arithmetic, in this manner: a ÷ b. This form is infrequent except in elementary arithmetic; ISO 80000-2 (section 9.6) states it should not be used. The division sign is also used alone to represent the division operation itself, for instance as a label on a key of a calculator. The obelus was introduced by Swiss mathematician Johann Rahn in 1659 in Teutsche Algebra. The ÷ symbol is used to indicate subtraction in some European countries, so its use may be misunderstood. In some non-English-speaking countries, a colon is used to denote division: a : b. This notation was introduced by Gottfried Wilhelm Leibniz in his 1684 Acta Eruditorum. Leibniz disliked having separate symbols for ratio and division; however, in English usage the colon is restricted to expressing the related concept of ratios.
Notation:
Since the 19th century, US textbooks have used b)a or b)a¯ to denote a divided by b, especially when discussing long division. The history of this notation is not entirely clear because it evolved over time.
Computing:
Manual methods Division is often introduced through the notion of "sharing out" a set of objects, for example a pile of lollies, into a number of equal portions. Distributing the objects several at a time in each round of sharing to each portion leads to the idea of 'chunking' – a form of division where one repeatedly subtracts multiples of the divisor from the dividend itself.
Computing:
By allowing one to subtract more multiples than what the partial remainder allows at a given stage, more flexible methods, such as the bidirectional variant of chunking, can be developed as well.
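As a rough illustrative sketch (not from the source), chunking can be mechanized by always subtracting the largest convenient power-of-ten multiple of the divisor; the helper name below is hypothetical:

```python
# Chunking: repeatedly subtract large multiples of the divisor from the
# dividend, accumulating the quotient (assumes positive integers).
def chunk_divide(dividend, divisor):
    quotient = 0
    while dividend >= divisor:
        chunk, multiple = divisor, 1
        # find the largest power-of-ten multiple of the divisor that fits
        while chunk * 10 <= dividend:
            chunk *= 10
            multiple *= 10
        dividend -= chunk
        quotient += multiple
    return quotient, dividend  # integer quotient and remainder

print(chunk_divide(487, 4))  # (121, 3), since 487 = 4 * 121 + 3
```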
Computing:
More systematically and more efficiently, two integers can be divided with pencil and paper with the method of short division, if the divisor is small, or long division, if the divisor is larger. If the dividend has a fractional part (expressed as a decimal fraction), one can continue the procedure past the ones place as far as desired. If the divisor has a fractional part, one can restate the problem by moving the decimal to the right in both numbers until the divisor has no fraction, which can make the problem easier to solve (e.g., 10/2.5 = 100/25 = 4).
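The digit-by-digit procedure of long division, continued past the ones place, can be sketched as follows (illustrative code, not from the source; assumes positive integers):

```python
# Schoolbook long division, carried past the ones place to produce the
# first few decimal digits of the quotient.
def long_divide(dividend, divisor, places=4):
    digits, remainder = [], 0
    for ch in str(dividend):                  # bring down one digit at a time
        remainder = remainder * 10 + int(ch)
        digits.append(remainder // divisor)
        remainder %= divisor
    integer_part = ''.join(map(str, digits)).lstrip('0') or '0'
    frac = []
    for _ in range(places):                   # continue as far as desired
        remainder *= 10
        frac.append(remainder // divisor)
        remainder %= divisor
    return integer_part + '.' + ''.join(map(str, frac))

print(long_divide(10, 3))    # 3.3333
print(long_divide(100, 25))  # 4.0000, cf. 10 / 2.5 = 100 / 25 = 4
```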
Computing:
Division can be calculated with an abacus. Logarithm tables can be used to divide two numbers, by subtracting the two numbers' logarithms, then looking up the antilogarithm of the result.
Division can be calculated with a slide rule by aligning the divisor on the C scale with the dividend on the D scale. The quotient can be found on the D scale where it is aligned with the left index on the C scale. The user is responsible, however, for mentally keeping track of the decimal point.
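Both the log-table and slide-rule methods rest on the identity log(a/b) = log a − log b; a quick numerical check (illustrative values):

```python
import math

# Division via logarithms: subtract the logs, then take the antilogarithm.
a, b = 355.0, 113.0
quotient = math.exp(math.log(a) - math.log(b))
print(quotient, a / b)  # both ~3.14159, up to floating-point rounding
```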
By computer Modern calculators and computers compute division either by methods similar to long division, or by faster methods; see Division algorithm.
In modular arithmetic (modulo a prime number) and for real numbers, nonzero numbers have a multiplicative inverse. In these cases, a division by x may be computed as the product by the multiplicative inverse of x. This approach is often associated with the faster methods in computer arithmetic.
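For instance, in Python (3.8 and later) the built-in pow computes modular inverses, so division modulo a prime reduces to a multiplication; a small sketch:

```python
# "Division" in modular arithmetic as multiplication by an inverse (mod prime).
p = 13
x = 5
x_inv = pow(x, -1, p)     # multiplicative inverse of 5 mod 13
print(x_inv)              # 8, since 5 * 8 = 40 ≡ 1 (mod 13)
print((9 * x_inv) % p)    # "9 divided by 5" mod 13 gives 7, since 5 * 7 ≡ 9
```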
Division in different contexts:
Euclidean division Euclidean division is the mathematical formulation of the outcome of the usual process of division of integers. It asserts that, given two integers, a, the dividend, and b, the divisor, such that b ≠ 0, there are unique integers q, the quotient, and r, the remainder, such that a = bq + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b.
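The invariant a = bq + r with 0 ≤ r < |b| can be checked mechanically. In the sketch below (illustrative helper name), Python's floor division is adjusted so the invariant holds for either sign of the divisor:

```python
# Euclidean division: unique q, r with a = b*q + r and 0 <= r < |b|.
# Python's floor division already satisfies this when b > 0; for b < 0
# the remainder must be shifted back into range.
def euclidean_divmod(a, b):
    q, r = a // b, a % b
    if r < 0:             # only possible when b < 0
        q, r = q + 1, r - b
    return q, r

for a, b in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    q, r = euclidean_divmod(a, b)
    assert a == b * q + r and 0 <= r < abs(b)
    print(a, b, '->', q, r)
```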
Division in different contexts:
Of integers Integers are not closed under division. Apart from division by zero being undefined, the quotient is not an integer unless the dividend is an integer multiple of the divisor. For example, 26 cannot be divided by 11 to give an integer. Such a case uses one of five approaches: Say that 26 cannot be divided by 11; division becomes a partial function.
Division in different contexts:
Give an approximate answer as a floating-point number. This is the approach usually taken in numerical computation.
Give the answer as a fraction representing a rational number, so the result of the division of 26 by 11 is 26/11 (or, as a mixed number, 2 4/11).
Usually the resulting fraction should be simplified: the result of the division of 52 by 22 is also 26/11. This simplification may be done by factoring out the greatest common divisor.
Give the answer as an integer quotient and a remainder, so 26 / 11 = 2 remainder 4.
To make the distinction with the previous case, this division, with two integers as result, is sometimes called Euclidean division, because it is the basis of the Euclidean algorithm.
Give the integer quotient as the answer, so 26 / 11 = 2.
Division in different contexts:
This is the floor function applied to case 2 or 3. It is sometimes called integer division, and denoted by "//". Dividing integers in a computer program requires special care. Some programming languages treat integer division as in case 5 above, so the answer is an integer. Other languages, such as MATLAB and every computer algebra system, return a rational number as the answer, as in case 3 above. These languages also provide functions to get the results of the other cases, either directly or from the result of case 3.
Division in different contexts:
Names and symbols used for integer division include div, /, \, and %. Definitions vary regarding integer division when the dividend or the divisor is negative: rounding may be toward zero (so called T-division) or toward −∞ (F-division); rarer styles can occur – see modulo operation for the details.
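The difference between T-division and F-division only appears with negative operands; a short comparison (illustrative values):

```python
import math

# T-division truncates the quotient toward zero; F-division (Python's //)
# floors it toward negative infinity. The paired remainders differ in sign.
a, b = -26, 11
t_quot = math.trunc(a / b)     # -2 (toward zero)
f_quot = a // b                # -3 (toward -infinity)
print(t_quot, a - b * t_quot)  # -2 -4: remainder takes the dividend's sign
print(f_quot, a - b * f_quot)  # -3  7: remainder takes the divisor's sign
```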
Divisibility rules can sometimes be used to quickly determine whether one integer divides exactly into another.
Of rational numbers The result of dividing two rational numbers is another rational number when the divisor is not 0. The division of two rational numbers p/q and r/s can be computed as (p/q) / (r/s) = (p × s) / (q × r). All four quantities are integers, and only p may be 0. This definition ensures that division is the inverse operation of multiplication.
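A quick demonstration of this rule with Python's fractions module, which also performs the gcd simplification mentioned earlier (illustrative values):

```python
from fractions import Fraction

# (p/q) / (r/s) = (p*s) / (q*r); Fraction reduces by the gcd automatically.
print(Fraction(26, 11) / Fraction(2, 3))  # 39/11, i.e. (26*3)/(11*2)
print(Fraction(52, 22))                   # 26/11: simplification by the gcd
```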
Of real numbers Division of two real numbers results in another real number (when the divisor is nonzero). It is defined such that a/b = c if and only if a = cb and b ≠ 0.
Division in different contexts:
Of complex numbers Dividing two complex numbers (when the divisor is nonzero) results in another complex number, which is found using the conjugate of the denominator: (p + iq) / (r + is) = ((pr + qs) + i(qr − ps)) / (r² + s²). This process of multiplying and dividing by r − is is called 'realisation' or (by analogy) rationalisation. All four quantities p, q, r, s are real numbers, and r and s may not both be 0.
Division in different contexts:
Division for complex numbers expressed in polar form is simpler than the definition above: (p e^(iq)) / (r e^(is)) = (p/r) e^(i(q − s)). Again all four quantities p, q, r, s are real numbers, and r may not be 0.
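Both recipes can be verified numerically; the sketch below (illustrative values) divides the same pair of complex numbers by the conjugate formula and again in polar form:

```python
import cmath

# Cartesian 'rationalisation': multiply through by the conjugate r - is.
p, q, r, s = 3.0, 4.0, 1.0, -2.0
num, den = complex(p, q), complex(r, s)
by_conjugate = complex(p*r + q*s, q*r - p*s) / (r*r + s*s)
print(by_conjugate, num / den)            # both (-1+2j)

# Polar form: divide the moduli and subtract the arguments.
m1, a1 = cmath.polar(num)
m2, a2 = cmath.polar(den)
print(cmath.rect(m1 / m2, a1 - a2))       # (-1+2j) again, up to rounding
```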
Of polynomials One can define the division operation for polynomials in one variable over a field. Then, as in the case of integers, one has a remainder. See Euclidean division of polynomials, and, for hand-written computation, polynomial long division or synthetic division.
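A compact sketch of polynomial long division over a field (coefficients as floats, listed from highest degree down; the helper name is illustrative):

```python
# Polynomial long division: repeatedly match the leading term of the
# numerator, then subtract the scaled, shifted divisor.
def poly_divmod(num, den):
    num = list(num)
    q = [0.0] * (len(num) - len(den) + 1)
    for i in range(len(q)):
        coeff = num[i] / den[0]            # match the leading term
        q[i] = coeff
        for j, d in enumerate(den):        # subtract coeff * den, shifted by i
            num[i + j] -= coeff * d
    return q, num[len(q):]                 # quotient and remainder

# (x^3 - 2x^2 - 4) / (x - 3) gives quotient x^2 + x + 3 and remainder 5.
print(poly_divmod([1, -2, 0, -4], [1, -3]))
```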
Of matrices One can define a division operation for matrices. The usual way to do this is to define A / B = AB−1, where B−1 denotes the inverse of B, but it is far more common to write out AB−1 explicitly to avoid confusion. An elementwise division can also be defined in terms of the Hadamard product.
Division in different contexts:
Left and right division Because matrix multiplication is not commutative, one can also define a left division or so-called backslash-division as A \ B = A−1B. For this to be well defined, B−1 need not exist; however, A−1 does need to exist. To avoid confusion, division as defined by A / B = AB−1 is sometimes called right division or slash-division in this context.
Division in different contexts:
Note that with left and right division defined this way, A / (BC) is in general not the same as (A / B) / C, nor is (AB) \ C the same as A \ (B \ C). However, it holds that A / (BC) = (A / C) / B and (AB) \ C = B \ (A \ C).
Division in different contexts:
Pseudoinverse To avoid problems when A−1 and/or B−1 do not exist, division can also be defined as multiplication by the pseudoinverse. That is, A / B = AB+ and A \ B = A+B, where A+ and B+ denote the pseudoinverses of A and B.
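The three matrix 'divisions' above can be sketched with NumPy; solve is preferred to forming inverses explicitly, and pinv handles the pseudoinverse route (illustrative matrices, assuming B is invertible for the comparison):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 0.0], [1.0, 2.0]])

right = np.linalg.solve(B.T, A.T).T  # solves X @ B = A, i.e. A / B = A @ inv(B)
left = np.linalg.solve(A, B)         # solves A @ X = B, i.e. A \ B = inv(A) @ B
print(np.allclose(right @ B, A), np.allclose(A @ left, B))  # True True

pseudo = A @ np.linalg.pinv(B)       # A / B via the Moore-Penrose pseudoinverse
print(np.allclose(pseudo, right))    # True whenever B is invertible
```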
Division in different contexts:
Abstract algebra In abstract algebra, given a magma with binary operation ∗ (which could nominally be termed multiplication), left division of b by a (written a \ b) is typically defined as the solution x to the equation a ∗ x = b, if this exists and is unique. Similarly, right division of b by a (written b / a) is the solution y to the equation y ∗ a = b. Division in this sense does not require ∗ to have any particular properties (such as commutativity, associativity, or an identity element).
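Since the definition only asks for a unique solution of a ∗ x = b, left division in a finite magma can be found by brute force; the sketch below (illustrative) uses Z/6 under multiplication, where division only sometimes succeeds:

```python
# Left division in a finite magma: a \ b is the unique x with a * x = b,
# if such an x exists; otherwise division is undefined.
def left_divide(op, elements, a, b):
    solutions = [x for x in elements if op(a, x) == b]
    return solutions[0] if len(solutions) == 1 else None

elems = range(6)
mul = lambda x, y: (x * y) % 6           # Z/6 under multiplication is a magma
print(left_divide(mul, elems, 5, 4))     # 2, since 5 * 2 = 10 ≡ 4 (mod 6)
print(left_divide(mul, elems, 2, 3))     # None: 2 * x is always even mod 6
```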
Division in different contexts:
"Division" in the sense of "cancellation" can be done in any magma by an element with the cancellation property. Examples include matrix algebras and quaternion algebras. A quasigroup is a structure in which division is always possible, even without an identity element and hence inverses. In an integral domain, where not every element need have an inverse, division by a cancellative element a can still be performed on elements of the form ab or ca by left or right cancellation, respectively. If a ring is finite and every nonzero element is cancellative, then by an application of the pigeonhole principle, every nonzero element of the ring is invertible, and division by any nonzero element is possible. To learn about when algebras (in the technical sense) have a division operation, refer to the page on division algebras. In particular Bott periodicity can be used to show that any real normed division algebra must be isomorphic to either the real numbers R, the complex numbers C, the quaternions H, or the octonions O.
Division in different contexts:
Calculus The derivative of the quotient of two functions is given by the quotient rule: (f/g)′ = (f′g − fg′) / g².
Division by zero:
Division of any number by zero in most mathematical systems is undefined, because zero multiplied by any finite number always results in a product of zero. Entry of such an expression into most calculators produces an error message. However, in certain higher-level mathematics division by zero is possible, as in the zero ring and in algebras such as wheels. In these algebras, the meaning of division is different from traditional definitions.
**Suberin**
Suberin:
Suberin, cutin and lignins are complex cell-wall macromolecules of the higher-plant epidermis and periderm, forming a protective barrier. Suberin, a complex polyester biopolymer, is lipophilic and composed of long-chain fatty acids called suberin acids, and glycerol. Suberins and lignins are considered covalently linked to lipids and carbohydrates, respectively, and lignin is covalently linked to suberin and, to a lesser extent, to cutin. Suberin is a major constituent of cork and is named after the cork oak, Quercus suber. Its main function is as a barrier to the movement of water and solutes.
Anatomy and physiology:
Suberin is highly hydrophobic and a somewhat 'rubbery' material. In roots, suberin is deposited in the radial and transverse/tangential cell walls of the endodermal cells. This structure, known as the Casparian strip or Casparian band, functions to prevent water and nutrients taken up by the root from entering the stele through the apoplast. Instead, water must bypass the endodermis via the symplast. This allows the plant to select the solutes that pass further into the plant. It thus forms an important barrier to harmful solutes. For example, mangroves use suberin to minimize salt intake from their littoral habitat.
Anatomy and physiology:
Suberin is found in the phellem layer of the periderm (or cork). This is the outermost layer of the bark. The cells in this layer are dead and abundant in suberin, preventing water loss from the tissues below. Suberin can also be found in various other plant structures; for example, it is present in the lenticels on the stems of many plants, and the net structure in the rind of a netted melon is composed of suberised cells.
Structure and biosynthesis:
Suberin consists of two domains, a polyaromatic and a polyaliphatic domain. The polyaromatics are predominantly located within the primary cell wall, and the polyaliphatics are located between the primary cell wall and the cell membrane. The two domains are thought to be cross-linked. The exact qualitative and quantitative composition of suberin monomers varies between species. Some common aliphatic monomers include ω-hydroxyacids (mainly 18-hydroxyoctadec-9-enoic acid) and α,ω-diacids (mainly octadec-9-ene-1,18-dioic acid). The monomers of the polyaromatics are hydroxycinnamic acids and derivatives, such as feruloyltyramine.
Structure and biosynthesis:
In addition to the aromatic and aliphatic components, glycerol has been reported as a major suberin component in some species. The role of glycerol is proposed to be to interlink aliphatic monomers, and possibly also to link the polyaliphatics to the polyaromatics, during suberin polymer assembly. The polymerization step of the aromatic monomers has been shown to involve a peroxidase reaction.
The biosynthesis of the aliphatic monomers shares the same upstream reactions with cutin biosynthesis, and the biosynthesis of aromatics shares the same upstream reactions with lignin biosynthesis.
Phlobaphen also occurs in the polyaromatic part of the suberin mixture.
**Ephemeralization**
Ephemeralization:
Ephemeralization, a term coined by R. Buckminster Fuller in 1938, is the ability of technological advancement to do "more and more with less and less until eventually you can do everything with nothing," that is, an accelerating increase in the efficiency of achieving the same or more output (products, services, information, etc.) while requiring less input (effort, time, materials, resources, etc.). The application of materials and technology in modern cell phones, compared to older computers and phones, exemplifies ephemeralization: technological advancement can drive efficiency in the form of fewer materials being used to provide greater utility (more functionality with less resource use). Fuller's vision was that ephemeralization, through technological progress, could result in ever-increasing standards of living for an ever-growing population. The concept has been embraced by those who argue against Malthusian philosophy. Fuller used Henry Ford's assembly line at his car factory as an example of how ephemeralization can continuously lead to better products at lower cost with no upper bound on productivity. Fuller saw ephemeralization as an inevitable trend in human development.
Consequences to society:
Francis Heylighen and Alvin Toffler have written that ephemeralization, though it may increase our power to solve physical problems, can make non-physical problems worse. According to Heylighen and Toffler, increasing system complexity and information overload make it difficult and stressful for the people who must control the ephemeralized systems. This might negate the advantages of ephemeralization. The solution proposed by Heylighen is the integration of human intelligence, computer intelligence, and coordination mechanisms that direct an issue to the cognitive resource (document, person, or computer program) most fit to address it. This requires a distributed, self-organizing system, formed by all individuals, computers and the communication links that connect them. The self-organization can be achieved by algorithms. According to Heylighen, the effect is to superpose the contributions of many different human and computer agents into a collective map that may link the cognitive and physical resources relatively efficiently. The resulting information system could react relatively rapidly and adaptively to requests for guidance or changes in the situation. In Heylighen's view, the system could frequently be fed with new information from its myriad human users and computer agents, which it would take into account to offer the human users a list of the best possible approaches to achieve tasks. Heylighen believes near-optimization could be achieved both at the level of the individual who makes the request, and at the level of society, which attempts to minimize the conflicts between the desires of its different members and to aim at long-term, global progress while protecting individual liberty and privacy as much as possible.
**Abstract logic**
Abstract logic:
In mathematical logic, an abstract logic is a formal system consisting of a class of sentences and a satisfaction relation with specific properties related to occurrence, expansion, isomorphism, renaming and quantification. Based on Lindström's characterization, first-order logic is, up to equivalence, the only abstract logic that is countably compact and has Löwenheim number ω.
**PICMG 1.0**
PICMG 1.0:
PICMG 1.0 is a PICMG specification that defines a CPU form factor and corresponding backplane connectors for PCI-ISA passive backplanes. This standard moves components typically located on the motherboard (i.e. memory, CPUs and chipset components) to a single plug-in card. PICMG 1.0 CPU cards look much like standard ISA cards with extra gold-finger connections for the ISA bus and the root PCI bus. The motherboard is replaced with a simple passive backplane that has only PCI and ISA connectors attached to it. These backplane connections include a dedicated system slot for the PICMG 1.0 CPU card and various connections for standard ISA and PCI peripheral cards. The backplane is simple and robust, with a very low likelihood of failure given its passive nature. This allows a much lower mean time to repair than classic motherboard approaches, as the electronics associated with the CPU can be replaced without having to remove peripheral devices.
PICMG Status:
Adopted : 10/10/1994 Current Revision : 2.0
PICMG Status:
Adopted : 5/25/1995 Current Revision : 1.1
**Shuttle catalysis**
Shuttle catalysis:
Shuttle catalysis describes catalytic reactions in which a chemical entity of a donor molecule is transferred to an acceptor molecule. In these reactions, while the number of chemical bonds of each reactant changes, the types and total number of chemical bonds remain constant over the course of the reaction. In contrast to many organic reactions whose exothermicity renders them practically irreversible, reactions operated under shuttle catalysis are often reversible. However, the position of the equilibrium can be driven to the product side through Le Chatelier's principle. The driving forces for this equilibrium shift are typically the formation of a gas or a precipitate, the use of high ground-state-energy reactants, the formation of stabilized products, or the use of excess equivalents of a reactant.
Shuttle catalysis:
The relocation of shuttled entities is often mediated by a transition metal catalyst, which serves to functionalize or defunctionalize a compound of interest. An advantage of this process is that it avoids the handling of toxic or reactive raw chemical entities. However, these reactions require the development of catalytic systems that can efficiently deliver the shuttled entities between the reactants under mild conditions through a sequence of elementary steps.
Applications:
Transfer hydrogenation Transfer hydrogenation has been extensively studied to reduce various functional groups without requiring hazardous pressurized H2.
Applications:
Transfer hydroacylation In 1999, Chul-Ho Jun and Hyuk Lee reported the first example of hydroacylation through shuttle catalysis. In this example, 3-methyl-2-aminopyridine was used to activate the acyl group as well as to coordinate to the rhodium catalyst, promoting C–C bond cleavage to eventually enable aldehyde transfer from a ketone to an alkene. The driving forces of this reaction are the excess of alkene and the formation of stable styrenes along with the extrusion of volatile ethylene. This method avoids the toxic and self-reactive aldehydes, such as acetaldehyde, required in traditional hydroacylation procedures.
Applications:
Transfer hydroformylation Hydroformylation is a classical transition-metal catalyzed reaction, and it has been widely employed in industrial settings. However, a drawback of this reaction is the requirement of the hazardous mixture of H2/CO. For that reason, a process to replace H2/CO gas with a non-hazardous aldehyde is sought after.
In 1999, Christian P. Lenges and Maurice Brookhart reported isovaleraldehyde as a suitable surrogate for H2/CO transfer to 3,3-dimethyl-1-butene by using a rhodium(I) catalyst.
The reverse of this process was rendered catalytic by Vy M. Dong and co-workers in 2015. They performed dehydroformylation on aldehydes to afford the corresponding alkenes. For this transformation, they used either norbornene or norbornadiene as an H2/CO acceptor, promoting reactivity through strain release.
Applications:
Transfer hydrocyanation To replace the use of toxic hydrogen cyanide (HCN) gas or surrogates such as acetone cyanohydrin, Bill Morandi and co-workers developed a hydrocyanation strategy using shuttle catalysis. In this example, they used isovaleronitrile as an HCN surrogate under nickel/aluminum co-catalyzed conditions to effect hydrocyanation of various alkenes. The use of isovaleronitrile allows careful control of the HCN concentration, and the formation of volatile isobutylene is the driving force for the reaction.
Applications:
Others Other chemical entities, including arenes, CO/HCl, HMgBr, H2Zn, H2O, carbenes, silylenes, and sulfenium ions, have also been shuttled under this catalysis platform.
**Lip trick**
Lip trick:
Lip tricks in skateboarding are performed on half-pipes, quarterpipes and mini ramps. They are tricks that require different varieties of balance on the "lip" of the ramp. The first lip trick was done by Jay Adams.
**Environmental Mutagenesis and Genomics Society**
Environmental Mutagenesis and Genomics Society:
The Environmental Mutagenesis and Genomics Society (EMGS) is a scientific society "for the promotion of critical scientific knowledge and research into the causes and consequences of damage to the genome and epigenome in order to inform and support national and international efforts to ensure a healthy, sustainable environment for future generations." The society promotes scientific research into the causes of DNA damage and repair and the relevance of these to disease. It also promotes the application and communication of this knowledge, especially through education, to help protect human health and the environment.
History:
The society, originally founded as the Environmental Mutagen Society (EMS) was formed in the USA in 1969 by Drs. Alexander Hollaender, Joshua Lederberg, James Crow, Ernst Freese, James Neel, William Russell, Heinrich Malling, Frederick J. de Serres, Matthew Meselson, and others. The initial aim was to support the study of environmental mutagenesis, originally in germ-cell mutagenesis, but the scope soon expanded to include all areas of mutagenesis, including mutational mechanisms, test methods, molecular epidemiology, biomarkers, and risk assessment. As a result of this change in scope, in 2012 the society's name was changed to better encompass the broadened reach of the organization.
Activities and achievements:
In 1969, the EMS established the Environmental Mutagen Information Center (EMIC) at the Oak Ridge National Laboratory, which developed the first bibliographic database on environmental mutagenesis, facilitating research throughout the 1970s and early 1980s, particularly the development of tests for genetic toxicology, through the establishment of a register of substances tested for toxicity. This, in turn, contributed significantly to the GENE-TOX program, established by Drs. Angela Auletta and Michael D. Waters at the US EPA, and it now forms part of TOXNET. During the early 1970s, the society played a significant part in the development of the US Toxic Substances Control Act of 1976, enabling the United States Environmental Protection Agency to include mutagenicity data in regulatory decisions. The EMS "Committee 17", chaired by John W. Drake, published an influential position paper, "Environmental Mutagenic Hazards", in Science in 1975. This described the research needs and regulatory responsibilities for managing potentially mutagenic compounds in the environment. It influenced research direction, regulatory procedures and mutagenicity testing within industry.
Activities and achievements:
Publications In 1970 the EMS established the book series "Chemical Mutagens: Principles and Methods for Their Detection", and the first volume was published the following year. The series has included a number of influential papers, beginning with the first, by Dr. Bruce N. Ames, on the Salmonella (Ames) mutagenicity assay. In 1979, the EMS began publishing its own journal, Environmental Mutagenesis, renamed Environmental and Molecular Mutagenesis in 1987.
Activities and achievements:
Meetings The society has met annually since its formation. The next annual meeting will be the 54th and will be held in Chicago, Illinois, September 9–13, 2023.
Activities and achievements:
Awards and honors The EMS makes three major awards. Every year it awards the EMS Award in recognition of "outstanding research contributions in the area of environmental mutagenesis" and the Alexander Hollaender Award in recognition of "outstanding contributions in the application of the principles and techniques of environmental mutagenesis to the protection of human health". From time to time it also awards the EMS Service Award in recognition of "long-standing dedication and service to the Society".
Activities and achievements:
The EMS also makes a number of student and travel awards to promote and support the interests of the society.
Collaboration and partnership The EMS is a member organisation of the International Association of Environmental Mutagen Societies (IAEMS) and the Federation of American Societies for Experimental Biology.
**Basanite**
Basanite:
Basanite () is an igneous, volcanic (extrusive) rock with aphanitic to porphyritic texture. It is composed mostly of feldspathoids, pyroxenes, olivine, and plagioclase and forms from magma low in silica and enriched in alkali metal oxides that solidifies rapidly close to the Earth's surface.
Description:
Basanite is an aphanitic (fine-grained) igneous rock that is low in silica and enriched in alkali metals. Of its total content of quartz, feldspar, and feldspathoid (QAPF), between 10% and 60% by volume is feldspathoid, and over 90% of the feldspar is plagioclase. Quartz is never present. This places basanite in the basanite/tephrite field of the QAPF diagram. Basanite is further distinguished from tephrite by having a normative olivine content greater than 10%. While the IUGS recommends classification by mineral content whenever possible, volcanic rock can be glassy or so fine-grained that this is impractical, and the rock is then classified chemically using the TAS classification. Basanite then falls into the U1 (basanite-tephrite) field of the TAS diagram, and is again distinguished from tephrite by its normative olivine content, and from nephelinite by a normative albite content of over 5% and a normative nepheline content under 20%. The mineral assemblage in basanite is usually abundant feldspathoids (nepheline or leucite), plagioclase, and augite, together with olivine and lesser iron-titanium oxides such as ilmenite and magnetite-ulvospinel; minor alkali feldspar may be present. Clinopyroxene (augite) and olivine are common as phenocrysts and in the matrix. The augite contains significantly more titanium, aluminium and sodium than that in typical tholeiitic basalt. Quartz is absent, as are orthopyroxene and pigeonite. Chemically, basanites are mafic. They are low in silica (42 to 45% SiO2) and high in alkalis (3 to 5.5% Na2O and K2O) compared to basalt, which typically contains more SiO2, as is evident on the diagram used for TAS classification. Nephelinite is yet richer in Na2O plus K2O relative to SiO2.
Occurrences:
Basanite appears early in the alkaline magma series, and basanites are found wherever alkaline magma is erupted. This includes both continental and ocean island settings. Together with basalts, they are produced by hotspot volcanism, for example in the Hawaiian Islands, the Comoros Islands and the Canary Islands. They are particularly common in rift zones. During the eruption of the Laacher See caldera some 12,900 years ago, the final phase of the eruption, which tapped the deepest part of the magma chamber, produced basanite lapilli mixed with phonolite lapilli. This has been interpreted as fresh magma injected into the magma chamber that may have helped trigger the eruption. Eruption of basanite and other alkaline magmas characterizes the late alkaline (rejuvenation) phase of volcanic islands, which often comes 3 to 5 million years after the main shield-building phase.
**Baidu Knows**
Baidu Knows:
Baidu Knows (Chinese: 百度知道; pinyin: Bǎidù zhīdào; lit. 'Baidu Knows') is a Chinese-language, collaborative, web-based collective-intelligence question-and-answer service provided by the Chinese search engine Baidu. Like Baidu itself, the service is heavily self-censored in line with government regulations. The test version was launched on June 21, 2005, and became the release version on November 8, 2005.
Introduction:
A registered user (a "member" for short) posts a question, which should be specific, and motivates other members to supply answers by offering credits as an award. These answers in turn become search results for the same or related questions. That is how knowledge is accumulated and shared.
Question-and-answer combined with a search engine makes it possible for a member to be both a producer and a consumer of knowledge, which is the essence of so-called collective intelligence.
Knows's Principle:
Questions or answers containing the following types of content are removed: pornographic, violent, horrifying or uncivilized content; advertisements; reactionary content; personal attacks; content against morality and ethics; and malicious, trivial or spam-like content.
**.XIP**
.XIP:
An .XIP file is a XAR archive that can be digitally signed for integrity. The .XIP file format was introduced in OS X 10.9, along with Apple's release of Swift. .XIP allows for a digital signature to be applied and verified on the receiving system before the archive is expanded. When a XIP file is opened (by double-clicking), Archive Utility will automatically expand it (but only if the digital signature is intact).
.XIP:
Apple has reserved the .XIP file format for its exclusive use, removing it from public use since its release. Starting with macOS Sierra, only .XIP archives signed by Apple will be expanded. Developers who had been using .XIP archives were required to move to signed installer packages or disk images.
**Erythropoietin in neuroprotection**
Erythropoietin in neuroprotection:
Erythropoietin in neuroprotection is the use of the glycoprotein erythropoietin (Epo) for neuroprotection. Epo controls erythropoiesis, or red blood cell production.
Erythropoietin in neuroprotection:
Erythropoietin and its receptor were thought to be present in the central nervous system according to experiments with antibodies that were subsequently shown to be nonspecific. While erythropoietin alpha is capable of crossing the blood–brain barrier via active transport, concentrations in the central nervous system are very low. The possibility that Epo might have effects on neural tissues motivated experiments to explore whether Epo might be tissue-protective. The reported presence of Epo within the spinal fluid of infants and the expression of Epo-R in the spinal cord suggested a potential role for Epo within the CNS; Epo therefore represented a potential therapy, for example to protect photoreceptors damaged by hypoxia. In some animal studies, Epo has been shown to protect nerve cells from hypoxia-induced glutamate toxicity. Epo has also been reported to enhance nerve recovery after spinal trauma. Celik and associates investigated motor neuron apoptosis in rabbits with a transient global spinal ischemia model. The functional neurological status of animals given RhEpo was better after recovery from anesthesia, and kept improving over a two-day period. The animals given saline demonstrated a poor functional neurological status and showed no significant improvement. These results suggested that RhEpo has both an acute and a delayed beneficial action in ischemic spinal cord injury.
Erythropoietin in neuroprotection:
In contrast to these results, numerous studies have suggested that Epo had no neuroprotective benefit in animal models and EpoR was not detected in brain tissues using anti-EpoR antibodies that were shown to be sensitive and specific.
Development with mutant Epo and EpoR:
While EpoR was reportedly detected in the embryonic brain, its role in brain development is unclear. In one study Epo stimulated neural progenitor cells and prevented apoptosis in the embryonic brain in mice. Mice without EpoR demonstrated severe anemia, defective heart development, and eventually death around embryonic day 13.5 from apoptosis in the liver, endocardium, myocardium, and fetal brain. As early as embryonic day 10.5 the lack of EpoR can affect brain development by increasing fetal brain apoptosis and decreasing the number of neural progenitor cells. Cultures of EpoR-positive embryonic cortical neurons exposed to Epo showed decreased apoptosis, in contrast to the decreased neuron generation seen in EpoR-negative cells.
Development with mutant Epo and EpoR:
However, it has been questioned whether EpoR is a determining factor for nervous system function. The contributions of Epo and EpoR to neuroprotection and development are not as clearly understood as their role in erythropoiesis in hematopoietic tissue. A line of mice that expressed EpoR exclusively in hematopoietic cells developed normally, had normal brains and brain function, and was fertile, despite the lack of EpoR in nonhematopoietic tissue. Most notably, plasma Epo concentration appears to be regulated in part by nonhematopoietic EpoR expression, as peak plasma concentrations after induced anemia differed between mutant and wild-type mice. The expression of EpoR in nonhematopoietic tissue is thus dispensable for normal mouse development, but the sensitivity of erythroid progenitors to Epo is regulated by the expression of EpoR.
Development with mutant Epo and EpoR:
Erythropoietin mutants R103E and S100E (though S100 does not exist in Epo) have been reported to be non-erythropoietic while retaining the neuroprotective function. Epo with the R103 mutation is a potent inhibitor of the binding of wild-type Epo to its receptor. Although the virally expressed R103E Epo mutant has been shown to inhibit the progression and development of nervous-tissue damage in many models, it has not been shown to restore nervous tissue after damage. Given the associated risks, it would be unwise to administer or express the mutant as a preventive measure against neuronal injury. Hence, from a medical or commercial point of view, safe and feasible neuroprotective Epo mutants are not possible.
Development with mutant Epo and EpoR:
Considerable research emphasis is on non-erythropoietic but neuroprotective peptides derived from erythropoietin. The Epo peptide spanning amino acids 92–111 is neuroprotective, while its erythropoietic potency is tenfold less than that of the wild type.
A short peptide sequence from the erythropoietin molecule, called JM4, has been found to be non-erythropoietic yet theoretically neuroprotective and is being readied for Phase 1 and 2 clinical studies.
Peripheral nervous system:
Production and localization in the PNS Erythropoietin and its receptor are also reported in the peripheral nervous system, specifically in the cell bodies and axons of dorsal root ganglia, and at increased levels in Schwann cells after peripheral nerve injury. The distribution of EpoR differed from that of Epo, specifically in some neuronal cell bodies in the dorsal root ganglion, endothelial cells, and Schwann cells of normal nerves. Notably, immunostaining experiments suggested that the distribution and concentration of EpoR on Schwann cells does not change after peripheral nerve injury; however, those studies are of questionable significance, since the antibodies were nonspecific for EpoR. Other research suggested that Epo, according to mRNA expression, is up-regulated in astrocytes and hypoxia-induced neurons, while EpoR is not. A correlation between the expression of Epo-R in ganglion cells and binding to sensory receptors in the periphery, such as Pacinian corpuscles and neuromuscular spindles, suggests that Epo-R is related to touch regulation.
Peripheral nerve injury:
Site of injury After nerve injury, increased production of Epo may induce activation of certain cellular pathways, while the concentration of EpoR does not change. In Schwann cells, increased erythropoietin levels may stimulate proliferation via JAK2 and ERK/MAP kinase activation, as explained later. Similar to its stimulation of red blood cell precursors (erythrogenesis), erythropoietin stimulates undifferentiated Schwann cells to proliferate.
Peripheral nerve injury:
Anti-apoptosis mechanisms Although the mechanism is unclear, it is apparent that erythropoietin has an anti-apoptotic action after central and peripheral nerve injury. Cross-talk between the JAK2 and NF-κB signaling cascades has been demonstrated to be a possible factor in central nerve injury. Erythropoietin has also been shown to prevent axonal degeneration when produced by neighboring Schwann cells, with nitric oxide as the axonal injury signal.
Mode of action:
Direct and indirect effects Erythropoietin exerts its neuroprotective role directly by activating transmitter molecules that play a role in erythrogenesis and indirectly by restoring blood flow. The effect of subcutaneous administration of RhEpo on cerebral blood flow autoregulation after experimental subarachnoid hemorrhage has been studied. In different groups of male Sprague-Dawley rats, the injection of Epo after induction of hemorrhage normalized the autoregulation of cerebral blood flow, while those treated with a vehicle showed no autoregulation.
Mode of action:
Pathway The pathway for erythropoietin in both the central and peripheral nervous systems begins with the binding of Epo to EpoR. This leads to the enzymatic phosphorylation of PI3-K and NF-κB and results in the activation of proteins that regulate nerve cell apoptosis. Recent research shows that Epo activates JAK2 cascades which activate NF-κB, leading to the expression of c-IAP1 and c-IAP2, two apoptosis-inhibiting genes. Research conducted in rat hippocampal neurons demonstrates that the protective role of Epo in hypoxia-induced cell death acts through the extracellular signal-regulated kinases ERK1 and ERK2 and the protein kinase Akt-1/PKB. The action of Epo is not limited to promoting cell survival: the inhibition of neural apoptosis underlies the short-latency protective effects of Epo after brain injury. The neurotrophic actions may demonstrate longer-latency effects, but more research needs to be conducted on their clinical safety and effectiveness.
Mode of action:
Cerebral damage and inflammation In addition to the anti-apoptotic effect, Epo reduces the inflammatory response during different types of cerebral injury via the NF-κB pathway. The NF-κB pathway activated by Epo/EpoR phosphorylation plays a role in regulating the inflammatory and immune response, in addition to preventing apoptosis due to cellular stress. NF-κB proteins regulate the immune response through B-lymphocyte control and T-lymphocyte proliferation. These proteins are all important for the expression of genes specific to immune and inflammatory response regulation.
Neuroprotective effects:
As a neuroprotective agent erythropoietin has many functions: antagonizing the cytotoxic action of glutamate, enhancing antioxidant enzyme expression, reducing the rate of free radical production, and affecting neurotransmitter release. It exerts its neuroprotective effect indirectly through restoration of blood flow or directly by activating transmitter molecules in neurons that also play a role in erythrogenesis. Although apoptosis is not reversible, early intervention with neuroprotective therapeutic procedures such as erythropoietin administration may reduce the number of neurons that undergo apoptosis.
Neuroprotective effects:
Recombinant human EPO administration The systemic administration of RhEpo has been shown to reduce dorsal root ganglion cell apoptosis. While animals treated with RhEpo were not initially protected from mechanical allodynia after spinal nerve crush, they demonstrated a significantly improved recovery rate compared to animals not treated with RhEpo. This RhEpo therapy increased JAK2 phosphorylation, which has been found to be a key signaling step in Epo-induced neuroprotection by an anti-apoptotic mechanism. These findings point to Epo therapy as a feasible treatment of neuropathic pain by reducing the protraction of pain after nerve injury. However, more studies need to be conducted to determine the optimal time and dosage for RhEpo treatment.
Neuroprotective effects:
Neonatal brain injury In infants with poor neurodevelopment, prematurity and asphyxia are typical problems. These conditions can lead to cerebral palsy, mental retardation, and sensory impairment. Hypothermia therapy for neonatal encephalopathy is a proven therapy for neonatal brain injury. However, recent research has demonstrated that high doses of recombinant erythropoietin can reduce or prevent this type of neonatal brain injury if administered early. A high rate of neuronal apoptosis is evident in the developing brain due to initial overproduction: neurons that are electrically active and make synaptic connections survive, while those that do not make such connections undergo apoptosis. While this is a normal phenomenon, neurons in the developing brain are also at increased risk of undergoing apoptosis in response to injury. A small amount of RhEpo can cross the blood–brain barrier and protect against hypoxic-ischemic injury. Epo treatment has also been shown to preserve hemispheric brain volume 6 weeks after neonatal stroke, demonstrating both neuroprotective effects and a trend towards neurogenesis in neonatal stroke without associated long-term difficulties.
Neuroprotective effects:
Cognitive and behavioral effects Systemic administration of RhEpo has also been shown to reduce lesion-associated behavioral impairment in hippocampally injured rats. The study confirmed that Epo administration improved posttraumatic behavioral and cognitive abilities versus a saline control that experienced no improvement, although it had no detectable effect on task acquisition in non-lesioned animals. Epo is able to reduce or eliminate the consequences of mechanical injury to the hippocampus but also demonstrates possible therapeutic effects in other cognitive domains.
Neuroprotective effects:
Dopaminergic neurons Epo was shown to specifically protect dopaminergic neurons, which are closely tied into attention deficit hyperactivity disorder. Specifically in mice, Epo demonstrated protective effects on nigral dopaminergic neurons in a mouse model of Parkinson's disease. This recent experiment tested the hypothesis that RhEpo could protect dopaminergic neurons and improve the neurobehavioral outcome in a rat model of Parkinson's Disease. The intrastriatal administration of RhEpo significantly reduced the degree of rotational asymmetry, and the RhEpo-treated rats demonstrated improvement in skilled forearm use. These experiments demonstrated that intrastriatal administration of RhEpo can protect nigral dopaminergic neurons from 6-OHDA induced cell death and improve neurobehavioral outcome in a rat model of Parkinson's Disease.
Neuroprotective effects:
Current treatment Currently methylprednisolone (Medrol) is the only pharmaceutical agent used to treat spinal cord trauma. It is a corticosteroid that reduces damage to nerve cells and decreases inflammation near injury sites. It is typically administered within the first 8 hours after injury, but demonstrates poor results both in patients and in experimental models. Some controversy has arisen concerning the use of methylprednisolone because of its associated risks and poor clinical results, but it is the only medication available.
Neuroprotective effects:
Neurotherapeutic role If administered within a specific timeframe, Epo has shown a favorable response in experimental brain and spinal cord injuries in the central nervous system, such as mechanical trauma or subarachnoid hemorrhage. Research also demonstrates a therapeutic role in modulating neuronal excitability and in acting as a trophic factor both in vivo and in vitro. Administered erythropoietin functions by inhibiting the apoptosis of sensory and motor neurons via stimulation of intracellular anti-apoptotic metabolic pathways. The action of erythropoietin on Schwann cells and on the inflammatory response after neurological trauma also points to an initial stimulation of nerve regeneration after peripheral nerve injury.
Neuroprotective effects:
Role in neurogenesis Erythropoietin and its receptor play an essential role in neurogenesis, specifically in post-stroke neurogenesis and in the migration of neuroblasts to areas of neural injury. Animals null for the Epo or EpoR genes show severe defects in embryonic neurogenesis. In knock-down animals, brain-specific deletion of EpoR led to reduced cell growth in the subventricular zone and impaired neurogenesis after stroke, characterized by impaired migration of neuroblasts into the peri-infarct cortex. This result agrees with the classical view of Epo/EpoR contributions to development, demonstrating an Epo/EpoR requirement for embryonic neural development, adult neurogenesis, and neuronal regeneration after injury. High doses of exogenous erythropoietin may exert a neuroprotective role by binding to a receptor that contains the common beta receptor but lacks EpoR. Studies of Epo- and EpoR-null animals continue to elucidate the neuroprotective role of Epo/EpoR in genetics and development.
Neuroprotective effects:
Neuroregeneration While the neuroprotective effects of Epo administration in models of brain injury and disease have been well described, its effects on neuroregeneration are still being investigated. Epo administration during optic nerve transection was used to assess its neuroprotective properties in vivo as well as to demonstrate its neuroregenerative capabilities. Intravitreal injection of Epo increased retinal ganglion cell somata and axon survival after transection. A small number of axons penetrated the transection site and regenerated up to 1 mm into the distal nerve. In a second experiment, Epo doubled the number of retinal ganglion cell axons regenerating along a length of nerve grafted onto the retrobulbar optic nerve. This evidence of Epo as a neuroprotective and neuroregenerative agent is promising for Epo as a therapy in central nerve injury and repair.
Research directions:
Erythropoietin has been shown to have a neuroprotective role in both the central and peripheral nervous systems, acting through pathways that inhibit apoptosis. It has demonstrated neuroprotective effects in many experimental models of brain injury, and it is also capable of influencing neuronal stimulation and promoting peripheral nerve regeneration. Epo has many potential uses and could provide a therapeutic answer for nervous system injury; however, more studies are needed to determine the optimal timing and dosage of Epo treatment.
Glaucoma:
Neuroprotection is also a concept used in ophthalmology regarding glaucoma. The only neuroprotective intervention currently proven in glaucoma is intraocular pressure reduction. However, other possible avenues of neuroprotection have been theorized, such as protection from the toxicity induced by nerve fibres degenerating from glaucoma. Cell culture models show that retinal ganglion cells can be prevented from dying by certain pharmacological treatments. Intraperitoneal injection of Epo in DBA/2J mice protected against, or slowed, the degeneration of retinal ganglion cells (RGCs). However, overexpression of Epo and Epo mutants in the eye via viral vectors is toxic to the retina.
**Bid Euchre**
Bid Euchre:
Bid Euchre, Auction Euchre, Pepper, or Hasenpfeffer, is the name given to a group of card games played in North America based on the game Euchre. It introduces an element of bidding in which the trump suit is decided by whichever player bids to take the most tricks. Variation comes from the number of cards dealt, the absence of any undealt cards, the bidding and scoring process, and the addition of a no-trump declaration. It is typically a partnership game for four players, played with a 24-, 32- or 36-card pack, or two decks of 24 cards each.
Single Deck Bid Euchre:
A pack of 24 cards containing 9, 10, J, Q, K, and A in each suit is used. The rank of the cards in the trump suit is: J (of the trump suit, also known as the "right bower" or bauer; high), J (of the other suit of the same colour as the trump suit, also known as the "left bower" or bauer), A, K, Q, 10, 9 (low). In the plain suits, the rank is: A (high), K, Q, J, 10, 9 (low). When playing with no trumps, all four suits follow the 'plain suit' ranking. Cards are dealt one at a time to each player, clockwise, starting with the player to the dealer's left. Each player receives six cards. Variations in bid euchre, such as the number of cards dealt, scoring values, and winning requirements, are agreed upon before game play.
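To make the bower logic concrete, here is a minimal Python sketch of the card ranking described above; the function name, suit labels, and tuple encoding are illustrative choices, not part of any standard implementation. Note that the left bower counts as a member of the trump suit for purposes of following suit, a subtlety a full game engine would need to handle.

```python
SAME_COLOR = {"spades": "clubs", "clubs": "spades",
              "hearts": "diamonds", "diamonds": "hearts"}

def card_rank(card, trump):
    """Return a sortable rank for a card from the 24-card pack.

    card is a (value, suit) pair such as ("J", "hearts"); trump is a
    suit name, or None for a no-trump hand. Higher tuples beat lower.
    """
    value, suit = card
    if trump is not None:
        if value == "J" and suit == trump:
            return (2, 7)                    # right bower: highest trump
        if value == "J" and suit == SAME_COLOR[trump]:
            return (2, 6)                    # left bower: second-highest trump
        if suit == trump:
            order = ["9", "10", "Q", "K", "A"]
            return (2, order.index(value))   # remaining trumps: A K Q 10 9
    plain = ["A", "K", "Q", "J", "10", "9"]
    return (1, 5 - plain.index(value))       # plain-suit (and no-trump) rank

# The right bower outranks the trump ace:
assert card_rank(("J", "hearts"), "hearts") > card_rank(("A", "hearts"), "hearts")
```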
Single Deck Bid Euchre:
Bidding Bidding is the primary way in which Bid Euchre differs from standard Euchre. A bid is the number of tricks that a player wagers his or her team will win, and each bid must be higher than any preceding it. Beginning with the player to the dealer's left, each player in turn either bids or passes, stating how many tricks he or she thinks can be taken in partnership with his or her partner (sitting across the table). "Trump bids" are the numbers four through six. Common bids are three, four, or five; a bid of one is not typical. There are some variations, but in most traditional games the bidding goes around the table only once, with each player bidding one time. At the end of bidding, whoever bids highest wins the bid and names the suit that will become trump. Bidding does not generally exceed five (the maximum is six), as there are two special bids.
Single Deck Bid Euchre:
The two bid There is special meaning given to the "two bid". If a player holds two jacks of the same color (both black jacks or both red jacks), the player can bid two to signal this holding to his or her partner, giving the partner useful information when placing a bid.
Single Deck Bid Euchre:
Pepper exchanges A small pepper is a play in which the bidder exchanges one card with his or her partner (the bidder may not choose or say which card is wanted, only declare the trump suit) and then plays alone against the other two players. The partner also has the option of returning the original card to the bidder. The bidding player must take all six tricks; if successful, the team receives seven points.
Single Deck Bid Euchre:
A big pepper is the same as a small pepper, except that the bidder does not receive a card from his or her partner and, if successful, scores 14 points. A "super moon" is a big pepper in which the bidder attempts to take all six tricks without looking at his or her hand first and without any help from the partner. If someone calls "one best", "moon", or "super moon", the dealer is allowed to call the same or a higher bid and win the bid.
Single Deck Bid Euchre:
Leading, taking tricks, and scoring At the end of bidding, the winning (or "contracting") bidder makes the opening play and may lead any card. Going clockwise, the other players each play a card and must follow suit if possible. If a player cannot follow suit, any card may be played. There is no rule about who may play trump first. The trick goes to the highest trump or, if there are no trump cards, to the highest card of the suit led. The winner of a trick leads to the next trick. The contracting side scores one point for each trick taken if it makes at least its contract, but is set back (loses) six points if it fails to make its contract, regardless of the value of the contract or the tricks actually won. As such, a side can have a negative score. If the defending side (the side that does not win the bid) fails to take any tricks, it goes back six points. An exception is a 'pepper' bid: if all the tricks are taken, the contracting side wins 14 or 12 points (for the big and small pepper, respectively); if it fails to take all six tricks, it is set back 14 or 12 points, respectively. The opposing side always scores one point for each trick taken, but loses six points if it takes no tricks in a small or big pepper.
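As a summary of the paragraph above, here is a minimal Python sketch of the per-hand scoring; the function and parameter names are illustrative assumptions, and the point values simply follow the rules as stated.

```python
def score_hand(contract, bidder_tricks, pepper=None):
    """Score one six-trick hand for (contracting side, defending side).

    pepper is None for an ordinary contract, or "small"/"big" for the
    special bids described above. Illustrative names and assumptions.
    """
    defense_tricks = 6 - bidder_tricks
    if pepper is not None:
        stake = 14 if pepper == "big" else 12
        contract_delta = stake if bidder_tricks == 6 else -stake
    else:
        # One point per trick if the contract is made, else set back six.
        contract_delta = bidder_tricks if bidder_tricks >= contract else -6
    # Defenders score a point per trick but go back six with no tricks.
    defense_delta = defense_tricks if defense_tricks > 0 else -6
    return contract_delta, defense_delta

print(score_hand(4, 5))          # (5, 1): contract made with five tricks
print(score_hand(4, 3))          # (-6, 3): contract failed
print(score_hand(6, 6, "big"))   # (14, -6): big pepper made, defense swept
```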
Single Deck Bid Euchre:
Winning The standard winning number or goal is 42 (32 in "hawsy"). The first team to reach or exceed 42 while on offense wins. Other variations of the game do not use a winning number and instead allow players to set a time limit such as one or two hours, at the end of which time the team with the highest point total wins.
Bid euchre variations:
Progressive Progressive Euchre is a tournament format of Euchre. Play begins when the lead table rings a bell. The lead table plays eight hands, the deal rotating to the left with each hand so that each player deals twice, then rings the bell again. When the bell rings, players at each table finish their current hand and record their team score on an individual tally. The losing team at the head table moves to the tail table; otherwise, the winning team at each table advances to the next table, and one member of the losing team changes seat so that partners in one game are opponents in the next. Play at each table begins immediately without waiting for another signal. After 10 games, players total their tally sheets to determine the high and low scores for the tournament.
Bid euchre variations:
Play Each table of four players uses a 24-card deck containing A, K, Q, J, 10, and 9 in the four suits (♠ ♥ ♣ ♦). Players bid once each, clockwise around the table, starting at the dealer's left. Bids of one to six are made by stating the number of tricks to be taken. A player must either bid higher than any prior bid or pass.
Bid euchre variations:
A pepper consists of winning all six tricks with a passed card. If no succeeding player wishes to play a loner, the bidder declares suit by saying, "Give me your best heart", "Give me your best club", etc. His partner gives the requested card to the bidder, face down, before seeing the bidder's passed card, and sits out the rest of the hand. Loner bids (asserting that one will win all six tricks without assistance) are pre-emptive and are made by declaring suit and leading out the first trick. The high bidder declares suit as he leads out the first trick, and the winner of each trick leads the following trick. Only suits may be declared trump; no-trump and low-no-trump declarations are not permitted. The deal passes clockwise around the table after each hand.
Bid euchre variations:
Teams score one point for three or four tricks, two points for all five tricks, and four points for a loner. A team failing to achieve its bid number of tricks receives no points for any tricks won, and two points go to the opposing team's score. A euchre sweep nets four points.
Bid euchre variations:
Pfeffer The names "Pfeffer", "Hasenpfeffer", and "Double Hasenpfeffer" come from Hasenpfeffer, a German dish of marinated and stewed trimmings of hare. Pfeffer is a variation of Pepper and is most often played in the Midwest. Its primary difference is that the dealer is forced to make a four-trick bid when all players before the dealer pass. This allows a strategy of either forcing opponents to make bids or "sticking the dealer". The minimum bid for the dealer is four tricks.
Bid euchre variations:
Play All card hierarchies are the same as in Pepper. A Pfeffer bid (also known as double-Pfeffer) is a bid to win all six tricks alone. The player who wins the bid declares trump. For trump bids, the player to the left of the dealer leads to the first trick and each player must follow suit if possible. The trick is won by the highest card of the suit led, or by the highest trump if any were played. The winner of each trick leads to the next, and play continues until all six tricks have been played. For no-trump bids, the player to the right of the declarer leads.
Bid euchre variations:
Scoring & Winning For non-Pfeffer bids, the team that declared trump scores one point for each trick taken if they took at least as many tricks as were bid. If the declaring team takes all six tricks, they get six points and the opposing players are "set": they lose five points and receive a "hickey". If the declaring team takes fewer than the number of tricks bid, they too are set, losing five points and receiving a hickey.
Bid euchre variations:
For Pfeffer bids, if the declaring team takes all six tricks, they score twelve points and the opposing team is set, losing five points and receiving a hickey. If the declaring team fails to take all six tricks, they are set, lose ten points, and receive two hickeys.
Bid euchre variations:
In all cases, the opposing team simply scores one point for every trick taken. The deal then passes clockwise around the table. The game is played to 42 points; in the case of a tie at 42, the bidding team wins. Negative scores are allowed. For purposes of betting, amounts are set for games and sets; games are generally worth twice the amount of sets. Games ending with the losing team at zero points or below pay double.
Bid euchre variations:
Hasenpfeffer Hasenpfeffer, also called Pepper, is a four-player partnership variation of Euchre played with a 24-card pack plus the Joker. Six cards are dealt in batches of three, and the rest are laid face down to one side. Bids are made numerically for the naming of trump, and the declarer may name no trump in place of a single suit. If no one bids, the holder of the best bauer is obliged to bid three, and if that card then proves to be out of play, the deal is annulled. The highest bidder announces trump before play. The bidder's side scores one point per trick won if this is not less than the bid; otherwise, it loses one point per undertrick. Play is to 10 points. Competition to secure a call is very keen, since one stands to gain more than one stands to lose, but for that very reason the bidding is frequently pushed beyond the level of safety.
Bid euchre variations:
Buckpfeffer A variation combining many bid euchre varieties, "BuckenPeffer" (or "Buck") involves only one round of bids, with a minimum bid of three. If all three players pass before the dealer, the dealer is forced to bid four tricks, as in "Screw the Dealer"; there is no second round of bidding. There is no bidding "two" to inform a partner that the bidder holds two jacks of the same color. A player may call high or low as trump. Unless the player calling trump has called a Pfeffer bid (going solo with no partner and required to take all six tricks), the bidder calls high or low and must exchange their best card (the ace if high is called, the 9 if low is called) for the worst card (a 9 if high is called, an ace if low is called) from the player on the caller's left. There are different scoring and wagering rules, such as burns, double burns, and triple burns. Scoring differs in that teams, not individuals, are scored; points awarded equal the number of tricks taken, and the game is generally played to 25 or more. Scoring idiosyncrasies include: if a team takes all six tricks after calling trump (skunking the other team), it scores six plus the number of tricks it bid, and the skunked team has the number of the winning trump bid subtracted from its score.
Bid euchre variations:
Buck euchre Dirty clubs (or buck euchre) is a variation of the card games euchre and 500, and is similar to Oh Hell. These are trick-taking card games, but unlike euchre, the players must bid on how many tricks they will take. The game is played by three to six players, depending on the variation. The game uses the same cards as euchre: the 10, J, Q, K, and A of each suit (three players), with lower cards (9, 8, 7, etc.) added as necessary for more players. For the first hand, the dealer is chosen at random; thereafter the deal proceeds clockwise.
Bid euchre variations:
Play Each hand, one suit is trump; trump cards beat non-trump cards. The order of cards in the trump suit is the same as in euchre: J of the trump suit (right bauer), J of the other suit of the same color (left bauer), A, K, Q, 10, and so on. The order of cards in non-trump suits is A, K, Q, (J), 10, and so on.
Bid euchre variations:
Each hand, five cards are dealt to each player; the remaining cards are placed face down (the blind), except the top card, which is flipped face up. If this face-up card is a club (this is called "dirty clubs"), there is no bidding and clubs are automatically trump. Otherwise, each player in clockwise order bids the number of tricks they think they can take. The bidding may go around a second time, giving players the chance to raise their bids. The high bidder calls trump.
Bid euchre variations:
Play begins with the player to the left of the dealer. This player leads with a single card, and the play proceeds clockwise. Players must follow suit if possible. The player who takes the trick gets to lead for the next trick.
Bid euchre variations:
Scoring Each player starts with the same number of points, typically 15, and the goal is to reach zero. Each player subtracts the number of tricks taken from his score on each hand. However, the high bidder must take at least the number of tricks he bid; if he fails to take that many, instead of subtracting points he must add five to his score. Being the high bidder is therefore helpful in that it lets a player call trump, but it is also risky, as that player is the only one bound to a specified number of tricks.
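The countdown scoring reads naturally as a small routine. Below is a minimal Python sketch under the rules just described; the data layout and names are illustrative assumptions, not part of any codified rule set.

```python
def apply_hand(scores, tricks, high_bidder, bid):
    """Update buck euchre countdown scores in place after one hand.

    scores and tricks are dicts keyed by player name; each player races
    from a starting total (e.g. 15) down to zero. Illustrative only.
    """
    for player, taken in tricks.items():
        if player == high_bidder and taken < bid:
            scores[player] += 5        # failed contract: bumped five points
        else:
            scores[player] -= taken    # tricks count down toward zero
    return scores

scores = {"A": 15, "B": 15, "C": 15}
# B bid 3 but took only 2 tricks, so B is bumped instead of counting down.
print(apply_hand(scores, {"A": 3, "B": 2, "C": 0}, high_bidder="B", bid=3))
# {'A': 12, 'B': 20, 'C': 15}
```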
Bid euchre variations:
One variation is that a player who takes no tricks is bumped (penalized) five points regardless of his bid. When this rule is in place, the players are usually given a chance to drop out after trump is called. A player who drops out cannot be penalized, but also cannot take any tricks. Another variation is that if the call goes all the way around without a bid, there is no trump, and players do not get a chance to drop out.
Bid euchre variations:
Eau Claire Clubs Eau Claire Clubs (also called Dirrties, Clübbérts, or simply Clubs) is a regional variant similar to Dirty Clubs. The most notable difference is that it is played by four players split into two partnerships, instead of every player for themselves. It follows the same general bidding, card play, and scoring rules as Dirty Clubs.
Double Deck Bid Euchre:
Another variation, Double Deck Bid Euchre, uses a 48-card deck, giving 12 cards to each player. There are two teams of two players each. The minimum bid is three, and the highest bid wins. If the team with the highest bid makes its bid, it scores one point for each trick taken; if it fails, it loses points equal to its bid. The opponents score one point for each trick they take. The game is won by the first team to score 36 points.
Double Deck Bid Euchre:
Variations Indiana Double Deck: This version of Double Deck Bid Euchre is commonly played in the Midwest United States by four players in teams of two. A 48-card deck (a pinochle deck) is used.
Double Deck Bid Euchre:
Five-handed: A five-handed variation using two decks with the nines removed. Each player competes against all the others. This variation can also be played by six, seven or more players, following the same rules. For each player above five, eight cards must be added to the deck: if six play, the eight nines are added (four from each of the two decks); for seven players, add the nines and eights from both decks. Each player receives eight cards.
Double Deck Bid Euchre:
Double Hasenpfeffer A variant for either four or six players divided into two teams, using the 48-card pinochle pack. Double Hasenpfeffer (sometimes Double Pepper) may be played without bauers, so all cards rank A, K, Q, J, 10, 9 in each suit, and there are no bids of little or big pepper. All cards are dealt out and bidding goes around the table once. The minimum bid is six. If all pass, the dealer names trump at a minimum bid of six tricks. In a four-player game, a high bidder may opt to play alone, exchanging any two cards with his or her partner and then playing solo against the opposing team. Scoring is the same as in 24-card pepper above, except that a forced declaration by the dealer loses only half (rounded up) if not made. Playing alone scores double: positive if the bid is made, negative if not.
**Attractor network**
Attractor network:
An attractor network is a type of recurrent dynamical network that evolves toward a stable pattern over time. Nodes in the attractor network converge toward a pattern that may be fixed-point (a single state), cyclic (with regularly recurring states), chaotic (locally but not globally unstable) or random (stochastic). Attractor networks have largely been used in computational neuroscience to model neuronal processes such as associative memory and motor behavior, as well as in biologically inspired methods of machine learning.
Attractor network:
An attractor network contains a set of n nodes, which can be represented as vectors in a d-dimensional space, where n > d. Over time, the network state tends toward one of a set of predefined states on a d-manifold; these are the attractors.
Overview:
In attractor networks, an attractor (or attracting set) is a closed subset of states A toward which the system of nodes evolves. A stationary attractor is a state or set of states where the global dynamics of the network stabilize. Cyclic attractors evolve the network toward a set of states in a limit cycle, which is repeatedly traversed. Chaotic attractors are non-repeating bounded attractors that are continuously traversed.
Overview:
The network state space is the set of all possible node states. The attractor space is the set of nodes on the attractor.
Overview:
Attractor networks are initialized based on the input pattern. The dimensionality of the input pattern may differ from the dimensionality of the network nodes. The trajectory of the network consists of the set of states along the evolution path as the network converges toward the attractor state. The basin of attraction is the set of states that results in movement towards a certain attractor.
Types:
Various types of attractors may be used to model different types of network dynamics. While fixed-point attractor networks are the most common (originating from Hopfield networks), other types of networks are also examined.
Types:
Fixed point attractors The fixed point attractor naturally follows from the Hopfield network. Conventionally, fixed points in this model represent encoded memories. These models have been used to explain associative memory, classification, and pattern completion. Hopfield nets contain an underlying energy function that allows the network to asymptotically approach a stationary state. One class of point attractor network is initialized with an input, after which the input is removed and the network moves toward a stable state. Another class features predefined weights that are probed by different types of input. If the stable state differs during and after the input, the network serves as a model of associative memory; if the states during and after input do not differ, the network can be used for pattern completion.
Types:
Other stationary attractors Line attractors and plane attractors are used in the study of oculomotor control. These line attractors, or neural integrators, describe eye position in response to stimuli. Ring attractors have been used to model rodent head direction.
Cyclic attractors Cyclic attractors are instrumental in modelling central pattern generators, neurons that govern oscillatory activities in animals, such as chewing, walking, and breathing.
Chaotic attractors Chaotic attractors (also called strange attractors) have been hypothesized to reflect patterns in odor recognition. While chaotic attractors have the benefit of more quickly converging upon limit cycles, there is yet no experimental evidence to support this theory.
Continuous attractors Neighboring stable states (fixed points) of continuous attractors (also called continuous attractor neural networks) code for neighboring values of a continuous variable such as head direction or actual position in space.
Types:
Ring attractors A subtype of continuous attractors with a particular topology of the neurons (ring for 1-dimensional and torus or twisted torus for 2-dimensional networks). The observed activity of grid cells is successfully explained by assuming the presence of ring attractors in the medial entorhinal cortex. Recently, it has been proposed that similar ring attractors are present in the lateral portion of the entorhinal cortex and their role extends to registering new episodic memories.
Implementations:
Attractor networks have mainly been implemented as memory models using fixed-point attractors. However, they have been largely impractical for computational purposes because of difficulties in designing the attractor landscape and network wiring, which result in spurious attractors and poorly conditioned basins of attraction. Furthermore, training attractor networks is generally computationally expensive compared to other methods such as k-nearest-neighbor classifiers. Nevertheless, their role in the general understanding of different biological functions, such as locomotor function, memory, and decision-making, makes them attractive as biologically realistic models.
Implementations:
Hopfield networks Hopfield attractor networks are an early implementation of attractor networks with associative memory. These recurrent networks are initialized by the input, and tend toward a fixed-point attractor. The update function in discrete time is $x(t+1) = f(W x(t))$, where $x$ is a vector of nodes in the network and $W$ is a symmetric matrix describing their connectivity. The continuous-time update is $\frac{dx}{dt} = -\lambda x + f(W x)$. Bidirectional networks are similar to Hopfield networks, with the special case that the matrix $W$ is a block matrix.
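For concreteness, here is a minimal numpy sketch of the discrete-time update above, taking $f$ to be the sign function and building $W$ with a simple Hebbian rule; the names are illustrative and this is a sketch of the textbook construction, not any particular published implementation.

```python
import numpy as np

def hopfield_step(x, W):
    """One discrete-time update x(t+1) = f(W x(t)) with f = sign."""
    return np.sign(W @ x)

def train_hebbian(patterns):
    """Build a symmetric weight matrix from +/-1 patterns (Hebbian rule)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

# Store one pattern and recover it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hebbian(pattern[None, :])
cue = pattern.copy()
cue[:2] *= -1                      # flip two bits
x = cue
for _ in range(10):                # iterate toward the fixed point
    x = hopfield_step(x, W)
print(np.array_equal(x, pattern))  # True: converged to the stored attractor
```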
Implementations:
Localist attractor networks Zemel and Mozer (2001) proposed a method to reduce the number of spurious attractors that arise when multiple attractors are encoded by each connection in the network. Localist attractor networks encode knowledge locally by implementing an expectation-maximization algorithm on a mixture of Gaussians representing the attractors, minimizing the free energy in the network so that it converges on only the most relevant attractor. This results in the following update equations:
Determine the activity of the attractors: $q_i(t) = \frac{\pi_i\, g(y(t), w_i, \sigma(t))}{\sum_j \pi_j\, g(y(t), w_j, \sigma(t))}$
Determine the next state of the network: $y(t+1) = \alpha(t)\, \xi + (1 - \alpha(t)) \sum_i q_i(t)\, w_i$
Determine the attractor width through the network: $\sigma_y^2(t) = \frac{1}{n} \sum_i q_i(t)\, \lvert y(t) - w_i \rvert^2$
Here $\pi_i$ denotes the strength of basin $i$, $w_i$ the center of the basin, $\xi$ the input to the net, and $g$ an unnormalized Gaussian distribution centered at $y$ with standard deviation $\sigma$. The network is then re-observed, and the above steps repeat until convergence. The model also reflects two biologically relevant concepts. The change in $\alpha$ models stimulus priming by allowing quicker convergence toward a recently visited attractor. Furthermore, the summed activity of attractors allows a "gang effect": two nearby attractors mutually reinforce each other's basins.
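A compact numpy sketch of one iteration of these updates follows; the array shapes and names are my own assumptions (w holds one attractor center per row), meant only to make the equations concrete rather than to reproduce the published model.

```python
import numpy as np

def localist_step(y, xi, w, pi, sigma, alpha):
    """One update of a localist attractor network (after Zemel & Mozer).

    y: current state (n,); xi: input (n,); w: attractor centers (k, n);
    pi: basin strengths (k,); sigma, alpha: scalars. Returns the next
    state, the next attractor width, and the attractor activities q.
    """
    sq_dist = np.sum((y - w) ** 2, axis=1)            # |y - w_i|^2 per attractor
    g = pi * np.exp(-sq_dist / (2 * sigma ** 2))      # unnormalized Gaussians
    q = g / g.sum()                                   # attractor activities
    y_next = alpha * xi + (1 - alpha) * (q @ w)       # blend input and centers
    sigma_next = np.sqrt(np.sum(q * sq_dist) / y.size)  # shrinking width
    return y_next, sigma_next, q
```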
Implementations:
Reconsolidation attractor networks Siegelmann (2008) generalized the localist attractor network model to include the tuning of the attractors themselves. This algorithm uses the EM method above, with the following modifications: (1) early termination of the algorithm when the attractor's activity is most distributed, or when high entropy suggests a need for additional memories; and (2) the ability to update the attractors themselves: $w_i(t+1) = \nu\, q_i(t)\, y(t) + [1 - \nu\, q_i(t)]\, w_i(t)$, where $\nu$ is the step-size parameter governing the change of $w_i$. This model reflects memory reconsolidation in animals, and shows some of the same dynamics as those found in memory experiments.
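Continuing the sketch above, the attractor update itself is a one-liner; again the names are illustrative assumptions.

```python
def reconsolidate(w, q, y, v):
    """Pull each attractor center toward the current state y, weighted by
    its activity q and step size v (the modification described above)."""
    return (v * q)[:, None] * y + (1 - v * q)[:, None] * w
```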
Implementations:
Further developments in attractor networks, such as kernel-based attractor networks, have improved the computational feasibility of attractor networks as a learning algorithm, while maintaining the high-level flexibility to perform pattern completion on complex compositional structures.
**Cassia gum**
Cassia gum:
Cassia gum is a flour and food additive made from the endosperm of the seeds of Senna obtusifolia and Senna tora (also called Cassia obtusifolia or Cassia tora). It is composed of at least 75% polysaccharide, primarily galactomannan with a mannose-to-galactose ratio of 5:1 and a high molecular mass of 200,000–300,000 Da.
Approval:
Japan In 1995, cassia gum was added to the list of approved food additives in Japan by the Japanese Ministry of Health and Welfare.
Approval:
United States Two GRAS notices were filed with the U.S. Food and Drug Administration (FDA), one on June 23, 2000 (GRN 51) and one on November 21, 2003 (GRN 139); neither was evaluated, owing to the notifiers' requests to cease evaluation. In June 2008, specialty firm Lubrizol Advanced Material filed a petition with the FDA proposing that food regulations be amended to provide for the use of cassia gum as a stabilizer in frozen dairy desserts. Approval in the US is still pending, with no clear indication of when it may be obtained.
Approval:
European Union In 2010, cassia gum received EU approval for human food applications.
Uses:
It is used as a thickener and gelling agent, and has E-number E427 in food and E499 in feed (pet food).
**Slip joint pliers**
Slip joint pliers:
Slip joint pliers are pliers whose pivot point or fulcrum can be moved to increase the size range of their jaws. Most slip joint pliers use a mechanism that allows the pivot point to slide into one of several positions when the pliers are fully opened. Jaws can be thick, thin, regular, or multiple; "multiple" refers to slip joint pliers that provide two or more pivot positions.
Varieties:
There are many different varieties of slip joint pliers, including straight slip joint pliers, tongue-and-groove pliers and lineman's pliers.
Varieties:
Straight slip joint pliers Straight slip joint pliers are configured similarly to common or lineman's pliers in that their jaws are in line with their handles. One side of the pliers usually has two holes that are connected by a slot for the pivot. The pivot is fastened to the other side and shaped such that it can slide through the slot when the pliers are fully opened.
Varieties:
Tongue-and-groove pliers Tongue-and-groove pliers have their jaws offset from their handles and offer several positions in which the lower jaw can be set.
**Epistemological pluralism**
Epistemological pluralism:
Epistemological pluralism is a term used in philosophy, economics, and virtually any field of study to refer to different ways of knowing things: different epistemological methodologies for attaining a fuller description of a particular field. A particular form of epistemological pluralism is dualism, for example, the separation of methods for investigating mind from those appropriate to matter (see mind–body problem). By contrast, monism is the restriction to a single approach, for example reductionism, which asserts that the study of all phenomena can be reduced to finding relations among some few basic entities. Epistemological pluralism is to be distinguished from ontological pluralism, the study of different modes of being, for example the contrast between the mode of existence exhibited by "numbers" and that of "people" or "cars". In the philosophy of science, epistemological pluralism arose in opposition to reductionism, expressing the contrary view that at least some natural phenomena cannot be fully explained by a single theory or fully investigated using a single approach. In mathematics, the variety of possible epistemological approaches includes platonism ("mathematics as an objective study of abstract reality, no more created by human thought than the galaxies"), radical constructivism (with restrictions upon logic, banning proof by reductio ad absurdum, and other limitations), and many other schools of thought. In economics, controversy exists between a single epistemological approach and a variety of approaches: "At midcentury, the neoclassical approach achieved near-hegemonic status (at least in the United States), and its proponents sought to bring all kinds of social phenomena under its uniform explanatory umbrella. The resistance of some phenomena to neoclassical treatment has led a number of economists to think that alternative approaches are necessary for at least some phenomena and thus also to advocate pluralism." An extensive history of these attempts is provided by Sent.
**Journal of Low Temperature Physics**
Journal of Low Temperature Physics:
The Journal of Low Temperature Physics is a biweekly peer-reviewed scientific journal covering the field of low temperature physics and cryogenics, including superconductivity, superfluidity, matter waves, magnetism and electronic properties, active areas in condensed matter physics, and low temperature technology. Occasionally, special issues dedicated to a particular topic are also published. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.57. The journal was established by John G. Daunt in 1969, and the current Editors-in-Chief are Neil S. Sullivan, Jukka Pekola and Paul Leiderer.
Abstracting and indexing:
The journal is abstracted and indexed in Chemical Abstracts Service, Science Citation Index, and Scopus.
**Diesel engine**
Diesel engine:
The diesel engine, named after Rudolf Diesel, is an internal combustion engine in which ignition of the fuel is caused by the elevated temperature of the air in the cylinder due to mechanical compression; thus, the diesel engine is called a compression-ignition engine (CI engine). This contrasts with engines using spark plug-ignition of the air-fuel mixture, such as a petrol engine (gasoline engine) or a gas engine (using a gaseous fuel like natural gas or liquefied petroleum gas).
Introduction:
Diesel engines work by compressing only air, or air plus residual combustion gases from the exhaust (known as exhaust gas recirculation, "EGR"). Air is inducted into the chamber during the intake stroke, and compressed during the compression stroke. This increases the air temperature inside the cylinder so that atomised diesel fuel injected into the combustion chamber ignites. With the fuel being injected into the air just before combustion, the dispersion of the fuel is uneven; this is called a heterogeneous air-fuel mixture. The torque a diesel engine produces is controlled by manipulating the air-fuel ratio (λ); instead of throttling the intake air, the diesel engine relies on altering the amount of fuel that is injected, and the air-fuel ratio is usually high.
Introduction:
The diesel engine has the highest thermal efficiency (engine efficiency) of any practical internal or external combustion engine, due to its very high expansion ratio and inherent lean burn, which enables heat dissipation by the excess air. A small efficiency loss is also avoided compared with non-direct-injection gasoline engines, since unburned fuel is not present during valve overlap and therefore no fuel goes directly from the intake/injection to the exhaust. Low-speed diesel engines (as used in ships and other applications where overall engine weight is relatively unimportant) can reach effective efficiencies of up to 55%. The combined-cycle gas turbine (Brayton and Rankine cycle) is a combustion engine that is more efficient than a diesel engine, but its mass and dimensions make it unsuited for vehicles, watercraft, or aircraft. The world's largest diesel engines put in service are 14-cylinder, two-stroke marine diesel engines; they produce a peak power of almost 100 MW each. Diesel engines may be designed with either two-stroke or four-stroke combustion cycles. They were originally used as a more efficient replacement for stationary steam engines. Since the 1910s, they have been used in submarines and ships. Use in locomotives, buses, trucks, heavy equipment, agricultural equipment and electricity generation plants followed later. In the 1930s, they slowly began to be used in a few automobiles. Since the energy crisis of the 1970s, demand for higher fuel efficiency has resulted in most major automakers, at some point, offering diesel-powered models, even in very small cars. According to Konrad Reif (2012), diesel cars at the time accounted for half of newly registered cars in the EU on average. However, air pollution emissions are harder to control in diesel engines than in gasoline engines, so the use of diesel auto engines in the U.S. is now largely relegated to larger on-road and off-road vehicles. Though aviation has traditionally avoided diesel engines, aircraft diesel engines have become increasingly available in the 21st century. Since the late 1990s, for various reasons – including the diesel's normal advantages over gasoline engines, but also recent issues peculiar to aviation – development and production of diesel engines for aircraft has surged, with over 5,000 such engines delivered worldwide between 2002 and 2018, particularly for light airplanes and unmanned aerial vehicles.
History:
Diesel's idea In 1878, Rudolf Diesel, who was a student at the "Polytechnikum" in Munich, attended the lectures of Carl von Linde. Linde explained that steam engines are capable of converting just 6–10% of the heat energy into work, but that the Carnot cycle allows conversion of much more of the heat energy into work by means of isothermal change in condition. According to Diesel, this ignited the idea of creating a highly efficient engine that could work on the Carnot cycle. Diesel was also introduced to a fire piston, a traditional fire starter using rapid adiabatic compression principles, which Linde had acquired from Southeast Asia. After several years of working on his ideas, Diesel published them in 1893 in the essay Theory and Construction of a Rational Heat Motor. Diesel was heavily criticised for his essay, but only a few found the mistake that he made: his rational heat motor was supposed to utilise a constant-temperature cycle (with isothermal compression) that would require a much higher level of compression than that needed for compression ignition. Diesel's idea was to compress the air so tightly that the temperature of the air would exceed that of combustion. However, such an engine could never perform any usable work. In his 1892 US patent (granted in 1895) #542846, Diesel describes the compression required for his cycle: "pure atmospheric air is compressed, according to curve 1 2, to such a degree that, before ignition or combustion takes place, the highest pressure of the diagram and the highest temperature are obtained – that is to say, the temperature at which the subsequent combustion has to take place, not the burning or igniting point. To make this more clear, let it be assumed that the subsequent combustion shall take place at a temperature of 700°. Then in that case the initial pressure must be sixty-four atmospheres, or for 800° centigrade the pressure must be ninety atmospheres, and so on. Into the air thus compressed is then gradually introduced from the exterior finely divided fuel, which ignites on introduction, since the air is at a temperature far above the igniting-point of the fuel. The characteristic features of the cycle according to my present invention are therefore, increase of pressure and temperature up to the maximum, not by combustion, but prior to combustion by mechanical compression of air, and thereupon the subsequent performance of work without increase of pressure and temperature by gradual combustion during a prescribed part of the stroke determined by the cut-off." By June 1893, Diesel had realised his original cycle would not work and he adopted the constant pressure cycle. Diesel describes the cycle in his 1895 patent application. Notice that there is no longer a mention of compression temperatures exceeding the temperature of combustion. Now it is simply stated that the compression must be sufficient to trigger ignition.
History:
1. In an internal-combustion engine, the combination of a cylinder and piston constructed and arranged to compress air to a degree producing a temperature above the igniting-point of the fuel, a supply for compressed air or gas; a fuel-supply; a distributing-valve for fuel, a passage from the air supply to the cylinder in communication with the fuel-distributing valve, an inlet to the cylinder in communication with the air-supply and with the fuel-valve, and a cut-off, substantially as described. In 1892, Diesel received patents in Germany, Switzerland, the United Kingdom and the United States for "Method of and Apparatus for Converting Heat into Work". In 1894 and 1895, he filed patents and addenda in various countries for his engine; the first patents were issued in Spain (No. 16,654), France (No. 243,531) and Belgium (No. 113,139) in December 1894, and in Germany (No. 86,633) in 1895 and the United States (No. 608,845) in 1898. Diesel was attacked and criticised over a period of several years. Critics claimed that Diesel never invented a new motor and that the invention of the diesel engine was fraud. Otto Köhler and Emil Capitaine were two of the most prominent critics of Diesel's time. Köhler had published an essay in 1887 in which he described an engine similar to the one Diesel describes in his 1893 essay, and figured that such an engine could not perform any work. Emil Capitaine had built a petroleum engine with glow-tube ignition in the early 1890s; he claimed, against his own better judgement, that his glow-tube ignition engine worked the same way Diesel's engine did. His claims were unfounded and he lost a patent lawsuit against Diesel. Other engines, such as the Akroyd engine and the Brayton engine, use an operating cycle that is different from the diesel engine cycle. Friedrich Sass says that the diesel engine is Diesel's "very own work" and that any "Diesel myth" is "falsification of history".
History:
The first diesel engine Diesel sought out firms and factories that would build his engine. With the help of Moritz Schröter and Max Gutermuth, he succeeded in convincing both Krupp in Essen and the Maschinenfabrik Augsburg. Contracts were signed in April 1893, and in early summer 1893, Diesel's first prototype engine was built in Augsburg. On 10 August 1893, the first ignition took place; the fuel used was petrol. In winter 1893/1894, Diesel redesigned the existing engine, and by 18 January 1894, his mechanics had converted it into the second prototype. During January that year, an air-blast injection system was added to the engine's cylinder head and tested. Friedrich Sass argues that it can be presumed that Diesel copied the concept of air-blast injection from George B. Brayton, albeit that Diesel substantially improved the system. On 17 February 1894, the redesigned engine ran for 88 revolutions – one minute; with this news, Maschinenfabrik Augsburg's stock rose by 30%, indicative of the tremendous anticipated demand for a more efficient engine. On 26 June 1895, the engine achieved an effective efficiency of 16.6% and had a fuel consumption of 519 g·kW−1·h−1.
History:
However, despite proving the concept, the engine caused problems, and Diesel could not achieve any substantial progress. Therefore, Krupp considered rescinding the contract they had made with Diesel. Diesel was forced to improve the design of his engine and rushed to construct a third prototype. Between 8 November and 20 December 1895, the second prototype successfully covered over 111 hours on the test bench. In the January 1896 report, this was considered a success. In February 1896, Diesel considered supercharging the third prototype. Imanuel Lauster, who was ordered to draw the third prototype "Motor 250/400", had finished the drawings by 30 April 1896. During the summer of that year the engine was built, and it was completed on 6 October 1896. Tests were conducted until early 1897. The first public tests began on 1 February 1897. Moritz Schröter's test on 17 February 1897 was the main test of Diesel's engine. The engine was rated at 13.1 kW with a specific fuel consumption of 324 g·kW−1·h−1, resulting in an effective efficiency of 26.2%. By 1898, Diesel had become a millionaire.
History:
Timeline 1890s 1893: Rudolf Diesel's essay titled Theory and Construction of a Rational Heat Motor appears.
1893: February 21, Diesel and the Maschinenfabrik Augsburg sign a contract that allows Diesel to build a prototype engine.
1893: February 23, Diesel obtains a patent (RP 67207) titled "Arbeitsverfahren und Ausführungsart für Verbrennungsmaschinen" (Working Methods and Techniques for Internal Combustion Engines).
1893: April 10, Diesel and Krupp sign a contract that allows Diesel to build a prototype engine.
1893: April 24, both Krupp and the Maschinenfabrik Augsburg decide to collaborate and build just a single prototype in Augsburg.
1893: July, the first prototype is completed.
1893: August 10, Diesel injects fuel (petrol) for the first time, resulting in combustion, destroying the indicator.
1893: November 30, Diesel applies for a patent (RP 82168) for a modified combustion process. He obtains it on 12 July 1895.
1894: January 18, after the first prototype had been modified to become the second prototype, testing with the second prototype begins.
1894: February 17, The second prototype runs for the first time.
1895: March 30, Diesel applies for a patent (RP 86633) for a starting process with compressed air.
1895: June 26, the second prototype passes brake testing for the first time.
1895: Diesel applies for a second patent (US Patent No. 608,845).
1895: November 8 – December 20, a series of tests with the second prototype is conducted. In total, 111 operating hours are recorded.
1896: April 30, Imanuel Lauster completes the third and final prototype's drawings.
1896: October 6, the third and final prototype engine is completed.
1897: February 1, Diesel's prototype engine is running and finally ready for efficiency testing and production.
1897: October 9, Adolphus Busch licenses rights to the diesel engine for the US and Canada.
1897: October 29, Rudolf Diesel obtains a patent (DRP 95680) on supercharging the diesel engine.
1898: February 1, the Diesel Motoren-Fabrik Actien-Gesellschaft is registered.
1898: March, the first commercial diesel engine, rated 2×30 PS (2×22 kW), is installed in the Kempten plant of the Vereinigte Zündholzfabriken A.G.
1898: September 17, the Allgemeine Gesellschaft für Dieselmotoren A.-G. is founded.
1899: The first two-stroke diesel engine, invented by Hugo Güldner, is built.
1900s 1901: Imanuel Lauster designs the first trunk piston diesel engine (DM 70).
1901: By 1901, MAN had produced 77 diesel engine cylinders for commercial use.
1903: The first two diesel-powered ships are launched, both for river and canal operations: the Vandal naphtha tanker and the Sarmat.
1904: The French launch the first diesel submarine, the Aigrette.
1905: January 14: Diesel applies for a patent on unit injection (L20510I/46a).
1905: The first diesel engine turbochargers and intercoolers are manufactured by Büchi.
1906: The Diesel Motoren-Fabrik Actien-Gesellschaft is dissolved.
1908: Diesel's patents expire.
1908: The first lorry (truck) with a diesel engine appears.
1909: March 14, Prosper L'Orange applies for a patent on precombustion chamber injection. He later builds the first diesel engine with this system.
1910s 1910: MAN starts making two-stroke diesel engines.
1910: November 26, James McKechnie applies for a patent on unit injection. Unlike Diesel, he managed to successfully build working unit injectors.
1911: November 27, the Allgemeine Gesellschaft für Dieselmotoren A.-G. is dissolved.
1911: The Germania shipyard in Kiel builds 850 PS (625 kW) diesel engines for German submarines. These engines are installed in 1914.
1912: MAN builds the first double-acting piston two-stroke diesel engine.
1912: The first locomotive with a diesel engine is used on the Swiss Winterthur-Romanshorn railroad.
1912: The Selandia is the first ocean-going ship with diesel engines.
1913: NELSECO diesels are installed on commercial ships and US Navy submarines.
1913: September 29, Rudolf Diesel dies mysteriously when crossing the English Channel on the SS Dresden.
1914: MAN builds 900 PS (662 kW) two-stroke engines for Dutch submarines.
1919: Prosper L'Orange obtains a patent on a precombustion chamber insert incorporating a needle injection nozzle. First diesel engine from Cummins.
1920s 1923: At the Königsberg DLG exhibition, the first agricultural tractor with a diesel engine, the prototype Benz-Sendling S6, is presented.
1923: December 15, the first lorry with a direct-injected diesel engine is tested by MAN. The same year, Benz builds a lorry with a pre-combustion chamber injected diesel engine.
1923: The first two-stroke diesel engine with counterflow scavenging appears.
1924: Fairbanks-Morse introduces the two-stroke Y-VA (later renamed to Model 32).
1925: Sendling starts mass-producing a diesel-powered agricultural tractor.
1927: Bosch introduces the first inline injection pump for motor vehicle diesel engines.
1929: The first passenger car with a diesel engine appears. Its engine is an Otto engine modified to use the diesel principle and Bosch's injection pump. Several other diesel car prototypes follow.
1930s 1933: Junkers Motorenwerke in Germany start production of the most successful mass-produced aviation diesel engine of all time, the Jumo 205. By the outbreak of World War II, over 900 examples are produced. Its rated take-off power is 645 kW.
1933: General Motors uses its new roots-blown, unit-injected two-stroke Winton 201A diesel engine to power its automotive assembly exhibit at the Chicago World's Fair (A Century of Progress). The engine is offered in several versions ranging from 600–900 hp (447–671 kW).
1934: The Budd Company builds the first diesel–electric passenger train in the US, the Pioneer Zephyr 9900, using a Winton engine.
1935: The Citroën Rosalie is fitted with an early swirl chamber injected diesel engine for testing purposes. Daimler-Benz starts manufacturing the Mercedes-Benz OM 138, the first mass-produced diesel engine for passenger cars, and one of the few marketable passenger car diesel engines of its time. It is rated 45 PS (33 kW).
1936: March 4, the airship LZ 129 Hindenburg, the biggest aircraft ever made, takes off for the first time. It is powered by four V16 Daimler-Benz LOF 6 diesel engines, rated 1,200 PS (883 kW) each.
1936: Manufacture of the first mass-produced passenger car with a diesel engine (Mercedes-Benz 260 D) begins.
1937: Konstantin Fyodorovich Chelpan develops the V-2 diesel engine, later used in the Soviet T-34 tanks, widely regarded as the best tank chassis of World War II.
1938: General Motors forms the GM Diesel Division, later to become Detroit Diesel, and introduces the Series 71 inline high-speed medium-horsepower two-stroke engine, suitable for road vehicles and marine use.
1940s 1946: Clessie Cummins obtains a patent on a fuel feeding and injection apparatus for oil-burning engines that incorporates separate components for generating injection pressure and injection timing.
1946: Klöckner-Humboldt-Deutz (KHD) introduces an air-cooled mass-production diesel engine to the market.
1950s 1950s: KHD becomes the air-cooled diesel engine global market leader.
1951: J. Siegfried Meurer obtains a patent on the M-System, a design that incorporates a central sphere combustion chamber in the piston (DBP 865683).
1953: First mass-produced swirl chamber injected passenger car diesel engine (Borgward/Fiat).
1954: Daimler-Benz introduces the Mercedes-Benz OM 312 A, a 4.6 litre straight-6 series-production industrial diesel engine with a turbocharger, rated 115 PS (85 kW). It proves to be unreliable.
1954: Volvo produces a small batch series of 200 units of a turbocharged version of the TD 96 engine. This 9.6 litre engine is rated 136 kW (185 PS).
1955: Turbocharging for MAN two-stroke marine diesel engines becomes standard.
1959: The Peugeot 403 becomes the first mass-produced passenger sedan/saloon manufactured outside West Germany to be offered with a diesel engine option.
1960s 1964: Summer, Daimler-Benz switches from precombustion chamber injection to helix-controlled direct injection.
1962–65: A diesel compression braking system, eventually to be manufactured by the Jacobs Manufacturing Company and nicknamed the "Jake Brake", is invented and patented by Clessie Cummins.
1970s 1972: KHD introduces the AD-System, Allstoff-Direkteinspritzung, (anyfuel direct-injection), for its diesel engines. AD-diesels can operate on virtually any kind of liquid fuel, but they are fitted with an auxiliary spark plug that fires if the ignition quality of the fuel is too low.
1976: Development of the common rail injection begins at the ETH Zürich.
1976: The Volkswagen Golf becomes the first compact passenger sedan/saloon to be offered with a diesel engine option.
1978: Daimler-Benz produces the first passenger car diesel engine with a turbocharger (Mercedes-Benz OM617 engine).
1979: First prototype of a low-speed two-stroke crosshead engine with common rail injection.
1980s 1981/82: Uniflow scavenging for two-stroke marine diesel engines becomes standard.
1985: December, road testing of a common rail injection system for lorries using a modified 6VD 12,5/12 GRF-E engine in an IFA W50 takes place.
1986: The BMW E28 524td is the world's first passenger car equipped with an electronically controlled injection pump (developed by Bosch).
1987: Daimler-Benz introduces the electronically controlled injection pump for lorry diesel engines.
1988: The Fiat Croma becomes the first mass-produced passenger car in the world to have a direct injected diesel engine.
1989: The Audi 100 is the first passenger car in the world with a turbocharged, direct injected, and electronically controlled diesel engine.
1990s 1992: 1 July, the Euro 1 emission standard comes into effect.
1993: First passenger car diesel engine with four valves per cylinder, the Mercedes-Benz OM 604.
1994: Unit injector system by Bosch for lorry diesel engines.
1996: First diesel engine with direct injection and four valves per cylinder, used in the Opel Vectra.
1996: First radial piston distributor injection pump by Bosch.
1997: First mass-produced common rail diesel engine for a passenger car, the Fiat 1.9 JTD.
1998: BMW wins the 24 Hours Nürburgring race with a modified BMW E36. The car, called 320d, is powered by a 2-litre, straight-four diesel engine with direct injection and a helix-controlled distributor injection pump (Bosch VP 44), producing 180 kW (240 hp). The fuel consumption is 23 L/100 km, only half the fuel consumption of a similar Otto-powered car.
1998: Volkswagen introduces the VW EA188 Pumpe-Düse engine (1.9 TDI), with Bosch-developed electronically controlled unit injectors.
1999: Daimler-Chrysler presents the first common rail three-cylinder diesel engine used in a passenger car (the Smart City Coupé).
2000s 2000: Peugeot introduces the diesel particulate filter for passenger cars.
2002: Piezoelectric injector technology by Siemens.
2003: Piezoelectric injector technology by Bosch, and Delphi.
2004: BMW introduces dual-stage turbocharging with the BMW M57 engine.
2006: The world's most powerful diesel engine, the Wärtsilä-Sulzer RTA96-C, is produced. It is rated 80,080 kW.
2006: Audi R10 TDI, equipped with a 5.5-litre V12-TDI engine, rated 476 kW (638 hp), wins the 2006 24 Hours of Le Mans.
2006: Daimler-Chrysler launches the first series-production passenger car engine with selective catalytic reduction exhaust gas treatment, the Mercedes-Benz OM 642. It is fully complying with the Tier2Bin8 emission standard.
2008: Volkswagen introduces the LNT catalyst for passenger car diesel engines with the VW 2.0 TDI engine.
2008: Volkswagen starts series production of the biggest passenger car diesel engine, the Audi 6-litre V12 TDI.
2008: Subaru introduces the first horizontally opposed diesel engine to be fitted to a passenger car. It is a 2-litre common rail engine, rated 110 kW.
2010s
2010: Mitsubishi develops and starts mass production of its 4N13 1.8 L DOHC I4, the world's first passenger car diesel engine with a variable valve timing system.
2012: BMW introduces dual-stage turbocharging with three turbochargers for the BMW N57 engine.
2015: Common rail systems operating at pressures of 2,500 bar are launched.
2015: In the Volkswagen emissions scandal, the US EPA issued a notice of violation of the Clean Air Act to Volkswagen Group after it was found that Volkswagen had intentionally programmed turbocharged direct injection (TDI) diesel engines to activate certain emissions controls only during laboratory emissions testing.
Operating principle:
Overview
The characteristics of a diesel engine are:
- Use of compression ignition, instead of an ignition apparatus such as a spark plug.
- Internal mixture formation: the mixture of air and fuel is formed only inside the combustion chamber.
- Quality torque control: the amount of torque a diesel engine produces is not controlled by throttling the intake air (unlike a traditional spark-ignition petrol engine, where the airflow is reduced in order to regulate the torque output); instead, the volume of air entering the engine is maximised at all times, and the torque output is regulated solely by controlling the amount of injected fuel.
- High air-fuel ratio: diesel engines run at global air-fuel ratios significantly leaner than the stoichiometric ratio (see the sketch after this list).
- Diffusion flame: at combustion, oxygen first has to diffuse into the flame, rather than having oxygen and fuel already mixed before combustion, which would result in a premixed flame.
- Heterogeneous air-fuel mixture: fuel and air are not evenly dispersed inside the cylinder, because the combustion process begins at the end of the injection phase, before a homogeneous mixture of air and fuel can form.
- Preference for fuel with a high ignition performance (cetane number), rather than the high knocking resistance (octane rating) preferred for petrol engines.
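As an illustration of the high air-fuel ratio point above, the short sketch below computes the air-fuel equivalence ratio λ for an assumed lean part-load operating point; both numbers are illustrative assumptions, not values from this article.

```python
# Illustrative only (numbers assumed): the air-fuel equivalence ratio lambda
# compares the actual air-fuel ratio with the stoichiometric one.
AFR_STOICH_DIESEL = 14.5   # ~kg of air per kg of diesel fuel at stoichiometry
afr_part_load = 40.0       # an assumed lean part-load operating point
print(f"lambda = {afr_part_load / AFR_STOICH_DIESEL:.2f}")  # ~2.76, far leaner than 1.0
```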
Thermodynamic cycle The diesel internal combustion engine differs from the gasoline-powered Otto-cycle engine by using highly compressed hot air to ignite the fuel rather than a spark plug (compression ignition rather than spark ignition).
In the diesel engine, only air is initially introduced into the combustion chamber. The air is then compressed with a compression ratio typically between 15:1 and 23:1. This high compression causes the temperature of the air to rise. At about the top of the compression stroke, fuel is injected directly into the compressed air in the combustion chamber. This may be into a (typically toroidal) void in the top of the piston or a pre-chamber depending upon the design of the engine. The fuel injector ensures that the fuel is broken down into small droplets, and that the fuel is distributed evenly. The heat of the compressed air vaporises fuel from the surface of the droplets. The vapour is then ignited by the heat from the compressed air in the combustion chamber, the droplets continue to vaporise from their surfaces and burn, getting smaller, until all the fuel in the droplets has been burnt. Combustion occurs at a substantially constant pressure during the initial part of the power stroke. The start of vaporisation causes a delay before ignition and the characteristic diesel knocking sound as the vapour reaches ignition temperature and causes an abrupt increase in pressure above the piston (not shown on the P-V indicator diagram). When combustion is complete the combustion gases expand as the piston descends further; the high pressure in the cylinder drives the piston downward, supplying power to the crankshaft.
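To see why the compressed charge is hot enough to ignite the fuel, one can estimate the end-of-compression temperature with the adiabatic relation T2 = T1 · r^(γ−1); the sketch below uses assumed values for the intake temperature and the heat capacity ratio.

```python
# Illustrative estimate (T1 and gamma are assumed values, not from the article):
# air temperature after adiabatic compression, T2 = T1 * r**(gamma - 1).
T1 = 300.0     # intake air temperature, K
gamma = 1.35   # effective heat capacity ratio of air at high temperature
for r in (15, 18, 23):
    T2 = T1 * r ** (gamma - 1)
    print(f"r = {r}:1 -> T2 = {T2:.0f} K ({T2 - 273.15:.0f} degC)")
```

Even at the low end of the quoted compression-ratio range, the estimated charge temperature is well above the roughly 210 °C autoignition temperature of diesel fuel.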
As well as the high level of compression allowing combustion to take place without a separate ignition system, a high compression ratio greatly increases the engine's efficiency. Increasing the compression ratio in a spark-ignition engine where fuel and air are mixed before entry to the cylinder is limited by the need to prevent pre-ignition, which would cause engine damage. Since only air is compressed in a diesel engine, and fuel is not introduced into the cylinder until shortly before top dead centre (TDC), premature detonation is not a problem and compression ratios are much higher.
The pressure–volume (pV) diagram is a simplified and idealised representation of the events involved in a diesel engine cycle, arranged to illustrate the similarity with a Carnot cycle. Starting at 1, the piston is at bottom dead centre and both valves are closed at the start of the compression stroke; the cylinder contains air at atmospheric pressure. Between 1 and 2 the air is compressed adiabatically – that is, without heat transfer to or from the environment – by the rising piston. (This is only approximately true, since there will be some heat exchange with the cylinder walls.) During this compression the volume is reduced, and the pressure and temperature both rise.

At or slightly before 2 (TDC), fuel is injected and burns in the compressed hot air. Chemical energy is released, and this constitutes an injection of thermal energy (heat) into the compressed gas. Combustion and heating occur between 2 and 3. In this interval the pressure remains constant, since the piston descends and the volume increases; the temperature rises as a consequence of the energy of combustion.

At 3 fuel injection and combustion are complete, and the cylinder contains gas at a higher temperature than at 2. Between 3 and 4 this hot gas expands, again approximately adiabatically. Work is done on the system to which the engine is connected. During this expansion phase the volume of the gas rises, and its temperature and pressure both fall. At 4 the exhaust valve opens, and the pressure falls abruptly to atmospheric (approximately). This is unresisted expansion and no useful work is done by it. Ideally the adiabatic expansion should continue, extending the line 3–4 to the right until the pressure falls to that of the surrounding air, but the loss of efficiency caused by this unresisted expansion is justified by the practical difficulties involved in recovering it (the engine would have to be much larger).

After the opening of the exhaust valve, the exhaust stroke follows, but this (and the following induction stroke) are not shown on the diagram. If shown, they would be represented by a low-pressure loop at the bottom of the diagram. At 1 it is assumed that the exhaust and induction strokes have been completed, and the cylinder is again filled with air. The piston-cylinder system absorbs energy between 1 and 2 – this is the work needed to compress the air in the cylinder, and is provided by mechanical kinetic energy stored in the flywheel of the engine. Work output is done by the piston-cylinder combination between 2 and 4. The difference between these two increments of work is the indicated work output per cycle, and is represented by the area enclosed by the pV loop. The adiabatic expansion is in a higher pressure range than the compression because the gas in the cylinder is hotter during expansion than during compression. It is for this reason that the loop has a finite area, and the net output of work during a cycle is positive.
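For the idealised cycle just described, the air-standard efficiency has a closed form in the compression ratio r and the cutoff ratio ρ = V3/V2 of the constant-pressure combustion. The sketch below evaluates it for assumed values; it is a textbook idealisation, not a prediction for any real engine.

```python
# Air-standard Diesel cycle efficiency:
# eta = 1 - (rho**gamma - 1) / (gamma * (rho - 1) * r**(gamma - 1)),
# with r the compression ratio (1 -> 2) and rho = V3/V2 the cutoff ratio (2 -> 3).
def diesel_cycle_efficiency(r, rho, gamma=1.4):
    return 1.0 - (rho**gamma - 1.0) / (gamma * (rho - 1.0) * r**(gamma - 1.0))

print(f"{diesel_cycle_efficiency(r=18, rho=2.0):.1%}")  # ~63.2% for these assumed values
```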
Efficiency The fuel efficiency of diesel engines is better than that of most other types of combustion engine, due to their high compression ratio, high air–fuel equivalence ratio (λ), and the lack of intake air restrictions (i.e. throttle valves). Theoretically, the highest possible efficiency for a diesel engine is 75%. In practice, however, the efficiency is much lower, with efficiencies of up to 43% for passenger car engines, up to 45% for large truck and bus engines, and up to 55% for large two-stroke marine engines. The average efficiency over a motor vehicle driving cycle is lower than the diesel engine's peak efficiency (for example, a 37% average efficiency for an engine with a peak efficiency of 44%). That is because the fuel efficiency of a diesel engine drops at lower loads; however, it does not drop as quickly as that of the Otto (spark ignition) engine.
Emissions Diesel engines are combustion engines and, therefore, emit combustion products in their exhaust gas. Due to incomplete combustion, diesel engine exhaust gases include carbon monoxide, hydrocarbons, particulate matter, and nitrogen oxide pollutants. About 90 per cent of the pollutants can be removed from the exhaust gas using exhaust gas treatment technology. Road vehicle diesel engines have no sulfur dioxide emissions, because motor vehicle diesel fuel has been sulfur-free since 2003. Helmut Tschöke argues that particulate matter emitted from motor vehicles has negative impacts on human health. The particulate matter in diesel exhaust emissions is sometimes classified as a carcinogen or "probable carcinogen" and is known to increase the risk of heart and respiratory diseases.
Electrical system In principle, a diesel engine does not require any sort of electrical system. However, most modern diesel engines are equipped with an electrical fuel pump, and an electronic engine control unit.
However, there is no high-voltage electrical ignition system present in a diesel engine. This eliminates a source of radio frequency emissions (which can interfere with navigation and communication equipment), which is why only diesel-powered vehicles are allowed in some parts of the American National Radio Quiet Zone.
Torque control To control the torque output at any given time (i.e. when the driver of a car adjusts the accelerator pedal), a governor adjusts the amount of fuel injected into the engine. Mechanical governors were used in the past; however, electronic governors are more common on modern engines. Mechanical governors are usually driven by the engine's accessory belt or a gear-drive system and use a combination of springs and weights to control fuel delivery relative to both load and speed. Electronically governed engines use an electronic control unit (ECU) or electronic control module (ECM) to control the fuel delivery. The ECM/ECU evaluates various sensor signals (such as engine speed, intake manifold pressure and fuel temperature) to determine the amount of fuel injected into the engine.
Due to the amount of air being constant (for a given RPM) while the amount of fuel varies, very high ("lean") air-fuel ratios are used in situations where minimal torque output is required. This differs from a petrol engine, where a throttle also reduces the amount of intake air as part of regulating the engine's torque output. Controlling the timing of the start of fuel injection into the cylinder is similar to controlling the ignition timing in a petrol engine, and it is therefore a key factor in controlling the power output, fuel consumption and exhaust emissions.
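As a purely hypothetical illustration of this "quality" torque control, the sketch below maps accelerator position straight to an injected fuel quantity, with no air throttle anywhere in the loop; the map values and names are invented for illustration and do not come from any real ECU.

```python
# Hypothetical fuel map: torque demand sets fuel quantity only; intake air
# is never throttled. All breakpoints and values are invented for illustration.
import bisect

PEDAL_POINTS = [0.0, 0.25, 0.5, 0.75, 1.0]           # accelerator position
FUEL_MG_PER_STROKE = [4.0, 14.0, 26.0, 38.0, 50.0]   # injected fuel, mg/stroke

def fuel_quantity(pedal):
    """Linearly interpolate injected fuel (mg/stroke) from pedal position."""
    pedal = min(max(pedal, 0.0), 1.0)
    i = bisect.bisect_right(PEDAL_POINTS, pedal) - 1
    if i >= len(PEDAL_POINTS) - 1:
        return FUEL_MG_PER_STROKE[-1]
    t = (pedal - PEDAL_POINTS[i]) / (PEDAL_POINTS[i + 1] - PEDAL_POINTS[i])
    return FUEL_MG_PER_STROKE[i] + t * (FUEL_MG_PER_STROKE[i + 1] - FUEL_MG_PER_STROKE[i])

print(fuel_quantity(0.6))  # 30.8 mg/stroke at 60% pedal
```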
Classification:
There are several different ways of categorising diesel engines, as outlined in the following sections.
RPM operating range Günter Mau categorises diesel engines by their rotational speeds into three groups: high-speed engines (> 1,000 rpm), medium-speed engines (300–1,000 rpm), and slow-speed engines (< 300 rpm).

High-speed diesel engines
High-speed engines are used to power trucks (lorries), buses, tractors, cars, yachts, compressors, pumps and small electrical generators. As of 2018, most high-speed engines have direct injection. Many modern engines, particularly in on-highway applications, have common rail direct injection. On bigger ships, high-speed diesel engines are often used for powering electric generators. The highest power output of high-speed diesel engines is approximately 5 MW.
Medium-speed diesel engines
Medium-speed engines are used in large electrical generators, railway diesel locomotives, ship propulsion and mechanical drive applications such as large compressors or pumps. Medium-speed diesel engines operate on either diesel fuel or heavy fuel oil by direct injection in the same manner as low-speed engines. Usually, they are four-stroke engines with trunk pistons; a notable exception being the EMD 567, 645, and 710 engines, which are all two-stroke. The power output of medium-speed diesel engines can be as high as 21,870 kW, with the effective efficiency being around 47–48% (1982). Most larger medium-speed engines are started with compressed air acting directly on the pistons, using an air distributor, as opposed to a pneumatic starting motor acting on the flywheel, which tends to be used for smaller engines. Medium-speed engines intended for marine applications are usually used to power (ro-ro) ferries, passenger ships or small freight ships. Using medium-speed engines reduces the cost of smaller ships and increases their transport capacity. In addition, a single ship can use two smaller engines instead of one big engine, which increases the ship's safety.
Low-speed diesel engines
Low-speed diesel engines are usually very large in size and mostly used to power ships. There are two different types of low-speed engines that are commonly used: two-stroke engines with a crosshead, and four-stroke engines with a regular trunk piston. Two-stroke engines have a limited rotational frequency and their charge exchange is more difficult, which means that they are usually bigger than four-stroke engines and used to directly power a ship's propeller.
Four-stroke engines on ships are usually used to power an electric generator. An electric motor powers the propeller. Both types are usually very undersquare, meaning the bore is smaller than the stroke. Low-speed diesel engines (as used in ships and other applications where overall engine weight is relatively unimportant) often have an effective efficiency of up to 55%. Like medium-speed engines, low-speed engines are started with compressed air, and they use heavy oil as their primary fuel.
Combustion cycle Four-stroke engines use the combustion cycle described earlier. Most smaller diesels, for vehicular use, for instance, typically use the four-stroke cycle. This is due to several factors, such as the two-stroke design's narrow powerband, which is not particularly suitable for automotive use, and the necessity for complicated and expensive built-in lubrication systems and scavenging measures. The cost effectiveness (and proportion of added weight) of these technologies has less of an impact on larger, more expensive engines, while engines intended for shipping or stationary use can be run at a single speed for long periods.

Two-stroke engines use a combustion cycle which is completed in two strokes instead of four. Filling the cylinder with air and compressing it takes place in one stroke, and the power and exhaust strokes are combined. The compression in a two-stroke diesel engine is similar to the compression that takes place in a four-stroke diesel engine: as the piston passes through bottom centre and starts upward, compression commences, culminating in fuel injection and ignition. Instead of a full set of valves, two-stroke diesel engines have simple intake ports, and exhaust ports (or exhaust valves). When the piston approaches bottom dead centre, both the intake and the exhaust ports are "open", which means that there is atmospheric pressure inside the cylinder. Therefore, some sort of pump is required to blow the air into the cylinder and the combustion gases into the exhaust. This process is called scavenging. The pressure required is approximately 10–30 kPa.

Due to the lack of discrete exhaust and intake strokes, all two-stroke diesel engines use a scavenge blower or some form of compressor to charge the cylinders with air and assist in scavenging. Roots-type superchargers were used for ship engines until the mid-1950s; since 1955 they have been widely replaced by turbochargers. Usually, a two-stroke ship diesel engine has a single-stage turbocharger with a turbine that has an axial inflow and a radial outflow.
Scavenging in two-stroke engines In general, there are three types of scavenging possible: uniflow scavenging, crossflow scavenging, and reverse flow scavenging. Crossflow scavenging is incomplete and limits the stroke, yet some manufacturers used it. Reverse flow scavenging is a very simple way of scavenging, and it was popular amongst manufacturers until the early 1980s. Uniflow scavenging is more complicated to implement but allows the highest fuel efficiency; since the early 1980s, manufacturers such as MAN and Sulzer have switched to this system. It is standard for modern marine two-stroke diesel engines.
Fuel used So-called dual-fuel diesel engines or gas diesel engines burn two different types of fuel simultaneously, for instance, a gaseous fuel and diesel engine fuel. The diesel engine fuel auto-ignites due to compression ignition and then ignites the gaseous fuel. Such engines do not require any type of spark ignition and operate similarly to regular diesel engines.
Fuel injection:
The fuel is injected at high pressure into either the combustion chamber, the "swirl chamber" or the "pre-chamber" (unlike older petrol engines where the fuel is added in the inlet manifold or carburetor). Engines where the fuel is injected into the main combustion chamber are called "direct injection" (DI) engines, while those which use a swirl chamber or pre-chamber are called "indirect injection" (IDI) engines.
Direct injection Most direct injection diesel engines have a combustion cup in the top of the piston where the fuel is sprayed. Many different methods of injection can be used. Usually, an engine with helix-controlled mechanical direct injection has either an inline or a distributor injection pump. For each engine cylinder, the corresponding plunger in the fuel pump measures out the correct amount of fuel and determines the timing of each injection. These engines use injectors that are very precise spring-loaded valves that open and close at a specific fuel pressure. Separate high-pressure fuel lines connect the fuel pump with each cylinder. Fuel volume for each single combustion is controlled by a slanted groove in the plunger, which rotates only a few degrees, releasing the pressure, and is controlled by a mechanical governor consisting of weights rotating at engine speed constrained by springs and a lever. The injectors are held open by the fuel pressure. On high-speed engines the plunger pumps are together in one unit. The length of the fuel lines from the pump to each injector is normally the same for each cylinder in order to obtain the same pressure delay. Direct injected diesel engines usually use orifice-type fuel injectors. Electronic control of the fuel injection transformed the direct injection engine by allowing much greater control over the combustion.
Common rail
Common rail (CR) direct injection systems do not have the fuel metering, pressure-raising and delivery functions in a single unit, as in the case of a Bosch distributor-type pump, for example. A high-pressure pump supplies the CR. The requirements of each cylinder injector are supplied from this common high-pressure reservoir of fuel. An Electronic Diesel Control (EDC) controls both rail pressure and injections depending on engine operating conditions. The injectors of older CR systems have solenoid-driven plungers for lifting the injection needle, whilst newer CR injectors use plungers driven by piezoelectric actuators, which have less moving mass and therefore allow even more injections in a very short period of time. Early common rail systems were controlled by mechanical means.
The injection pressure of modern CR systems ranges from 140 MPa to 270 MPa.
Indirect injection An indirect diesel injection system (IDI) engine delivers fuel into a small chamber called a swirl chamber, precombustion chamber, pre-chamber or ante-chamber, which is connected to the cylinder by a narrow air passage. Generally the goal of the pre-chamber is to create increased turbulence for better air/fuel mixing. This system also allows for a smoother, quieter running engine, and because fuel mixing is assisted by turbulence, injector pressures can be lower. The pre-chamber has the disadvantage of increasing heat loss to the engine's cooling system and restricting the combustion burn, which reduces the efficiency by 5–10%. IDI engines are also more difficult to start and usually require glow plugs. IDI engines may be cheaper to build but generally require a higher compression ratio than their DI counterparts. IDI also makes it easier to produce smooth, quieter running engines with a simple mechanical injection system, since exact injection timing is not as critical. Most modern automotive engines are DI, which has the benefits of greater efficiency and easier starting; however, IDI engines can still be found in many ATV and small diesel applications. Indirect injected diesel engines use pintle-type fuel injectors.
Air-blast injection Early diesel engines injected fuel with the assistance of compressed air, which atomised the fuel and forced it into the engine through a nozzle (a similar principle to an aerosol spray). The nozzle opening was closed by a pin valve actuated by the camshaft. Although the engine was also required to drive an air compressor used for air-blast injection, the efficiency was nonetheless better than that of other combustion engines of the time. However, the system was heavy and slow to react to changing torque demands, making it unsuitable for road vehicles.
Unit injectors A unit injector system, also known as "Pumpe-Düse" (pump-nozzle in German), combines the injector and fuel pump into a single component, which is positioned above each cylinder. This eliminates the high-pressure fuel lines and achieves a more consistent injection. Under full load, the injection pressure can reach up to 220 MPa. Unit injectors are operated by a cam, and the quantity of fuel injected is controlled either mechanically (by a rack or lever) or electronically.
Due to increased performance requirements, unit injectors have been largely replaced by common rail injection systems.
Diesel engine particularities:
Mass The average diesel engine has a poorer power-to-mass ratio than an equivalent petrol engine. The lower engine speeds (RPM) of typical diesel engines result in a lower power output. Also, the mass of a diesel engine is typically higher, since the higher operating pressure inside the combustion chamber increases the internal forces, which requires stronger (and therefore heavier) parts to withstand them.
Noise ("diesel clatter") The distinctive noise of a diesel engine, particularly at idling speeds, is sometimes called "diesel clatter". This noise is largely caused by the sudden ignition of the diesel fuel when injected into the combustion chamber, which causes a pressure wave that sounds like knocking.
Engine designers can reduce diesel clatter through: indirect injection; pilot or pre-injection; injection timing; injection rate; compression ratio; turbo boost; and exhaust gas recirculation (EGR). Common rail diesel injection systems permit multiple injection events as an aid to noise reduction. Through measures such as these, diesel clatter noise is greatly reduced in modern engines. Diesel fuels with a higher cetane rating ignite more readily and hence reduce diesel clatter.
Cold weather starting In warmer climates, diesel engines do not require any starting aid (aside from the starter motor). However, many diesel engines include some form of preheating for the combustion chamber, to assist starting in cold conditions. Engines with a displacement of less than 1 litre per cylinder usually have glowplugs, whilst larger heavy-duty engines have flame-start systems. The minimum starting temperature that allows starting without pre-heating is 40 °C (104 °F) for precombustion chamber engines, 20 °C (68 °F) for swirl chamber engines, and 0 °C (32 °F) for direct injected engines.
In the past, a wider variety of cold-start methods were used. Some engines, such as Detroit Diesel engines, used a system to introduce small amounts of ether into the inlet manifold to start combustion. Instead of glowplugs, some diesel engines are equipped with starting-aid systems that change the valve timing. The simplest way this can be done is with a decompression lever. Activating the decompression lever locks the outlet valves in a slightly open position, so that the engine has no compression and the crankshaft can be turned over with significantly less resistance. When the crankshaft reaches a higher speed, flipping the decompression lever back into its normal position abruptly re-activates the outlet valves, restoring compression; the flywheel's moment of inertia then starts the engine. Other diesel engines, such as the precombustion chamber engine XII Jv 170/240 made by Ganz & Co., have a valve-timing changing system that is operated by adjusting the inlet valve camshaft, moving it into a slightly "late" position. This makes the inlet valves open with a delay, forcing the inlet air to heat up when entering the combustion chamber.
Supercharging & turbocharging Forced induction, especially turbocharging, is commonly used on diesel engines because it greatly increases efficiency and torque output. Diesel engines are well suited to forced induction due to their operating principle, which is characterised by wide ignition limits and the absence of fuel during the compression stroke. Therefore, knocking, pre-ignition or detonation cannot occur, and a lean mixture caused by excess supercharging air inside the combustion chamber does not negatively affect combustion.
Major manufacturers:
MTU, MAN, Wärtsilä, Rolls-Royce, Siemens, Kolomna, KDZ, TMH, BMZ and UDMZ, General Electric (GE Transportation), Volvo Penta, Sulzer, Doosan (Doosan Infracore, Doosan Marine), YaMZ, VAZ, KMZ, RD Nevsky, STM, GAZ, VMZ, Mitsubishi, Mitsui, Mazda, IHI, Kawasaki, Honda, Suzuki, Subaru, Isuzu, Nissan, Caterpillar, Cummins, AO Zvezda and Zvezda Energetika, Bergen Engines, MaK, Deutz AG, MWM, BMW, VW, MAPNA, BHEL, DESA, Steyr Motors GmbH, Iran Khodro Diesel, Isotta Fraschini, EMD, Fairbanks Morse, Shanxi, Henan Diesel, SDM, plus others.
Fuel and fluid characteristics:
Diesel engines can combust a huge variety of fuels, including several fuel oils that have advantages over fuels such as petrol. These advantages include:
- Low fuel costs, as fuel oils are relatively cheap
- Good lubrication properties
- High energy density
- Low risk of catching fire, as they do not form a flammable vapour
Biodiesel is an easily synthesised, non-petroleum-based fuel (made through transesterification) which can run directly in many diesel engines, while gasoline engines either need adaptation to run synthetic fuels or else use them as an additive to gasoline (e.g., ethanol added to gasohol).

In diesel engines, a mechanical injector system atomizes the fuel directly into the combustion chamber (as opposed to a Venturi jet in a carburetor, or a fuel injector in a manifold injection system atomizing fuel into the intake manifold or intake runners, as in a petrol engine). Because only air is inducted into the cylinder in a diesel engine, the compression ratio can be much higher, as there is no risk of pre-ignition provided the injection process is accurately timed. This means that cylinder temperatures are much higher in a diesel engine than in a petrol engine, allowing less volatile fuels to be used.
Therefore, diesel engines can operate on a huge variety of different fuels. In general, fuel for diesel engines should have a proper viscosity, so that the injection pump can pump the fuel to the injection nozzles without causing damage to itself or corrosion of the fuel line. At injection, the fuel should form a good fuel spray, and it should not have a coking effect upon the injection nozzles. To ensure proper engine starting and smooth operation, the fuel should ignite readily and hence not cause a high ignition delay (this means that the fuel should have a high cetane number). Diesel fuel should also have a high lower heating value.

Inline mechanical injector pumps generally tolerate poor-quality or bio-fuels better than distributor-type pumps. Also, indirect injection engines generally run more satisfactorily on fuels with a high ignition delay (for instance, petrol) than direct injection engines. This is partly because an indirect injection engine has a much greater "swirl" effect, improving vaporisation and combustion of fuel, and because (in the case of vegetable oil-type fuels) lipid depositions can condense on the cylinder walls of a direct-injection engine if combustion temperatures are too low (such as when starting the engine from cold). Direct-injected engines with an MAN centre sphere combustion chamber rely on fuel condensing on the combustion chamber walls. The fuel starts vaporising only after ignition sets in, and it burns relatively smoothly. Therefore, such engines also tolerate fuels with poor ignition delay characteristics, and, in general, they can operate on petrol rated 86 RON.
Fuel types In his 1893 work Theory and Construction of a Rational Heat Motor, Rudolf Diesel considers using coal dust as fuel for the diesel engine. However, Diesel just considered using coal dust (as well as liquid fuels and gas); his actual engine was designed to operate on petroleum, which was soon replaced with regular petrol and kerosene for further testing purposes, as petroleum proved to be too viscous. In addition to kerosene and petrol, Diesel's engine could also operate on ligroin.

Before diesel engine fuel was standardised, fuels such as petrol, kerosene, gas oil, vegetable oil and mineral oil, as well as mixtures of these fuels, were used. Typical fuels specifically intended for diesel engines were petroleum distillates and coal-tar distillates such as the following, with their specific lower heating values:
- Diesel oil: 10,200 kcal·kg⁻¹ (42.7 MJ·kg⁻¹) up to 10,250 kcal·kg⁻¹ (42.9 MJ·kg⁻¹)
- Heating oil: 10,000 kcal·kg⁻¹ (41.8 MJ·kg⁻¹) up to 10,200 kcal·kg⁻¹ (42.7 MJ·kg⁻¹)
- Coal-tar creosote: 9,150 kcal·kg⁻¹ (38.3 MJ·kg⁻¹) up to 9,250 kcal·kg⁻¹ (38.7 MJ·kg⁻¹)
- Kerosene: up to 10,400 kcal·kg⁻¹ (43.5 MJ·kg⁻¹)

The first diesel fuel standards were DIN 51601, VTL 9140-001, and NATO F 54, which appeared after World War II. The modern European EN 590 diesel fuel standard was established in May 1993; the modern version of the NATO F 54 standard is mostly identical with it. The DIN 51628 biodiesel standard was rendered obsolete by the 2009 version of EN 590; FAME biodiesel conforms to the EN 14214 standard. Watercraft diesel engines usually operate on diesel engine fuel that conforms to the ISO 8217 standard (Bunker C). Also, some diesel engines can operate on gases (such as LNG).
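The kcal-to-MJ figures quoted in the list can be checked mechanically; the sketch below assumes the thermochemical calorie (1 kcal = 4.184 kJ), which reproduces the rounded MJ·kg⁻¹ values above.

```python
# Check of the quoted conversions, assuming the thermochemical calorie
# (1 kcal = 4.184 kJ); the kcal/kg ranges are taken from the list above.
KCAL_TO_MJ = 4.184e-3
fuels = {
    "diesel oil": (10_200, 10_250),
    "heating oil": (10_000, 10_200),
    "coal-tar creosote": (9_150, 9_250),
    "kerosene": (10_400, 10_400),
}
for name, (lo, hi) in fuels.items():
    print(f"{name}: {lo * KCAL_TO_MJ:.1f}-{hi * KCAL_TO_MJ:.1f} MJ/kg")
```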
Modern diesel fuel properties
Gelling DIN 51601 diesel fuel was prone to waxing or gelling in cold weather; both are terms for the solidification of diesel oil into a partially crystalline state. The crystals build up in the fuel system (especially in fuel filters), eventually starving the engine of fuel and causing it to stop running. Low-output electric heaters in fuel tanks and around fuel lines were used to solve this problem. Also, most engines have a spill return system, by which any excess fuel from the injector pump and injectors is returned to the fuel tank. Once the engine has warmed, returning warm fuel prevents waxing in the tank. Before direct injection diesel engines, some manufacturers, such as BMW, recommended mixing up to 30% petrol into the diesel fuel to prevent it from gelling when temperatures dropped below −15 °C.
Safety:
Fuel flammability Diesel fuel is less flammable than petrol, because its flash point is 55 °C, leading to a lower risk of fire caused by fuel in a vehicle equipped with a diesel engine.
Diesel fuel can create an explosive air/vapour mix under the right conditions. However, compared with petrol, it is less prone due to its lower vapour pressure, which is an indication of evaporation rate. The Material Safety Data Sheet for ultra-low sulfur diesel fuel indicates a vapour explosion hazard for diesel fuel indoors, outdoors, or in sewers.
Cancer Diesel exhaust has been classified as an IARC Group 1 carcinogen. It causes lung cancer and is associated with an increased risk for bladder cancer.
Engine runaway (uncontrollable overspeeding) See diesel engine runaway.
Applications:
The characteristics of the diesel engine give it different advantages in different applications.
Passenger cars Diesel engines have long been popular in bigger cars and have been used in smaller cars such as superminis in Europe since the 1980s. They were popular in larger cars earlier, as the weight and cost penalties were less noticeable. Smooth operation as well as high low-end torque are deemed important for passenger cars and small commercial vehicles. The introduction of electronically controlled fuel injection significantly improved smooth torque generation, and starting in the early 1990s, car manufacturers began offering their high-end luxury vehicles with diesel engines. Passenger car diesel engines usually have between three and twelve cylinders, and a displacement ranging from 0.8 to 6.0 litres. Modern powerplants are usually turbocharged and have direct injection. Diesel engines do not suffer from intake-air throttling, resulting in very low fuel consumption, especially at low partial load (for instance, driving at city speeds). One fifth of all passenger cars worldwide have diesel engines, with many of them being in Europe, where approximately 47% of all passenger cars are diesel-powered. Daimler-Benz in conjunction with Robert Bosch GmbH produced diesel-powered passenger cars starting in 1936. The popularity of diesel-powered passenger cars in markets such as India, South Korea and Japan is increasing (as of 2018).
Commercial vehicles and lorries In 1893, Rudolf Diesel suggested that the diesel engine could possibly power "wagons" (lorries). The first lorries with diesel engines were brought to market in 1924. Modern diesel engines for lorries have to be both extremely reliable and very fuel efficient. Common-rail direct injection, turbocharging and four valves per cylinder are standard. Displacements range from 4.5 to 15.5 litres, with power-to-mass ratios of 2.5–3.5 kg·kW⁻¹ for heavy-duty and 2.0–3.0 kg·kW⁻¹ for medium-duty engines. V6 and V8 engines used to be common, due to the relatively low engine mass the V configuration provides. More recently, the V configuration has been abandoned in favour of straight engines. These engines are usually straight-6 for heavy and medium duties and straight-4 for medium duty. Their undersquare design causes lower overall piston speeds, which results in an increased lifespan of up to 1,200,000 kilometres (750,000 mi). Compared with 1970s diesel engines, the expected lifespan of modern lorry diesel engines has more than doubled.
Railroad rolling stock Diesel engines for locomotives are built for continuous operation between refuelings and may need to be designed to use poor-quality fuel in some circumstances. Some locomotives use two-stroke diesel engines. Diesel engines have replaced steam engines on all non-electrified railroads in the world. The first diesel locomotives appeared in 1913, and diesel multiple units soon after. Nearly all modern diesel locomotives are more correctly known as diesel–electric locomotives because they use an electric transmission: the diesel engine drives an electric generator which powers electric traction motors. While electric locomotives have replaced the diesel locomotive for passenger services in many areas, diesel traction is widely used for cargo-hauling freight trains and on tracks where electrification is not economically viable.
In the 1940s, road vehicle diesel engines with power outputs of 150–200 metric horsepower (110–150 kW; 150–200 hp) were considered reasonable for DMUs. Commonly, regular truck powerplants were used. The height of these engines had to be less than 1 metre (3 ft 3 in) to allow underfloor installation. Usually, the engine was mated with a pneumatically operated mechanical gearbox, due to the small size, low mass, and low production costs of this design. Some DMUs used hydraulic torque converters instead. Diesel–electric transmission was not suitable for such small engines. In the 1930s, the Deutsche Reichsbahn standardised its first DMU engine. It was a 30.3 litres (1,850 cu in), 12-cylinder boxer unit, producing 275 metric horsepower (202 kW; 271 hp). Several German manufacturers produced engines according to this standard.
Watercraft The requirements for marine diesel engines vary, depending on the application. For military use and medium-size boats, medium-speed four-stroke diesel engines are most suitable. These engines usually have up to 24 cylinders and come with power outputs in the single-digit megawatt range. Small boats may use lorry diesel engines. Large ships use extremely efficient, low-speed two-stroke diesel engines. They can reach efficiencies of up to 55%. Unlike most regular diesel engines, two-stroke watercraft engines use highly viscous fuel oil. Submarines are usually diesel–electric. The first diesel engines for ships were made by A. B. Diesels Motorer Stockholm in 1903. These engines were three-cylinder units of 120 PS (88 kW) and four-cylinder units of 180 PS (132 kW) and were used for Russian ships. In World War I, submarine diesel engine development in particular advanced quickly, and by the end of the war, double-acting piston two-stroke engines with up to 12,200 PS (9 MW) had been made for marine use.
Aviation Diesel engines were used in aircraft before World War II, for instance, in the rigid airship LZ 129 Hindenburg, which was powered by four Daimler-Benz DB 602 diesel engines, and in several Junkers aircraft, which had Jumo 205 engines installed. In 1929, in the United States, the Packard Motor Company developed America's first aircraft diesel engine, the Packard DR-980, an air-cooled, 9-cylinder radial engine. It was installed in various aircraft of the era, some of which were used in record-breaking distance or endurance flights, and in the first successful demonstration of ground-to-air radiophone communications (voice radio having been previously unintelligible in aircraft equipped with spark-ignition engines, due to electromagnetic interference). Additional advantages cited at the time included a lower risk of post-crash fire and superior performance at high altitudes. On March 6, 1930, the engine received an Approved Type Certificate, the first ever for an aircraft diesel engine, from the U.S. Department of Commerce. However, noxious exhaust fumes, cold-start and vibration problems, engine structural failures, the death of its developer, and the industrial economic contraction of the Great Depression combined to kill the program.
Modern From then until the late 1970s, there were not many applications of the diesel engine in aircraft. In 1978, Piper Cherokee co-designer Karl H. Bergey argued that "the likelihood of a general aviation diesel in the near future is remote." However, with the 1970s energy crisis and the environmental movement, and the resulting pressures for greater fuel economy, reduced carbon and lead in the atmosphere, and other issues, there was a resurgence of interest in diesel engines for aircraft. High-compression piston aircraft engines that run on aviation gasoline ("avgas") generally require the addition of toxic tetraethyl lead to avgas to avoid engine pre-ignition and detonation; diesel engines do not require leaded fuel. Also, biodiesel can, theoretically, provide a net reduction in atmospheric carbon compared to avgas. For these reasons, the general aviation community has begun to fear the possible banning or discontinuance of leaded avgas. Additionally, avgas is a specialty fuel in very low (and declining) demand compared to other fuels, and its makers are susceptible to costly aviation-crash lawsuits, reducing refiners' interest in producing it. Outside the United States, avgas has already become increasingly difficult to find at airports (and generally), compared with less-expensive, diesel-compatible fuels like Jet-A and other jet fuels.

By the late 1990s and early 2000s, diesel engines were beginning to appear in light aircraft. Most notably, Frank Thielert and his Austrian engine enterprise began developing diesel engines to replace the 100 horsepower (75 kW) to 350 horsepower (260 kW) gasoline piston engines in common light aircraft use. The first successful application of the Thielert engines to production aircraft was in the Diamond DA42 Twin Star light twin, which exhibited exceptional fuel efficiency surpassing anything in its class, and its single-engine predecessor, the Diamond DA40 Diamond Star. In subsequent years, several other companies have developed aircraft diesel engines or have begun to, most notably Continental Aerospace Technologies, which by 2018 was reporting it had sold over 5,000 such engines worldwide. The United States' Federal Aviation Administration has reported that "by 2007, various jet-fueled piston aircraft had logged well over 600,000 hours of service". In early 2019, AOPA reported that a diesel engine model for general aviation aircraft was "approaching the finish line." By late 2022, Continental was reporting that its "Jet-A" fueled engines had exceeded "2,000... in operation today," with over "9 million hours," and were being "specified by major OEMs" for Cessna, Piper, Diamond, Mooney, Tecnam, Glasair and Robin aircraft. In recent years (2016), diesel engines have also found use in unmanned aircraft (UAVs), due to their reliability, durability, and low fuel consumption.
Non-road diesel engines Non-road diesel engines are commonly used for construction equipment and agricultural machinery. Fuel efficiency, reliability and ease of maintenance are very important for such engines, whilst high power output and quiet operation are of little importance. Therefore, mechanically controlled fuel injection and air cooling are still very common. The common power outputs of non-road diesel engines vary widely, with the smallest units starting at 3 kW, and the most powerful engines being heavy-duty lorry engines.
Stationary diesel engines Stationary diesel engines are commonly used for electricity generation, but also for powering refrigerator compressors, or other types of compressors or pumps. Usually, these engines either run continuously with partial load, or intermittently with full load. Stationary diesel engines powering electric generators that put out an alternating current usually operate with alternating load, but fixed rotational frequency. This is due to the mains' fixed frequency of either 50 Hz (Europe) or 60 Hz (United States). The engine's crankshaft rotational frequency is chosen so that the mains frequency is a multiple of it. For practical reasons, this results in crankshaft rotational frequencies of either 25 Hz (1,500 per minute) or 30 Hz (1,800 per minute).
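The 1,500 and 1,800 rpm figures follow from the synchronous-speed relation n = 60·f/p for a directly coupled generator with p pole pairs; a minimal sketch, assuming a two-pole-pair generator:

```python
# Synchronous speed of a directly coupled generator: n [rpm] = 60 * f / p,
# where f is the mains frequency and p the number of generator pole pairs.
def synchronous_rpm(f_mains_hz, pole_pairs):
    return 60.0 * f_mains_hz / pole_pairs

print(synchronous_rpm(50, 2))  # 1500.0 rpm, i.e. a 25 Hz crankshaft on a 50 Hz grid
print(synchronous_rpm(60, 2))  # 1800.0 rpm, i.e. a 30 Hz crankshaft on a 60 Hz grid
```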
Low heat rejection engines:
A special class of prototype internal combustion piston engines has been developed over several decades with the goal of improving efficiency by reducing heat loss. These engines are variously called adiabatic engines (owing to a better approximation of adiabatic expansion), low heat rejection engines, or high temperature engines. They are generally piston engines with combustion chamber parts lined with ceramic thermal barrier coatings. Some make use of pistons and other parts made of titanium, which has low thermal conductivity and density. Some designs are able to eliminate the use of a cooling system and its associated parasitic losses altogether. Developing lubricants able to withstand the higher temperatures involved has been a major barrier to commercialization.
Future developments:
In mid-2010s literature, the main development goals for future diesel engines are described as improvements in exhaust emissions, reduction of fuel consumption, and increase of lifespan (2014). It is said that the diesel engine, especially the diesel engine for commercial vehicles, will remain the most important vehicle powerplant until the mid-2030s. Editors assume that the complexity of the diesel engine will increase further (2014). Some editors expect a future convergence of the operating principles of diesel and Otto engines, due to development steps made in Otto engines towards homogeneous charge compression ignition (2017).
**Pythagorean quadruple**
Pythagorean quadruple:
A Pythagorean quadruple is a tuple of integers a, b, c, and d, such that a² + b² + c² = d². They are solutions of a Diophantine equation and often only positive integer values are considered. However, to provide a more complete geometric interpretation, the integer values can be allowed to be negative and zero (thus allowing Pythagorean triples to be included), with the only condition being that d > 0. In this setting, a Pythagorean quadruple (a, b, c, d) defines a cuboid with integer side lengths |a|, |b|, and |c|, whose space diagonal has integer length d; with this interpretation, Pythagorean quadruples are thus also called Pythagorean boxes. In this article we will assume, unless otherwise stated, that the values of a Pythagorean quadruple are all positive integers.
Parametrization of primitive quadruples:
A Pythagorean quadruple is called primitive if the greatest common divisor of its entries is 1. Every Pythagorean quadruple is an integer multiple of a primitive quadruple. The set of primitive Pythagorean quadruples for which a is odd can be generated by the formulas

a = m² + n² − p² − q²,
b = 2(mq + np),
c = 2(nq − mp),
d = m² + n² + p² + q²,

where m, n, p, q are non-negative integers with greatest common divisor 1 such that m + n + p + q is odd. Thus, all primitive Pythagorean quadruples are characterized by the identity

(m² + n² + p² + q²)² = (m² + n² − p² − q²)² + (2mq + 2np)² + (2nq − 2mp)².
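A minimal sketch checking the parametrization: the identity a² + b² + c² = d² holds for every integer choice of m, n, p, q, while the gcd and parity conditions merely select the primitive quadruples with a odd.

```python
from itertools import product

def quadruple(m, n, p, q):
    """(a, b, c, d) from the parametrization above."""
    a = m*m + n*n - p*p - q*q
    b = 2*(m*q + n*p)
    c = 2*(n*q - m*p)
    d = m*m + n*n + p*p + q*q
    return a, b, c, d

# The identity holds identically in the parameters, so every assert passes.
for m, n, p, q in product(range(5), repeat=4):
    a, b, c, d = quadruple(m, n, p, q)
    assert a*a + b*b + c*c == d*d

print(quadruple(1, 0, 1, 1))  # (-1, 2, -2, 3): the quadruple (1, 2, 2, 3) up to signs
```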
Alternate parametrization:
All Pythagorean quadruples (including non-primitives, and with repetition, though a, b, and c do not appear in all possible orders) can be generated from two positive integers a and b as follows: if a and b have different parity, let p be any factor of a² + b² such that p² < a² + b². Then c = (a² + b² − p²)/(2p) and d = (a² + b² + p²)/(2p). Note that p = d − c.
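The rule above translates directly into code; the sketch below enumerates, for a and b of different parity, every factor p of a² + b² with p² < a² + b² and returns the resulting (c, d) pairs.

```python
def completions(a, b):
    """(c, d) pairs with a^2 + b^2 + c^2 = d^2, for a, b of different parity;
    one pair per factor p of a^2 + b^2 with p^2 < a^2 + b^2 (then p = d - c)."""
    s = a*a + b*b
    pairs = []
    for p in range(1, s + 1):
        if p * p >= s:
            break
        if s % p == 0:
            pairs.append(((s - p*p) // (2*p), (s + p*p) // (2*p)))
    return pairs

print(completions(1, 2))  # [(2, 3)]: 1^2 + 2^2 + 2^2 = 3^2
print(completions(2, 3))  # [(6, 7)]: 2^2 + 3^2 + 6^2 = 7^2
```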
A similar method exists for generating all Pythagorean quadruples for which a and b are both even. Let l = a/2 and m = b/2, and let n be a factor of l² + m² such that n² < l² + m². Then c = (l² + m² − n²)/n and d = (l² + m² + n²)/n. This method generates all Pythagorean quadruples exactly once each when l and m run through all pairs of natural numbers and n runs through all permissible values for each pair.
No such method exists if both a and b are odd, in which case no solutions exist as can be seen by the parametrization in the previous section.
Properties:
The largest number that always divides the product abcd is 12. The quadruple with the minimal product is (1, 2, 2, 3).
Relationship with quaternions and rational orthogonal matrices:
A primitive Pythagorean quadruple (a, b, c, d) parametrized by (m, n, p, q) corresponds to the first column of the matrix representation E(α) of conjugation α(⋅)ᾱ by the Hurwitz quaternion α = m + ni + pj + qk restricted to the subspace of quaternions spanned by i, j, k. Explicitly, E(α) is the 3 × 3 matrix with rows

(m² + n² − p² − q²,  2np − 2mq,  2nq + 2mp),
(2np + 2mq,  m² − n² + p² − q²,  2pq − 2mn),
(2nq − 2mp,  2pq + 2mn,  m² − n² − p² + q²),

where the columns are pairwise orthogonal and each has norm d. Furthermore, (1/d)E(α) belongs to the orthogonal group SO(3, Q), and, in fact, all 3 × 3 orthogonal matrices with rational coefficients arise in this manner.
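A short sketch constructing E(α) for one parameter choice and checking the stated column properties; the function simply mirrors the matrix above.

```python
def E(m, n, p, q):
    """Matrix of x -> alpha * x * conj(alpha) on span{i, j, k},
    for alpha = m + n i + p j + q k; mirrors the matrix above."""
    return [
        [m*m + n*n - p*p - q*q, 2*(n*p - m*q),         2*(n*q + m*p)],
        [2*(n*p + m*q),         m*m - n*n + p*p - q*q, 2*(p*q - m*n)],
        [2*(n*q - m*p),         2*(p*q + m*n),         m*m - n*n - p*p + q*q],
    ]

m, n, p, q = 1, 0, 1, 1
d = m*m + n*n + p*p + q*q
cols = list(zip(*E(m, n, p, q)))
for i in range(3):
    assert sum(x * x for x in cols[i]) == d * d                   # columns have norm d
    for j in range(i + 1, 3):
        assert sum(x * y for x, y in zip(cols[i], cols[j])) == 0  # pairwise orthogonal
print(cols[0])  # (-1, 2, -2): the first column is (a, b, c), here with d = 3
```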
Primitive Pythagorean quadruples with small norm:
There are 31 primitive Pythagorean quadruples in which all entries are less than 30.
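This count can be reproduced by brute force, assuming the quadruples are taken with a ≤ b ≤ c (so reorderings are not counted separately):

```python
from math import gcd, isqrt

def primitive_quadruples(limit=30):
    """Primitive quadruples (a, b, c, d) with a <= b <= c and all entries < limit."""
    found = []
    for a in range(1, limit):
        for b in range(a, limit):
            for c in range(b, limit):
                d = isqrt(a*a + b*b + c*c)
                if d*d == a*a + b*b + c*c and d < limit \
                        and gcd(gcd(a, b), gcd(c, d)) == 1:
                    found.append((a, b, c, d))
    return found

quads = primitive_quadruples()
print(len(quads))   # the text above states this count is 31
print(quads[0])     # (1, 2, 2, 3), the smallest primitive quadruple
```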
**Heteroblasty (botany)**
Heteroblasty (botany):
Heteroblasty is a significant and abrupt change in form and function that occurs over the lifespan of certain plants. Characteristics affected include the distance between successive leaves (internode length) and stem structure, as well as leaf form, size and arrangement. It should not be confused with seasonal heterophylly, where early and late growth in a season are visibly different. Heteroblastic development contrasts with homoblastic development, in which change is gradual or slight, so that there is little difference between the juvenile and adult stages. Heteroblasty is found in many plant families, and often in only some species within a genus. This scattered distribution of heteroblastic plants across species is believed to be the result of convergent evolution. The earlier and later stages of development are commonly labeled as juvenile and adult respectively, particularly in relation to leaves.
Heteroblastic change is thus often referred to as "vegetative phase change" (distinct from reproductive phase change) in the plant molecular biology literature. The term heteroblasty was coined by the German botanist Karl Ritter von Goebel, along with homoblasty for plants with leaf characteristics that do not change significantly. Leonard Cockayne observed that heteroblasty occurred in an unusually high proportion of tree species native to New Zealand.
Origins:
There are two ways to look at how heteroblasty developed: the first is to consider its evolution, and the second is to consider the ecological interactions of heteroblastic plants.
Evolution Many hypothesize that heteroblasty is a result of natural selection for species that can best survive in both low- and high-light environments. As a plant grows in the forest, it experiences predictable changes in light intensity. With this in mind, a plant that changes its leaf morphology and phyllotaxy to best suit these changes in light intensity could be more competitive than one that has only one leaf form and phyllotaxy. It is also hypothesized that the development of heteroblastic trees preceded the development of divaricating shrub forms, which are now very common in New Zealand. These shrubs are thought to be a mutation of the heteroblastic trees that has lost the ability to develop into the adult stage, so they closely resemble heteroblastic trees in their juvenile form. It has also been observed that heteroblastic species do not stem from a single point of origin; they are found in many different and unrelated lineages. Because of this, it is believed that large-scale convergent evolution must have occurred for so many unrelated plants to exhibit similar behavior.
Ecology Heteroblasty can affect all parts of the plant, but the leaves are the most common examples and by far the most studied. It has been hypothesized that heteroblastic changes are due to changes in the plant's exposure to sun, because many species spend their juvenile years in the understory and then grow to maturity as part of the top canopy, where they have full exposure to the sun. This has not been well studied, because the common heteroblastic plants are woody and take a long time to grow, such as Eucalyptus grandis. Juvenile plants tend to face more competition and must make special adaptations to succeed that become unnecessary in the mature plant. For example, a sapling in a dense forest must grow quickly to succeed, but once established, most woody plants no longer compete severely with their neighbors, so the adaptations needed as a juvenile are no longer necessary. This can lead to a change in growth at maturity as the tree faces new environmental factors, such as a need to resist new pathogens or parasites.
Mechanism:
At the cellular level, there are different ways that a plant controls its growth and development. Internal and external signals result in changes in the plant's response, and plants also have genetically predetermined growth patterns.
Signaling Hormones are known to regulate heteroblastic change in plants. One hormone that has been identified is gibberellin. In one study, it was used to revert the mature form of Hedera helix (a common English ivy) to its juvenile form: after being sprayed with gibberellic acid, some of the ivies began to produce aerial roots and three-lobed leaves, both characteristics of the juvenile form. It is also hypothesized that auxin and cytokinin, working together, can cause the sudden change in phyllotaxy of heteroblastic plants. The gene ABPH1, which is associated with cytokinin signalling, was found, when changed in a mutant, to affect the plant's ability to regulate the phyllotaxy of the stem. The hypothesis is based mostly on studies done on non-heteroblastic plants, so it is not certain that these hormones cause the sudden changes in a heteroblastic plant. A dramatic change in leaf size is another example of a heteroblastic change, and researchers have looked to studies done on non-heteroblastic plants for clues about which hormones and genes could regulate these changes. AINTEGUMENTA has been found to be one of these regulatory genes, regulating cell growth. It is believed that many genes are involved in the regulation of leaf size and that these genes do not closely interact, meaning they are not controlled by a master regulator but instead are part of many different pathways.
Genetics Some of the most common model plants include Arabidopsis thaliana (common name: mouse-ear cress), Antirrhinum majus (common name: snapdragon), and Zea mays (common name: corn). Some authors have argued that these species are not useful models for the study of gene expression in heteroblastic plants because none of them express obvious heteroblastic traits. Researchers in this area can use Arabidopsis to some degree, as it does undergo some change from the juvenile phase to the mature phase, but it is not clearly heteroblastic. If the process of change is assumed to be similar and under similar regulation, Arabidopsis can be used to analyze the causes of changes in plant growth that may occur in the same way, but more dramatically, in heteroblastic plants. This involves many assumptions, though, and so researchers are seeking other plants to use as model subjects. The problem is that most plants that display heteroblastic growth are woody plants: their life spans are much longer in general, and, unlike Arabidopsis, very little of their genomes are known or mapped. A species that shows promise is Eucalyptus grandis. This tree is fast growing and widely cultivated for its many uses, including teas, oils, and wood, and so is one of the best candidates for genome sequencing, which is being done now so that the tree can be better studied in the future. There is already a complete quantitative trait loci map for its juvenile traits.
Examples:
These plants are a few of the common examples of heteroblastic plants often found in studies; this is far from an all-encompassing list. All listed are plants because, as far as is known, heteroblasty occurs only in plants and is absent in animals, fungi, and microbes.
- Lightwood (Acacia implexa), a fast-growing tree found in Australia
- Spiral ginger (Costus pulverulentus C.Presl), an herb found in South America, primarily Nicaragua, used as a traditional medicine in teas for pain and inflammation, and also used to treat cancer
- Lancewood (Pseudopanax crassifolius), a native of New Zealand
- Pōkākā (Elaeocarpus hookerianus), native to New Zealand
- Bucket-of-water tree or maple leaf (Carpodetus serratus), native to New Zealand
Geographic distribution:
This is a list of places where heteroblastic plants have been commonly found and documented. It is not a complete list, as heteroblastic plants can be hard to identify and do not appear in families predictably.
New Zealand has a very large population of heteroblastic plants, with about 200 tree species and 10% of woody shrub species having heteroblastic tendencies.
Australia also has heteroblastic species, though their exact number is not known.
The Americas also have a few heteroblastic plants, documented specifically in Mexico and Nicaragua.
Similar processes:
Processes often confused with heteroblasty include the following. Homoblasty is the first example. Homoblastic change is the slight, gradual change a plant experiences over a long period of time as it grows to maturity. Examples are a plant's leaves growing slightly larger over time as it matures, or a tree's trunk growing in girth.
Similar processes:
Heterophylly is another term that is often used interchangeably with heteroblasty. Heterophylly refers to specific changes in leaf morphology that lead to variation in leaf shape or size on a single plant. This type of change is seen when individual leaves are studied and compared; it differs from heteroblasty, in which the entire foliage changes dramatically but, for the most part, uniformly. A heteroblastic plant can have heterophyllic changes, but the two are not the same.
Similar processes:
Phenotypic plasticity changes the structure of plants as well but should not be confused with heteroblasty. Phenotypic plasticity is when an individual can use the same genes to create a different phenotype based on environmental signals, such as when a plant adapts its immune system to a new pathogen or when a reptile changes its sex based on environmental cues. The difference is that heteroblasty is not entirely dependent on the environment, though it can be affected by it, and happens throughout the plant's maturation rather than at arbitrary points in response to an environmental change.
**Heredity (journal)**
Heredity (journal):
Heredity is a monthly peer-reviewed scientific journal published by Nature Portfolio. It covers heredity in a biological sense, i.e. genetics. The journal was founded by Ronald Fisher and C. D. Darlington in 1947 and is the official journal of The Genetics Society. Nature Portfolio took over its publishing in 1996. The editor-in-chief is Sara Goodacre. According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.832, ranking it 61st out of 173 journals in the category "Ecology", 19th out of 51 journals in the category "Evolutionary Biology", and 79th out of 175 journals in the category "Genetics & Heredity".
**Stout**
Stout:
Stout is a dark, top-fermented beer with a number of variations, including dry stout, oatmeal stout, milk stout, and imperial stout.
Stout:
The first known use of the word stout for beer, in a document dated 1677 found in the Egerton Manuscripts, referred to its strength. The name porter was first used in 1721 to describe a dark brown beer. Because of the huge popularity of porters, brewers made them in a variety of strengths. The stronger beers, typically 7% or 8% alcohol by volume (ABV), were called "stout porters", so the history and development of stout and porter are intertwined, and the term stout has become firmly associated with dark beer, rather than just strong beer.
History:
Porter originated in London, England in the early 1720s. The style quickly became popular in the city, especially with porters (hence its name): it had a strong flavour, took longer to spoil than other beers, was significantly cheaper than other beers, and was not easily affected by heat. Within a few decades, porter breweries in London had grown "beyond any previously known scale". Large volumes were exported to Ireland and by 1776 it was being brewed by Arthur Guinness at his St. James's Gate Brewery. In the 19th century, the beer gained its customary black colour through the use of black patent malt, and became stronger in flavour.
History:
Originally, the adjective stout meant "proud" or "brave", but later, after the 14th century, it took on the connotation of "strong". The first known use of the word stout for beer was in a document dated 1677 found in the Egerton Manuscript, the sense being that a stout beer was a strong beer. The expression stout porter was applied during the 18th century to strong versions of porter. Stout still meant only "strong" and it could be related to any kind of beer, as long as it was strong: in the UK it was possible to find "stout pale ale", for example. Later, stout was eventually to be associated only with porter, becoming a synonym of dark beer.
History:
Because of the huge popularity of porters, brewers made them in a variety of strengths. The beers with higher gravities were called "Stout Porters". There is still division and debate on whether stouts should be a separate style from porter. Usually the only deciding factor is strength. "Nourishing" and sweet "milk" stouts became popular in Great Britain in the years following the First World War, though their popularity declined towards the end of the 20th century, apart from pockets of local interest such as in Glasgow with Sweetheart Stout.
History:
Beer writer Michael Jackson wrote about stouts and porters in the 1970s, but in the mid 1980s a survey by What’s Brewing found just 29 brewers in the UK and Channel Islands still making stout, most of them milk stouts. In the 21st century, stout is making a comeback with a new generation of drinkers, thanks to new products from burgeoning craft and regional brewers.
Milk stout:
Milk stout (also called sweet stout or cream stout) is a stout containing lactose, a sugar derived from milk. Because lactose cannot be fermented by beer yeast, it adds sweetness and perceived body to the finished beer. Milk stout, which was claimed to be nutritious, was given to nursing mothers to help increase their milk production. The classic surviving example of milk stout is Mackeson's, for which the original brewers advertised that "each pint contains the energising carbohydrates of 10 ounces [284 ml] of pure dairy milk". The style was rare until being revived by a number of craft breweries in the twenty-first century.
Milk stout:
There were prosecutions in Newcastle upon Tyne in 1944 under the Food and Drugs Act 1938 regarding misleading labelling of milk stout.
Dry or Irish stout:
With milk or sweet stout becoming the dominant stout in the UK in the early 20th century, it was mainly in Ireland that the non-sweet or standard stout was being made. As standard stout has a drier taste than the English and American sweet stouts, it came to be called dry stout or Irish stout to differentiate it from stouts with added lactose or oatmeal. This is the style that represents a typical stout to most people. The best selling stouts worldwide are Irish stouts made by Guinness (now owned by Diageo) at St. James's Gate Brewery (also known as the Guinness Brewery) in Dublin. Guinness makes a number of different varieties of its Irish stouts. Other examples of Irish dry stout include Murphy's and Beamish, now both owned by Heineken. Native Irish stouts are brewed by independent Irish craft breweries, most of whom include a stout in their core ranges. Draught Irish stout is normally served with a nitrogen propellant in addition to the carbon dioxide most beers use, to create a creamy texture with a long-lasting head. Some canned and bottled stouts include a special device called a "widget" to nitrogenate the beer in the container to replicate the experience of the keg varieties.
Porter:
There were no differences between stout and porter historically, though there had been a tendency for breweries to differentiate the strengths of their beers with the words "extra", "double" and "stout". The term stout was initially used to indicate a stronger porter than other porters from a brewery.
Oatmeal stout:
Oatmeal stout is a stout with a proportion of oats, normally a maximum of 30%, added during the brewing process. Even though a larger proportion of oats in beer can lead to a bitter or astringent taste, during the medieval period in Europe, oats were a common ingredient in ale, and proportions up to 35% were standard. Despite some areas of Europe, such as Norway, still clinging to the use of oats in brewing until the early part of the 20th century, the practice had largely died out by the 16th century, so much so that in 1513 Tudor sailors refused to drink oat beer offered to them because of the bitter flavour. There was a revival of interest in using oats during the end of the 19th century, when (supposedly) restorative, nourishing and invalid beers, such as the later milk stout, were popular, because of the association of porridge with health. Maclay of Alloa produced an Original Oatmalt Stout in 1895 that used 70% "oatmalt", and a 63/- Oatmeal Stout in 1909, which used 30% "flaked (porridge) oats". In the 20th century, many oatmeal stouts contained only a minimal amount of oats. For example, in 1936 Barclay Perkins Oatmeal Stout used only 0.5% oats. As the oatmeal stout was parti-gyled with their porter and standard stout, these two also contained the same proportion of oats. (Parti-gyle brewing involves blending the worts drawn from multiple mashes or sparges after the boil to produce beers of different gravities.) The name seems to have been a marketing device more than anything else. In the 1920s and 1930s Whitbread's London Stout and Oatmeal Stout were identical, just packaged differently. The amount of oats Whitbread used was minimal, again around 0.5%. With such a small quantity of oats used, it could have had only a little impact on the flavour or texture of these beers.
Oatmeal stout:
Many breweries were still brewing oatmeal stouts in the 1950s, for example Brickwoods in Portsmouth, Matthew Brown in Blackburn and Ushers in Trowbridge. When Michael Jackson mentioned the defunct Eldridge Pope "Oat Malt Stout" in his 1977 book The World Guide to Beer, oatmeal stout was no longer being made anywhere, but Charles Finkel, founder of Merchant du Vin, was curious enough to commission Samuel Smith to produce a version. Samuel Smith's Oatmeal Stout then became the template for other breweries' versions.
Oatmeal stout:
Oatmeal stouts do not usually taste specifically of oats. The smoothness of oatmeal stouts comes from their high content of proteins, lipids (including fats and waxes), and gums imparted by the use of oats. The gums increase the viscosity and body, adding to the sense of smoothness.
Oyster stout:
Oysters have had a long association with stout. When stouts were emerging in the 18th century, oysters were a commonplace food often served in public houses and taverns. By the 20th century, oyster beds were in decline, and stout had given way to pale ale. Ernest Barnes came up with the idea of combining oysters with stout using an oyster concentrate made by Thyrodone Development Ltd. in Bluff, New Zealand, where he was factory manager. It was first sold by the Dunedin Brewery Company in New Zealand in 1938, with the Hammerton Brewery in London, UK, beginning production using the same formula the following year. Hammerton Brewery was re-established in 2014 and is once again brewing an oyster stout.
Oyster stout:
Modern oyster stouts may be made with a handful of oysters in the barrel, hence the warning by one establishment, the Porterhouse Brewery in Dublin, that their award-winning Oyster Stout was not suitable for vegetarians. Others, such as Marston's Oyster Stout, use the name with the implication that the beer would be suitable for drinking with oysters.
Chocolate stout:
Chocolate stout is a name brewers sometimes give to certain stouts having a noticeable dark chocolate flavour through the use of darker, more aromatic malt; particularly chocolate malt—a malt that has been roasted or kilned until it acquires a chocolate colour. Sometimes, as with Muskoka Brewery's Double Chocolate Cranberry Stout, Young's Double Chocolate Stout, and Rogue Brewery's Chocolate Stout, the beers are also brewed with a small amount of chocolate, chocolate flavouring, or cacao nibs.
Imperial stout:
Imperial stout, also known as "Russian Imperial stout", is a strong dark beer in the style that was brewed in the 18th century by Thrale's Anchor Brewery in London for export to the court of Catherine II of Russia. In 1781 the brewery changed hands and the beer became known as "Barclay Perkins Imperial Brown Stout". It was shipped to Russia by Albert von Le Coq who was awarded a Russian royal warrant which entitled him to use the name "Imperial". Historical analyses from the time period of 1849 to 1986 show that the beer had an original gravity between 1.100 and 1.107 and an alcohol content of around 10% ABV. This remained virtually unchanged over the whole time period. A recipe from 1856 also indicates that it was hopped at a rate of 10 pounds of hops to the barrel (28 g/L). When Barclay's brewery was taken over by Courage in 1955, the beer was renamed "Courage Imperial Russian Stout" and it was brewed sporadically until 1993. The bottle cap still said "Barclay's". In Canada, Imperial Stout was produced in Prince Albert first by Fritz Sick, and then by Molson following a 1958 takeover. Denmark's Wiibroe Brewery launched its 8.2 per cent Imperial Stout in 1930. The first brewery to brew an Imperial Stout in the United States was Bert Grant's Yakima Brewing. Imperial stouts have a high alcohol content, usually over 9% abv, and are among the darkest available beer styles. Samuel Smith's brewed a version for export to the United States in the early 1980s, and today Imperial stout is among the most popular beer styles with U.S. craft brewers. American interpretations of the style often include ingredients such as vanilla beans, chili powder, maple syrup, coffee, and marshmallows. Many are aged in bourbon barrels to add additional layers of flavour. The word "Imperial" is now commonly added to other beer styles to denote a stronger version, hence Imperial IPAs, Imperial pilsners etc. Baltic porter is a version of Imperial stout which originated in the Baltic region in the 19th century. Imperial stouts imported from Britain were recreated locally using local ingredients and brewing traditions.
Pastry stout:
A pastry stout is a stout brewed to be intentionally sweet, with the end goal that the beer mimics the flavor, and sometimes the appearance, of a dessert. Many breweries that produce pastry stouts experiment with flavors such as chocolate, marshmallow, maple syrup, vanilla, and various fruits. The finished product has the flavor and aroma of popular sweets such as blueberry pancakes, s'mores, donuts, brownies, cake, ice cream, and fruit crumble, to name a few.
**Known-key distinguishing attack**
Known-key distinguishing attack:
In cryptography, a known-key distinguishing attack is an attack model against symmetric ciphers, whereby an attacker who knows the key can find a structural property in the cipher, where the transformation from plaintext to ciphertext is not random. There is no common formal definition for what such a transformation may be. The chosen-key distinguishing attack is strongly related, where the attacker can choose a key to introduce such transformations. These attacks do not directly compromise the confidentiality of ciphers, because in a classical scenario, the key is unknown to the attacker. Known-/chosen-key distinguishing attacks apply in the "open key model" instead. They are known to be applicable in some situations where block ciphers are converted to hash functions, leading to practical collision attacks against the hash. Known-key distinguishing attacks were first introduced in 2007 by Lars Knudsen and Vincent Rijmen in a paper that proposed such an attack against 7 out of 10 rounds of the AES cipher and another attack against a generalized Feistel cipher. Their attack finds plaintext/ciphertext pairs for a cipher with a known key, where the input and output have s least significant bits set to zero, in less than 2^s time (where s is fewer than half the block size). These attacks have also been applied to reduced-round Threefish (Skein) and Phelix.
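To make the notion of "non-randomness" concrete, the sketch below (Python; the toy Feistel construction is a hypothetical stand-in, not the AES variant from the Knudsen–Rijmen paper) runs the generic black-box experiment: for an ideal cipher, exhibiting a plaintext/ciphertext pair whose s least significant bits are all zero on both sides is expected to cost about 2^s trials, so any known-key procedure that reliably beats this bound distinguishes the cipher from a random permutation.

```python
import os
import hashlib

def toy_cipher(key: bytes, block: int, rounds: int = 8) -> int:
    # Hypothetical 32-bit Feistel permutation used only for illustration;
    # it is NOT the reduced-round AES analysed by Knudsen and Rijmen.
    l, r = block >> 16, block & 0xFFFF
    for i in range(rounds):
        f = int.from_bytes(
            hashlib.sha256(key + bytes([i]) + r.to_bytes(2, "big")).digest()[:2],
            "big",
        )
        l, r = r, l ^ f
    return (l << 16) | r

def generic_search(key: bytes, s: int) -> tuple[int, int]:
    # Black-box baseline: step through plaintexts whose s low bits are zero
    # until the ciphertext's s low bits are zero too.  Expected ~2**s trials
    # for an ideal cipher; a known-key distinguisher finds such pairs faster.
    mask = (1 << s) - 1
    plaintext, trials = 0, 0
    while True:
        trials += 1
        if toy_cipher(key, plaintext) & mask == 0:
            return plaintext, trials
        plaintext += 1 << s

key = os.urandom(16)
p, n = generic_search(key, s=8)
print(f"zero-tailed pair found after {n} trials (ideal-cipher expectation ~{2**8})")
```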
**Quasisymmetric function**
Quasisymmetric function:
In algebra and in particular in algebraic combinatorics, a quasisymmetric function is any element in the ring of quasisymmetric functions which is in turn a subring of the formal power series ring with a countable number of variables. This ring generalizes the ring of symmetric functions. This ring can be realized as a specific limit of the rings of quasisymmetric polynomials in n variables, as n goes to infinity. This ring serves as a universal structure in which relations between quasisymmetric polynomials can be expressed in a way independent of the number n of variables (but its elements are neither polynomials nor functions).
Definitions:
The ring of quasisymmetric functions, denoted QSym, can be defined over any commutative ring R such as the integers. Quasisymmetric functions are power series of bounded degree in variables $x_1, x_2, x_3, \ldots$ with coefficients in R, which are shift invariant in the sense that the coefficient of the monomial $x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_k^{\alpha_k}$ is equal to the coefficient of the monomial $x_{i_1}^{\alpha_1} x_{i_2}^{\alpha_2} \cdots x_{i_k}^{\alpha_k}$ for any strictly increasing sequence of positive integers $i_1 < i_2 < \cdots < i_k$ indexing the variables and any positive integer sequence $(\alpha_1, \alpha_2, \ldots, \alpha_k)$ of exponents.
Definitions:
Much of the study of quasisymmetric functions is based on that of symmetric functions.
A quasisymmetric function in finitely many variables is a quasisymmetric polynomial.
Both symmetric and quasisymmetric polynomials may be characterized in terms of actions of the symmetric group $S_n$ on a polynomial ring in n variables $x_1, \ldots, x_n$. One such action of $S_n$ permutes variables, changing a polynomial $p(x_1, \ldots, x_n)$ by iteratively swapping pairs $(x_i, x_{i+1})$ of variables having consecutive indices.
Those polynomials unchanged by all such swaps form the subring of symmetric polynomials.
A second action of $S_n$ conditionally permutes variables, changing a polynomial $p(x_1, \ldots, x_n)$ by swapping pairs $(x_i, x_{i+1})$ of variables except in monomials containing both variables.
Those polynomials unchanged by all such conditional swaps form the subring of quasisymmetric polynomials. One quasisymmetric function in four variables $x_1, x_2, x_3, x_4$ is the polynomial $x_1^2 x_2 x_3 + x_1^2 x_2 x_4 + x_1^2 x_3 x_4 + x_2^2 x_3 x_4$.
The simplest symmetric function containing these monomials is $x_1^2 x_2 x_3 + x_1^2 x_2 x_4 + x_1^2 x_3 x_4 + x_2^2 x_3 x_4 + x_1 x_2^2 x_3 + x_1 x_2^2 x_4 + x_1 x_3^2 x_4 + x_2 x_3^2 x_4 + x_1 x_2 x_3^2 + x_1 x_2 x_4^2 + x_1 x_3 x_4^2 + x_2 x_3 x_4^2$.
Important bases:
QSym is a graded R-algebra, decomposing as $\mathrm{QSym} = \bigoplus_{n \ge 0} \mathrm{QSym}_n$, where $\mathrm{QSym}_n$ is the R-span of all quasisymmetric functions that are homogeneous of degree n. Two natural bases for $\mathrm{QSym}_n$ are the monomial basis $\{M_\alpha\}$ and the fundamental basis $\{F_\alpha\}$, indexed by compositions $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_k)$ of n, denoted $\alpha \vDash n$. The monomial basis consists of $M_0 = 1$ and all formal power series $M_\alpha = \sum_{i_1 < i_2 < \cdots < i_k} x_{i_1}^{\alpha_1} x_{i_2}^{\alpha_2} \cdots x_{i_k}^{\alpha_k}$.
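To make the monomial basis concrete, the following sketch (Python with the SymPy library, assumed to be available; `monomial_qsym` is a name chosen here for illustration) expands the truncation of $M_\alpha$ to n variables by summing over strictly increasing index sequences; `monomial_qsym((2, 1, 1), 4)` reproduces the four-variable quasisymmetric polynomial displayed earlier.

```python
from itertools import combinations
import sympy as sp

def monomial_qsym(alpha, n):
    # Truncation of M_alpha to x1..xn: sum over strictly increasing index
    # sequences i_1 < ... < i_k of x_{i_1}^{a_1} * ... * x_{i_k}^{a_k}.
    xs = sp.symbols(f"x1:{n + 1}")  # the variables x1, ..., xn
    k = len(alpha)
    return sp.Add(*[
        sp.Mul(*[xs[i] ** a for i, a in zip(idx, alpha)])
        for idx in combinations(range(n), k)
    ])

# M_(2,1) in four variables: x1**2*x2 + x1**2*x3 + x1**2*x4 + x2**2*x3 + ...
print(sp.expand(monomial_qsym((2, 1), 4)))
# The example from the text: M_(2,1,1) in four variables.
print(sp.expand(monomial_qsym((2, 1, 1), 4)))
```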
Important bases:
The fundamental basis consists of $F_0 = 1$ and all formal power series $F_\alpha = \sum_{\alpha \succeq \beta} M_\beta$, where $\alpha \succeq \beta$ means we can obtain $\alpha$ by adding together adjacent parts of $\beta$, for example, $(3,2,4,2) \succeq (3,1,1,1,2,1,2)$. Thus, when the ring R is the ring of rational numbers, one has $\mathrm{QSym}_n = \operatorname{span}_{\mathbb{Q}} \{F_\alpha \mid \alpha \vDash n\}$.
Then one can define the algebra of symmetric functions $\Lambda = \Lambda_0 \oplus \Lambda_1 \oplus \cdots$ as the subalgebra of QSym spanned by the monomial symmetric functions $m_0 = 1$ and all formal power series $m_\lambda = \sum M_\alpha$, where the sum is over all compositions $\alpha$ which rearrange to the partition $\lambda$. Moreover, we have $\Lambda_n = \Lambda \cap \mathrm{QSym}_n$. For example, $F_{(1,2)} = M_{(1,2)} + M_{(1,1,1)}$ and $m_{(2,1)} = M_{(2,1)} + M_{(1,2)}$.
Other important bases for quasisymmetric functions include the basis of quasisymmetric Schur functions, the "type I" and "type II" quasisymmetric power sums, and bases related to enumeration in matroids.
Applications:
Quasisymmetric functions have been applied in enumerative combinatorics, symmetric function theory, representation theory, and number theory. Applications of quasisymmetric functions include enumeration of P-partitions, permutations, tableaux, chains of posets, reduced decompositions in finite Coxeter groups (via Stanley symmetric functions), and parking functions. In symmetric function theory and representation theory, applications include the study of Schubert polynomials, Macdonald polynomials, Hecke algebras, and Kazhdan–Lusztig polynomials. Often quasisymmetric functions provide a powerful bridge between combinatorial structures and symmetric functions.
Related algebras:
As a graded Hopf algebra, the dual of the ring of quasisymmetric functions is the ring of noncommutative symmetric functions. Every symmetric function is also a quasisymmetric function, and hence the ring of symmetric functions is a subalgebra of the ring of quasisymmetric functions.
The ring of quasisymmetric functions is the terminal object in the category of graded Hopf algebras with a single character.
Hence any such Hopf algebra has a morphism to the ring of quasisymmetric functions.
One example of this is the peak algebra.
Other related algebras The Malvenuto–Reutenauer algebra is a Hopf algebra based on permutations that relates the rings of symmetric functions, quasisymmetric functions, and noncommutative symmetric functions (denoted Sym, QSym, and NSym respectively), which can be depicted in a commutative diagram. The duality between QSym and NSym mentioned above is reflected in the main diagonal of that diagram.
Many related Hopf algebras were constructed from Hopf monoids in the category of species by Aguiar and Mahajan. One can also construct the ring of quasisymmetric functions in noncommuting variables.
**Proof of stake**
Proof of stake:
Proof-of-stake (PoS) protocols are a class of consensus mechanisms for blockchains that work by selecting validators in proportion to their quantity of holdings in the associated cryptocurrency. This is done to avoid the computational cost of proof-of-work (PoW) schemes. The first functioning use of PoS for cryptocurrency was Peercoin in 2012, although the scheme, on the surface, still resembled a PoW scheme.
Description:
For a blockchain transaction to be recognized, it must be appended to the blockchain. In proof-of-stake blockchains the appending entities are named minters or validators (in proof-of-work blockchains this task is carried out by the miners); in most protocols, the validators receive a reward for doing so. For the blockchain to remain secure, it must have a mechanism to prevent a malicious user or group from taking over a majority of validation. PoS accomplishes this by requiring that validators have some quantity of blockchain tokens, requiring potential attackers to acquire a large fraction of the tokens on the blockchain to mount an attack. Proof of work (PoW), another commonly used consensus mechanism, uses a validation of computational prowess to verify transactions, requiring a potential attacker to acquire a large fraction of the computational power of the validator network. This incentivizes consuming huge quantities of energy. PoS is more energy-efficient. Early PoS implementations were plagued by a number of new attacks that exploited the unique vulnerabilities of the PoS protocols. Eventually two dominant designs emerged: so-called Byzantine-Fault-Tolerance-based and chain-based approaches. Bashir identifies three more types of PoS: committee-based PoS (a.k.a. nominated PoS, NPoS); delegated proof of stake (DPoS); liquid proof of stake (LPoS).
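A minimal sketch of stake-proportional selection (Python; the names, stakes, and the bare-hash seed are illustrative assumptions — production protocols typically derive randomness from a verifiable random function or similar beacon):

```python
import hashlib

def select_validator(stakes: dict[str, int], seed: bytes) -> str:
    # Pick a validator with probability proportional to stake, using a
    # deterministic seed so that all honest nodes make the same choice.
    total = sum(stakes.values())
    point = int.from_bytes(hashlib.sha256(seed).digest(), "big") % total
    cumulative = 0
    for validator, stake in sorted(stakes.items()):
        cumulative += stake
        if point < cumulative:
            return validator
    raise AssertionError("unreachable")

stakes = {"alice": 60, "bob": 30, "carol": 10}
picks = [select_validator(stakes, str(h).encode()) for h in range(10_000)]
print({v: picks.count(v) for v in stakes})  # roughly 6000 / 3000 / 1000
```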
Attacks:
The additional vulnerabilities of the PoS schemes are directly related to their advantage, a relatively low amount of calculations to be performed while constructing a blockchain.
Attacks:
Long-range attacks The low amount of computing power involved allows a class of attacks that replace a non-negligible portion of the main blockchain with a hijacked version. These attacks are called by different names in the literature, such as Long-Range, Alternative History, Alternate History, and History Revision, and are unfeasible in PoW schemes due to the sheer volume of calculations required. The early stages of a blockchain are much more malleable to rewriting, as they likely have a much smaller group of stakeholders involved, simplifying collusion. If per-block and per-transaction rewards are offered, a malicious group can, for example, redo the entire history and collect these rewards. The classic "Short-Range" attack (bribery attack) that rewrites just a small tail portion of the chain is also possible.
Attacks:
Nothing at stake Since validators do not need to spend a considerable amount of computing power (and thus money) on the process, they are prone to the Nothing-at-Stake attack: participation in a successful validation increases the validator's earnings, so there is a built-in incentive for validators to accept all chain forks submitted to them, thus increasing the chances of earning the validation fee. PoS schemes enable low-cost creation of blockchain alternatives starting at any point in history (costless simulation), and submitting these forks to eager validators endangers the stability of the system. If this situation persists, it can allow double-spending, where a digital token is spent more than once. This can be mitigated by penalizing validators who validate conflicting chains ("economic finality") or by structuring the rewards so that there is no economic incentive to create conflicts. Byzantine-Fault-Tolerance-based PoS is generally considered robust against this threat.
Attacks:
Bribery attack In a bribery attack, the attackers financially induce some validators to approve their fork of the blockchain. This attack is enhanced in PoS because rewriting a large portion of history might enable once-rich stakeholders, who no longer hold significant amounts at stake, to collude to claim a necessary majority at some point back in time and grow the alternative blockchain from there, an operation made possible by the low computing cost of adding blocks in the PoS scheme.
Variants:
Chain-based PoS This is essentially a modification of the PoW scheme, where the competition is based not on applying brute force to solve an identical puzzle in the smallest amount of time, but instead on varying the difficulty of the puzzle depending on the stake of the participant; the puzzle is solved if, on a tick of the clock (|| denotes concatenation): Hash(ProposedNewBlock || ClockTime) < target × StakeValue. The smaller amount of calculation required to solve the puzzle for high-value stakeholders helps to avoid excessive hardware requirements.
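A minimal sketch of that clock-tick check (Python; the target constant, block bytes, and tick counts are illustrative assumptions, not any specific chain's parameters):

```python
import hashlib

TARGET = 2**256 // 10_000_000  # toy network-wide difficulty target

def solves_puzzle(proposed_block: bytes, clock_time: int, stake_value: int) -> bool:
    # Chain-based PoS condition: Hash(block || time) < target * stake.
    # A larger stake scales the threshold up, so high-stake participants
    # win the right to append a block after fewer clock ticks on average.
    digest = hashlib.sha256(proposed_block + clock_time.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < TARGET * stake_value

block = b"example-block-header"
for stake in (1, 1_000, 1_000_000):
    wins = sum(solves_puzzle(block, tick, stake) for tick in range(100_000))
    print(f"stake {stake:>9}: won {wins} of 100000 ticks")
```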
Variants:
Nominated PoS (NPoS) Also known as "committee-based", this scheme involves an election of a committee of validators using a verifiable random function with probabilities of being elected higher with higher stake. Validators then randomly take turns producing blocks. NPoS is utilized by Ouroboros Praos and BABE.
Variants:
BFT-based PoS The outline of a BFT PoS "epoch" (adding a block to the chain) is as follows: a "proposer" is randomly selected, and its "proposed block" is added to the temporary pool used to select just one consensual block; the other participants, the validators, obtain the pool, validate the blocks, and vote for one; BFT consensus is used to finalize the most-voted block. The scheme works as long as no more than a third of validators are dishonest. BFT schemes are used in Tendermint and Casper FFG.
Variants:
Delegated proof of stake (DPoS) Delegated proof-of-stake systems use a two-stage process: first, the stakeholders elect a validation committee, a.k.a. witnesses, by voting proportionally to their stakes; then the witnesses take turns in a round-robin fashion proposing new blocks that are then voted upon by the witnesses, usually in a BFT-like fashion. Since there are fewer validators in DPoS than in many other PoS schemes, consensus can be established faster. The scheme is used in many chains, including EOS, Lisk, and Tron.
Variants:
Liquid proof of stake (LPoS) In liquid PoS anyone with a stake can declare themselves a validator, but for small holders it makes sense to delegate their voting rights instead to larger players in exchange for some benefits (like periodic payouts). A market is established where the validators compete on fees, reputation, and other factors. Token holders are free to switch their support to another validator at any time. LPoS is used in Tezos.
Variants:
'Stake' definition The exact definition of "stake" varies from implementation to implementation. For instance, some cryptocurrencies use the concept of "coin age", the product of the number of tokens with the amount of time that a single user has held them, rather than merely the number of tokens, to define a validator's stake.
Implementations:
The first functioning implementation of a proof-of-stake cryptocurrency was Peercoin, introduced in 2012. Other cryptocurrencies, such as Blackcoin, Nxt, Cardano, and Algorand followed. However, as of 2017, PoS cryptocurrencies were still not as widely used as proof-of-work cryptocurrencies. In September 2022, Ethereum, the world's second-largest cryptocurrency at the time, switched from proof of work to a proof-of-stake consensus mechanism, after several proposals and some delays.
Concerns:
Security Critics have argued that the proof of stake model is less secure compared to the proof of work model.
Concerns:
Centralization Critics have argued that proof of stake will likely lead to cryptocurrency blockchains being more centralized in comparison to proof of work, as the system favors users who have a large amount of cryptocurrency, which in turn could give such users major influence over the management and direction of a crypto blockchain.
Energy consumption:
In 2021 a study by the University of London found that, in general, the energy consumption of the proof-of-work-based Bitcoin was about a thousand times higher than that of the highest-consuming proof-of-stake system studied, even under the most favorable assumptions, and that most proof-of-stake systems cause less energy consumption in most configurations. The researchers also noted that permissioned proof-of-stake systems that used fewer validators were more energy-efficient than permissionless systems. They also could not assess the energy consumption of a proof-of-stake system at large scale, as no such system existed at the time of the report.
Energy consumption:
In January 2022 Vice-Chair of the European Securities and Markets Authority Erik Thedéen called on the EU to ban the proof-of-work model in favor of the proof-of-stake model due to its lower energy consumption. On 15 September 2022, Ethereum transitioned its consensus mechanism from proof-of-work to proof-of-stake in an upgrade process known as "the Merge". This has cut Ethereum's energy usage by 99%.
Sources:
Deirmentzoglou, Evangelos; Papakyriakopoulos, Georgios; Patsakis, Constantinos (2019). "A Survey on Long-Range Attacks for Proof of Stake Protocols". IEEE Access. 7: 28712–28725. doi:10.1109/ACCESS.2019.2901858. eISSN 2169-3536. S2CID 84185792.
Xiao, Y.; Zhang, N.; Lou, W.; Hou, Y. T. (2020). "A Survey of Distributed Consensus Protocols for Blockchain Networks". IEEE Communications Surveys and Tutorials. 22 (2): 1432–1465. arXiv:1904.04098. doi:10.1109/COMST.2020.2969706. ISSN 1553-877X. S2CID 102352657.
Bashir, Imran (2022). "Blockchain Age Protocols". Blockchain Consensus. Apress. pp. 331–376. doi:10.1007/978-1-4842-8179-6_8. ISBN 978-1-4842-8178-9.
**Schaeffer's sign**
Schaeffer's sign:
Schaeffer's sign is a clinical sign in which squeezing the Achilles tendon elicits an extensor plantar reflex. It is found in patients with pyramidal tract lesions, and is one of a number of Babinski-like responses. The sign takes its name from the German neurologist Max Schaeffer (1852-1923).
**Sheeted dyke complex**
Sheeted dyke complex:
A sheeted dyke complex, or sheeted dike complex, is a series of sub-parallel intrusions of igneous rock, forming a layer within the oceanic crust. At mid-ocean ridges, dykes are formed when magma beneath areas of tectonic plate divergence travels through a fracture in the earlier formed oceanic crust, feeding the lavas above and cooling below the seafloor forming upright columns of igneous rock. Magma continues to cool, as the existing seafloor moves away from the area of divergence, and additional magma is intruded and cools. In some tectonic settings slices of the oceanic crust are obducted (emplaced) upon continental crust, forming an ophiolite.
Geometry:
The individual dykes typically range in thickness from a few centimetres to a few metres. Most of the dykes show evidence of one-sided chilled margins, consistent with most dykes having been split by later dykes. It is also common for the chilled margins to be consistently on one side, suggesting that most dykes in any one exposure were gradually moved away from the spreading centre by further stages of intrusion in a constant location.
Geometry:
The layer of sheeted dykes that makes up the lower part of Layer 2 of the oceanic crust is typically between one and two kilometres thick. At the top, the dykes become increasingly separated by screens of lava, while at the base they become separated by screens of gabbro.
Dyke formation:
Sheeted dyke complexes are most commonly found at divergent plate boundaries marked by the presence of mid-ocean ridges. These subaqueous mountain ranges are made up of newly created oceanic crust due to tectonic plates moving away from each other. In response to the separation of plates, magma from the asthenosphere is subject to upwelling, pushing hot magma up towards the seafloor. The magma that reaches the surface is subject to fast cooling and creates basaltic formations such as pillow lava, a common extrusive rock created near areas of volcanic activity on the seafloor. Although some magma is able to reach the surface of oceanic crust, a considerable amount of magma solidifies within the crust. Dykes are formed when the rising magma that does not reach the surface cools into upright columns of igneous rock beneath areas of divergence.
Dyke formation:
Ophiolites Dykes are perpetually formed as long as magma continues to flow through the plate boundary, creating distinct, stratigraphic-like sequences of rocky columns within the seafloor. Ophiolites are formed when these sections of oceanic crust are raised above sea level and emplaced within continental crust. Older dykes formed near divergence zones are pushed away as new seafloor is created, a phenomenon known as seafloor spreading, and over time the oldest dykes are pushed far enough from the divergence zones that some are eventually exposed above sea level.
Seafloor spreading and continental drift:
The creation of sheeted dykes is a perpetual and continuous process that promotes the phenomenon known as seafloor spreading. Seafloor spreading is the creation of new oceanic crust by volcanic activity at mid-ocean ridges, and as magma continues to rise and solidify at mid-ocean ridges, the existing older dykes are pushed out of the way to make room for newer seabed. The rate at which new oceanic crust is created is referred to as spreading rate, and variations in spreading rate determine the geometry of the mid-ocean ridge being created at plate boundaries.
Seafloor spreading and continental drift:
Fast-spreading ridges Mid-ocean ridges with a spreading rate greater than or equal to 90 mm/year are considered to be fast-spreading ridges. Due to the large amounts of magma being expelled from the asthenosphere in a relatively short period of time, these formations typically protrude much higher from the seafloor.
Slow-spreading ridges Mid-ocean ridges with a spreading rate less than or equal to 40 mm/year are considered to be slow-spreading ridges. These formations are typically characterized by a large depression in the seafloor, known as a rift valley, and form because less magma is present to solidify.
Examples:
Troodos Ophiolite, Cyprus. Maydan Syncline, Oman, part of the Semail Ophiolite - a sheeted dyke complex on the coast of Oman that has been discovered to have been formed during a single sea-floor spreading episode.
Hole 504b, Costa Rica - Hole 504b is a scientific ocean drilling project that penetrated 1562.3 m below the seafloor, directly through layers of sediment, exposing sheeted dykes and pillow lava.
**SPQR tree**
SPQR tree:
In graph theory, a branch of mathematics, the triconnected components of a biconnected graph are a system of smaller graphs that describe all of the 2-vertex cuts in the graph. An SPQR tree is a tree data structure used in computer science, and more specifically graph algorithms, to represent the triconnected components of a graph. The SPQR tree of a graph may be constructed in linear time and has several applications in dynamic graph algorithms and graph drawing.
SPQR tree:
The basic structures underlying the SPQR tree, the triconnected components of a graph, and the connection between this decomposition and the planar embeddings of a planar graph, were first investigated by Saunders Mac Lane (1937); these structures were used in efficient algorithms by several other researchers prior to their formalization as the SPQR tree by Di Battista and Tamassia (1989, 1990, 1996).
Structure:
An SPQR tree takes the form of an unrooted tree in which for each node x there is associated an undirected graph or multigraph Gx. The node, and the graph associated with it, may have one of four types, given the initials SPQR: In an S node, the associated graph is a cycle graph with three or more vertices and edges. This case is analogous to series composition in series–parallel graphs; the S stands for "series".
Structure:
In a P node, the associated graph is a dipole graph, a multigraph with two vertices and three or more edges, the planar dual to a cycle graph. This case is analogous to parallel composition in series–parallel graphs; the P stands for "parallel".
Structure:
In a Q node, the associated graph has a single real edge. This trivial case is necessary to handle the graph that has only one edge. In some works on SPQR trees, this type of node does not appear in the SPQR trees of graphs with more than one edge; in other works, all non-virtual edges are required to be represented by Q nodes with one real and one virtual edge, and the edges in the other node types must all be virtual.
Structure:
In an R node, the associated graph is a 3-connected graph that is not a cycle or dipole. The R stands for "rigid": in the application of SPQR trees in planar graph embedding, the associated graph of an R node has a unique planar embedding.Each edge xy between two nodes of the SPQR tree is associated with two directed virtual edges, one of which is an edge in Gx and the other of which is an edge in Gy. Each edge in a graph Gx may be a virtual edge for at most one SPQR tree edge.
Structure:
An SPQR tree T represents a 2-connected graph GT, formed as follows. Whenever SPQR tree edge xy associates the virtual edge ab of Gx with the virtual edge cd of Gy, form a single larger graph by merging a and c into a single supervertex, merging b and d into another single supervertex, and deleting the two virtual edges. That is, the larger graph is the 2-clique-sum of Gx and Gy. Performing this gluing step on each edge of the SPQR tree produces the graph GT; the order of performing the gluing steps does not affect the result. Each vertex in one of the graphs Gx may be associated in this way with a unique vertex in GT, the supervertex into which it was merged.
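A minimal sketch of the gluing step (Python, with graphs represented as sets of undirected edges; this ignores the multigraph bookkeeping that a real SPQR implementation needs, since parallel edges cannot be expressed as plain edge sets):

```python
def two_clique_sum(gx, gy, virtual_x, virtual_y):
    # Glue the component graphs Gx and Gy along one SPQR-tree edge:
    # identify the endpoints of the two associated virtual edges and
    # delete both virtual edges.
    a, b = virtual_x           # virtual edge ab in Gx
    c, d = virtual_y           # virtual edge cd in Gy
    rename = {c: a, d: b}      # merge a with c and b with d
    return (gx - {frozenset((a, b))}) | {
        frozenset(rename.get(u, u) for u in e)
        for e in gy - {frozenset((c, d))}
    }

# Two S-node triangles glued along the virtual edges {1,2} and {10,20}
gx = {frozenset(e) for e in [(1, 2), (2, 3), (3, 1)]}
gy = {frozenset(e) for e in [(10, 20), (20, 30), (30, 10)]}
glued = two_clique_sum(gx, gy, (1, 2), (10, 20))
print(sorted(tuple(sorted(e)) for e in glued))
# [(1, 3), (1, 30), (2, 3), (2, 30)] -- the 4-cycle 1-3-2-30-1
```

As the example shows, 2-clique-summing two cycles yields a larger cycle, which is one way to see why two adjacent S nodes could always be merged into a single larger S node.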
Structure:
Typically, it is not allowed within an SPQR tree for two S nodes to be adjacent, nor for two P nodes to be adjacent, because if such an adjacency occurred the two nodes could be merged into a single larger node. With this assumption, the SPQR tree is uniquely determined from its graph. When a graph G is represented by an SPQR tree with no adjacent P nodes and no adjacent S nodes, then the graphs Gx associated with the nodes of the SPQR tree are known as the triconnected components of G.
Construction:
The SPQR tree of a given 2-vertex-connected graph can be constructed in linear time. The problem of constructing the triconnected components of a graph was first solved in linear time by Hopcroft & Tarjan (1973). Based on this algorithm, Di Battista & Tamassia (1996) suggested that the full SPQR tree structure, and not just the list of components, should be constructible in linear time. After an implementation of a slower algorithm for SPQR trees was provided as part of the GDToolkit library, Gutwenger & Mutzel (2001) provided the first linear-time implementation. As part of this process of implementing this algorithm, they also corrected some errors in the earlier work of Hopcroft & Tarjan (1973).
Construction:
The algorithm of Gutwenger & Mutzel (2001) includes the following overall steps.
Construction:
Sort the edges of the graph by the pairs of numerical indices of their endpoints, using a variant of radix sort that makes two passes of bucket sort, one for each endpoint. After this sorting step, parallel edges between the same two vertices will be adjacent to each other in the sorted list and can be split off into a P-node of the eventual SPQR tree, leaving the remaining graph simple.
Construction:
Partition the graph into split components; these are graphs that can be formed by finding a pair of separating vertices, splitting the graph at these two vertices into two smaller graphs (with a linked pair of virtual edges having the separating vertices as endpoints), and repeating this splitting process until no more separating pairs exist. The partition found in this way is not uniquely defined, because the parts of the graph that should become S-nodes of the SPQR tree will be subdivided into multiple triangles.
Construction:
Label each split component with a P (a two-vertex split component with multiple edges), an S (a split component in the form of a triangle), or an R (any other split component). While there exist two split components that share a linked pair of virtual edges, and both components have type S or both have type P, merge them into a single larger component of the same type. To find the split components, Gutwenger & Mutzel (2001) use depth-first search to find a structure that they call a palm tree; this is a depth-first search tree with its edges oriented away from the root of the tree, for the edges belonging to the tree, and towards the root for all other edges. They then find a special preorder numbering of the nodes in the tree, and use certain patterns in this numbering to identify pairs of vertices that can separate the graph into smaller components. When a component is found in this way, a stack data structure is used to identify the edges that should be part of the new component.
Usage:
Finding 2-vertex cuts With the SPQR tree of a graph G (without Q nodes) it is straightforward to find every pair of vertices u and v in G such that removing u and v from G leaves a disconnected graph, and the connected components of the remaining graphs: The two vertices u and v may be the two endpoints of a virtual edge in the graph associated with an R node, in which case the two components are represented by the two subtrees of the SPQR tree formed by removing the corresponding SPQR tree edge.
Usage:
The two vertices u and v may be the two vertices in the graph associated with a P node that has two or more virtual edges. In this case the components formed by the removal of u and v are represented by subtrees of the SPQR tree, one for each virtual edge in the node.
Usage:
The two vertices u and v may be two vertices in the graph associated with an S node such that either u and v are not adjacent, or the edge uv is virtual. If the edge is virtual, then the pair (u,v) also belongs to a node of type P or R and the components are as described above. If the two vertices are not adjacent then the two components are represented by two paths of the cycle graph associated with the S node and with the SPQR tree nodes attached to those two paths.
Usage:
Representing all embeddings of planar graphs If a planar graph is 3-connected, it has a unique planar embedding up to the choice of which face is the outer face and of orientation of the embedding: the faces of the embedding are exactly the nonseparating cycles of the graph. However, for a planar graph (with labeled vertices and edges) that is 2-connected but not 3-connected, there may be greater freedom in finding a planar embedding. Specifically, whenever two nodes in the SPQR tree of the graph are connected by a pair of virtual edges, it is possible to flip the orientation of one of the nodes (replacing it by its mirror image) relative to the other one. Additionally, in a P node of the SPQR tree, the different parts of the graph connected to virtual edges of the P node may be arbitrarily permuted. All planar representations may be described in this way.
**Betamethasone**
Betamethasone:
Betamethasone is a steroid medication. It is used for a number of diseases including rheumatic disorders such as rheumatoid arthritis and systemic lupus erythematosus, skin diseases such as dermatitis and psoriasis, allergic conditions such as asthma and angioedema, preterm labor to speed the development of the baby's lungs, Crohn's disease, cancers such as leukemia, and along with fludrocortisone for adrenocortical insufficiency, among others. It can be taken by mouth, injected into a muscle, or applied to the skin, typically in cream, lotion, or liquid forms. Serious side effects include an increased risk of infection, muscle weakness, severe allergic reactions, and psychosis. Long-term use may cause adrenal insufficiency. Stopping the medication suddenly following long-term use may be dangerous. The cream commonly results in increased hair growth and skin irritation. Betamethasone belongs to the glucocorticoid class of medication. It is a stereoisomer of dexamethasone, the two compounds differing only in the spatial configuration of the methyl group at position 16 (see steroid nomenclature). Betamethasone was patented in 1958, and approved for medical use in the United States in 1961. The cream and ointment are on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2020, it was the 233rd most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Medical uses:
Betamethasone is a corticosteroid that is available as a pill, by injection, and as an ointment, cream, lotion, gel, or aerosol (spray) for the skin, and a foam for the scalp. When given by injection, anti-inflammatory effects begin in around two hours and last for seven days. It is used as a topical cream to relieve skin irritation, such as itching and flaking from eczema. It is used as a treatment for local psoriasis, as betamethasone dipropionate and salicylic acid, or as the combination calcipotriol/betamethasone dipropionate. Betamethasone sodium phosphate is used orally and via injection with the same indications as other steroids. Many betamethasone-based pharmaceuticals include the steroid as the valerate ester.
Medical uses:
In a randomized controlled trial betamethasone was shown to reduce some of the ataxia (poor coordination) symptoms associated with ataxia telangiectasia (A-T) by 28-31%. Betamethasone is also used to stimulate fetal lung maturation in order to prevent infant respiratory distress syndrome (IRDS) and to decrease the incidence and mortality from intracranial hemorrhage in premature infants.
A cream with 0.05% betamethasone appears effective in treating phimosis in boys, and often averts the need for circumcision. It has replaced circumcision as the preferred treatment method for some physicians in the British National Health Service.
Side effects:
Side effects include euphoria, depression, adrenal suppression, hypertension, groupings of fine blood vessels becoming prominent under the skin (petechiae), excessive hair growth (hypertrichosis), and ecchymoses. Prolonged use of this medicine on extensive areas of skin, broken or raw skin, skin folds, or underneath airtight dressings may on rare occasions result in enough corticosteroid being absorbed to have side effects on other parts of the body; for example, by causing a decrease in the production of natural hormones by the adrenal glands.
Side effects:
Betamethasone is also used prior to delivery of a preterm baby to help prepare the lungs for breathing. However, because betamethasone crosses the placenta, which is required for its beneficial effects, it may also be associated with complications, such as hypoglycemia and leukocytosis in newborns exposed in utero.
When injected into the epidural space or the spine, it may cause serious side effects like loss of vision, stroke, and paralysis.
Forms:
Betamethasone is available in a number of compound forms: betamethasone dipropionate (branded as Diprosone, Diprolene, Celestamine, Procort (in Pakistan), and others), betamethasone sodium phosphate (branded as Bentelan in Italy) and betamethasone valerate (branded as Audavate, Betnovate, Celestone, Fucibet, and others). In the United States and Canada, betamethasone is mixed with clotrimazole and sold as Lotrisone and Lotriderm. It is also available in combination with salicylic acid (branded as Diprosalic) for using in psoriatic skin conditions. In some countries, it is also sold mixed with both clotrimazole and gentamicin to add an antibacterial agent to the mix.
Forms:
Betamethasone sodium phosphate mixed with betamethasone acetate is available in the United States as Celestone Soluspan.
**Ladder yarn**
Ladder yarn:
Ladder yarn or train tracks yarn is a type of novelty yarn. It is constructed like ladders, with a horizontal stripe of material suspended between two thinner threads, alternating with gaps. Sometimes a contrasting strand is fed through the gaps to produce another look.
**76 (number)**
76 (number):
76 (seventy-six) is the natural number following 75 and preceding 77.
In mathematics:
76 is: a composite number; a square-prime, of the form p^2·q where q is a higher prime. It is the ninth number of this general form and the seventh of the form 2^2·q. a Lucas number.
a telephone or involution number, the number of different ways of connecting 6 points with pairwise connections.
a nontotient.
a 14-gonal number.
a centered pentagonal number.
an Erdős–Woods number since it is possible to find sequences of 76 consecutive integers such that each inner member shares a factor with either the first or the last member.
with an aliquot sum of 64; within an aliquot sequence (76, 64, 63, 41, 1, 0) to the prime 41 in the 41-aliquot tree.
an automorphic number in base 10. It is one of two 2-digit numbers whose square, 5,776, and higher powers, end in the same two digits. The other is 25. There are 76 unique compact uniform hyperbolic honeycombs in the third dimension that are generated from Wythoff constructions.
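Several of the properties above can be checked directly; a quick sanity-check sketch in Python:

```python
assert 76 == 2 * 2 * 19                        # square-prime of the form 2^2 * q

lucas = [2, 1]                                 # Lucas numbers: 2, 1, 3, 4, 7, ...
while lucas[-1] < 76:
    lucas.append(lucas[-1] + lucas[-2])
assert 76 in lucas

t = [1, 1]                                     # telephone numbers: T(n) = T(n-1) + (n-1)*T(n-2)
for n in range(2, 7):
    t.append(t[n - 1] + (n - 1) * t[n - 2])
assert t[6] == 76                              # pairwise connections of 6 points

assert any(6 * k * k - 5 * k == 76 for k in range(1, 10))          # 14-gonal number
assert any((5 * k * k + 5 * k + 2) // 2 == 76 for k in range(10))  # centered pentagonal
assert 76**2 == 5776 and str(76**3).endswith("76")                 # automorphic in base 10
print("all checks passed")
```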
In science:
The atomic number of osmium.
The Little Dumbbell Nebula in the constellation Pegasus is designated as Messier object 76 (M76).
In other fields:
Seventy-six is also: In colloquial American parlance, reference to 1776, the year of the signing of the United States Declaration of Independence.
Seventy-Six, an 1823 novel by American writer John Neal.
The Spirit of '76, patriotic painting by Archibald MacNeal Willard.
A brand of ConocoPhillips gas stations, 76.
The number of trombonists leading the parade in "Seventy-Six Trombones", from Meredith Willson's musical The Music Man.
The 76ers, a professional basketball team based in Philadelphia.
76, the debut album of Dutch trance producer and DJ Armin van Buuren.
Years like 1876 and 1976
**Value premium**
Value premium:
In investing, value premium refers to the greater risk-adjusted return of value stocks over growth stocks. Eugene Fama and Kenneth French first identified the premium in 1992, using a measure they called HML (high book-to-market ratio minus low book-to-market ratio) to measure equity returns based on valuation. Other experts, such as John C. Bogle, have argued that no value premium exists, claiming that Fama and French's research is period dependent.
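A rough sketch of how an HML-style spread can be formed (Python with pandas; the column names, 30/70 breakpoints, and toy numbers are assumptions made here — the original Fama–French construction uses NYSE breakpoints and a double sort on size):

```python
import pandas as pd

def hml_factor(df: pd.DataFrame) -> float:
    # One-period HML in spirit: mean return of the top 30% book-to-market
    # stocks (value) minus the mean return of the bottom 30% (growth).
    lo, hi = df["book_to_market"].quantile([0.3, 0.7])
    value = df.loc[df["book_to_market"] >= hi, "ret"].mean()
    growth = df.loc[df["book_to_market"] <= lo, "ret"].mean()
    return value - growth

stocks = pd.DataFrame({
    "book_to_market": [0.2, 0.4, 0.8, 1.1, 1.6, 2.0],
    "ret":            [0.12, 0.08, 0.06, 0.07, 0.09, 0.10],
})
print(f"HML = {hml_factor(stocks):+.4f}")  # positive values indicate a value premium
```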
**Dcu family**
Dcu family:
The C4-dicarboxylate uptake family or Dcu family (TC# 2.A.13) is a family of transmembrane ion transporters found in bacteria. Their function is to exchange dicarboxylates such as aspartate, malate, fumarate and succinate.
Structure:
Many members of this family are predicted to have 11 or 12 transmembrane regions (TMSs); however, one member of this family (Uncharacterized protein of Encarsia pergandiella symbiont, Cardinium hertigii, strain cEper1; TC# 2.A.13.2.1) is reported to have 10 transmembrane regions, with both the N- and C-termini localized to the periplasm. For DcuA, the 'positive inside' rule is obeyed, and two putative TMSs are localized to a cytoplasmic loop between TMSs 5 and 6 and in the C-terminal periplasmic region. The fully sequenced proteins are of fairly uniform size, from 434-446 amino acyl residues in length.
Structure:
There are no crystal structures available for members of the Dcu family.
Function:
The two E. coli proteins, DcuA (TC# 2.A.13.1.1) and DcuB (TC# 2.A.13.1.2), of the Dcu family are involved in the transport of aspartate, malate, fumarate and succinate, functioning as antiporters with any two of these substrates. They exhibit 36% identity with 63% similarity, and both transport fumarate in exchange for succinate with the same affinity (30 μM). Since DcuA is encoded in an operon with the gene for aspartase, and DcuB is encoded in an operon with the gene for fumarase, their physiological functions may be to catalyse aspartate:fumarate and fumarate:malate exchange during the anaerobic utilization of aspartate and fumarate, respectively. The two transporters can apparently substitute for each other under certain physiological conditions.
Function:
The generalized transport reaction catalyzed by the proteins of the Dcu family is: Dicarboxylate1 (out) + Dicarboxylate2 (in) ⇌ Dicarboxylate1 (in) + Dicarboxylate2 (out).
Expression:
The Escherichia coli DcuA and DcuB proteins have very different expression patterns. DcuA is constitutively expressed; DcuB is strongly induced anaerobically by FNR and C4-dicarboxylates, while it is repressed by nitrate and subject to CRP-mediated catabolite repression.
**Manzanate**
Manzanate:
Manzanate is a flavor ingredient that has a fruity apple smell with aspects of cider and sweet pineapple.
**Hierarchical task network**
Hierarchical task network:
In artificial intelligence, hierarchical task network (HTN) planning is an approach to automated planning in which the dependency among actions can be given in the form of hierarchically structured networks.
Hierarchical task network:
Planning problems are specified in the hierarchical task network approach by providing a set of tasks, which can be: primitive tasks, which roughly correspond to the actions of STRIPS; compound tasks, which can be seen as composed of a set of simpler tasks; goal tasks, which roughly correspond to the goals of STRIPS, but are more general. A solution to an HTN problem is then an executable sequence of primitive tasks that can be obtained from the initial task network by decomposing compound tasks into their set of simpler tasks, and by inserting ordering constraints.
Hierarchical task network:
A primitive task is an action that can be executed directly, provided that the state in which it is executed supports its precondition. A compound task is a complex task composed of a partially ordered set of further tasks, which can either be primitive or abstract. A goal task is a task of satisfying a condition. The difference between primitive and other tasks is that primitive actions can be executed directly. Compound and goal tasks both require a sequence of primitive actions to be performed; however, goal tasks are specified in terms of conditions that have to be made true, while compound tasks can only be specified in terms of other tasks via the task network outlined below.
Hierarchical task network:
Constraints among tasks are expressed in the form of networks, called (hierarchical) task networks. A task network is a set of tasks and constraints among them. Such a network can be used as the precondition for another compound or goal task to be feasible. This way, one can express that a given task is feasible only if a set of other actions (those mentioned in the network) are done, and they are done in such a way that the constraints among them (specified by the network) are satisfied. One particular formalism for representing hierarchical task networks that has been fairly widely used is TAEMS.
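The decomposition idea can be made concrete with a small sketch. The following minimal Python program expands compound tasks via methods until only executable primitive tasks remain, backtracking when a precondition fails. It is loosely in the style of SHOP-like total-order decomposition; the travel domain, task names, and method structure are illustrative assumptions, not taken from any of the systems listed below.

```python
# Minimal total-order HTN decomposition: a sketch, not a full planner.
# The travel domain and all task/method names below are invented for illustration.

# Primitive tasks: name -> (precondition test, state update)
primitive = {
    "walk": (lambda s: s["distance"] <= 2,
             lambda s: {**s, "at_goal": True}),
    "call_taxi": (lambda s: True,
                  lambda s: {**s, "taxi_here": True}),
    "ride_taxi": (lambda s: s["taxi_here"] and s["cash"] >= s["distance"],
                  lambda s: {**s, "at_goal": True, "cash": s["cash"] - s["distance"]}),
}

# Compound tasks: name -> list of methods, each an ordered list of subtasks
methods = {
    "travel": [["walk"],                     # method 1: walk if close enough
               ["call_taxi", "ride_taxi"]],  # method 2: take a taxi
}

def plan(state, tasks):
    """Return a primitive-action plan achieving the ordered task list, or None."""
    if not tasks:
        return []
    head, rest = tasks[0], tasks[1:]
    if head in primitive:
        pre, apply_ = primitive[head]
        if not pre(state):
            return None                      # precondition unsupported: backtrack
        tail = plan(apply_(state), rest)
        return None if tail is None else [head] + tail
    for subtasks in methods[head]:           # try each decomposition method
        result = plan(state, subtasks + rest)
        if result is not None:
            return result
    return None

print(plan({"distance": 5, "cash": 20, "taxi_here": False, "at_goal": False},
           ["travel"]))                      # -> ['call_taxi', 'ride_taxi']
```

Here walking fails its precondition (the distance is too great), so the planner backtracks and decomposes "travel" via its second method.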
Hierarchical task network:
Some of the best-known domain-independent HTN-planning systems are:
- NOAH, Nets of Action Hierarchies.
- Nonlin, one of the first HTN planning systems.
- SIPE-2
- O-Plan, Open Planning Architecture.
- UMCP, the first provably sound and complete HTN planning system.
- I-X/I-Plan
- SHOP2, an HTN planner developed at the University of Maryland, College Park.
- PANDA, a system designed for hybrid planning, an extension of HTN planning developed at Ulm University, Germany.
- HTNPlan-P, preference-based HTN planning.
HTN planning is strictly more expressive than STRIPS, to the point of being undecidable in the general case. However, many syntactic restrictions of HTN planning are decidable, with known complexities ranging from NP-complete to 2-EXPSPACE-complete, and some HTN problems can be efficiently compiled into PDDL, a STRIPS-like language.
**Trezona Formation**
Trezona Formation:
The Trezona Formation is a Neoproterozoic-era fossiliferous geological formation in South Australia.
**Q-exponential**
Q-exponential:
In combinatorial mathematics, a q-exponential is a q-analog of the exponential function, namely the eigenfunction of a q-derivative. There are many q-derivatives, for example, the classical q-derivative, the Askey–Wilson operator, etc. Therefore, unlike the classical exponential, q-exponentials are not unique. For example, $e_q(z)$ is the q-exponential corresponding to the classical q-derivative, while $E_q(z)$ is an eigenfunction of the Askey–Wilson operators.
Definition:
The q-exponential $e_q(z)$ is defined as
$$e_q(z) = \sum_{n=0}^{\infty} \frac{z^n}{[n]_q!} = \sum_{n=0}^{\infty} \frac{z^n (1-q)^n}{(q;q)_n} = \sum_{n=0}^{\infty} \frac{z^n (1-q)^n}{(1-q^n)(1-q^{n-1})\cdots(1-q)},$$
where $[n]_q!$ is the q-factorial and $(q;q)_n = (1-q^n)(1-q^{n-1})\cdots(1-q)$ is the q-Pochhammer symbol. That this is the q-analog of the exponential follows from the property
$$\left(\frac{d}{dz}\right)_q e_q(z) = e_q(z),$$
where the derivative on the left is the q-derivative. The above is easily verified by considering the q-derivative of the monomial
$$\left(\frac{d}{dz}\right)_q z^n = z^{n-1}\,\frac{1-q^n}{1-q} = [n]_q\, z^{n-1}.$$
Here, [n]q is the q-bracket.
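As a quick numerical illustration (a minimal Python sketch, with helper names of our own choosing), one can truncate the series and check the eigenfunction property against the classical q-derivative $(D_q f)(z) = \frac{f(z) - f(qz)}{(1-q)z}$ for parameters inside the region of convergence:

```python
from math import prod

def q_bracket(n, q):
    # [n]_q = (1 - q^n) / (1 - q)
    return (1 - q**n) / (1 - q)

def q_factorial(n, q):
    # [n]_q! = [1]_q [2]_q ... [n]_q  (math.prod of an empty range is 1)
    return prod(q_bracket(k, q) for k in range(1, n + 1))

def e_q(z, q, terms=60):
    # Partial sum of e_q(z) = sum_n z^n / [n]_q!
    return sum(z**n / q_factorial(n, q) for n in range(terms))

def q_derivative(f, z, q):
    # (D_q f)(z) = (f(z) - f(qz)) / ((1 - q) z)
    return (f(z) - f(q * z)) / ((1 - q) * z)

q, z = 0.5, 0.3                # |z| < 1/(1-q) = 2, inside the disk of regularity
lhs = q_derivative(lambda t: e_q(t, q), z, q)
rhs = e_q(z, q)
print(lhs, rhs)                # the two values agree to high precision
```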
For other definitions of the q-exponential function, see Exton (1983), Ismail & Zhang (1994), Suslov (2003) and Cieśliński (2011).
Properties:
For real $q > 1$, the function $e_q(z)$ is an entire function of $z$. For $q < 1$, $e_q(z)$ is regular in the disk $|z| < 1/(1-q)$. Note the inverse, $e_q(z)\, e_{1/q}(-z) = 1$.

Addition formula: The analogue of $\exp(x)\exp(y) = \exp(x+y)$ does not hold for real numbers $x$ and $y$. However, if these are operators satisfying the commutation relation $xy = qyx$, then $e_q(x)\, e_q(y) = e_q(x+y)$ holds true.
Relations:
For $-1 < q < 1$, a function that is closely related is $E_q(z)$. It is a special case of the basic hypergeometric series:
$$E_q(z) = \;_1\phi_1\!\left(\begin{matrix} 0 \\ 0 \end{matrix};\, q,\, z\right) = \sum_{n=0}^{\infty} \frac{q^{\binom{n}{2}} (-z)^n}{(q;q)_n} = \prod_{n=0}^{\infty} \left(1 - q^n z\right) = (z;q)_\infty.$$
Clearly,
$$\lim_{q \to 1} \sum_{n=0}^{\infty} \frac{q^{\binom{n}{2}} (1-q)^n}{(q;q)_n} (-z)^n = e^{-z}.$$
Relation with dilogarithm: $e_q(x)$ has the following infinite product representation:
$$e_q(x) = \left( \prod_{k=0}^{\infty} \bigl(1 - q^k (1-q) x\bigr) \right)^{-1}.$$
On the other hand, $\log(1-x) = -\sum_{n=1}^{\infty} \frac{x^n}{n}$ holds. When $|q| < 1$,
$$\log e_q(x) = -\sum_{k=0}^{\infty} \log\bigl(1 - q^k (1-q) x\bigr) = \sum_{k=0}^{\infty} \sum_{n=1}^{\infty} \frac{\bigl(q^k (1-q) x\bigr)^n}{n} = \sum_{n=1}^{\infty} \frac{\bigl((1-q) x\bigr)^n}{(1 - q^n)\, n} = \frac{1}{1-q} \sum_{n=1}^{\infty} \frac{\bigl((1-q) x\bigr)^n}{[n]_q\, n}.$$
By taking the limit $q \to 1$,
$$\lim_{q \to 1} (1-q) \log e_q\bigl(x/(1-q)\bigr) = \mathrm{Li}_2(x),$$
where $\mathrm{Li}_2(x)$ is the dilogarithm.
In physics:
The Q-exponential function is also known as the quantum dilogarithm.
**Usage share of web browsers**
Usage share of web browsers:
The usage share of web browsers is the portion, often expressed as a percentage, of visitors to a group of web sites that use a particular web browser.
Accuracy:
Measuring browser usage in the number of requests (page hits) made by each user agent can be misleading.
Accuracy:
Overestimation Not all requests are generated by a user, as a user agent can make requests at regular time intervals without user input. In this case, the user's activity might be overestimated. Some examples: Certain anti-virus products fake their user agent string to appear to be popular browsers. This is done to trick attack sites that might display clean content to the scanner, but not to the browser. The Register reported in June 2008 that traffic from AVG Linkscanner, using an IE6 user agent string, outstripped human link clicks by nearly 10 to 1.
Accuracy:
A user who revisits a site shortly after changing or upgrading browsers may be double-counted under some methods; overall numbers at the time of a new version's release may be skewed.
Accuracy:
Occasionally websites are written in such a way that they effectively block certain browsers. One common reason for this is that the website has been tested to work with only a limited number of browsers, and so the site owners enforce that only tested browsers are allowed to view the content, while all other browsers are sent a "failure" message, and instruction to use another browser. Many of the untested browsers may still be otherwise capable of rendering the content. Sophisticated users who are aware of this may then "spoof" the user agent string in order to gain access to the site.
Accuracy:
Firefox, Chrome, Safari, and Opera will, under some circumstances, fetch resources before they need to render them, so that the resources can be used faster if they are needed. This technique, prerendering or pre-loading, may inflate the statistics for the browsers using it because of pre-loading of resources which are not used in the end.
Accuracy:
Underestimation It is also possible to underestimate the usage share by using the number of requests, for example: Firefox 1.5 (and other Gecko-based browsers) and later versions use fast Document Object Model (DOM) caching. JavaScript is executed on page load only from net or disk cache, but not if it is loaded from DOM cache. This can affect JavaScript-based tracking of browser statistics.
Accuracy:
While most browsers generate additional page hits by refreshing web pages when the user navigates back through page history, some browsers (such as Opera) reuse cached content without resending requests to the server.
Generally, the more faithfully a browser implements HTTP's cache specifications, the more it will be under-reported relative to browsers that implement those specifications poorly.
Browser users may run site, cookie and JavaScript blockers which cause those users to be under-counted. For example, common AdBlock blocklists such as EasyBlock include sites such as StatCounter in their privacy lists, and NoScript blocks all JavaScript by default. The Firefox Add-ons website reports 15.0 million users of AdBlock variants and 2.2 million users of NoScript.
Users behind a caching proxy (e.g. Squid) may have repeat requests for certain pages served to the browser from the cache, rather than retrieving it again via the Internet.
Accuracy:
User agent spoofing Websites often include code to detect browser version to adjust the page design sent according to the user agent string received. This may mean that less popular browsers are not sent complex content (even though they might be able to deal with it correctly) or, in extreme cases, refused all content. Thus, various browsers have a feature to cloak or spoof their identification to force certain server-side content.
Accuracy:
Default user agent strings of most browsers have pieces of strings from one or more other browsers, so that if the browser is unknown to a website, it can be identified as one of those. For example, Safari has not only "Mozilla/5.0", but also "KHTML" (from which Safari's WebKit was forked) and "Gecko" (the engine of Firefox).
Some Linux browsers such as GNOME Web identify themselves as Safari in order to aid compatibility.
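To illustrate why naive user-agent sniffing misfires, here is a minimal Python sketch; the strings are abbreviated stand-ins for real user agent strings, and the detection rules are deliberately simplistic.

```python
# Abbreviated, illustrative user-agent strings; real strings are longer.
safari = ("Mozilla/5.0 (Macintosh) AppleWebKit/605.1.15 "
          "(KHTML, like Gecko) Version/16.0 Safari/605.1.15")
chrome = ("Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 "
          "(KHTML, like Gecko) Chrome/120.0 Safari/537.36")

def naive_sniff(ua):
    # Substring test alone misclassifies: Chrome's UA also contains "Safari".
    return "Safari" if "Safari" in ua else "other"

def better_sniff(ua):
    # Check the more specific token first, as real UA parsers do.
    if "Chrome" in ua:
        return "Chrome"
    if "Safari" in ua:
        return "Safari"
    return "other"

print(naive_sniff(chrome))    # prints "Safari" (wrong)
print(better_sniff(chrome))   # prints "Chrome"
print(better_sniff(safari))   # prints "Safari"
```

The same ambiguity affects the "Mozilla", "KHTML", and "Gecko" tokens mentioned above, which is one reason measured browser shares differ between statistics providers.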
Differences in measurement:
Net Applications, in their NetMarketShare report, uses unique visitors to measure web usage. The effect is that users visiting a site ten times will only be counted once by these sources, while they are counted ten times by statistics companies that measure page hits.
Differences in measurement:
Net Applications uses country-level weighting as well. The goal of weighting countries based on their usage is to mitigate selection-area sampling bias. This bias is caused by differences between the percentage of tracked hits in the sample and the percentage of global usage tracked by third-party sources, which arises where some markets are more heavily represented in the sample than in global usage. Statistics from the United States government's Digital Analytics Program (DAP) do not represent world-wide usage patterns. DAP uses raw data from a unified Google Analytics account.
Summary tables:
The following tables summarize the usage share of all browsers for the indicated months.
Summary tables:
Crossover to smartphones having majority share According to StatCounter web use statistics (a proxy for all use), in the week from 7–13 November 2016, "mobile" (meaning smartphones) alone (without tablets) overtook desktop for the first time and by the end of the year smartphones were in the majority. Since 27 October, the desktop has not shown a majority, even on weekdays.
Summary tables:
Previously, according to a StatCounter press release, the world had become desktop-minority; as of October 2016, desktop accounted for about 49% of usage for that month. The two biggest continents, Asia and Africa, had been mobile-majority for a while, and Australia is by now desktop-minority too. A few countries in Europe and South America have also followed this trend of being mobile-majority.
Summary tables:
In March 2015, for the first time in the US the number of mobile-only adult internet users exceeded the number of desktop-only internet users with 11.6% of the digital population only using mobile compared to 10.6% only using desktop; this also means the majority, 78%, use both desktop and mobile to access the internet.
Older reports (2000–2019):
StatCounter (Jan 2009 to October 2019) StatCounter statistics are directly derived from hits (not unique visitors) from 3 million sites using StatCounter totaling more than 15 billion hits per month. No weightings are used.
W3Counter (May 2007 to December 2022) This site counts the last 15,000 page views from each of approximately 80,000 websites.
This limits the influence of sites with more than 15,000 monthly visitors on the usage statistics.
W3Counter is not affiliated with the World Wide Web Consortium (W3C).
Net Applications (May 2016 to November 2019) Net Applications bases its usage share on statistics from 40,000 websites having around 160 million unique visitors per month. The mean site has 1300 unique visitors per day.
Older reports (2000–2019):
Wikimedia (April 2009 to March 2015) Wikimedia traffic analysis reports are based on server logs of about 4 billion page requests per month, based on the user agent information that accompanied the requests. These server logs cover requests to all the Wikimedia Foundation projects, including Wikipedia, Wikimedia Commons, Wiktionary, Wikibooks, Wikiquote, Wikisource, Wikinews, Wikiversity and others. Note: Wikimedia had recently seen a large percentage of unrecognised browsers, previously counted as Firefox, that are now assumed to be Internet Explorer 11; this was fixed in the February 2014 and later numbers. The February 2014 numbers include mobile for Internet Explorer and Firefox (not included in Android). Chrome did not include the mobile numbers at that time, while Android did, since there was an "Android browser" that was the default browser at that time.
Older reports (2000–2019):
Clicky (September 2009 to August 2013) StatOwl.com (September 2008 to November 2012) 92% of sites monitored by StatOwl serve predominantly the United States market.
AT Internet Institute (Europe, July 2007 to June 2010) AT Internet Institute was formerly known as XiTi.
Method: Only counts visits to local sites in 23 European countries and then averages the percentages for those 23 European countries independent of population size.
Older reports (2000–2019):
TheCounter.com (2000 to 2009) TheCounter.com is a defunct web counter service that identified sixteen versions of six browsers (Internet Explorer, Firefox, Safari, Opera, Netscape, and Konqueror). Other browsers were categorised as either "Netscape compatible" (including Google Chrome, which may also be categorized as "Safari" because of its "WebKit" subtag) or "unknown". Internet Explorer 8 was identified as Internet Explorer 7. Monthly data includes all hits from 2008-02-01 until the end of the month concerned. More than the exact browser type, this data identifies the underlying rendering engine used by various browsers, and the table below aggregates them in the same column.
Older reports (2000–2019):
OneStat.com (April 2002 to March 2009) ADTECH (Europe, 2004 to 2009) WebSideStory (US, February 1999 to June 2006)
Older reports (pre-2000):
GVU WWW user survey (January 1994 to October 1998) EWS Web Server at UIUC (1996 Q2 to 1998) ZD Market Intelligence (US, January 1997 to January 1998) Zona Research (US, Jan 1997 to Jan 1998) AdKnowledge (January 1998 to June 1998) Dataquest (1995 to 1997) International Data Corporation (US, 1996 to 1997)
**Aromaticity**
Aromaticity:
In chemistry, aromaticity means that a molecule has cyclic (ring-shaped) structures with pi bonds in resonance (those containing delocalized electrons). Aromatic rings have increased stability compared to saturated compounds having single bonds and to other geometric or connective non-cyclic arrangements with the same set of atoms. Aromatic rings are very stable and do not break apart easily. Organic compounds that are not aromatic are classified as aliphatic compounds; they might be cyclic, but only aromatic rings have enhanced stability. The term aromaticity with this meaning is historically related to the concept of having an aroma, but is a distinct property from that meaning.
Aromaticity:
Since the most common aromatic compounds are derivatives of benzene (an aromatic hydrocarbon common in petroleum and its distillates), the word aromatic occasionally refers informally to benzene derivatives, and so it was first defined. Nevertheless, many non-benzene aromatic compounds exist. In living organisms, for example, the most common aromatic rings are the double-ringed bases (Purine) in RNA and DNA. An aromatic functional group or other substituent is called an aryl group.
Aromaticity:
In terms of the electronic nature of the molecule, aromaticity describes a conjugated system often represented in Lewis diagrams as alternating single and double bonds in a ring. In reality, the electrons represented by the double bonds in the Lewis diagram are actually distributed evenly around the ring ("delocalized"), increasing the molecule's stability. Due to the restrictions imposed by the way Lewis diagrams are drawn, the molecule cannot be represented by one diagram, but rather a hybrid of multiple different diagrams (called resonance), such as with the two resonance structures of benzene. These molecules cannot be found in either one of these representations, with the longer single bonds in one location and the shorter double bond in another (see § Theory below). Rather, the molecule exhibits all equal bond lengths in between those of single and double bonds. This commonly seen model of aromatic rings, namely the idea that benzene was formed from a six-membered carbon ring with alternating single and double bonds (cyclohexatriene), was developed by August Kekulé (see § History below). The model for benzene consists of two resonance forms, which corresponds to the double and single bonds superimposing to produce six one-and-a-half bonds. Benzene is a more stable molecule than would be expected without accounting for charge delocalization.
Theory:
As it is a standard for resonance diagrams, the use of a double-headed arrow indicates that two structures are not distinct entities but merely hypothetical possibilities. Neither is an accurate representation of the actual compound, which is best represented by a hybrid (average) of these structures. A C=C bond is shorter than a C−C bond. Benzene is a regular hexagon—it is planar and all six carbon–carbon bonds have the same length, which is intermediate between that of a single and that of a double bond.
Theory:
In a cyclic molecule with three alternating double bonds, cyclohexatriene, the bond length of the single bond would be 1.54 Å and that of the double bond would be 1.34 Å. However, in a molecule of benzene, the length of each of the bonds is 1.40 Å, intermediate between single and double bonds. A better representation is that of the circular π-bond (Armstrong's inner cycle), in which the electron density is evenly distributed through a π-bond above and below the ring. This model more correctly represents the location of electron density within the aromatic ring.
Theory:
The single bonds are formed from overlap of hybridized atomic sp2-orbitals in line between the carbon nuclei—these are called σ-bonds. Double bonds consist of a σ-bond and a π-bond. The π-bonds are formed from overlap of atomic p-orbitals above and below the plane of the ring. The following diagram shows the positions of these p-orbitals: Since they are out of the plane of the atoms, these orbitals can interact with each other freely, and become delocalized. This means that, instead of being tied to one atom of carbon, each electron is shared by all six in the ring. Thus, there are not enough electrons to form double bonds on all the carbon atoms, but the "extra" electrons strengthen all of the bonds on the ring equally. The resulting molecular orbital is considered to have π symmetry.
History:
The term "aromatic" The first known use of the word "aromatic" as a chemical term—namely, to apply to compounds that contain the phenyl group—occurred in an article by August Wilhelm Hofmann in 1855. Hofmann used the term for a class of benzene compounds, many of which have odors (aromas), unlike pure saturated hydrocarbons. Aromaticity as a chemical property bears no general relationship with the olfactory properties of such compounds (how they smell), although in 1855, before the structure of benzene or organic compounds was understood, chemists like Hofmann were beginning to understand that odiferous molecules from plants, such as terpenes, had chemical properties that we recognize today are similar to unsaturated petroleum hydrocarbons like benzene. If this was indeed the earliest introduction of the term, it is curious that Hofmann says nothing about why he introduced an adjective indicating olfactory character to apply to a group of chemical substances, of which only some have notable aromas. Also, some of the most odoriferous organic substances known are terpenes, which are not aromatic in the chemical sense. Terpenes and benzenoid substances do have a chemical characteristic in common, that is, higher unsaturation than many aliphatic compounds, and Hofmann may not have made a distinction between the two categories. Many of the earliest-known examples of aromatic compounds, such as benzene and toluene, have distinctive pleasant smells. This property led to the term "aromatic" for this class of compounds, and hence the term "aromaticity" for the eventually discovered electronic property.
History:
The structure of the benzene ring In the 19th century, chemists found it puzzling that benzene could be so unreactive toward addition reactions, given its presumed high degree of unsaturation. The cyclohexatriene structure for benzene was first proposed by August Kekulé in 1865. Most chemists were quick to accept this structure, since it accounted for most of the known isomeric relationships of aromatic chemistry. The hexagonal structure explains why only one isomer of benzene exists and why disubstituted compounds have three isomers. Between 1897 and 1906, J. J. Thomson, the discoverer of the electron, proposed three equivalent electrons between each pair of carbon atoms in benzene. An explanation for the exceptional stability of benzene is conventionally attributed to Sir Robert Robinson, who was apparently the first (in 1925) to coin the term aromatic sextet as a group of six electrons that resists disruption.
History:
In fact, this concept can be traced further back, via Ernest Crocker in 1922, to Henry Edward Armstrong, who in 1890 wrote "the [six] centric affinities act within a cycle … benzene may be represented by a double ring … and when an additive compound is formed, the inner cycle of affinity suffers disruption, the contiguous carbon-atoms to which nothing has been attached of necessity acquire the ethylenic condition".Here, Armstrong is describing at least four modern concepts. First, his "affinity" is better known nowadays as the electron, which was to be discovered only seven years later by J. J. Thomson. Second, he is describing electrophilic aromatic substitution, proceeding (third) through a Wheland intermediate, in which (fourth) the conjugation of the ring is broken. He introduced the symbol C centered on the ring as a shorthand for the inner cycle, thus anticipating Erich Clar's notation. It is argued that he also anticipated the nature of wave mechanics, since he recognized that his affinities had direction, not merely being point particles, and collectively having a distribution that could be altered by introducing substituents onto the benzene ring (much as the distribution of the electric charge in a body is altered by bringing it near to another body).
History:
The quantum mechanical origins of this stability, or aromaticity, were first modelled by Hückel in 1931. He was the first to separate the bonding electrons into sigma and pi electrons.
Aromaticity of an arbitrary aromatic compound can be measured quantitatively by the nucleus-independent chemical shift (NICS) computational method and aromaticity percentage methods.
Characteristics of aromatic systems:
An aromatic (or aryl) ring contains a set of covalently bound atoms with specific characteristics:
- A delocalized conjugated π system, most commonly an arrangement of alternating single and double bonds
- Coplanar structure, with all the contributing atoms in the same plane
- Contributing atoms arranged in one or more rings
- A number of π delocalized electrons that is even, but not a multiple of 4; that is, 4n + 2 π-electrons, where n = 0, 1, 2, 3, and so on. This is known as Hückel's rule.
According to Hückel's rule, if a molecule has 4n + 2 π-electrons, it is aromatic, but if it has 4n π-electrons and has characteristics 1–3 above, the molecule is said to be antiaromatic. Whereas benzene is aromatic (6 electrons, from 3 double bonds), cyclobutadiene is antiaromatic, since the number of π delocalized electrons is 4, which of course is a multiple of 4. The cyclobutadienide(2−) ion, however, is aromatic (6 electrons). An atom in an aromatic system can have other electrons that are not part of the system, and these are therefore ignored for the 4n + 2 rule. In furan, the oxygen atom is sp2-hybridized. One lone pair is in the π system and the other in the plane of the ring (analogous to the C–H bond in the other positions). There are 6 π-electrons, so furan is aromatic.
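The electron-count part of the rule is easy to state programmatically. The following minimal Python sketch (the function name and example counts are our own) checks only the 4n + 2 criterion; planarity, cyclic arrangement, and conjugation must be verified separately:

```python
def satisfies_huckel_count(pi_electrons):
    """Hückel count test: pi_electrons == 4n + 2 for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

examples = [("benzene", 6), ("cyclobutadiene", 4),
            ("cyclopentadienyl anion", 6), ("cyclooctatetraene", 8)]
for name, n_pi in examples:
    print(f"{name}: {n_pi} pi-electrons ->", satisfies_huckel_count(n_pi))
```

Benzene and the cyclopentadienyl anion pass (6 = 4·1 + 2), while cyclobutadiene and planar cyclooctatetraene fail, consistent with the antiaromatic cases discussed below.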
Characteristics of aromatic systems:
Aromatic molecules typically display enhanced chemical stability, compared with similar non-aromatic molecules. A molecule that can be aromatic will tend to change toward aromaticity, and the added stability changes the chemistry of the molecule. Aromatic compounds undergo electrophilic aromatic substitution and nucleophilic aromatic substitution reactions, but not electrophilic addition reactions as happens with carbon–carbon double bonds.
Characteristics of aromatic systems:
In the presence of a magnetic field, the circulating π-electrons in an aromatic molecule produce an aromatic ring current that induces an additional magnetic field, an important effect in nuclear magnetic resonance. The NMR signal of protons in the plane of an aromatic ring are shifted substantially further down-field than those on non-aromatic sp2 carbons. This is an important way of detecting aromaticity. By the same mechanism, the signals of protons located near the ring axis are shifted upfield. Aromatic molecules are able to interact with each other in so-called π–π stacking: The π systems form two parallel rings overlap in a "face-to-face" orientation. Aromatic molecules are also able to interact with each other in an "edge-to-face" orientation: The slight positive charge of the substituents on the ring atoms of one molecule are attracted to the slight negative charge of the aromatic system on another molecule.
Characteristics of aromatic systems:
Planar monocyclic molecules containing 4n π-electrons are called antiaromatic and are, in general, unstable. Molecules that could be antiaromatic will tend to change from this electronic or conformation, thereby becoming non-aromatic. For example, cyclooctatetraene (COT) distorts out of planarity, breaking π overlap between adjacent double bonds. Recent studies have determined that cyclobutadiene adopts an asymmetric, rectangular configuration in which single and double bonds indeed alternate, with no resonance; the single bonds are markedly longer than the double bonds, reducing unfavorable p-orbital overlap. This reduction of symmetry lifts the degeneracy of the two formerly non-bonding molecular orbitals, which by Hund's rule forces the two unpaired electrons into a new, weakly bonding orbital (and also creates a weakly antibonding orbital). Hence, cyclobutadiene is non-aromatic; the strain of the asymmetric configuration outweighs the anti-aromatic destabilization that would afflict the symmetric, square configuration.
Characteristics of aromatic systems:
Hückel's rule of aromaticity treats molecules in their singlet ground states (S0). The stability trends of the compounds described here are found to be reversed in the lowest lying triplet and singlet excited states (T1 and S1), according to Baird's rule. This means that compounds like benzene, with 4n + 2 π-electrons and aromatic properties in the ground state, become antiaromatic and often adopt less symmetric structures in the excited state.
Aromatic compounds:
Importance Aromatic compounds play key roles in the biochemistry of all living things. The four aromatic amino acids histidine, phenylalanine, tryptophan, and tyrosine each serve as one of the 20 basic building-blocks of proteins. Further, all 5 nucleotides (adenine, thymine, cytosine, guanine, and uracil) that make up the sequence of the genetic code in DNA and RNA are aromatic purines or pyrimidines. The molecule heme contains an aromatic system with 22 π-electrons. Chlorophyll also has a similar aromatic system.
Aromatic compounds:
Aromatic compounds are important in industry. Key aromatic hydrocarbons of commercial interest are benzene, toluene, ortho-xylene and para-xylene. About 35 million tonnes are produced worldwide every year. They are extracted from complex mixtures obtained by the refining of oil or by distillation of coal tar, and are used to produce a range of important chemicals and polymers, including styrene, phenol, aniline, polyester and nylon.
Aromatic compounds:
Neutral homocyclics Benzene, as well as most other annulenes (with the exception of cyclodecapentaene, because it is non-planar) with the formula C4n+2H4n+2 where n is a natural number, such as cyclotetradecaheptaene (n=3).
Aromatic compounds:
Heterocyclics In heterocyclic aromatics (heteroaromatics), one or more of the atoms in the aromatic ring is of an element other than carbon. This can lessen the ring's aromaticity, and thus (as in the case of furan) increase its reactivity. Other examples include pyridine, pyrazine, pyrrole, imidazole, pyrazole, oxazole, thiophene, and their benzannulated analogs (benzimidazole, for example). In all these examples, the number of π-electrons is 6, due to the π-electrons from the double bonds as well as the two electrons from any lone pair that is in the p-orbital that is in the plane of the aromatic π system. For example, in pyridine, the five sp2-hybridized carbons each have a p-orbital that is perpendicular to the plane of the ring, and each of these p-orbitals contains one π-electron. Additionally, the nitrogen atom is also sp2-hybridized and has one electron in a p-orbital, which adds up to 6 p-electrons, thus making pyridine aromatic. The lone pair on the nitrogen is not part of the aromatic π system. Pyrrole and imidazole are both five membered aromatic rings that contain heteroatoms. In pyrrole, each of the four sp2-hybridized carbons contributes one π-electron, and the nitrogen atom is also sp2-hybridized and contributes two π-electrons from its lone pair, which occupies a p-orbital. In imidazole, both nitrogens are sp2-hybridized; the one in the double bond contributes one electron and the one which is not in the double bond and is in a lone pair contributes two electrons to the π system.
Aromatic compounds:
Fused aromatics and polycyclics Polycyclic aromatic hydrocarbons are molecules containing two or more simple aromatic rings fused together by sharing two neighboring carbon atoms. Examples are naphthalene, anthracene, and phenanthrene. In fused aromatics, not all carbon–carbon bonds are necessarily equivalent, as the electrons are not delocalized over the entire molecule. The aromaticity of these molecules can be explained using their orbital picture. Like benzene and other monocyclic aromatic molecules, polycyclics have a cyclic conjugated pi system with p-orbital overlap above and below the plane of the ring.
Aromatic compounds:
Substituted aromatics Many chemical compounds are aromatic rings with other functional groups attached. Examples include trinitrotoluene (TNT), acetylsalicylic acid (aspirin), paracetamol, and the nucleotides of DNA.
Aromatic ions: Aromatic molecules need not be neutral molecules. Ions that satisfy Hückel's rule of 4n + 2 π-electrons in a planar, cyclic, conjugated molecule are considered to be aromatic ions. For example, the cyclopentadienyl anion and the cycloheptatrienylium cation are both considered to be aromatic ions, and the azulene molecule can be approximated as a combination of both.
Aromatic compounds:
In order to convert the atom from sp3 to sp2, a carbocation, carbanion, or carbon radical must be formed. These leave sp2-hybridized carbons that can partake in the π system of an aromatic molecule. Like neutral aromatic compounds, these compounds are stable and form easily. The cyclopentadienyl anion is formed very easily and thus 1,3-cyclopentadiene is a very acidic hydrocarbon with a pKa of 16. Other examples of aromatic ions include the cyclopropenium cation (2 π-electrons) and cyclooctatetraenyl dianion (10 π electrons).
Aromatic compounds:
Atypical aromatic compounds: Aromaticity also occurs in rings consisting only of chemical elements that are not carbon. Inorganic six-membered-ring compounds analogous to benzene have been synthesized. For example, borazine is a six-membered ring composed of alternating boron and nitrogen atoms, each with one hydrogen atom attached. It has a delocalized π system and undergoes electrophilic substitution reactions appropriate to aromatic rings rather than reactions expected of non-aromatic molecules. Quite recently, the aromaticity of the planar Si₅⁶⁻ rings occurring in the Zintl phase Li₁₂Si₇ was experimentally evinced by Li solid-state NMR. Metal aromaticity is believed to exist in certain clusters of aluminium and gallium, for example Ga₃²⁻ and Al₄²⁻. Homoaromaticity is the state of systems where conjugation is interrupted by a single sp3-hybridized carbon atom. Y-aromaticity is used to describe a Y-shaped, planar (flat) molecule with resonance bonds. The concept was developed to explain the extraordinary stability and high basicity of the guanidinium cation. Guanidinium is not a ring molecule, and is cross-conjugated rather than a π system of consecutively attached atoms, but is reported to have its six π-electrons delocalized over the whole molecule. The concept is controversial and some authors emphasize different effects. This has also been suggested as the reason that the trimethylenemethane dication is more stable than the butadienyl dication. σ-Aromaticity refers to stabilization arising from the delocalization of sigma bonds. It is often invoked in cluster chemistry and is closely related to Wade's rule. Furthermore, in 2021 a σ-aromatic Th₃ complex was reported, indicating that the concept of σ-aromaticity remains relevant for orbitals with principal quantum number 6.
Other symmetries:
Möbius aromaticity occurs when a cyclic system of molecular orbitals, formed from pπ atomic orbitals and populated in a closed shell by 4n (n is an integer) electrons, is given a single half-twist to form a Möbius strip. A π system with 4n electrons in a flat (non-twisted) ring would be antiaromatic, and therefore highly unstable, due to the symmetry of the combinations of p atomic orbitals. By twisting the ring, the symmetry of the system changes and becomes allowed (see also Möbius–Hückel concept for details). Because the twist can be left-handed or right-handed, the resulting Möbius aromatics are dissymmetric or chiral. But as of 2012, no Möbius aromatic molecules had been synthesized. Aromatics with two half-twists corresponding to the paradromic topologies were first suggested by Johann Listing. In one form of carbo-benzene, the ring is expanded and contains alkyne and allene groups.
Other symmetries:
Spherical aromaticity is aromaticity that occurs in fullerenes. In 2000, Andreas Hirsch and coworkers in Erlangen, Germany, formulated a rule to determine when a fullerene would be aromatic. They found that if there were 2(n + 1)² π-electrons, then the fullerene would display aromatic properties. This follows from the fact that an aromatic fullerene must have full icosahedral (or other appropriate) symmetry, so the molecular orbitals must be entirely filled. This is possible only if there are exactly 2(n + 1)² electrons, where n is a nonnegative integer.
**Multivariate cryptography**
Multivariate cryptography:
Multivariate cryptography is the generic term for asymmetric cryptographic primitives based on multivariate polynomials over a finite field $\mathbb{F}$. In certain cases those polynomials could be defined over both a ground and an extension field. If the polynomials have degree two, we talk about multivariate quadratics. Solving systems of multivariate polynomial equations is proven to be NP-complete. That is why those schemes are often considered to be good candidates for post-quantum cryptography. Multivariate cryptography has been very productive in terms of design and cryptanalysis. Overall, the situation is now more stable and the strongest schemes have withstood the test of time. It is commonly admitted that multivariate cryptography turned out to be more successful as an approach to building signature schemes, primarily because multivariate schemes provide the shortest signatures among post-quantum algorithms.
History:
Tsutomu Matsumoto and Hideki Imai (1988) presented their so-called C* scheme at the Eurocrypt conference. Although C* has been broken by Jacques Patarin (1995), the general principle of Matsumoto and Imai has inspired a generation of improved proposals. In later work, Jacques Patarin developed the "Hidden Monomial Cryptosystems". It is based on a ground and an extension field. "Hidden Field Equations" (HFE), developed by Patarin in 1996, remains a popular multivariate scheme today [P96]. The security of HFE has been thoroughly investigated, beginning with a direct Gröbner basis attack [FJ03, GJS06], key-recovery attacks (Kipnis & Shamir 1999) [BFP13], and more. The plain version of HFE is considered to be practically broken, in the sense that secure parameters lead to an impractical scheme. However, some simple variants of HFE, such as the minus variant and the vinegar variant, allow one to strengthen the basic HFE against all known attacks. In addition to HFE, Patarin developed other schemes. In 1997 he presented "Balanced Oil & Vinegar" and in 1999 "Unbalanced Oil and Vinegar", in cooperation with Aviad Kipnis and Louis Goubin (Kipnis, Patarin & Goubin 1995).
Construction:
Multivariate quadratics involve a public and a private key. The private key consists of two affine transformations, $S$ and $T$, and an easy-to-invert quadratic map $P' : \mathbb{F}^m \to \mathbb{F}^n$. We denote the $n \times n$ matrix of the affine endomorphism $S : \mathbb{F}^n \to \mathbb{F}^n$ by $M_S$ and the shift vector by $v_S \in \mathbb{F}^n$, and similarly for $T : \mathbb{F}^m \to \mathbb{F}^m$. In other words, $S(x) = M_S x + v_S$ and $T(y) = M_T y + v_T$. The triple $(S^{-1}, P'^{-1}, T^{-1})$ is the private key, also known as the trapdoor. The public key is the composition $P = S \circ P' \circ T$, which is by assumption hard to invert without the knowledge of the trapdoor.
Signature:
Signatures are generated using the private key and are verified using the public key as follows. The message is hashed to a vector $y \in \mathbb{F}^n$ via a known hash function. The signature is
$$x = P^{-1}(y) = T^{-1}\bigl(P'^{-1}\bigl(S^{-1}(y)\bigr)\bigr).$$
The receiver of the signed document must have the public key $P$ in possession. He computes the hash $y$ and checks that the signature $x$ fulfils $P(x) = y$.
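To make the trapdoor structure concrete, here is a minimal toy sketch in Python over GF(7) with $n = m = 3$. It is not a secure or realistic scheme: the affine maps, the triangular central map $P'$, and the sample hash vector are all illustrative assumptions, chosen only so that every step is exactly invertible.

```python
# Toy illustration of the S ∘ P' ∘ T construction over GF(7), n = m = 3.
P_FIELD = 7

def affine(M, v, x):
    # x -> Mx + v over GF(7)
    return [(sum(M[i][j] * x[j] for j in range(3)) + v[i]) % P_FIELD
            for i in range(3)]

# S(x) = A x + a; A chosen triangular so its exact inverse is known.
A     = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
A_inv = [[1, 6, 1], [0, 1, 6], [0, 0, 1]]
a     = [2, 5, 1]

# T(x) = B x + b and its inverse.
B     = [[1, 0, 0], [1, 1, 0], [0, 1, 1]]
B_inv = [[1, 0, 0], [6, 1, 0], [1, 6, 1]]
b     = [3, 0, 4]

def S(x):     return affine(A, a, x)
def S_inv(y): return affine(A_inv, [0, 0, 0], [(y[i] - a[i]) % P_FIELD for i in range(3)])
def T(x):     return affine(B, b, x)
def T_inv(y): return affine(B_inv, [0, 0, 0], [(y[i] - b[i]) % P_FIELD for i in range(3)])

# Easy-to-invert ("central") quadratic map P': triangular shape,
# inverted by forward substitution.
def P_central(x):
    x1, x2, x3 = x
    return [x1 % P_FIELD, (x2 + x1 * x1) % P_FIELD, (x3 + x1 * x2) % P_FIELD]

def P_central_inv(y):
    y1, y2, y3 = y
    x1 = y1 % P_FIELD
    x2 = (y2 - x1 * x1) % P_FIELD
    x3 = (y3 - x1 * x2) % P_FIELD
    return [x1, x2, x3]

def P_public(x):
    # Public key P = S ∘ P' ∘ T; in a real scheme this composition is
    # expanded into explicit quadratic polynomials and published.
    return S(P_central(T(x)))

y = [4, 1, 6]                            # stand-in for a hashed message
x = T_inv(P_central_inv(S_inv(y)))       # sign using the trapdoor
assert P_public(x) == y                  # verify using the public map only
print("signature", x, "verifies:", P_public(x) == y)
```

The point of the sketch is only the flow of the scheme: signing peels off $S$, $P'$, and $T$ in reverse order, while verification evaluates the public composition forward.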
Applications:
- Unbalanced Oil and Vinegar
- Hidden Field Equations
- SFLASH by NESSIE
- Rainbow
- TTS
- QUARTZ
- QUAD (cipher)
Four multivariate cryptography signature schemes (GeMSS, LUOV, Rainbow and MQDSS) have made their way into the 2nd round of the NIST post-quantum competition: see slide 12 of the report.
**Four-wire terminating set**
Four-wire terminating set:
A four-wire terminating set (4WTS) is a balanced transformer used to perform a conversion between four-wire and two-wire operation in telecommunication systems.
Four-wire terminating set:
For example, a 4-wire circuit may, by means of a 4-wire terminating set, be connected to a 2-wire telephone set. Also, a pair of 4-wire terminating sets may be used to introduce an intermediate 4-wire circuit into a 2-wire circuit, in which loop repeaters may be situated to amplify signals in each direction without positive feedback and oscillation. The 4WTS differs from a simple hybrid coil in being equipped to adjust its impedance to maximize return loss.
Four-wire terminating set:
Four-wire terminating sets were largely supplanted by resistance hybrids in the late 20th century.
**Rotor wing**
Rotor wing:
A rotor wing is a lifting rotor or wing which spins to provide aerodynamic lift. In general, a rotor may spin about an axis which is aligned substantially either vertically or side-to-side (spanwise). All three classes have been studied for use as lifting rotors and several variations have been flown on full-size aircraft, although only the vertical-axis rotary wing has become widespread on rotorcraft such as the helicopter.
Rotor wing:
Some types provide lift at zero forward airspeed, allowing for vertical takeoff and landing (VTOL), as in the helicopter. Others, especially unpowered free-spinning types, require forward airspeed in the same manner as a fixed-wing aircraft, as in the autogyro. Many can also provide forward thrust if required.
Types:
Many ingenious ways have been devised to convert the spinning of a rotor into aerodynamic lift. The various types of such rotor wings may be classified according to the axis of the rotor. Types include:

Vertical-axis
- Conventional rotary wings, as used by modern rotorcraft.

Spanwise horizontal-axis
- Wing rotor: an airfoil-section horizontal-axis rotor which creates the primary lift.
- Magnus rotor: a rotor which creates lift via the Magnus effect.
  - Flettner rotor: a smooth cylindrical Magnus rotor with disc end plates.
  - Thom rotor: a smooth spinning cylinder with multiple discs along the span.
- Cycloidal rotor or cyclorotor: a set of horizontal lifting aerofoils rotating around the rim of a supporting horizontal-axis rotor (may be powered or unpowered). An aircraft with a cycloidal rotor wing is called a cyclogyro. Some examples are hybrids comprising a cycloidal rotor around a central Magnus cylinder.
- Cross-flow fan: a slatted cylindrical fan in a shaped duct.

Longitudinal horizontal-axis
- Radial-lift rotor: a substantially fore-aft axis rotor which creates lift through cyclic pitch variation.
- Self-propelling wing or radial-lift rotor: a propeller or rotor with the rotation axis angled to the airflow to create a cyclic variation in pitch and hence a radial lift component.
- Radial-lift propeller with cyclic pitch control: a propeller capable of generating a sideways lift component.
Conventional rotary wings:
Conventional rotorcraft have vertical-axis rotors. The main types include the helicopter with powered rotors providing both lift and thrust, and the autogyro with unpowered rotors providing lift only. There are also various hybrid types, especially the gyrodyne which has both a powered rotor and independent forward propulsion, and the stopped rotor in which the rotor stops spinning to act as a fixed wing in forward flight.
Magnus rotors:
When a spinning body passes through air at right angles to its axis of spin, it experiences a sideways force in the third dimension. This Magnus effect was first demonstrated on a spinning cylinder by Gustav Magnus in 1852. If the cylinder axis is aligned spanwise (side to side), then forward movement through the air generates lift. The rotating body does not need to be a cylinder, and many related shapes have been studied.
Magnus rotors:
Flettner rotor The Flettner rotor comprises a Magnus cylinder with a disc endplate at each end. The American Plymouth A-A-2004 floatplane had Flettner rotors in place of the main wings and achieved short flights in 1924.
Cross-flow fan:
The cross-flow fan comprises an arrangement of blades running parallel to a central axis and aligned radially, with the fan partially or fully enclosed in a shaped duct. Due to the specific shaping, rotating the fan causes air to be drawn in at one end of the duct, passed across the fan and expelled at the other end.
The FanWing is a lifting rotor which uses this principle. It can both provide forward thrust by expelling air backwards and augment lift, even at very low airspeeds, by also drawing the air downwards. A prototype UAV was flown in 2007.
Radial-lift rotors:
During World War II, Focke-Wulf proposed the Triebflügel, in which a tipjet-driven rotor wing is located around the fuselage waist. The proposed mode of operation was to land and take off as a tail-sitter, using the wing as a conventional rotor. The craft would then tilt over to horizontal flight, and lift would be provided by cyclic pitch variation of the rotor wings, with the wing tip ramjets now angled to provide forward thrust. A few years later, the American Vought XF5U circular-winged fighter prototype was designed with large radial-lift propellers. These were angled upwards when the craft was on the ground, creating a cyclic variation in the blades' angle of attack or pitch when the craft was moving forwards. This cyclic variation induced a radial lifting component to the blades when in the horizontal segment of rotation, which was intended to augment the wing lift. A prototype aircraft was completed, but the project was closed before the prototype had flown.
**Mir-872 microRNA precursor family**
Mir-872 microRNA precursor family:
In molecular biology mir-872 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
Sertoli Cell Expression:
miR-872 has been found to be expressed in Sertoli cells and to post-transcriptionally target the Sod-1 gene, which encodes the copper/zinc-binding superoxide dismutase 1 (SOD-1) enzyme. Overproduction of SOD-1 increases oxidative damage and through this results in enhanced apoptosis and cell death.
Insulin-regulated HO-1 expression:
Insulin infusion in rats increases levels of heme oxygenase-1 (HO-1) expression, an effect blocked by inhibiting the activation of PI3K or protein kinase C. In adipocytes of the 3T3-L1 cell line, miR-872 levels are likewise reduced, along with those of miR-155 and miR-183. Insulin is therefore able to increase expression of HO-1 through miR-872 downregulation, as well as via pathways dependent upon PI3K and protein kinase C.
**HBV RNA encapsidation signal epsilon**
HBV RNA encapsidation signal epsilon:
The HBV RNA encapsidation signal epsilon (HBV_epsilon) is an element essential for HBV virus replication.
It is an RNA structure situated near the 5' end of the HBV pregenomic RNA. The structure consists of a lower stem, a bulge region, an upper stem and a tri-loop.
HBV RNA encapsidation signal epsilon:
The structure was determined and refined through enzymatic probing and NMR spectroscopy. The closure of the tri-loop was not predicted by RNA structure prediction programs but was observed in the NMR structure. The regions shown to be critical for encapsidation of the RNA in the viral lifecycle are the bulge, upper stem and tri-loop, which interact with the terminal protein domain of the HBV viral polymerase.
**Roadblock**
Roadblock:
A roadblock is a temporary installation set up to control or block traffic along a road. The reasons for one could be:
- Roadworks
- Temporary road closure during special events
- Police chase
- Robbery
- Sobriety checkpoint
In peaceful circumstances, roadblocks are usually installed by the police or road transport authorities; they are also commonly employed during wars, in which case they are usually staffed by heavily armed soldiers. During protests and riots, both police and demonstrators sometimes use roadblocks.
**Signed-digit representation**
Signed-digit representation:
In mathematical notation for numbers, a signed-digit representation is a positional numeral system with a set of signed digits used to encode the integers.
Signed-digit representation can be used to accomplish fast addition of integers because it can eliminate chains of dependent carries. In the binary numeral system, a special case signed-digit representation is the non-adjacent form, which can offer speed benefits with minimal space overhead.
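For illustration, here is a minimal Python sketch (our own example, not a reference implementation) that computes the non-adjacent form of an integer, least significant digit first; each odd remainder is chosen as ±1 so that the next digit is forced to be zero:

```python
def naf(n):
    """Non-adjacent form of n as a list of digits in {-1, 0, 1}, LSB first."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)    # pick d in {-1, 1} so that (n - d) is divisible by 4
            digits.append(d)
            n -= d
        else:
            digits.append(0)
        n //= 2
    return digits

# 7 = 0b111 becomes 1 0 0 -1 read MSB-first, i.e. 8 - 1
print(naf(7))                  # [-1, 0, 0, 1]
assert sum(d * 2**i for i, d in enumerate(naf(7))) == 7
```

Because no two adjacent digits are nonzero, the NAF of an integer is unique and has the minimal number of nonzero digits among its signed binary representations, which is what makes it useful for fast arithmetic.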
History:
Challenges in calculation stimulated early authors Colson (1726) and Cauchy (1840) to use signed-digit representation. The further step of replacing negated digits with new ones was suggested by Selling (1887) and Cajori (1928).
History:
In 1928, Florian Cajori noted the recurring theme of signed digits, starting with Colson (1726) and Cauchy (1840). In his book History of Mathematical Notations, Cajori titled the section "Negative numerals". For completeness, Colson uses examples and describes addition (pp. 163–4), multiplication (pp. 165–6) and division (pp. 170–1) using a table of multiples of the divisor. He explains the convenience of approximation by truncation in multiplication. Colson also devised an instrument (Counting Table) that calculated using signed digits.
History:
Eduard Selling advocated inverting the digits 1, 2, 3, 4, and 5 to indicate the negative sign. He also suggested snie, jes, jerd, reff, and niff as names to use vocally. Most of the other early sources used a bar over a digit to indicate a negative sign for it. Another German usage of signed-digits was described in 1902 in Klein's encyclopedia.
Definition and properties:
Digit set: Let $\mathcal{D}$ be a finite set of numerical digits with cardinality $b > 1$ (if $b \le 1$, then the positional number system is trivial and only represents the trivial ring), with each digit denoted as $d_i$ for $0 \le i < b$. $b$ is known as the radix or number base. $\mathcal{D}$ can be used for a signed-digit representation if it is associated with a unique function $f_{\mathcal{D}} : \mathcal{D} \to \mathbb{Z}$ such that $f_{\mathcal{D}}(d_i) \equiv i \pmod{b}$ for all $0 \le i < b$. This function, $f_{\mathcal{D}}$, is what rigorously and formally establishes how integer values are assigned to the symbols/glyphs in $\mathcal{D}$.
Definition and properties:
One benefit of this formalism is that the definition of "the integers" (however they may be defined) is not conflated with any particular system for writing/representing them; in this way, these two distinct (albeit closely related) concepts are kept separate.

$\mathcal{D}$ can be partitioned into three distinct sets $\mathcal{D}_+$, $\mathcal{D}_0$, and $\mathcal{D}_-$, representing the positive, zero, and negative digits respectively, such that all digits $d_+ \in \mathcal{D}_+$ satisfy $f_{\mathcal{D}}(d_+) > 0$, all digits $d_0 \in \mathcal{D}_0$ satisfy $f_{\mathcal{D}}(d_0) = 0$, and all digits $d_- \in \mathcal{D}_-$ satisfy $f_{\mathcal{D}}(d_-) < 0$. The cardinalities of $\mathcal{D}_+$, $\mathcal{D}_0$, and $\mathcal{D}_-$ are $b_+$, $b_0$, and $b_-$ respectively, giving the number of positive and negative digits, so that $b = b_+ + b_0 + b_-$.

Balanced form representations: Balanced form representations are representations where for every positive digit $d_+$ there exists a corresponding negative digit $d_-$ such that $f_{\mathcal{D}}(d_+) = -f_{\mathcal{D}}(d_-)$. It follows that $b_+ = b_-$. Only odd bases can have balanced form representations, as otherwise $d_{b/2}$ would have to be its own opposite and hence 0, but $b/2 \ne 0$. In balanced form, the negative digits $d_- \in \mathcal{D}_-$ are usually denoted as positive digits with a bar over the digit, as $d_- = \bar{d}_+$ for $d_+ \in \mathcal{D}_+$. For example, the digit set of balanced ternary would be $\mathcal{D}_3 = \{\bar{1}, 0, 1\}$ with $f_{\mathcal{D}_3}(\bar{1}) = -1$, $f_{\mathcal{D}_3}(0) = 0$, and $f_{\mathcal{D}_3}(1) = 1$. This convention is adopted in finite fields of odd prime order $q$:
$$\mathbb{F}_q = \left\{0,\ 1,\ \bar{1} = -1,\ \ldots,\ d = \tfrac{q-1}{2},\ \bar{d} = \tfrac{1-q}{2}\right\}.$$
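As a concrete illustration of balanced form, the following Python sketch (helper names are our own) converts an integer to balanced ternary, writing T for the negative digit $\bar{1}$, and checks the result against the valuation $v_{\mathcal{D}}$ defined below:

```python
def to_balanced_ternary(n):
    """Balanced ternary digits of n, most significant first, using T for -1."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3                 # remainder in {0, 1, 2}
        if r == 2:                # reinterpret 2 as -1 and carry 1
            digits.append("T")
            n = (n + 1) // 3
        else:
            digits.append(str(r))
            n = n // 3
    return "".join(reversed(digits))

def value(s):
    # Valuation v(d_n ... d_0) = sum f(d_i) * 3^i with f(T) = -1
    f = {"0": 0, "1": 1, "T": -1}
    return sum(f[c] * 3**i for i, c in enumerate(reversed(s)))

for n in (7, -7, 100):
    s = to_balanced_ternary(n)
    print(n, s, value(s) == n)    # e.g. 7 -> "1T1", since 9 - 3 + 1 = 7
```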
Definition and properties:
Dual signed-digit representation: Every digit set $\mathcal{D}$ has a dual digit set $\mathcal{D}^{\mathrm{op}}$ given by the inverse order of the digits, with an isomorphism $g : \mathcal{D} \to \mathcal{D}^{\mathrm{op}}$ defined by $-f_{\mathcal{D}} = f_{\mathcal{D}^{\mathrm{op}}} \circ g$. As a result, for any signed-digit representation $\mathcal{N}$ of a number system ring $N$ constructed from $\mathcal{D}$ with valuation $v_{\mathcal{D}} : \mathcal{N} \to N$, there exists a dual signed-digit representation of $N$, $\mathcal{N}^{\mathrm{op}}$, constructed from $\mathcal{D}^{\mathrm{op}}$ with valuation $v_{\mathcal{D}^{\mathrm{op}}} : \mathcal{N}^{\mathrm{op}} \to N$, and an isomorphism $h : \mathcal{N} \to \mathcal{N}^{\mathrm{op}}$ defined by $-v_{\mathcal{D}} = v_{\mathcal{D}^{\mathrm{op}}} \circ h$, where $-$ is the additive inverse operator of $N$. The digit set for balanced form representations is self-dual.
Definition and properties:
For integers: Given the digit set $\mathcal{D}$ and function $f_{\mathcal{D}} : \mathcal{D} \to \mathbb{Z}$ as defined above, let us define an integer endofunction $T : \mathbb{Z} \to \mathbb{Z}$ as follows:
$$T(n) = \frac{n - f_{\mathcal{D}}(d_i)}{b}, \quad \text{where } n \equiv i \pmod{b},\ 0 \le i < b.$$
If the only periodic point of $T$ is the fixed point $0$, then the set of all signed-digit representations of the integers $\mathbb{Z}$ using $\mathcal{D}$ is given by the Kleene plus $\mathcal{D}^+$, the set of all finite concatenated strings of digits $d_n \ldots d_0$ with at least one digit, with $n \in \mathbb{N}$. Each signed-digit representation $m \in \mathcal{D}^+$ has a valuation $v_{\mathcal{D}} : \mathcal{D}^+ \to \mathbb{Z}$ given by
$$v_{\mathcal{D}}(m) = \sum_{i=0}^{n} f_{\mathcal{D}}(d_i)\, b^i.$$
Examples include balanced ternary with digits $\mathcal{D} = \{\bar{1}, 0, 1\}$. Otherwise, if there exists a non-zero periodic point of $T$, then there exist integers that are represented by an infinite number of non-zero digits in $\mathcal{D}$. Examples include the standard decimal numeral system with the digit set $\mathcal{D}_{\mathrm{dec}} = \{0,1,2,3,4,5,6,7,8,9\}$, which requires an infinite number of the digit 9 to represent the additive inverse $-1$, as $T_{\mathcal{D}_{\mathrm{dec}}}(-1) = \frac{-1 - 9}{10} = -1$; and the positional numeral system with the digit set $\mathcal{D} = \{\text{A}, 0, 1\}$ with $f(\text{A}) = -4$, which requires an infinite number of the digit A to represent the number $2$, as $T_{\mathcal{D}}(2) = \frac{2 - (-4)}{3} = 2$.

For decimal fractions: If the integers can be represented by the Kleene plus $\mathcal{D}^+$, then the set of all signed-digit representations of the decimal fractions, or $b$-adic rationals $\mathbb{Z}[1/b]$, is given by $\mathcal{Q} = \mathcal{D}^+ \times \mathcal{P} \times \mathcal{D}^*$, the Cartesian product of the Kleene plus $\mathcal{D}^+$ (the set of all finite concatenated strings of digits $d_n \ldots d_0$ with at least one digit), the singleton $\mathcal{P}$ consisting of the radix point ($.$ or $,$), and the Kleene star $\mathcal{D}^*$ (the set of all finite concatenated strings of digits $d_{-1} \ldots d_{-m}$), with $m, n \in \mathbb{N}$. Each signed-digit representation $q \in \mathcal{Q}$ has a valuation $v_{\mathcal{D}} : \mathcal{Q} \to \mathbb{Z}[1/b]$ given by
$$v_{\mathcal{D}}(q) = \sum_{i=-m}^{n} f_{\mathcal{D}}(d_i)\, b^i.$$

For real numbers: If the integers can be represented by the Kleene plus $\mathcal{D}^+$, then the set of all signed-digit representations of the real numbers $\mathbb{R}$ is given by $\mathcal{R} = \mathcal{D}^+ \times \mathcal{P} \times \mathcal{D}^{\mathbb{N}}$, the Cartesian product of the Kleene plus $\mathcal{D}^+$, the singleton $\mathcal{P}$ consisting of the radix point, and the Cantor space $\mathcal{D}^{\mathbb{N}}$ (the set of all infinite concatenated strings of digits $d_{-1} d_{-2} \ldots$), with $n \in \mathbb{N}$. Each signed-digit representation $r \in \mathcal{R}$ has a valuation $v_{\mathcal{D}} : \mathcal{R} \to \mathbb{R}$ given by
$$v_{\mathcal{D}}(r) = \sum_{i=-\infty}^{n} f_{\mathcal{D}}(d_i)\, b^i.$$
The infinite series always converges to a finite real number.
Definition and properties:
For other number systems: All base-$b$ numerals can be represented as a subset of $\mathcal{D}^{\mathbb{Z}}$, the set of all doubly infinite sequences of digits in $\mathcal{D}$, where $\mathbb{Z}$ is the set of integers, and the ring of base-$b$ numerals is represented by the formal power series ring $\mathbb{Z}[[b, b^{-1}]]$, the doubly infinite series
$$\sum_{i=-\infty}^{\infty} a_i b^i, \quad a_i \in \mathbb{Z} \text{ for } i \in \mathbb{Z}.$$

Integers modulo powers of $b$: The set of all signed-digit representations of the integers modulo $b^n$, $\mathbb{Z}/b^n\mathbb{Z}$, is given by the set $\mathcal{D}^n$, the set of all finite concatenated strings of digits $d_{n-1} \ldots d_0$ of length $n$, with $n \in \mathbb{N}$. Each signed-digit representation $m \in \mathcal{D}^n$ has a valuation $v_{\mathcal{D}} : \mathcal{D}^n \to \mathbb{Z}/b^n\mathbb{Z}$ given by
$$v_{\mathcal{D}}(m) \equiv \sum_{i=0}^{n-1} f_{\mathcal{D}}(d_i)\, b^i \pmod{b^n}.$$

Prüfer groups: A Prüfer group is the quotient group $\mathbb{Z}(b^\infty) = \mathbb{Z}[1/b]/\mathbb{Z}$ of the $b$-adic rationals by the integers. The set of all signed-digit representations of the Prüfer group is given by the Kleene star $\mathcal{D}^*$, the set of all finite concatenated strings of digits $d_1 \ldots d_n$, with $n \in \mathbb{N}$. Each signed-digit representation $p \in \mathcal{D}^*$ has a valuation $v_{\mathcal{D}} : \mathcal{D}^* \to \mathbb{Z}(b^\infty)$ given by
$$v_{\mathcal{D}}(p) \equiv \sum_{i=1}^{n} f_{\mathcal{D}}(d_i)\, b^{-i} \pmod{1}.$$

Circle group: The circle group is the quotient group $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ of the real numbers by the integers. The set of all signed-digit representations of the circle group is given by the Cantor space $\mathcal{D}^{\mathbb{N}}$, the set of all right-infinite concatenated strings of digits $d_1 d_2 \ldots$. Each signed-digit representation $m \in \mathcal{D}^{\mathbb{N}}$ has a valuation $v_{\mathcal{D}} : \mathcal{D}^{\mathbb{N}} \to \mathbb{T}$ given by
$$v_{\mathcal{D}}(m) \equiv \sum_{i=1}^{\infty} f_{\mathcal{D}}(d_i)\, b^{-i} \pmod{1}.$$
The infinite series always converges.

$b$-adic integers: The set of all signed-digit representations of the $b$-adic integers $\mathbb{Z}_b$ is given by the Cantor space $\mathcal{D}^{\mathbb{N}}$, the set of all left-infinite concatenated strings of digits $\ldots d_1 d_0$. Each signed-digit representation $m \in \mathcal{D}^{\mathbb{N}}$ has a valuation $v_{\mathcal{D}} : \mathcal{D}^{\mathbb{N}} \to \mathbb{Z}_b$ given by
$$v_{\mathcal{D}}(m) = \sum_{i=0}^{\infty} f_{\mathcal{D}}(d_i)\, b^i.$$

$b$-adic solenoids: The set of all signed-digit representations of the $b$-adic solenoids $\mathbb{T}_b$ is given by the Cantor space $\mathcal{D}^{\mathbb{Z}}$, the set of all doubly infinite concatenated strings of digits $\ldots d_1 d_0 d_{-1} \ldots$. Each signed-digit representation $m \in \mathcal{D}^{\mathbb{Z}}$ has a valuation $v_{\mathcal{D}} : \mathcal{D}^{\mathbb{Z}} \to \mathbb{T}_b$ given by
$$v_{\mathcal{D}}(m) = \sum_{i=-\infty}^{\infty} f_{\mathcal{D}}(d_i)\, b^i.$$
In written and spoken language:
Indo-Aryan languages The oral and written forms of numbers in the Indo-Aryan languages use a negative numeral (e.g., "un" in Hindi and Bengali, "un" or "unna" in Punjabi, "ekon" in Marathi) for the numbers between 11 and 90 that end with a nine. The numbers followed by their names are shown for Punjabi below (the prefix "ik" means "one"): 19 unni, 20 vih, 21 ikki 29 unatti, 30 tih, 31 ikatti 39 untali, 40 chali, 41 iktali 49 unanja, 50 panjah, 51 ikvanja 59 unahat, 60 sath, 61 ikahat 69 unattar, 70 sattar, 71 ikhattar 79 unasi, 80 assi, 81 ikiasi 89 unanve, 90 nabbe, 91 ikinnaven.Similarly, the Sesotho language utilizes negative numerals to form 8's and 9's.
In written and spoken language:
8 robeli (/Ro-bay-dee/), meaning "break two", i.e. two fingers down; 9 robong (/Ro-bong/), meaning "break one", i.e. one finger down.

Classical Latin: In Classical Latin, the integers 18 and 19 had no spoken or written form that included parts corresponding to "eight" or "nine", even though such forms existed in principle. Instead, in Classical Latin: 18 = duodēvīgintī ("two taken from twenty"), written IIXX or XIIX; 19 = ūndēvīgintī ("one taken from twenty"), written IXX or XIX; 20 = vīgintī ("twenty"), written XX. For the following integer numerals [28, 29, 38, 39, ..., 88, 89] the additive form was much more common in the language; however, for the listed numbers, the subtractive form was still preferred. Hence, approaching thirty, numerals were expressed as: 28 = duodētrīgintā ("two taken from thirty"), less frequently also vīgintī octō / octō et vīgintī ("twenty-eight / eight and twenty"), written IIXXX or XXIIX versus XXVIII, the latter having been fully outcompeted; 29 = ūndētrīgintā ("one taken from thirty"), although the less preferred additive form was also available. This is one of the main foundations of contemporary historians' reasoning explaining why the subtractive I- and II- forms were so common in this range of cardinals compared to other ranges. The numerals 98 and 99 could also be expressed in both forms, yet "two to hundred" may have sounded a bit odd; clear evidence is the scarce occurrence of these numbers written down in a subtractive fashion in authentic sources.
In written and spoken language:
Finnish Language Another language with this feature, today surviving only in traces, is still in active use: Finnish, where the spelled-out numerals are formed this way whenever a digit 8 or 9 occurs. The scheme is as follows:
1 = "yksi" (note: yhd- or yht- mostly when about to be declined, e.g. "yhdessä" = "together, as one [entity]")
2 = "kaksi" (also note: kahde-, kahte- when declined)
3 = "kolme"
4 = "neljä"...
In written and spoken language:
7 = "seitsemän" 8 = "kah(d)eksan" (two left [for it to reach it]) 9 = "yh(d)eksän" (one left [for it to reach it]) 10 = "kymmenen" (ten)Above list is no special case, it consequently appears in larger cardinals as well, e.g.: 399 = "kolmesataayhdeksänkymmentäyhdeksän"Emphasizing of these attributes stay present even in the shortest colloquial forms of numerals: 1 = "yy" 2 = "kaa" 3 = "koo"...
In written and spoken language:
7 = "seiska" 8 = "kasi" 9 = "ysi" 10 = "kymppi"However, this phenomenon has no influence on written numerals, the Finnish use the standard Western-Arabic decimal notation.
Time keeping In the English language it is common to refer to times as, for example, 'seven to three', with 'to' performing the negation.
Other systems:
There exist other signed-digit bases such that the base b ≠ b+ + b− + 1. A notable example of this is Booth encoding, which has a digit set D = {1¯, 0, 1} with b+ = 1 and b− = 1, but which uses a base b = 2 < 3 = b+ + b− + 1. The standard binary numeral system would only use digits of value {0, 1}. Note that non-standard signed-digit representations are not unique. For instance:
0111_D = 4 + 2 + 1 = 7
101¯1_D = 8 − 2 + 1 = 7
11¯11_D = 8 − 4 + 2 + 1 = 7
1001¯_D = 8 − 1 = 7
The non-adjacent form (NAF) of Booth encoding does guarantee a unique representation for every integer value. However, this only applies to integer values. For example, the non-integer value 2/3 has two distinct repeating NAF representations, 0.101010… and 1.01¯01¯…, so uniqueness fails for non-integers.
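A minimal sketch of computing the non-adjacent form of an integer, and of checking the non-uniqueness of general signed-digit strings, follows; the function names are illustrative, and digits are stored least-significant first.

```python
def to_naf(n):
    """Non-adjacent form of an integer n: digits in {-1, 0, 1},
    least-significant first, with no two adjacent nonzero digits."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # 1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def value(digits, b=2):
    """Evaluate a signed-digit string, least-significant digit first."""
    return sum(d * b**i for i, d in enumerate(digits))

# Four distinct signed-binary strings for 7, as listed above:
assert value([1, 1, 1]) == value([1, -1, 0, 1]) \
    == value([1, 1, -1, 1]) == value([-1, 0, 0, 1]) == 7
print(to_naf(7))  # [-1, 0, 0, 1], i.e. 7 = 8 - 1
```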
**Private label debit**
Private label debit:
Private label debit refers to a merchant-branded card or mobile payment app that utilizes an automated clearing house (ACH) to directly debit consumer checking accounts. Used in a closed-loop environment, private label debit offers secure transactions through PIN protection or tokenization. Private label debit programs have become increasingly popular in the United States, totaling roughly $13 billion in payment value. Such programs are offered by well-known brands, including Target, Kroger, and Circle K, as a consumer loyalty strategy. Companies such as ZipLine, First Data, and BIM Networks implement private label debit programs for such brands.
**MSZ96**
MSZ96:
MSZ96 is a quantum key distribution protocol which allows a cryptographic key bit to be encoded using four nonorthogonal quantum states described by non-commuting quadrature phase amplitudes of a weak optical field, without using photon polarization (as in the BB84 protocol) or entangled photons (as in the E91 protocol). It is named after Yi Mu, Jennifer Seberry, and Yuliang Zheng.
**Topspinner**
Topspinner:
A topspinner is a type of delivery bowled by a cricketer bowling either wrist spin or finger spin. In either case, the bowler imparts topspin to the ball by twisting it with his or her fingers prior to delivery. In both cases, the topspinner is the halfway house between the stock delivery and the wrong'un - in the wrist spinner's case his googly, and in the finger spinner's case his doosra.
Mechanics:
A topspinner is released over the top of the fingers in such a way that it spins forward in the air towards the batsman in flight. The forward spinning motion impedes air travelling over the ball, but assists air travelling underneath. The difference in air pressure above and underneath the ball (described as the Magnus effect) acts as a downward force, meaning that the ball falls earlier and faster than normal.
Mechanics:
In cricketing terms, this means that the ball drops shorter, falls faster and bounces higher than might otherwise be anticipated by the batsman. These properties are summed up in cricketing terms as a "looping" or "loopy" delivery. Also, the ball travels approximately straight on, as compared to a wrist spin or finger spin stock delivery that breaks to the left or right on impact. A batsman may easily be deceived by the ball, particularly given that the action is quite similar to the stock delivery. Compared to the stock delivery, the ball will dip in flight, and land shorter than expected. The majority of the time, this increased angle of descent will lead to an increased bounce, making it a particularly difficult ball to attack. Tactically, a bowler will bowl topspinners to draw a batsman forward before using the dip and extra bounce to deceive them. In particular, batsmen looking to sweep or drive are vulnerable as the bounce can defeat them and lead to a catch. However, on an underprepared soft wicket, the spin on the ball may actually cause it to grip and shoot through low. Again, this will make it a particularly difficult delivery for the batsman to deal with.
Finger spin:
The topspinner is a common variation in the arsenal of the finger spinner. The most common method of delivery is for the ball to be delivered with the arm supinated further than the stock delivery with the side of the hand pointing towards the batsman, and the ball is released off the outside of the first finger, in such a way that it spins directly towards the batsman. However, a second method used by Muttiah Muralitharan is for the arm to be rotated further so that the back of the hand is facing the batsman; the ball is then given a large amount of spin by flexion of the wrist. The right-handed offspin bowler will look to pitch this delivery on or outside off-stump, in anticipation that the batsman will play for the turn and give an edge behind the wicket. The left-arm orthodox bowler will typically bowl the ball on middle stump, looking to beat the inside edge of the bat and gain a bowled or lbw dismissal. Muttiah Muralitharan, Tim May, and Harbhajan Singh are examples of offspinners who frequently used this delivery.
Wrist spin:
The topspinner is a common variation in the arsenal of the wrist spinner, and typically the first variation taught to young wrist spin bowlers after they have mastered their stock delivery. The most common method of delivery is for the ball to be delivered with the arm pronated further than the stock delivery with the side of the hand pointing towards the batsman, and the ball is released off the third finger, in such a way that it spins directly towards the batsman. The right-arm legspin bowler will typically bowl the ball on middle stump, looking to beat the inside edge of the bat and gain a bowled or lbw dismissal. To a left-handed batsman, he will look to use the ball to gain an outside edge and dismiss the batsman caught. Shane Warne and Anil Kumble are examples of modern wrist spinners who frequently bowled the topspinner.
**Apeirophobia**
Apeirophobia:
Apeirophobia (from Ancient Greek: ᾰ̓́πειρος, romanized: ápeiros, lit. 'infinite, boundless') is the phobia of infinity and/or eternity, causing discomfort and sometimes panic attacks. It normally starts in adolescence or earlier, and it is not currently known how it typically develops over time. Apeirophobia may be caused by existential dread about eternal life or eternal oblivion following death. Due to this, it is often connected with thanatophobia (fear of dying). Sufferers commonly report feelings of derealization, which may cause the perception of a dreamlike or distorted reality. Existential OCD may sometimes be the cause of obsessive thoughts about infinity or eternity, which can lead to or trigger apeirophobia.
**Compaq tc1000**
Compaq tc1000:
The TC1000 is a 10.4" laplet designed by Compaq, before it was purchased by HP. It used the Transmeta Crusoe processor. Unlike many other tablet PCs of its time (which can only operate either in a traditional laptop configuration, or with the keyboard folded behind the screen), the display is fully detachable from the keyboard. The product was developed and manufactured using an ODM model from LG Electronics, Inc. of South Korea.
Compaq tc1000:
The TC1000 was replaced by the HP Compaq TC1100 which features a faster Pentium M processor and an integrated digitizer from Wacom (the TC1000 used a Finepoint digitizer which required a AAAA battery and lacked pressure-input, being binary on/off only), among other small upgrades.
The TC1000 comes with Windows XP Tablet PC Edition but it is capable of running Linux. A restore CD that returns the machine to its factory configuration is available on the Internet Archive.
Design Awards:
The TC1000 has won numerous industrial design awards. These include:
2003 IDEA Bronze Award
2003 IF Product Design Award
2003 ID Magazine Honorable Mention (http://www.idonline.com/)
2003 Good Design Award
**Drama Desk Award for Outstanding Lighting Design for a Play**
Drama Desk Award for Outstanding Lighting Design for a Play:
The Drama Desk Award for Outstanding Lighting Design for a Play is an annual award presented by Drama Desk in recognition of achievements in the theatre among Broadway, Off Broadway and Off-Off Broadway productions.
**Interior Gateway Routing Protocol**
Interior Gateway Routing Protocol:
Interior Gateway Routing Protocol (IGRP) is a distance vector interior gateway protocol (IGP) developed by Cisco. It is used by routers to exchange routing data within an autonomous system.
Interior Gateway Routing Protocol:
IGRP is a proprietary protocol. IGRP was created in part to overcome the limitations of RIP (maximum hop count of only 15, and a single routing metric) when used within large networks. IGRP supports multiple metrics for each route, including bandwidth, delay, load, and reliability; to compare two routes these metrics are combined into a single composite metric, using a formula which can be adjusted through the use of pre-set constants. By default, the IGRP composite metric is a sum of the segment delays and the lowest segment bandwidth. The maximum configurable hop count of IGRP-routed packets is 255 (default 100), and routing updates are broadcast every 90 seconds (by default). IGRP uses protocol number 9 for communication. IGRP is considered a classful routing protocol. Because the protocol has no field for a subnet mask, the router assumes that all subnetwork addresses within the same Class A, Class B, or Class C network have the same subnet mask as the subnet mask configured for the interfaces in question. This contrasts with classless routing protocols that can use variable-length subnet masks. Classful protocols have become less popular as they are wasteful of IP address space.
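A sketch of the commonly documented IGRP composite-metric formula follows; with Cisco's default constants (K1 = K3 = 1, K2 = K4 = K5 = 0) it reduces to the bandwidth term plus the delay term described above. The scaling used here is the one usually published for IGRP, not taken from this article, so treat it as an assumption.

```python
def igrp_metric(min_bandwidth_kbps, delays_usec,
                load=1, reliability=255,
                k1=1, k2=0, k3=1, k4=0, k5=0):
    """Composite IGRP metric as commonly documented:
    bandwidth term = 10^7 / slowest segment bandwidth (kbit/s),
    delay term    = sum of segment delays in tens of microseconds."""
    bw = 10**7 // min_bandwidth_kbps
    delay = sum(delays_usec) // 10
    metric = k1 * bw + (k2 * bw) // (256 - load) + k3 * delay
    if k5 != 0:                      # reliability factor only if K5 is set
        metric = metric * k5 // (reliability + k4)
    return metric

# Two-hop path: slowest link 1544 kbit/s (T1), delays 20000 + 1000 usec:
print(igrp_metric(1544, [20000, 1000]))  # 6476 + 2100 = 8576
```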
Advancement:
In order to address the issues of address space and other factors, Cisco created EIGRP (Enhanced Interior Gateway Routing Protocol). EIGRP adds support for VLSM (variable length subnet mask) and adds the Diffusing Update Algorithm (DUAL) in order to improve routing and provide a loopless environment. EIGRP has completely replaced IGRP, making IGRP an obsolete routing protocol. In Cisco IOS versions 12.3 and greater, IGRP is completely unsupported. In the new Cisco CCNA curriculum (version 4), IGRP is mentioned only briefly, as an "obsolete protocol".
**Opus number**
Opus number:
In musicology, the opus number is the "work number" that is assigned to a musical composition, or to a set of compositions, to indicate the chronological order of the composer's production. Opus numbers are used to distinguish among compositions with similar titles; the word is abbreviated as "Op." for a single work, or "Opp." when referring to more than one work.
Opus number:
To indicate the specific place of a given work within a music catalogue, the opus number is paired with a cardinal number; for example, Beethoven's Piano Sonata No. 14 in C-sharp minor (1801, nicknamed Moonlight Sonata) is "Opus 27, No. 2", whose work-number identifies it as a companion piece to "Opus 27, No. 1" (Piano Sonata No. 13 in E-flat major, 1800–01), paired in the same opus number, with both being subtitled Sonata quasi una Fantasia, the only two of the kind in all of Beethoven's 32 piano sonatas. Furthermore, the Piano Sonata, Op. 27 No. 2, in C-sharp minor is also catalogued as "Sonata No. 14", because it is the fourteenth sonata composed by Ludwig van Beethoven.
Opus number:
Given composers' inconsistent or non-existent assignment of opus numbers, especially during the Baroque (1600–1750) and the Classical (1750–1827) eras, musicologists have developed other catalogue-number systems; among them the Bach-Werke-Verzeichnis (BWV-number), and the Köchel-Verzeichnis (K- and KV-numbers) which enumerate the works of Johann Sebastian Bach and Wolfgang Amadeus Mozart, respectively.
Etymology:
In the classical period, the Latin word opus ("work", "labour"), plural opera, was used to identify, list, and catalogue a work of art. By the 15th and 16th centuries, the word opus was used by Italian composers to denote a specific musical composition, and by German composers for collections of music. In compositional practice, numbering musical works in chronological order dates from 17th-century Italy, especially Venice. In common usage, the word opus is used to describe the best work of an artist with the term magnum opus. In Latin, the words opus (singular) and opera (plural) are related to the words opera (singular) and operae (plural), which gave rise to the Italian words opera (singular) and opere (plural), likewise meaning "work". In contemporary English, the word opera has specifically come to denote the dramatic musical genres of opera or ballet, which were developed in Italy. As a result, the plural opera of opus tends to be avoided in English. In other languages such as German, however, it remains common.
Early usage:
In the arts, an opus number usually denotes a work of musical composition, a practice and usage established in the seventeenth century when composers identified their works with an opus number. In the eighteenth century, publishers usually assigned opus numbers when publishing groups of like compositions, usually in sets of three, six or twelve compositions. Consequently, opus numbers are not usually in chronological order, unpublished compositions usually had no opus number, and numeration gaps and sequential duplications occurred when publishers issued contemporaneous editions of a composer's works, as in the sets of string quartets by Joseph Haydn (1732–1809) and Ludwig van Beethoven (1770–1827); Haydn's Op. 76, the Erdödy quartets (1796–97), comprises six discrete quartets consecutively numbered Op. 76 No. 1 – Op. 76 No. 6; whilst Beethoven's Op. 59, the Rasumovsky quartets (1805–06), comprises String Quartet No. 7, String Quartet No. 8, and String Quartet No. 9.
19th century to date:
From about 1800, composers usually assigned an opus number to a work or set of works upon publication. After approximately 1900, they tended to assign an opus number to a composition whether published or not. However, practices were not always perfectly consistent or logical. For example, early in his career, Beethoven selectively numbered his compositions (some published without opus numbers), yet in later years, he published early works with high opus numbers. Likewise, some posthumously published works were given high opus numbers by publishers, even though some of them were written early in Beethoven's career. Since his death in 1827, the un-numbered compositions have been cataloged and labeled with the German acronym WoO (Werk ohne Opuszahl), meaning "work without opus number"; the same has been done with other composers who used opus numbers. (There are also other catalogs of Beethoven's works – see Catalogues of Beethoven compositions.) The practice of enumerating a posthumous opus ("Op. posth.") is noteworthy in the case of Felix Mendelssohn (1809–47); after his death, the heirs published many compositions with opus numbers that Mendelssohn did not assign. In his lifetime, he published two symphonies (Symphony No. 1 in C minor, Op. 11; and Symphony No. 3 in A minor, Op. 56); furthermore, he published his symphony-cantata Lobgesang, Op. 52, which was posthumously counted as his Symphony No. 2. Yet he chronologically wrote symphonies between Nos. 1 and 2, which he withdrew for personal and compositional reasons; nevertheless, the Mendelssohn heirs published (and cataloged) them as the Italian Symphony No. 4 in A major, Op. 90, and as the Reformation Symphony No. 5 in D major and D minor, Op. 107.
19th century to date:
While many of the works of Antonín Dvořák (1841–1904) were given opus numbers, these did not always bear a logical relationship to the order in which the works were written or published. To achieve better sales, some publishers, such as N. Simrock, preferred to present less experienced composers as being well established, by giving some relatively early works much higher opus numbers than their chronological order would merit. In other cases, Dvořák gave lower opus numbers to new works to be able to sell them to other publishers outside his contract obligations. This way it could happen that the same opus number was given to more than one of his works. Opus number 12, for example, was assigned, successively, to five different works (an opera, a concert overture, a string quartet, and two unrelated piano works). In other cases, the same work was given as many as three different opus numbers by different publishers. The sequential numbering of his symphonies has also been confused: (a) they were initially numbered by order of publication, not composition; (b) the first four symphonies to be composed were published after the last five; and (c) the last five symphonies were not published in order of composition. The New World Symphony originally was published as No. 5, later was known as No. 8, and definitively was renumbered as No. 9 in the critical editions published in the 1950s.
19th century to date:
Other examples of composers' historically inconsistent opus-number usages include the cases of César Franck (1822–1890), Béla Bartók (1881–1945), and Alban Berg (1885–1935), who initially numbered, but then stopped numbering their compositions. Carl Nielsen (1865–1931) and Paul Hindemith (1895–1963) were also inconsistent in their approaches. Sergei Prokofiev (1891–1953) was consistent and assigned an opus number to a composition before composing it; at his death, he left fragmentary and planned, but numbered, works. In revising a composition, Prokofiev occasionally assigned a new opus number to the revision; thus Symphony No. 4 is two thematically related but discrete works: Symphony No. 4, Op. 47, written in 1929; and Symphony No. 4, Op. 112, a large-scale revision written in 1947. Likewise, depending upon the edition, the original version of Piano Sonata No. 5 in C major, is cataloged both as Op. 38 and as Op. 135.
19th century to date:
Despite being used in more or less normal fashion by a number of important early-twentieth-century composers, including Arnold Schoenberg (1874–1951) and Anton Webern (1883–1945), opus numbers became less common in the later part of the twentieth century.
Other catalogues:
To manage inconsistent opus-number usages — especially by composers of the Baroque (1600–1750) and of the Classical (1720–1830) music eras — musicologists have developed comprehensive and unambiguous catalogue number-systems for the works of composers such as: Johann Sebastian Bach — catalogued with a BWV-number; a Bach-Werke-Verzeichnis number assigned by Wolfgang Schmieder; however, older sources occasionally use S-numbers.
Dietrich Buxtehude — catalogued with a BuxWV-number, a Buxtehude-Werke-Verzeichnis work number.
Marc-Antoine Charpentier — identified with an H-number, per H. W. Hitchcock's comprehensive catalogue.
Frédéric Chopin — four catalogue systems have been applied: (i) B-numbers, by Maurice J.E. Brown; (ii) KK-numbers, by Krystyna Kobylańska; (iii) work-letters (A, C, D, E, P and S), by Józef Michał Chomiński; and (iv) WN-numbers in the Chopin National Edition. Generally, these alternative music-catalogue systems identified compositions that the composer had not numbered.
Claude Debussy — identified with an L-number, per François Lesure's comprehensive catalogue.
Antonín Dvořák — identified with a B-number, per Jarmil Burghauser's comprehensive catalogue; which resolved the problems of different and duplicate opus-numbers assigned by the publishers of Dvořák's music.
Joseph Haydn — identified with a Hob.-number, per the 1957 catalogue by Anthony van Hoboken. Although he assigned Hoboken-numbers to the string quartets, those compositions usually are known by opus numbers.
Franz Liszt — identified with an S-number, per the catalogue The Music of Liszt (1960), by Humphrey Searle.
Wolfgang Amadeus Mozart — identified either with a K-number or with a KV-number (Köchel-Verzeichnis nummer), per the catalogue system of Ludwig Ritter von Köchel.
Niccolò Paganini — identified with an MS-number, per the 1982 Catalogo tematico, by Moretti and Sorrento.
Domenico Scarlatti — identified with three catalogue systems; (i) L-numbers, per the 1906 catalogue by Alessandro Longo; (ii) K-numbers and Kk-numbers, per the 1953 catalogue by Ralph Kirkpatrick; and (iii) P-numbers, per the 1967 catalogue by Giorgio Pestelli.
Franz Schubert — identified with a D-number, per the catalogue of Otto Erich Deutsch.
Maurice Ravel — identified with an M-number, per the 1986 catalogue by Marcel Marnat.
Henry Purcell — identified with a Z-number, per the catalogue by Franklin B. Zimmerman.
Antonio Vivaldi — identified with an RV number, per the Ryom-Verzeichnis catalogue by Peter Ryom.
Gustav Holst — identified with an H. catalogue number, per A Thematic Catalogue of Gustav Holst's Music by Imogen Holst. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Strikeouts per nine innings pitched**
Strikeouts per nine innings pitched:
In baseball statistics, strikeouts per nine innings pitched (K/9, SO/9, or SO/9IP) is the mean of strikeouts (or Ks) by a pitcher per nine innings pitched. It is determined by multiplying the number of strikeouts by nine, and dividing by the number of innings pitched. To qualify, a pitcher must have pitched 1,000 innings, which generally limits the list to starters. A separate list is maintained for relievers with 300 innings pitched or 200 appearances.
Leaders:
The all-time leader in this statistic through 2023 is Chris Sale (11.06). The only other pitchers who had averaged over 10 strikeouts are Robbie Ray (11.03), Jacob deGrom (10.96), Yu Darvish (10.70), Max Scherzer (10.69), Randy Johnson (10.61), Stephen Strasburg (10.55), Gerrit Cole (10.45), Kerry Wood (10.32), Pedro Martinez (10.04) and Aaron Nola (10.02).The top three in 2022 were Carlos Rodon (11.98), Shohei Ohtani (11.87), and Gerrit Cole (11.53).Among qualifying relievers, Aroldis Chapman (14.88) was the all-time leader in strikeouts per nine innings through 2020, followed by Craig Kimbrel (14.66), Kenley Jansen (13.25), Rob Dibble (12.17), David Robertson (11.93), and Billy Wagner (11.92).
Analysis:
One effect of K/9 is that it may reward or "inflate" the numbers for pitchers with high batting averages on balls in play (BABIP). Two pitchers may have the same K/9 rates despite striking out a different percentage of batters since one pitcher will pitch to more batters to obtain the same cumulative number of strikeouts. For example, a pitcher who strikes out one batter in an inning, but also gives up a walk or a hit, strikes out a lower percentage of batters than a pitcher who strikes out one batter in an inning without allowing a baserunner, but both have the same K/9.
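The arithmetic behind this point is easy to check; the sketch below uses hypothetical numbers to show two pitchers with identical K/9 but different strikeout percentages, because one faces more batters per inning.

```python
def k_per_9(strikeouts, innings_pitched):
    """Strikeouts per nine innings: K/9 = 9 * K / IP."""
    return 9 * strikeouts / innings_pitched

def k_pct(strikeouts, batters_faced):
    """Strikeout percentage: share of batters faced who strike out."""
    return strikeouts / batters_faced

# Pitcher A: 200 K in 200 IP, 3 batters per inning (no baserunners).
# Pitcher B: 200 K in 200 IP, 4 batters per inning (more hits/walks).
print(k_per_9(200, 200), k_pct(200, 600))  # 9.0  0.333...
print(k_per_9(200, 200), k_pct(200, 800))  # 9.0  0.25
```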
**Nutritional rating systems**
Nutritional rating systems:
Nutritional rating systems are used to communicate the nutritional value of food in a more-simplified manner, with a ranking (or rating), than nutrition facts labels. A system may be targeted at a specific audience. Rating systems have been developed by governments, non-profit organizations, private institutions, and companies. Common methods include point systems to rank (or rate) foods based on general nutritional value or ratings for specific food attributes, such as cholesterol content. Graphics and symbols may be used to communicate the nutritional values to the target audience.
Types:
Guest Nutrition Value Guest Nutrition Value (GNV Code) is a nutrient profiling digital IP system which is a comprehensive nutrition standard that takes into account the unique dietary needs and preferences of each guest. It combines various factors such as guest nutrition observances, allergens, nutrition facts, specific ingredients, and religious food types to provide a personalized nutrition value for every guest using their individual characteristics which impact personal health in positive ways. GNV focuses on the needs of the consumer and ensures consumer-specific nutrient profiles can be delivered by food service operations worldwide. The GNV code was developed by Chef Matthew Murphy, a high-profile chef in the United States of America who also has a Master's in Applied Nutrition.
Types:
Recipe Nutrition Profile Recipe Nutrition Profile (RNP Code) is a nutrient profiling digital IP system which is a cutting-edge nutrition standard that brings together a wealth of information about the contents of a recipe, including allergen information, nutrition facts, and food types, to create a comprehensive profile of its nutritional value.
Types:
Using this information, RNP Code generates a spider graph that allows chefs and food enthusiasts to explore the connections between the ingredients, flavours, and nutrients in a dish's composition. This visualization tool provides an intuitive way to understand how the different components of a recipe contribute to its overall nutritional profile, as well as its flavour and texture. RNP focuses on the needs of the professional chef and food service operative and gives them targeted nutrient profiles to improve recipes. This is a global standard that is reflective of the GNV consumer code. The RNP code was developed by Chef Matthew Murphy, a high-profile chef in the United States of America who also has a Master's in Applied Nutrition.
Types:
Food Compass Food Compass is a nutrient profiling system which ranks foods based on their healthfulness using characteristics that impact health in positive or negative ways. It was developed at Tufts University.
Types:
Glycemic index Glycemic index is a ranking of how quickly food is metabolized into glucose when digested. It compares available carbohydrates gram-for-gram in foods to provide a numerical, evidence-based index of postprandial (post-meal) blood sugar level. The concept was invented by David J. Jenkins and colleagues in 1981 at the University of Toronto. The glycemic load (GL) of food is a number which estimates how much a food will raise a person's blood glucose level.
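The glycemic load mentioned above has a simple standard formula; a minimal sketch follows, with illustrative numbers not taken from any specific food table.

```python
def glycemic_load(glycemic_index, available_carbs_g):
    """Glycemic load of a serving:
    GL = GI * available carbohydrate (grams) / 100."""
    return glycemic_index * available_carbs_g / 100

# Illustrative serving with GI 72 and 15 g available carbohydrate:
print(glycemic_load(72, 15))  # 10.8
```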
Types:
Guiding Stars Guiding Stars is a patented food-rating system which rates food based on nutrient density with a scientific algorithm. Foods are credited with vitamins, minerals, dietary fiber, whole grains and Omega-3 fatty acids, and discredited for saturated fat, trans fats, and added sodium (salt) and sugar.
Types:
Rated foods are tagged with one, two or three stars, with three stars the best ranking. The program began at Hannaford Supermarkets in 2006, and is found in over 1,900 supermarkets in Canada and the US. Guiding Stars has expanded into public schools, colleges and hospitals. The evidence-based, proprietary algorithm is based on the dietary guidelines and recommendations of regulatory and health organizations, including the US Food and Drug Administration and Department of Agriculture and the World Health Organization. The algorithm was developed by a scientific advisory panel composed of experts in nutrition and health from Dartmouth College, Harvard University, Tufts University, the University of North Carolina, and other colleges.
Types:
Health Star Rating System The Health Star Rating System (HSR) is an Australian and New Zealand Government initiative that assigns health ratings to packaged foods and beverages. Ratings scale in half-star increments from half a star up to five stars; the higher the rating, the healthier the product. A calculator uses nutritional information such as total sugar, sodium, energy and other variants to obtain a rating for the product. Points are added for "healthy" nutrients such as fibres, proteins and vegetable matter, whilst points are deducted for "unhealthy" nutrients that have been scientifically linked to chronic disease, such as fats and sugars.
Types:
Nutri-Score Nutri-Score is a nutrition label guide recommended by the European Commission and World Health Organization. It is a 5-color nutrition label selected by the French government in March 2017 for display on food products to facilitate consumer understanding of nutrient composition. It relies on the computation of a nutrient profiling system derived from the United Kingdom Food Standards Agency score. A Nutri-Score for a particular food item is given one of five color-coded letters, with 'A' (enlarged letter, dark green) as a score indicating excellent nutrient composition, and 'E' (dark orange) as a low-rated, nutrient-poor score. The calculation of the score involves seven different parameters of nutrient content per 100 g of food typically displayed on food packages. High content of fruits and vegetables, dietary fiber, and protein promote a higher score, while high content of calories, sugar, saturated fat, and sodium promote a detrimental score.
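As a rough sketch of the final step of the computation, the commonly published score-to-letter bands for general solid foods can be expressed as below; the bands for beverages and added fats differ, the full point tables for the seven parameters are not reproduced here, and the thresholds should be treated as illustrative rather than authoritative.

```python
def nutri_score_letter(score):
    """Map a computed nutrient-profile score (unfavourable points minus
    favourable points) to a Nutri-Score letter for general solid foods.
    Bands are the commonly published ones; treat them as illustrative."""
    if score <= -1:
        return "A"   # dark green: excellent nutrient composition
    if score <= 2:
        return "B"
    if score <= 10:
        return "C"
    if score <= 18:
        return "D"
    return "E"       # dark orange: nutrient-poor

print(nutri_score_letter(-3), nutri_score_letter(12))  # A D
```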
Types:
NutrInform NutrInform is an Italian alternative to Nutri-Score, backed by the country's Ministry of Agricultural, Food and Forestry Policies.
Types:
Nutripoints Nutripoints is a food-rating system which places foods on a numerical scale based on their overall nutritional value. The method is based on an analysis of 26 positive factors (such as vitamins, minerals, protein and fiber) and negative factors (such as cholesterol, saturated fat, sugar and sodium) relative to calories. The Nutripoint score of the food is the end result. The higher the value, the more nutrition per calorie (nutrient-dense) and the fewer negative factors in the food. Nutripoints was developed by Doctor of Public Health Roy E. Vartabedian during the 1980s and was released in 1990 with his book, Nutripoints, which was published in thirteen countries in ten languages. The food-rating system is part of a program to help people measure, balance, and upgrade their diet for improvement in well-being. The system rates over 3,600 foods, from apples and oranges to fast foods and brand-name products.
Types:
Nutrition iQ The Nutrition iQ program is a joint venture of the Joslin Clinic and the supermarket operator SuperValu. The labeling system consists of color-coded tags denoting a food product's status. This is based on attributes such as vitamin and mineral content, fiber content, 100%-juice content, Omega-3 or low saturated-fat content, whole-grain content, calcium content, protein content, low- or reduced-sodium content and low- or reduced-calorie content. The first phase of the program began in 2009, covering center-store food products; coverage of fresh-food departments followed in 2011.
Types:
Points Food System Weight Watchers developed the Points Food System for use with its Flex Plan. The system's primary objective is to maintain a healthy weight and to track weight loss or gain over time. It is designed to allow users to eat any food, tracking the number of points for each food consumed.
Members try to keep to their points target for a given time within a given range, which is personalized based on the member's height, weight and other factors (such as gender). A weekly points allowance is established to provide for special occasions and occasional overindulgences.
Types:
Naturally Nutrient Rich Developed by Adam Drewnowski of the University of Washington, the Naturally Nutrient Rich system is based on mean-percentage daily values (DVs) for 14 nutrients per 2,000 calories of food. It proposes to assign nutrient-density values to foods within and across food groups. The score allows consumers to identify and select nutrient-dense foods, permitting flexibility in discretionary calories consumed.
Types:
ReViVer Score Developed by ReViVer, a nutritionally-oriented restaurant in New York City, the ReViVer Score expresses nutrient density of menu items relative to calories from a variety of fast-food and casual restaurants based on ten nutrients: vitamins A, C, and E, folate, calcium, magnesium, potassium, iron, fiber, and omega-3 fats. A score of 100 indicates that a meal provides at least 100% of the recommended daily intake for all ten nutrients, proportional to its energy (calorie) content.
Past systems:
NuVal The overall nutritional quality index was a nutritional-rating system developed at the Yale-Griffin Prevention Research Center. It assigned foods a score between 1 and 100 which reflected overall nutrition relative to calories consumed. Marketed as NuVal, it was widely adopted in United States grocery stores before it was discontinued in 2017 amid accusations of conflicts of interest and for its refusal to publish the scoring algorithm. Scoring inconsistencies occurred, in which processed foods scored higher than canned fruits and vegetables.
Past systems:
Smart Choices Program Launched late in 2009, the Smart Choices Program (SCP) was a rating system developed by a coalition of companies from the food industry. The criteria for rating food products used 18 different attributes. The system had varying levels of acceptability based on 16 types of food, which allowed for wide discretion in the selection of foods to include in the program. The program was discontinued in October 2009 after sharp criticism for including products such as Froot Loops, Lucky Charms, and Frosted Flakes as Smart Choices. On August 19, 2009, the FDA wrote a letter to the SCP manager saying: "FDA and FSIS would be concerned if any FOP labeling systems used criteria that were not stringent enough to protect consumers against misleading claims, were inconsistent with the Dietary Guidelines for Americans, or had the effect of encouraging consumers to choose highly processed foods and refined grains instead of fruits, vegetables, and whole grains." SCP was suspended in 2009 after the FDA's announcement that it would be addressing both front-of-package and on-shelf systems. SCP Chair Mike Hughes said: "It is more appropriate to postpone active operations and channel our information and learning to the agency to support their initiative."
**Gene Abel**
Gene Abel:
Gene Gordon Abel is an American psychiatrist and controversial clinician. He is a couple's counselor and also works with men and boys suspected of sexual deviancy. He is the creator of the Abel Assessment for Sexual Interest (AASI), a sex offender assessment tool that has been considered unreliable by independent studies and inadmissible in court in various jurisdictions. He also designed a screening test called the Diana Screen, to be used, e.g., to screen job applicants for deviant sexual tendencies – a test which has been similarly criticized as having dubious scientific value.
Career:
Abel was previously a professor of medicine at the Columbia University School of Medicine, and currently teaches at the Morehouse School of Medicine and the Emory University School of Medicine. He is the medical director of the Behavioral Medicine Institute of Atlanta.
Awards and recognitions:
Abel is a fellow of the American Psychiatric Association and the Academy of Behavioral Medicine Research and a past president of the Society of Behavioral Medicine (1981–82). He was the recipient of a 1990 Masters and Johnson Award of the Society for Sex Therapy and Research, a 1991 Significant Achievement Award of the Association for the Treatment of Sexual Abusers, and a 2013 Distinguished Alumni Award of the University of Iowa Carver College of Medicine.
Lack of known scientific basis for assessment test methodology:
Abel has been criticized for having no clear, well-accepted scientific basis for his assessment process. Abel wrote a report called "The Abel and Harlow Child Molestation Prevention Study" in a chapter of a 2001 self-published book he wrote called The Stop Child Molestation Book (coauthored with Nora Harlow, using the self-publication service Xlibris), and the Abel Assessment test is based on findings of a study associated with that report. However, the report was never subject to peer review and was not published in any professional journal, and it provides no detailed description of his testing methodology for scientific study and independent verification. Abel is also said to have exaggerated various statistics in order to support his conclusions and methodology. For example, in the early 1990s he announced that he had figures suggesting that sex offenders commonly have multiple paraphilias. However, a 1991 journal publication refuted this claim in a report criticizing Abel's methods for double counting, and thus skewing the study's statistical weight. Mental health professionals have used the AASI to civilly commit sex offenders, even though the Assessment is not admissible in many courts in the United States. In 2002, the Assessment was found to be inadmissible in court cases in the Commonwealth of Massachusetts, a ruling that was upheld by the Massachusetts Court of Appeals in 2005. In a 2002 decision on the admissibility of the test by Texas appellate judge Brian Quinn, the court said that since Abel's proprietary scoring methodology is not publicly known, it "could be mathematically based, founded upon indisputable empirical research, or simply the magic of young Harry Potter's mixing potions at the Hogwarts School of Witchcraft and Wizardry". The 9th Circuit Court of Appeals also ruled in 2004 that the Assessment is a tool that should be used only as treatment, and that it cannot detect whether a person has sexually abused children. Independent studies of the Assessment have concluded it to be unreliable in adults and that there is not yet enough information to support its use with adolescents, whereas Abel states in his book that a therapist can not only use the Assessment for assessing adults, but also as a tool to determine whether a child is attracted to other children. The Diana Screen has also been a source of controversy for Abel due to it being a pass/fail assessment. The assessment purports to determine if someone has molested a child. Abel has promoted the use of the Diana Screen as a business opportunity for individuals and agencies.
**Karmic astrology**
Karmic astrology:
Karmic astrology is practiced by some astrologers who believe in reincarnation and hold the concept that they can read a person's karma in a natal chart, in particular by studying the lunar nodes and retrograde planets. Other astrologers, such as Dane Rudhyar's protégé Alexander Ruperti, have lectured that everything in the natal chart is karmic.
Description:
Both benevolent acts and selfishly motivated acts eventually come back to us as good and bad karma respectively. This could be in the same lifetime, or centuries down the line. Karmic astrology is the science of discerning as accurately as possible, through the positions of planets in your birth/divisional chart, the reasons why you are the way you are and why you behave as you do. It also provides you guidance to resolve past life situations so as to wipe your slate clean and make room for more positive karma in your current lifetime.
Description:
The karmic planetary positions are divided into five parts. Sun: the planet Sun tells about your life's purpose since it is a soul planet, so it will indicate your weaknesses, fears, rational thinking, and so on: your karmic life purpose, weaknesses and fears. Moon: the planet Moon symbolizes the memory of your karmic past; a detailed analysis of the Moon will clear all your unresolved past issues.
Description:
Memories of your karmic past: unresolved past-life issues that have been re-simulated.
Saturn: in a general context the planet Saturn is famous for creating troubles and issues, but the karmic interpretation says that Saturn judges your karma and provides fruits accordingly. At times it will stand as a stumbling block as well.
Description:
Your karmic stumbling blocks. Rahu: the planet Rahu is the root cause of karma; without Rahu one cannot perform karma internally or externally. Your karmic roots. Ketu: the planet Ketu, called a spiritual planet, guides us along our karmic path and makes us follow the positive path that leads towards a beautiful and comfortable life.
Description:
Your karmic path. Apart from all this, Vedic Folks will analyze your birth chart with regard to your karma (action) and karmic factors; the 5th and 9th houses and their lords are also analyzed to obtain exact and transparent results.
**Posterior humeral circumflex artery**
Posterior humeral circumflex artery:
The posterior humeral circumflex artery (posterior circumflex artery, or posterior circumflex humeral artery) arises from the third part of the axillary artery at the distal border of the subscapularis.
Anatomy:
Course and relations It passes posteriorly with the axillary nerve through the quadrangular space. It winds laterally around the surgical neck of the humerus.
Distribution It is distributed to the shoulder joint, teres major, teres minor, deltoid, and (long and lateral heads of) triceps brachii.
Anastomoses It forms anastomoses with the anterior humeral circumflex artery, the (deltoid branch of the) profunda brachii artery, the (acromial branches of the) suprascapular artery, and the (acromial branches of the) thoracoacromial artery.
**Burow's solution**
Burow's solution:
Burow's solution is an aqueous solution of aluminium triacetate. It is available in the U.S. as an over-the-counter drug for topical administration, with brand names including Domeboro (Moberg Pharma), Domeboro Otic (ear drops), Star-Otic, and Borofair.
The preparation has astringent and antibacterial properties and may be used to treat a number of skin conditions, including insect bites and stings, rashes caused by poison ivy and poison sumac, swelling, allergies, and bruises. However, its main use is for treatment of otitis (ear infection), including otomycosis (fungal ear infection).
History:
The creator of Burow's solution was Karl August Burow (1809-1874), a military surgeon and anatomist. Burow was also the inventor of some plastic surgery and wound healing techniques which are still in wide use today.
Use:
Otitis Burow's solution may be used to treat various forms of ear infections, known as otitis. As a drug it is inexpensive and non-ototoxic. In cases of otomycosis it is less effective than clotrimazole but remains an effective treatment.
Use:
Skin irritation Most versions of Burow's solution can be used as a soak or compress. As an FDA-approved astringent it is used for the relief of skin irritations due to poison ivy, oak and sumac, and rashes from allergic reactions to soaps, detergents, cosmetics and jewelry. This is due to the combination of two active ingredients found in this version of Burow's solution, i.e. aluminum sulfate tetradecahydrate and calcium acetate monohydrate. The solution is used by some to reduce inflammation and potential infection from conditions such as ingrown nails, in a warm water soak.
**Overmatch**
Overmatch:
Overmatch is a concept in modern military thinking which prizes having overwhelming advantages over an adversary to a more significant margin than in traditional warfare. It is related to military superiority. Overmatch uses a military force's "capabilities or unique tactics" to compel the opposing forces to stop using their own equipment or tactics, as doing so would lead to their own defeat or destruction. By fielding the right mix of capabilities, the commander can present multiple dilemmas to the enemy, thus compelling the enemy to withdraw.
Definition:
According to the US Army, the definition of overmatch is "the concept where my (insert lethality system here) can willfully and without prejudice or luck defeat your (insert your protective system here)." According to Raytheon, overmatch is a verb which means "to defeat threats at every level – strategic, tactical and technological." According to Ben Barry, "overmatch is a very polite, clinical way of saying could be defeated."
Example — AI versus human pilots: AI agents are not subject to the physiological constraints of a human pilot, such as the danger of flying at low altitude, or the g-forces of aircraft accelerations. Human pilots noted that the AI agents flew with fine motor control. In the 2020 AlphaDogfight Trials, AI agents battled for the chance to dogfight an expert human pilot; the winning AI agent consistently defeated the expert human pilot. The technology is expected to be installed in actual aircraft by 2024. Note: DoD's Joint AI Center (JAIC) convened 100 online participants from 13 countries in 2020, termed the 'AI Partnership for Defense', to discuss how to use AI in a way that is consonant with their national ethical principles. One possible application is to elevate the role of human pilots to mission commanders, leaving AIs as wingmen to perform as high-skill operators of low-cost robotic craft.
History:
After the end of major operations in the Global War on Terror, the US Army emphasized overmatch in its modernization effort. In 2017 a task force was formed to modernize the Army. Its recommendation was to form the Army Futures Command, to engage in systematic development of capabilities to overmatch its adversaries.
In 2021 the 40th Chief of Staff of the Army stated: "Overmatch will belong to the side that can make decisions faster. To meet emerging challenges, the Army is transforming to provide the joint force with speed, range, and convergence of cutting edge technologies that will generate the decision dominance and overmatch required to win the next fight."
Analysis:
Overmatch has been criticized as unsustainable in the long term and requiring immense investments in the military and cutting-edge technologies.
**Chlorobactane**
Chlorobactane:
Chlorobactane is the diagenetic product of an aromatic carotenoid produced uniquely by green-pigmented green sulfur bacteria (GSB) in the order Chlorobiales. Observed in organic matter as far back as the Paleoproterozoic, its identity as a diagnostic biomarker has been used to interpret ancient environments.
Background:
Chlorobactene is a monocyclic accessory pigment used by green sulfur bacteria to capture light energy at visible wavelengths. Green sulfur bacteria (GSB) live in anaerobic and sulfidic (euxinic) zones in the presence of light, so they are found most often in meromictic lakes and ponds, sediments, and certain regions of the Black Sea. The enzyme CrtU converts γ-carotene into chlorobactene by shifting the C17 methyl group from the C1 site to the C2 site.
Preservation:
Following transport and burial, diagenetic processes saturate the hydrocarbon chain of chlorobactene, turning it into the fully saturated structure of chlorobactane.
Preservation:
Isorenieratene is an aromatic light-harvesting molecule interpreted as a biomarker for brown-pigmented GSB in the same order, Chlorobiales, and its fossil form (isorenieratane) is often found co-occurring with chlorobactane in ancient organic material. Purple sulfur bacteria (PSB) also live in euxinic regions. They produce a different accessory pigment, okenone, that is preserved as okenane and often observed co-occurring with chlorobactane.
Measurement techniques:
Gas chromatography coupled to mass spectrometry (GC/MS) Organic molecules are first extracted from rocks using solvents, capitalizing on chemical properties such as polarity to dissolve the molecules. Usually, less than one percent of the organic material from a rock is successfully pulled out in this process, leaving behind undissolved material called kerogen. The organic-rich extract is subsequently purified using silica gel column chromatography; eluting the extract through the column with targeted solvents pulls out contaminants and remnant undissolved organic material, which will bind to the polar silica moieties. When the sample is then run through a gas chromatography (GC) column, the compounds separate based on their boiling points and interaction with a stationary phase within the column. The temperature ramping of a gas chromatography column can be programmed to obtain optimal separation of the compounds. After the GC, the molecules are ionized and fragmented into smaller, charged molecules. A mass spectrometer then separates the individual compounds based on their mass-to-charge (m/z) ratio and measures their relative abundance, producing a characteristic mass spectrum. Peaks representing the relative abundance of the compounds are identified as molecules based on their relative retention times, matches to a library of mass spectra with known compound identities, and comparison to standards.
Case Study: Ocean Euxinia:
Because green-pigmented green sulfur bacteria require higher light intensities than their brown-pigmented counterparts, the presence of chlorobactane in the rock record has been used as key evidence in interpretations for a very shallow euxinic layer in the ocean. The euxinic zone may have changed depth in the ocean at various points in Earth's history, such as with the advent of an oxygenated atmosphere around 2.45 billion years ago and the shallowing of the oxic zone within the last six kyr.
**Centipede mathematics**
Centipede mathematics:
Centipede mathematics is a term used, sometimes derogatorily, to describe the generalisation and study of mathematical objects satisfying progressively fewer and fewer restrictions. This type of study is likened to studying how a centipede behaves when its legs are removed one by one. The term is attributed to Polish mathematician Antoni Zygmund. Zygmund is said to have described the metaphor of the centipede thus: "You take a centipede and pull off ninety-nine of its legs and see what it can do." Thus, Zygmund has been known by many mathematicians as the "Centipede Surgeon".
Centipede mathematics:
The study of semigroups is cited as an example of doing centipede mathematics. One starts with the notion of an abelian group. First delete the commutativity restriction to obtain the concept of a group. The restriction of existence of inverses is then removed. This produces a monoid. If one now removes the restriction regarding the existence of an identity, the resulting object turns out to be a semigroup. Still more legs can be removed. If the associativity restriction is also discarded one gets a magma or a groupoid. The restrictions that define an abelian group may be removed in different orders also. The study of ternary rings has been cited as an example of centipede mathematics. The progressive removal of axioms of Euclidean geometry and studying the resulting geometrical objects also illustrate the methodology of centipede mathematics. The following quote summarises the value and usefulness of the concept: "The term 'centipede mathematics' is new to me, but its practice is surely of great antiquity. The binomial theorem (tear off the leg that says that the exponent has to be a natural number) is a good example. A related notion is the importance of good notation and the importance of overloading, aka abuse of language, to establish useful analogies." — Gavin Wraith.
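The semigroup example can be sketched as a hierarchy of interfaces, each level dropping one axiom; the Python below is only an illustration (class names are ours, and the algebraic laws are recorded as comments, since they are properties to be verified, not code).

```python
class Magma:
    """A set with one binary operation; no axioms assumed."""
    def op(self, a, b):
        raise NotImplementedError

class Semigroup(Magma):
    """Magma + associativity: op(op(a, b), c) == op(a, op(b, c))."""

class Monoid(Semigroup):
    """Semigroup + identity element e with op(e, a) == op(a, e) == a."""
    identity = None

class Group(Monoid):
    """Monoid + inverses: op(a, inv(a)) == identity."""
    def inv(self, a):
        raise NotImplementedError

class AbelianGroup(Group):
    """Group + commutativity: op(a, b) == op(b, a)."""

class StringConcat(Monoid):
    """Strings under concatenation: a monoid but not a group
    (no inverses), i.e. one 'leg' removed from the group axioms."""
    identity = ""
    def op(self, a, b):
        return a + b

m = StringConcat()
print(m.op("centi", "pede"), repr(m.identity))  # centipede ''
```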
**OpenFrame**
OpenFrame:
OpenFrame is a mainframe rehosting solution developed by TmaxSoft that aims to help customers move existing mainframe assets to the cloud quickly and with minimal risk. It replaces legacy CICS/IMS/JES mainframe engines and shifts business applications written in legacy code like COBOL and PL/I to Linux. This allows reduced licensing costs compared to the mainframe.
It also includes a test tool which helps users determine if the migration will preserve functionality without additional adjustments. The current version of OpenFrame is 7.0, which was first released in Japan in September 2015. The previous version, OpenFrame 6.0, was released in the U.S. market in 2009.
Mainframe Migration:
Organizations that run on mainframes tend to have difficulty with costs and agility. Rehosting is one approach an organization may take to migrate their mainframe operations to the cloud, with other options including batch-job migration and full re-engineering. With the rehosting option, the entire mainframe is emulated on the cloud so that the end-user experience is essentially unchanged.
Compatibility:
OpenFrame advertises that the following components can be migrated and continue working without modification, provided they run on open-systems components such as Linux:
Compilers: COBOL, PL/I, Assembler
Datasets: Flat files, GDGs, VSAM
Databases: IMS, DB2, IDMS, Oracle
Online Systems: CICS
Batch Systems: JES, JCL
Notable Users:
Kela Kela, the Finnish government agency in charge of the nation's social security programs, used OpenFrame to rehost its mainframe. The agency had estimated that the rising costs of maintaining a mainframe would become prohibitive in the near future, and saw a shortage of IT professionals skilled in working in a mainframe environment. As a result, Kela was able to lift over 10 million lines of code to the rehosted environment and reduce the cost of system maintenance. Since the rehosted iteration was functionally similar to the mainframe system, Kela was also able to keep its existing IT staff in place.
Notable Users:
GE Capital GE Capital opted to use OpenFrame to modernize its aging IT infrastructure, which was mostly made up of mainframes. Before rehosting, the GE Capital system was managing 5 million account schedules, over 382 interfaces, with up to 1,700 concurrent users, resulting in an average of 3.5 million transactions per day. In addition to high costs, the disaster recovery process was slow and the system was generally inefficient. OpenFrame allowed GE Capital to rehost without redeveloping any applications or changing the user interface. The results included 66% reduction in costs associated with running the system and a 240% increase in disaster recovery speed.
**Microelectronics International**
Microelectronics International:
Microelectronics International is a peer-reviewed scientific journal published quarterly by Emerald Group Publishing. The editor is John Atkinson. It covers research on miniaturized electronic devices, microcircuit engineering, semiconductor technology, and systems engineering. Publishing formats include original technical papers, research papers, case studies, reviews, and book reviews. The journal was established in 1982 as Hybrid Circuits (ISSN 0265-3028).
Abstracting and indexing:
This journal is abstracted and indexed in the following databases:
**Mutation frequency**
Mutation frequency:
Mutation frequency and mutation rate are highly correlated with each other. Mutation frequency tests are cost-effective in laboratories; together, these two concepts provide vital information for accounting for the emergence of mutations in any given germ line. Several tests are used to measure mutation frequency and rate in a particular gene pool, including the Avida Digital Evolution Platform and fluctuation analysis. Mutation frequency and rate provide vital information about how often a mutation may be expressed in a particular genetic group or sex. Yoon et al. (2009) suggested that as sperm donors' ages increased, sperm mutation frequencies increased. This reveals a positive correlation consistent with males being more likely to contribute genetic disorders that are X-linked recessive.
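One standard estimator used in fluctuation analysis, the Luria-Delbrück p0 method, converts the fraction of parallel cultures showing no mutants into a mutation rate; the sketch below assumes all cultures grow to the same final population size, and the numbers in the example are hypothetical.

```python
import math

def mutation_rate_p0(cultures_without_mutants, total_cultures, final_pop_size):
    """Luria-Delbruck p0 method from fluctuation analysis:
    p0   = fraction of cultures with no mutants,
    m    = -ln(p0) = expected number of mutations per culture,
    rate = m / N   = mutations per cell per division (approximate)."""
    p0 = cultures_without_mutants / total_cultures
    m = -math.log(p0)
    return m / final_pop_size

# e.g. 11 of 20 parallel cultures show no mutants, each grown to 1e8 cells:
print(mutation_rate_p0(11, 20, 1e8))  # ~6e-9 per cell per division
```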
Mutation frequency:
There are additional factors affecting mutation frequency and rate, including evolutionary influences. Since organisms may pass mutations to their offspring, incorporating and analyzing the mutation frequency and rate of a particular species may provide a means to better understand its longevity.
Aging:
The time course of spontaneous mutation frequency from middle to late adulthood was measured in four different tissues of the mouse. Mutation frequencies in the cerebellum (90% neurons) and male germ cells were lower than in liver and adipose tissue. Furthermore, mutation frequencies increased with age in liver and adipose tissue, whereas in the cerebellum and male germ cells they remained constant. Dietary restricted rodents live longer and are generally healthier than their ad libitum fed counterparts. No changes were observed in the spontaneous chromosomal mutation frequency of dietary restricted mice (aged 6 and 12 months) compared to ad libitum fed control mice. Thus dietary restriction appears to have no appreciable effect on spontaneous mutation in chromosomal DNA, and the increased longevity of dietary restricted mice apparently is not attributable to reduced chromosomal mutation frequency.
**Film badge dosimeter**
Film badge dosimeter:
A film badge dosimeter or film badge is a personal dosimeter used for monitoring cumulative radiation dose due to ionizing radiation.
Film badge dosimeter:
The badge consists of two parts: photographic film and a holder. The film emulsion is black and white photographic film with varying grain size to affect its sensitivity to incident radiation such as gamma rays, X-rays, and beta particles. After use by the wearer, the film is removed, developed, and examined to measure exposure. When the film is irradiated, an image of the protective case is projected onto the film. Lower-energy photons are attenuated preferentially by differing absorber materials, a property used in film dosimetry to identify the energy of the radiation to which the dosimeter was exposed; knowing the energy allows for accurate measurement of the radiation dose. Some film dosimeters have two emulsions, one for low-dose and the other for high-dose measurements; the two emulsions can be on separate film substrates or on either side of a single substrate. The device was developed by Ernest O. Wollan while working on the Manhattan Project, though photographic film had been used as a crude measure of exposure before this. Though film dosimeters are still in use worldwide, there has been a trend towards other dosimeter materials that are less energy dependent and can more accurately assess radiation dose from a variety of radiation fields.
Description:
The silver halide emulsion is sensitive to radiation: once developed, exposed areas increase in optical density (i.e., blacken) in response to incident radiation. One badge may contain several films of different sensitivities or, more usually, a single film with multiple emulsion coatings. Combining a low-sensitivity and a high-sensitivity emulsion extends the dynamic range to several orders of magnitude. A wide dynamic range is highly desirable, as it allows measurement of very large accidental exposures without degrading sensitivity to more usual low-level exposures.
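As a rough illustration of how two emulsions extend the dynamic range, the sketch below inverts an assumed saturating density-versus-dose curve for each emulsion and switches between them. The model and all constants are hypothetical, not a published film calibration.

```python
import math

D_MAX = 3.0  # assumed maximum net optical density either emulsion can reach

def dose_from_density(net_density: float, k: float) -> float:
    """Invert the assumed response D = D_MAX * (1 - exp(-dose / k))."""
    return -k * math.log(1.0 - net_density / D_MAX)

def badge_dose(d_sensitive: float, d_insensitive: float) -> float:
    """Read the high-sensitivity emulsion (k = 5 mSv) until it nears
    saturation, then fall back to the low-sensitivity one (k = 500 mSv),
    extending the measurable range by roughly two orders of magnitude."""
    if d_sensitive < 0.9 * D_MAX:
        return dose_from_density(d_sensitive, k=5.0)
    return dose_from_density(d_insensitive, k=500.0)

print(round(badge_dose(0.55, 0.01), 2))   # ~1.01 mSv, routine exposure
print(round(badge_dose(2.95, 1.8), 1))    # ~458.1 mSv, accidental exposure
```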
Description:
Film holder
The film holder usually contains a number of filters that attenuate radiation, such that radiation types and energies can be differentiated by their effect when the film is developed.
Description:
To monitor gamma rays or X-rays, the filters are metal, usually lead, aluminum, and copper. To monitor beta particle emission, the filters use various densities of plastic or even label material. It is typical for a single badge to contain a series of filters of different thicknesses and of different materials; the precise choice may be determined by the environment to be monitored. The use of several different materials allows an estimation of the energy/wavelength of the incident radiation.
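The energy estimate works because the fraction of radiation transmitted through a filter of thickness x falls off as exp(-μ(E)·x), and the attenuation coefficient μ depends strongly on photon energy. A hedged sketch follows, with invented copper coefficients standing in for measured calibration data.

```python
import math

# Illustrative attenuation coefficients for copper (cm^-1) versus photon
# energy (keV). These values are placeholders; real badge evaluation uses
# measured calibration curves for each filter.
MU_CU = {30: 90.0, 60: 10.0, 100: 3.0, 300: 0.9}
CU_THICKNESS_CM = 0.05

def transmission(mu: float, x_cm: float) -> float:
    """Beer-Lambert law: fraction of photons passing the filter."""
    return math.exp(-mu * x_cm)

def estimate_energy_kev(measured_ratio: float) -> int:
    """Pick the tabulated energy whose predicted copper-to-open-window
    transmission ratio best matches the measured film-density ratio."""
    return min(
        MU_CU,
        key=lambda e: abs(transmission(MU_CU[e], CU_THICKNESS_CM) - measured_ratio),
    )

# Exposure behind the copper filter was 60% of the open-window exposure:
print(estimate_energy_kev(0.60), "keV")  # -> 60 keV with these placeholders
```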
Description:
Filters are usually placed on both the back and front of the holder, to ensure operation regardless of orientation. Additionally, the filters need to be sufficiently large (typically 5 mm or more) to minimize the effect of radiation incident at oblique angles causing exposure of the film under an adjacent filter.
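The 5 mm figure can be motivated with simple geometry: a ray arriving at angle θ to the normal crosses the small gap g between filter and film and exposes film displaced laterally by g·tan θ, so each filter needs that much margin on every side. A small sketch with assumed numbers:

```python
import math

def required_margin_mm(gap_mm: float, max_angle_deg: float) -> float:
    """Extra filter width needed on each side so oblique rays that pass
    the filter still land on film underneath it, not under a neighbor."""
    return gap_mm * math.tan(math.radians(max_angle_deg))

# Assumed 2 mm filter-to-film gap and rays up to 60 degrees off-normal:
print(round(required_margin_mm(2.0, 60.0), 1), "mm per side")  # ~3.5 mm
```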
Usage:
The badge is typically worn on the outside of clothing, around the chest or torso to represent dose to the "whole body". This location monitors exposure of most vital organs and represents the bulk of body mass. Additional dosimeters can be worn to assess dose to extremities or in radiation fields that vary considerably depending on orientation of the body to the source.
Usage:
The dose measurement quantity, personal dose equivalent Hp(d), is defined by the International Commission on Radiological Protection (ICRP) as the dose equivalent in soft tissue at an appropriate depth, d, below a specified point on the human body, namely the position where the individual's dosimeter is worn. Tissue depths of interest include the depth of the living layer of skin (0.07 mm), of the lens of the eye (3 mm), and of the "deep" or whole-body dose (10 mm).
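In code, the three operational quantities reduce to a depth parameter. The sketch below merely formalizes the naming convention, with made-up monthly dose readings.

```python
# The operational quantities differ only in the assumed tissue depth d (mm).
HP_DEPTHS_MM = {
    "Hp(0.07)": 0.07,  # skin (living layer)
    "Hp(3)": 3.0,      # lens of the eye
    "Hp(10)": 10.0,    # whole-body ("deep") dose
}

# Hypothetical monthly badge results in mSv, keyed by quantity:
readings = {"Hp(0.07)": 0.42, "Hp(3)": 0.30, "Hp(10)": 0.12}
for quantity, dose in readings.items():
    print(f"{quantity} at d = {HP_DEPTHS_MM[quantity]} mm: {dose} mSv")
```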
Usage:
The film badge is still widely used, but it is being replaced by thermoluminescent dosimeters (TLDs), aluminium-oxide-based dosimeters, and electronic personal dosimeters (EPDs).