**Erect image** Erect image: In optics, an erect image is one that appears right-side up. An image is formed when rays from a point on the original object meet again after passing through an optical system. In an erect image, directions are the same as those in the object, in contrast to an inverted image. It is one of the properties of images formed in a plane mirror. Erect image: Some telescopes and other devices such as the camera obscura present an inverted image on the viewing surface. Mirrors and compound prism elements can be used to achieve an erect image instead.
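Whether an optical system yields an erect or an inverted image can be read off the sign of its transverse magnification: positive means erect, negative means inverted. A minimal sketch for a thin lens, assuming the common real-is-positive sign convention (conventions vary between textbooks):

```python
# Thin-lens sketch: the sign of the transverse magnification m = -d_i/d_o
# indicates whether the image is erect (m > 0) or inverted (m < 0).
# Assumes the common "real-is-positive" sign convention.

def image_orientation(focal_length: float, object_distance: float):
    """Return (image distance, magnification, orientation) for a thin lens."""
    # Thin-lens equation: 1/f = 1/d_o + 1/d_i  =>  d_i = 1 / (1/f - 1/d_o)
    d_i = 1.0 / (1.0 / focal_length - 1.0 / object_distance)
    m = -d_i / object_distance
    return d_i, m, ("erect" if m > 0 else "inverted")

# Object beyond the focal point of a converging lens: real, inverted image.
print(image_orientation(0.10, 0.30))   # m = -0.5 -> inverted
# Object inside the focal point (magnifying-glass use): virtual, erect image.
print(image_orientation(0.10, 0.05))   # m = +2.0 -> erect
```

A plane mirror is the trivial case: its magnification is +1, which is why it always forms an erect image.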
**Kids Click** Kids Click: Helen Tam Yuk Ying (traditional Chinese: 譚玉瑛; simplified Chinese: 谭玉瑛; pinyin: Tán Yùyīng, born 27 November 1963) is a host of children's programmes in Hong Kong. She joined Television Broadcasts Limited (TVB) in 1980 and has hosted children's shows since the 1980s, including 430 Space Shuttle (1982–1989), Flash Fax (1989–1999), Kids Click (2000–2004) and After School ICU (2005–2014). She is therefore often called "Sister Tam Yuk Ying" (譚玉瑛姐姐), as if she were an elder sister. Biography: Due to unsatisfactory HKCEE results, Helen Tam was unable to be promoted to form 6. She applied for courses in shipping engineering and construction, but was rejected by both because she was underweight. When TVB's talent training class opened for recruitment, she applied in the hope of learning magic, and was subsequently enrolled. At the completion of the training class, Helen had a chance to act in a TVB serial drama. On 26 April 1982, she began her career in children's shows when she was assigned as a co-host of the newly established children's show 430 Space Shuttle. Over the course of 30 years, the children's shows were reorganized several times by TVB, but Helen remained a core member and is the only person to have hosted all of them. She was best known for her role as an English-teaching witch in Flash Fax. Some of her partners, such as Stephen Chow, Tony Leung and Athena Chu, went on to become successful film actors. In 2012, TVB celebrated her 30 years of contributions to its children's shows. However, due to human resource decisions, she was forced to relinquish her role as a children's TV host in 2014 after her contract was not renewed, ending her 32-year tenure. She then made her debut as an entertainment news host on 17 April 2014. In spite of her famous role, Tam is not fond of having children of her own. As a prominent figure among teenagers, she regularly appears in TV commercials and promotional films targeted at students. She played the role of a school principal in the "DSE Myth" series produced by the Hong Kong Examinations and Assessment Authority, explaining the instructions and procedures of the Hong Kong Diploma of Secondary Education Examination to ease candidates' concerns.
**Web Services for Remote Portlets** Web Services for Remote Portlets: Web Services for Remote Portlets (WSRP) is an OASIS-approved network protocol standard designed for communications with remote portlets. Overview: The WSRP specification defines a web service interface for interacting with presentation-oriented web services. Initial work was produced through the joint efforts of the Web Services for Interactive Applications (WSIA) and Web Services for Remote Portlets (WSRP) OASIS Technical Committees. With the approval of WSRP v1 as an OASIS standard in September 2003, these two technical committees merged and continued the work as the Web Services for Remote Portlets (WSRP) OASIS Technical Committee. Overview: Scenarios that motivate WSRP functionality include: content hosts, such as portal servers, providing portlets as presentation-oriented web services that can be used by aggregation engines; content aggregators, such as portal servers, consuming presentation-oriented web services provided by portal or non-portal content providers and integrating them into a portal framework. Implementation: The WSRP specification does not make any statements as to implementation. Java's portlet specification, JSR 168, and WSRP are not competing technologies. JSR 168 may be used to define a portlet, and WSRP may be used to define a portlet's operations to remote containers. JSR 168 portlets and WSRP may be used together to define a portlet and to provide remote operations. Similarly, .NET portlets may be created for use with WSRP. Interoperability between JSR 168 and .NET WSRP implementations has been demonstrated. Implementation: There are several WSRP implementations to assist developers: The Oracle WebCenter provides a standards-based implementation of WSRP 1.0 and 2.0 producers and consumers. The IBM WebSphere Portal provides an implementation of WSRP 1.0 and 2.0 producers and consumers. Up to version 7.0, the Liferay Portal / DXP provides an implementation of WSRP 1.0 and 2.0 producers and consumers, available in both its commercial Enterprise Edition and open-source Community Edition. Microsoft provides a WSRP producer and consumer WebPart for SharePoint 2007, but only a WSRP consumer WebPart for SharePoint 2010 and SharePoint 2013. The OpenPortal WSRP project's goal is to create a high-quality, enterprise-class WSRP v1 and v2 producer and consumer with an associated developer community. The GateIn Portal project (JBoss & eXo Platform) provides an implementation of both WSRP v1 and v2 (as of GateIn 3.1.0), producer and consumer, using GateIn and the GateIn Portlet Container. Implementation: Apache WSRP4J was an Apache Incubator subproject spearheaded by IBM with the stated goal of "kick starting the broad adoption" of WSRP. WSRP4J was designed to assist in the development and deployment of WSRP v1 services. WSRP4J remained in incubator status, primarily due to patent concerns revolving around the WSRP specification, and consequently the project did not produce formal releases. The project was terminated in 2010. The first release of the standard, WSRP v1, provided a limited interoperability platform. Further versions of WSRP v1 were abandoned so that effort could be concentrated on WSRP v2. WSRP v2 augments the initial standard with cross-portlet coordination and access management features. This major update to the standard permits a more useful integration of multiple content sources, regardless of whether they are local or remote, into a new web application.
In addition, WSRP v2 supports Web 2.0 technologies, such as AJAX and REST, without requiring them. WSRP v2 was approved by OASIS on April 1, 2008.
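For illustration, a consumer fetching a portlet's markup from a producer calls the spec's getMarkup operation over SOAP. The sketch below uses the Python zeep library against a hypothetical producer endpoint; the WSDL URL and portlet handle are placeholders, and the parameter structures are heavily simplified relative to the types a real producer's WSDL would define:

```python
# Hypothetical sketch of a WSRP consumer requesting portlet markup.
# The endpoint and portlet handle are placeholders; real calls must match
# the complex types declared in the producer's WSDL.
from zeep import Client

client = Client("https://producer.example.com/wsrp/markup?wsdl")  # placeholder URL

response = client.service.getMarkup(
    registrationContext=None,                     # assuming no registration required
    portletContext={"portletHandle": "weather"},  # placeholder portlet handle
    runtimeContext={"userAuthentication": "wsrp:none"},
    userContext=None,
    markupParams={
        "secureClientCommunication": False,
        "locales": ["en"],
        "mimeTypes": ["text/html"],
        "mode": "wsrp:view",
        "windowState": "wsrp:normal",
    },
)

# The response carries an HTML fragment that the consumer (e.g., a portal
# page) aggregates into its own layout alongside local portlets.
print(response.markupContext.markupString)
```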
**Explosive belt** Explosive belt: An explosive belt (also called a suicide belt or a suicide vest) is an improvised explosive device: a belt or a vest packed with explosives and armed with a detonator, worn by suicide bombers. Explosive belts are usually packed with ball bearings, nails, screws, bolts, and other objects that serve as shrapnel to maximize the number of casualties in the explosion. History: The Chinese used explosive vests during the Second Sino-Japanese War. A Chinese soldier detonated a grenade vest and killed 20 Japanese soldiers at Sihang Warehouse. Chinese troops strapped explosives like grenade packs or dynamite to their bodies and threw themselves under Japanese tanks to blow them up. This tactic was used during the Battle of Shanghai, where a Chinese suicide bomber stopped a Japanese tank column by detonating himself beneath the lead tank, and at the Battle of Taierzhuang, where Chinese troops rushed at Japanese tanks and blew themselves up with dynamite and grenades. During one incident at Taierzhuang, Chinese suicide bombers destroyed four Japanese tanks with grenade bundles. The use of suicidal attacks to inflict damage upon an enemy predates the Second World War, in which Kamikaze units (suicidal air attacks) and Kaiten ("living torpedoes") were used to attack Allied forces. Japanese soldiers routinely detonated themselves by attacking Allied tanks while carrying antitank mines, magnetic demolition charges, hand grenades and other explosive devices. Description: The explosive belt usually consists of several cylinders filled with explosive (de facto pipe bombs), or in more sophisticated versions with plates of explosive. The explosive is surrounded by a fragmentation jacket that produces the shrapnel responsible for most of the bomb's lethality, effectively making the jacket a crude, body-worn Claymore mine. Once the vest is detonated, the explosion resembles an omnidirectional shotgun blast. The most dangerous and most widely used shrapnel are steel balls 3–7 mm (1⁄8–9⁄32 in) in diameter. Other shrapnel material can be anything of suitable size and hardness, most often nails, screws, nuts, and thick wire. Shrapnel is responsible for about 90% of all casualties caused by this kind of device. Description: A "loaded" vest may weigh between 5 and 20 kilograms (11 and 44 lb) and may be hidden under thick clothes, usually jackets or snow coats. A suicide vest may cover the entire stomach and usually has shoulder straps. Description: A common security procedure against suspected suicide bombers is to move the suspect at least 15 metres (50 ft) away from other people, then ask them to remove their upper clothing. While this procedure is relatively uncontroversial for use on males, it may cause an issue when dealing with females suspected of being suicide bombers. Male security personnel may be reluctant to inspect or strip-search females, and can be accused of sexual harassment after having done so. Alternatively, an infrared detector can be used. There are assertions that using a millimeter wave scanner would be viable for the task, but the concept has been disputed. Description: The discovery of remains, as well as belts or vests that happened not to explode, can offer forensic clues to the investigation after the attack. Forensic investigation: Suicide bombers who wear the vests are often obliterated by the explosion; the best evidence of their identity is the head, which often remains relatively intact because it is separated and thrown clear of the body by the explosion.
Journalist Joby Warrick conjectured: "The vest's tight constraints and the positioning of the explosive pouches would channel the energy of the blast outward, toward whoever stood directly in front of him. Some of that energy wave would inevitably roll upward, ripping the bomber's body apart at its weakest point, between the neck bones and lower jaw. It accounts for the curious phenomenon in which suicide bombers' heads are severed clean at the moment of detonation and are later found in a state of perfect preservation several metres away from the torso's shredded remains."
**Facial colliculus** Facial colliculus: The facial colliculus is an elevated area located in the pontine tegmentum (dorsal pons), within the floor of the fourth ventricle (i.e. the rhomboid fossa). It is formed by fibres from the facial motor nucleus looping over the abducens nucleus. The facial colliculus is an essential landmark of the rhomboid fossa. Anatomy: The facial colliculus occurs within the rhomboid fossa (i.e. the floor of the fourth ventricle), where it is placed lateral to the (midline) median sulcus. Structure: The facial colliculus is formed by branchial motor nerve fibres of the facial nerve (CN VII) looping over the (ipsilateral) abducens nucleus, forming a bump upon the surface. Clinical significance: A facial colliculus lesion would result in ipsilateral facial paralysis (i.e. Bell's palsy), inhibited ipsilateral eye deviation, and unopposed contralateral eye deviation.
**Effeminacy** Effeminacy: Effeminacy is the embodiment of traits and/or expressions in those who are not of the female sex (e.g. boys and men) that are often associated with what is generally perceived to be feminine behaviours, mannerisms, styles, or gender roles, rather than with traditionally masculine behaviours, mannerisms, styles, or roles. Effeminacy and other gender expressions are independent of a person's sexuality or sexual identity and are displayed by people of all sexualities and none. Effeminacy is seen in some societies as something embodied by some in the homosexual male community. The embodiment of effeminacy by people in some societies has resulted in prejudice, discrimination, antagonism and insults towards those who display it. History: Terminology Effeminate comes from Latin effeminātus, from the factitive prefix ex- (from ex 'out') and femina 'woman'; it means 'made feminine, emasculated, weakened'. Another Latin term is mollities, meaning 'softness'. History: In ancient Koine Greek, the word for effeminate is κίναιδος kinaidos (cinaedus in its Latinized form), or μαλακός malakos: a man "whose most salient feature was a supposedly 'feminine' love of being sexually penetrated by other men". "A cinaedus is a man who cross-dresses or flirts like a girl. Indeed, the word's etymology suggests an indirect sexual act emulating a promiscuous woman. This term has been borrowed from the Greek kinaidos (which may itself have come from a language of the Ionian Greeks of Asia Minor, primarily signifying a purely effeminate dancer who entertained his audiences with a tympanum or tambourine in his hand, and adopted a lascivious style, often suggestively wiggling his buttocks in such a way as to suggest anal intercourse).... The primary meaning of cinaedus never died out; the term never became a dead metaphor." Other vernacular words for effeminacy include: pansy, nelly, pretty boy, nancy boy, girly boy, molly, sissy, pussy, tomgirl, femboy, roseboy, baby, and girl (when applied to a boy or, especially, adult man). The word effete similarly implies effeminacy or over-refinement, but comes from the Latin term effetus, meaning 'having given birth; exhausted', from ex- and fetus 'offspring'. The term tomgirl, meaning a girlish boy, comes from an inversion of tomboy, meaning a boyish girl. The term girly boy comes from a gender-inversion of girly girl. History: Ancient Greece and Rome Greece Greek historian Plutarch recounts that Periander, the tyrant of Ambracia, asked his "boy", "Aren't you pregnant yet?" in the presence of other people, causing the boy to kill him in revenge for being treated as if effeminate or a woman (Amatorius 768F). History: When Aeschines was accused of treason by the Athenians Timarchus and Demosthenes in 346 BC, he brought a counter-suit claiming Timarchus had prostituted himself to (or been "kept" by) other men (Against Timarchus). He also attributed Demosthenes' nickname Batalos ("arse") to his "unmanliness and kinaidiā" and frequently commented on his "unmanly and womanish temper", even criticising his clothing: "If anyone took those dainty little coats and soft shirts off you...
and took them round for the jurors to handle, I think they'd be quite unable to say, if they hadn't been told in advance, whether they had hold of a man's clothing or a woman's." Demosthenes is also implicated in passive homosexuality and the prostitution of youth: "There is a certain Aristion, a Plataean..., who as a youth was outstandingly good-looking and lived for a long time in Demosthenes' house. Allegations about the part he was playing [lit., 'undergoing or doing what'] there vary, and it would be most unseemly for me to talk about it." The late Greek Erôtes ("Loves", "Forms of Desire", "Affairs of the Heart"), preserved with the manuscripts of Lucian, contains a debate "between two men, Charicles and Callicratidas, over the relative merits of women and boys as vehicles of male sexual pleasure." Callicratidas, "far from being effeminised by his sexual predilection for boys... Callicratidas's inclination renders him hypervirile... Callicratidas's sexual desire for boys, then, makes him more of a man; it does not weaken or subvert his male gender identity but rather consolidates it." In contrast, "Charicles' erotic preference for women seems to have had the corresponding effect of effeminising him: when the reader first encounters him, for example, Charicles is described as exhibiting 'a skillful use of cosmetics, so as to be attractive to women.'" Rome Over-refinement, fine clothes and other possessions, the company of women, certain trades, and too much fondness for women were all deemed effeminate traits in Roman society. Taking an inappropriate sexual position, passive or "bottom", in same-gender sex was considered effeminate and unnatural. Touching the head with a finger and wearing a goatee were also considered effeminate. Roman consul Scipio Aemilianus questioned one of his opponents, P. Sulpicius Galus: "For the kind of man who adorns himself daily in front of a mirror, wearing perfume; whose eyebrows are shaved off; who walks around with plucked beard and thighs; who when he was a young man reclined at banquets next to his lover, wearing a long-sleeved tunic; who is as fond of men as he is of wine: can anyone doubt that he has done what cinaedi are in the habit of doing?" Roman orator Quintilian described "the plucked body, the broken walk, the female attire" as "signs of one who is soft [mollis] and not a real man." For Roman men, masculinity also meant self-control, even in the face of painful emotions, illnesses, or death. Cicero says, "There exist certain precepts, even laws, that prohibit a man from being effeminate in pain," and Seneca adds, "If I must suffer illness, it will be my wish to do nothing out of control, nothing effeminately." The emperor and philosopher Julian the Apostate, in his Against the Galileans, wrote: "Why are the Egyptians more intelligent and more given to crafts, and the Syrians unwarlike and effeminate, but at the same time intelligent, hot-tempered, vain and quick to learn?" In his Commentaries on the Gallic Wars, Julius Caesar wrote that the Belgians were the bravest of all Gauls because "merchants least frequently resort to them, and import those things which tend to effeminate the mind". Emperor Marcus Aurelius evidently considered effeminacy an undesirable trait, but it is unclear what or who was being referred to. History: The Bible Malakos is listed among other vices in the New Testament book of I Corinthians 6:9. Translations use different terms to express this.
The online Greek Interlinear Bible, which uses Strong's concordance (last corrected in 2008), translates malakoi as "Catamites" and arsenokoitai as "sodomites". The entry for malakos, #3120 in the Greek Dictionary of the New Testament in James Strong's Exhaustive Concordance to the Bible, states: "of uncertain affinity". Gay men: China The Chinese term for 'girlie men' is niang pao. In September 2021, the Associated Press reported that the mainland Chinese government had banned effeminate men from appearing in television commercials. The Chinese government instructed broadcasters to stop showing "sissy men". Gay men: United States In the United States, boys are often homosocial, and gender role performance determines social rank. While gay boys receive the same enculturation, they are less compliant. Martin Levine summarizes: "Harry (1982, 51–52), for example, found that 42 percent of his gay respondents were 'sissies' during childhood. Only 11 percent of his heterosexual samples were gender-role nonconformists. Bell, Weinberg, and Hammersmith (1981, 188) reported that half of their male homosexual subjects practised gender-inappropriate behaviour in childhood. Among their heterosexual men, the rate of noncompliance was 25 percent. Saghir and Robins (1973, 18) found that one-third of their gay man respondents conformed to gender role dictates. Only 3 percent of their heterosexual men deviated from the norm." Thus effeminate boys, or sissies, are physically and verbally harassed (Saghir and Robins, 1973, 17–18; Bell, Weinberg, and Hammersmith 1981, 74–84), causing them to feel worthless and "de-feminise". Before the Stonewall riots, inconsistent gender role performance had been noticed among gay men: "They have a different face for different occasions. In conversations with each other, they often undergo a subtle change. I have seen men who appeared to be normal suddenly smile roguishly, soften their voices, and simper as they greeted homosexual friends [...] Many times I saw these changes occur after I had gained a homosexual's confidence and he could safely risk my disapproval. Once as I watched a luncheon companion become an effeminate caricature of himself, he apologized, 'It is hard to always remember that one is a man.'" Before Stonewall, "closet" culture accepted homosexuality as effeminate behaviour, and thus emphasized camp, drag, and swish, including an interest in fashion and decorating. Masculine gay men were marginalised and formed their own communities, such as the leather subculture, and/or wore clothes that were commonly associated with working-class individuals, such as sailor uniforms. Gay men: After Stonewall, "clone culture" became dominant and effeminacy is now marginalised. One indicator of this is a definite preference shown in personal ads for masculine-behaving men. The avoidance of effeminacy by men, including gay ones, has been linked to possible impedance of personal and public health. Regarding HIV/AIDS, masculine behaviour was stereotyped as being unconcerned about safe sex practices while engaging in promiscuous sexual behaviour. Early reports from New York City indicated that more women had themselves tested for HIV/AIDS than men.
David Halperin compares "universalising" and "minoritising" notions of gender deviance: "'Softness' either may represent the specter of potential gender failure that haunts all normative masculinity, an ever-present threat to the masculinity of every man, or it may represent the disfiguring peculiarity of a small class of deviant individuals." The term effeminiphobia (sometimes spelled effemiphobia, as used by Randy P. Conner) was coined by Will Fellows to describe strong anti-effeminacy. In 1995, Michael Bailey coined the similar term femiphobia to describe the ambivalence gay men and gay culture have about effeminate behaviour. Gay author Tim Bergling popularized the term sissyphobia in Sissyphobia: Gay Men and Effeminate Behavior, although it had been used before. Transgender writer and biologist Julia Serano has coined the similar term effemimania. Feminist sociologist Rhea Ashley Hoskin suggests that these terms can be understood as relating to a larger construct of femmephobia, or "prejudice, discrimination, or antagonism directed against someone who is perceived to identify, embody, or express femininely and toward people and objects gendered femininely." Since the 2000s, Peter Hennen's cultural analysis of gay masculinities has found effeminacy to be a "historically varying concept deployed primarily as a means of stabilising a given society's concept of masculinity and controlling the conduct of its men based upon the repudiation of the feminine". Modern context: Femboy (alternatively spelled femboi) is a modern slang term used to refer to an individual, typically but not exclusively a male, who displays traditionally feminine characteristics, such as wearing dresses, skirts, and/or thigh-highs. (For non-binary individuals, fem or femme is sometimes used instead of femboy.) It is a portmanteau of feminine and boy. The term femboy emerged by at least the 1990s and gained traction online, used in both sexual and non-sexual contexts. Recently, femboys have become increasingly visible due to their inclusion in popular media and trends such as "Femboy Friday" and "Femboy Hooters". These trends involve self-identifying femboys posting images of themselves in online groups and forums, dressed in feminine clothing or a form of cosplay. Cosplay has become exceedingly popular among online femboys, who usually cosplay female, non-binary, or effeminate male characters. Modern context: While the term can be used as an insult directed towards trans women, it is also used as a positive self-describing term within the LGBT community.
**Computing with words and perceptions** Computing with words and perceptions: In computing with words and perceptions (CWP), the objects of computation are words, perceptions, and propositions drawn from a natural language. The central theme of CWP is the concept of a generalised constraint. The meaning of a proposition is expressed as a generalised constraint. CWP is a necessary tool when the available information is perception-based or not precise enough to use numbers.
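As a toy illustration (not drawn from the source), the perception "the room is warm" can be encoded as a generalised constraint on temperature: a fuzzy set rather than a single number. The membership breakpoints below are invented for the sketch:

```python
# Toy sketch of a perception-based proposition as a generalised constraint.
# "The room is warm" constrains temperature to a fuzzy set; the breakpoints
# of the trapezoidal membership function below are invented for illustration.

def warm(temp_c: float) -> float:
    """Degree in [0, 1] to which a temperature counts as 'warm'."""
    if temp_c <= 18 or temp_c >= 32:
        return 0.0
    if 22 <= temp_c <= 27:
        return 1.0
    if temp_c < 22:                      # ramp up between 18 and 22 degrees C
        return (temp_c - 18) / 4.0
    return (32 - temp_c) / 5.0           # ramp down between 27 and 32 degrees C

# Computation operates on degrees of compatibility, not on one crisp value:
for t in (17, 20, 24, 30):
    print(f"{t} degrees C satisfies 'warm' to degree {warm(t):.2f}")
```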
**Sound reinforcement system** Sound reinforcement system: A sound reinforcement system is the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures, all controlled by a mixing console, that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered. Sound reinforcement system: A sound reinforcement system for a rock concert in a stadium may be very complex, including hundreds of microphones, complex live sound mixing and signal processing systems, tens of thousands of watts of amplifier power, and multiple loudspeaker arrays, all overseen by a team of audio engineers and technicians. On the other hand, a sound reinforcement system can be as simple as a small public address (PA) system, consisting of, for example, a single microphone connected to a 100-watt amplified loudspeaker for a singer-guitarist playing in a small coffeehouse. In both cases, these systems reinforce sound to make it louder or distribute it to a wider audience. Some audio engineers and others in the professional audio industry disagree over whether these audio systems should be called sound reinforcement (SR) systems or PA systems. Distinguishing between the two terms by technology and capability is common, while others distinguish by intended use (e.g., SR systems are for live event support and PA systems are for reproduction of speech and recorded music in buildings and institutions). In some regions or markets, the distinction between the two terms is important, though the terms are considered interchangeable in many professional circles. Basic concept: A typical sound reinforcement system consists of: input transducers (e.g., microphones), which convert sound energy, such as a person singing, into an electric signal; signal processors, which alter the signal's characteristics (e.g., equalizers that adjust the bass and treble, compressors that reduce signal peaks); amplifiers, which produce a powerful version of the resulting signal that can drive a loudspeaker; and output transducers (e.g., loudspeakers in speaker cabinets), which convert the signal back into sound energy (the sound heard by the audience and the performers). These primary parts involve varying numbers of individual components to achieve the desired goal of reinforcing and clarifying the sound for the audience, performers, or other individuals. Basic concept: Signal path Sound reinforcement in a large-format system typically involves a signal path that starts with the signal inputs, which may be instrument pickups (on an electric guitar or electric bass), a microphone that a vocalist is singing into, or a microphone placed in front of an instrument or guitar amplifier. These signal inputs are plugged into the input jacks of a thick multicore cable (often called a snake). The snake then delivers the signals of all of the inputs to one or more mixing consoles. Basic concept: In a coffeehouse or small nightclub, the snake may be routed only to a single mixing console, which an audio engineer will use to adjust the sound and volume of the onstage vocals and instruments that the audience hears through the main speakers, and to adjust the volume of the monitor speakers that are aimed at the performers.
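At its core, what the console in this signal path computes is a set of weighted sums: each output bus (the main mix, and the monitor mix fed from the aux sends discussed below) combines the same input channels with its own independent gains. A toy sketch, with channel names and gain values invented for illustration:

```python
# Toy model of mixing-console bus math: every output bus is a weighted sum
# of the input channels with its own set of gains. All values are invented.

inputs = {"vocal": 0.8, "guitar": 0.5, "bass": 0.6}          # momentary levels

main_gains    = {"vocal": 1.0, "guitar": 0.7, "bass": 0.8}   # FOH faders
monitor_sends = {"vocal": 1.0, "guitar": 0.2, "bass": 0.4}   # aux sends

main_mix    = sum(level * main_gains[ch]    for ch, level in inputs.items())
monitor_mix = sum(level * monitor_sends[ch] for ch, level in inputs.items())

print(f"main mix level:    {main_mix:.2f}")    # what the audience bus carries
print(f"monitor mix level: {monitor_mix:.2f}") # what the stage wedges carry
```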
Basic concept: Mid- to large-size performing venues typically route the onstage signals to two mixing consoles: the front of house (FOH) console and the stage monitor system, which is often a second mixer at the side of the stage. In these cases, at least two audio engineers are required: one to do the main mix for the audience at FOH and another to do the monitor mix for the performers on stage. Basic concept: Once the signal arrives at an input on a mixing console, this signal can be adjusted in many ways by the sound engineer. A signal can be equalized (e.g., by adjusting the bass or treble of the sound), compressed (to avoid unwanted signal peaks), or panned (that is, sent to the left or right speakers). The signal may also be routed into an external effects processor, such as a reverb effect, which outputs a wet (effected) version of the signal, which is typically mixed in varying amounts with the dry (effect-free) signal. Many electronic effects units are used in sound reinforcement systems, including digital delay and reverb. Some concerts use pitch correction effects (e.g., Auto-Tune), which electronically correct any out-of-tune singing. Basic concept: Mixing consoles also have additional sends, also referred to as auxes or aux sends (an abbreviation for "auxiliary send"), on each input channel so that a different mix can be created and sent elsewhere for another purpose. One usage for aux sends is to create a mix of the vocal and instrument signals for the monitor mix (this is what the onstage singers and musicians hear from their monitor speakers or in-ear monitors). Another use of an aux send is to select varying amounts of certain channels (via the aux send knobs on each channel) and then route these signals to an effects processor. A common example of the second use of aux sends is to send all of the vocal signals from a rock band through a reverb effect. While reverb is usually added to vocals in the main mix, it is not usually added to the electric bass and other rhythm section instruments. Basic concept: The processed input signals are then mixed to the master faders on the console. The next step in the signal path generally depends on the size of the system in place. In smaller systems, the main outputs are often sent to an additional equalizer, or directly to a power amplifier, with one or more loudspeakers (typically two, one on each side of the stage in smaller venues, or a large number in big venues) connected to that amplifier. In large-format systems, the signal is typically routed first through an equalizer and then to a crossover. A crossover splits the signal into multiple frequency bands, with each band being sent to separate amplifiers and speaker enclosures for low-, middle-, and high-frequency signals. Low-frequency signals are sent to amplifiers and then to subwoofers, while middle- and high-frequency sounds are typically sent to amplifiers which power full-range speaker cabinets. Using a crossover to separate the sound into low, middle and high frequencies can lead to a "cleaner", clearer sound (see bi-amplification) than routing all of the frequencies through a single full-range speaker system. Nevertheless, many small venues still use a single full-range speaker system, as it is easier to set up and less expensive. System components: Input transducers Many types of input transducers can be found in a sound reinforcement system, with microphones being the most commonly used input device.
Microphones can be classified according to their method of transduction, polar pattern, or functional application. Most microphones used in sound reinforcement are either dynamic or condenser microphones. One type of directional microphone, the cardioid mic, is widely used in live sound because it reduces pickup from the side and rear, helping to avoid unwanted feedback from the stage monitor system. System components: Microphones used for sound reinforcement are positioned and mounted in many ways, including base-weighted upright stands, podium mounts, tie-clips, instrument mounts, and headset mounts. Microphones on stands are also placed in front of instrument amplifiers to pick up the sound. Headset-mounted and tie-clip-mounted microphones are often used with wireless transmission to allow performers or speakers to move freely. Early adopters of headset-mounted microphone technology included country singer Garth Brooks, Kate Bush, and Madonna. Other types of input transducers include magnetic pickups used in electric guitars and electric basses, contact microphones used on stringed instruments and pianos, and phonograph pickups (cartridges) used in record players. Electronic instruments such as synthesizers can have their output signal routed directly to the mixing console. A DI unit may be necessary to adapt some of these sources to the inputs of the console. System components: Wireless Wireless systems are typically used for electric guitar, bass, handheld microphones and in-ear monitor systems. This lets performers move about the stage during the show or even go out into the audience without the worry of tripping over or disconnecting cables. System components: Mixing consoles Mixing consoles are the heart of a sound reinforcement system. This is where the sound engineer can adjust the volume and tone of each input, whether it is a vocalist's microphone or the signal from an electric bass, and mix, equalize and add effects to these sound sources. Doing the mixing for a live show requires a mix of technical and artistic skills. A sound engineer needs an expert knowledge of speaker and amplifier set-up, effects units and other technologies, and a good "ear" for what the music should sound like, in order to create a good mix. System components: Multiple consoles can be used for different purposes in a single sound reinforcement system. The front of house (FOH) mixing console is typically located where the operator can see the action on stage and hear what the audience hears. For broadcast and recording applications, the mixing console may be placed within an enclosed booth or outside in an OB van. Large music productions often use a separate stage monitor mixing console, which is dedicated to creating mixes for the performers on stage. These consoles are typically placed at the side of the stage so that the operator can communicate with the performers. System components: Signal processors Small PA systems for venues such as bars and clubs are now available with features that were formerly only available on professional-level equipment, such as digital reverb effects, graphic equalizers, and, in some models, feedback prevention circuits which electronically sense and prevent audio feedback when it becomes a problem. Digital effects units may offer multiple pre-set and variable reverb, echo and related effects.
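Two staple processing functions in live systems, speaker-alignment delay and crossover band-splitting (both bundled into the loudspeaker management units described next), reduce to a few lines of arithmetic and filtering. A minimal sketch; the sample rate, speaker distance, and crossover point are all invented for the example, and real units typically use matched (e.g., Linkwitz-Riley) filter pairs:

```python
# Sketch of two loudspeaker-management functions: a time-alignment delay
# (so a distant fill speaker stays in sync with the mains) and a two-way
# crossover split. Sample rate, distance and crossover point are invented.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000                    # sample rate (Hz)
SPEED_OF_SOUND = 343.0         # m/s in air at ~20 degrees C

# Delay for a fill speaker 26 m closer to the listener than the main PA:
delay_s = 26.0 / SPEED_OF_SOUND
delay_samples = round(delay_s * fs)          # ~75.8 ms -> 3638 samples
print(f"fill delay: {1000*delay_s:.1f} ms ({delay_samples} samples)")

# Two-way crossover at 120 Hz: lows to the subwoofer amp, highs to the tops.
lowpass  = butter(4, 120, btype="lowpass",  fs=fs, output="sos")
highpass = butter(4, 120, btype="highpass", fs=fs, output="sos")

t = np.arange(fs) / fs
mix = np.sin(2*np.pi*50*t) + np.sin(2*np.pi*1000*t)    # toy program material
to_subs = sosfilt(lowpass, mix)                        # low band
to_tops = sosfilt(highpass, mix)                       # mid/high band

delayed_fill = np.concatenate([np.zeros(delay_samples), mix])[:mix.size]
assert not delayed_fill[:delay_samples].any()   # fill stays silent during the delay

print(f"sub band RMS: {np.sqrt(np.mean(to_subs**2)):.3f}, "
      f"top band RMS: {np.sqrt(np.mean(to_tops**2)):.3f}")
```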
Digital loudspeaker management systems offer sound engineers digital delay (to ensure speakers are in sync with each other), limiting, crossover functions, EQ filters, compression and other functions in a single rack-mountable unit. In previous decades, sound engineers typically had to transport a substantial number of rack-mounted analog effects devices to accomplish these tasks. System components: Equalizers Equalizers are electronic devices that allow audio engineers to control the tone and frequencies of the sound in a channel, a group (e.g., all the mics on a drumkit) or an entire stage's mix. The bass and treble controls on a home stereo are a simple type of equalizer. Equalizers exist in professional sound reinforcement systems in three forms: shelving equalizers (typically for a whole range of bass and treble frequencies), graphic equalizers and parametric equalizers. Graphic equalizers have faders (vertical slide controls) which together resemble a frequency response curve plotted on a graph. The faders can be used to boost or cut specific frequency bands. System components: Using equalizers, frequencies which are too weak, such as a singer with modest projection in their lower register, can be boosted. Frequencies which are too loud, such as a "boomy" sounding bass drum or an overly resonant dreadnought guitar, can be cut. Sound reinforcement systems typically use graphic equalizers with one-third-octave frequency centers. These are typically used to equalize output signals going to the main loudspeaker system or the monitor speakers on stage. Parametric equalizers are often built into each channel in mixing consoles, typically for the mid-range frequencies. They are also available as separate rack-mount units which can be connected to a mixing board. Parametric equalizers typically use knobs and sometimes buttons. The audio engineer can select which frequency band to cut or boost, and then use additional knobs to adjust how much to cut or boost this frequency range. Parametric equalizers first became popular in the 1970s and have remained the program equalizer of choice for many engineers since then. System components: A high-pass (low-cut) and/or low-pass (high-cut) filter may also be included on equalizers or audio consoles. High-pass and low-pass filters restrict a given channel's bandwidth extremes. Cutting very low-frequency signals (termed infrasonic, or subsonic) reduces the waste of amplifier power on content which does not produce audible sound and which, moreover, can be hard on the low-range speakers. A low-pass filter to cut ultrasonic energy is useful to prevent interference from radio frequencies, lighting control, or digital circuitry creeping into the power amplifiers. Such filters are often paired with graphic and parametric equalizers to give the audio engineer full control of the frequency range. High-pass filters and low-pass filters used together function as a band-pass filter, eliminating undesirable frequencies both above and below the auditory spectrum. A band-stop filter does the opposite: it allows all frequencies to pass except for one band in the middle. A feedback suppressor, using a microprocessor, automatically detects the onset of feedback and applies a narrow band-stop filter (a notch filter) at the specific frequency or frequencies responsible for the feedback. System components: Compressors Dynamic range compression is designed to help the audio engineer manage the dynamic range of audio signals.
Prior to the invention of automatic compressors, audio engineers accomplished the same goal by "riding the faders": listening carefully to the mix and lowering the faders of any singer or instrument which was getting too loud. A compressor accomplishes this by reducing the gain of a signal that is above a defined level (the threshold) by a defined amount determined by the ratio setting. Most compressors are designed to allow the operator to select a ratio within a range typically between 1:1 and 20:1, with some allowing settings of up to ∞:1. A compressor with a high compression ratio is typically referred to as a limiter. The speed at which the compressor adjusts the gain of the signal (attack and release) is typically adjustable, as is the final output or make-up gain of the device. System components: Compressor applications vary widely. Some applications use limiters for component protection and gain structure control. Artistic signal manipulation using a compressor is a subjective technique widely utilized by mix engineers to improve clarity or to creatively alter the signal in relation to the program material. An example of artistic compression is the typically heavy compression used on the various components of a modern rock drum kit. The drums are processed to be perceived as sounding more punchy and full. System components: Noise gates A noise gate mutes signals below a set threshold level. A noise gate's function is, in a sense, opposite to that of a compressor. Noise gates are useful for microphones which will pick up noise that is not relevant to the program, such as the hum of a miked electric guitar amplifier or the rustling of papers on a minister's lectern. Noise gates are also used to process the microphones placed near the drums of a drum kit in many hard rock and metal bands. Without a noise gate, the microphone for a specific instrument such as the floor tom will also pick up signals from nearby drums or cymbals. With a noise gate, the threshold of sensitivity for each microphone on the drum kit can be set so that only the direct strike and subsequent decay of the drum will be heard, not the nearby sounds. System components: Effects Reverberation and delay effects are widely used in sound reinforcement systems to enhance the sound of the mix and create a desired artistic effect. Reverb and delay add a sense of spaciousness to the sound. Reverb can give the effect of a singing voice or instrument being present in anything from a small room to a massive hall, or even in a space that does not exist in the physical world. The use of reverb often goes unnoticed by the audience, as it often sounds more natural than if the signal were left "dry" (without effects). Many modern mixing boards designed for live sound include on-board reverb effects. System components: Other effects include modulation effects such as flanger, phaser, and chorus, and spectral manipulation or harmonic effects such as the exciter and harmonizer. The use of effects in the reproduction of 2010s-era pop music is often an attempt to mimic the sound of the studio version of the artist's music in a live concert setting. For example, an audio engineer may use an Auto-Tune effect to produce unusual vocal sound effects that a singer used on their recordings. System components: The appropriate type, variation, and level of effects is quite subjective and is often collectively determined by a production's audio engineer, artists, bandleader, music producer, or musical director.
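The static behaviour of the compressor and noise gate described above comes down to a simple level calculation: above its threshold, a compressor scales the overshoot by 1/ratio, while a gate mutes everything below its own threshold. A minimal sketch in decibels, with threshold and ratio values invented and attack/release behaviour deliberately omitted:

```python
# Static gain sketch for the compressor and noise gate described above.
# Levels in dB; attack/release (time-domain) behaviour is omitted, and the
# threshold and ratio values are invented for the example.

def compress(level_db: float, threshold_db: float = -10.0, ratio: float = 4.0) -> float:
    """Above threshold, scale the overshoot by 1/ratio (here a 4:1 compressor)."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def gate(level_db: float, threshold_db: float = -50.0) -> float:
    """Mute (send to silence) anything below the gate threshold."""
    return level_db if level_db >= threshold_db else float("-inf")

for peak in (-60.0, -20.0, -2.0):
    print(f"in {peak:6.1f} dB -> gate {gate(peak):7.1f} dB, "
          f"compressor {compress(peak):6.1f} dB")
```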
System components: Feedback suppressor A feedback suppressor detects unwanted audio feedback and suppresses it, typically by automatically inserting a notch filter into the signal path of the system. Audio feedback can create unwanted loud, screaming noises that are disruptive to the performance and can damage speakers as well as the ears of performers and audience members. Audio feedback from microphones occurs when a microphone is too near a monitor or main speaker and the sound reinforcement system amplifies itself. Although audio feedback through a microphone is almost universally regarded as a negative phenomenon, many electric guitarists use guitar feedback as part of their performance. This type of feedback is intentional, so the sound engineer does not try to prevent it. System components: Power amplifiers A power amplifier is an electronic device that uses electrical power and circuitry to boost a line-level signal, providing enough electrical power to drive a loudspeaker and produce sound. All loudspeakers, including headphones, require power amplification. Most professional audio power amplifiers also provide protection from clipping, typically as some form of limiting. A power amplifier pushed into clipping can damage loudspeakers. Amplifiers also typically provide protection against short circuits across the output and overheating. System components: Audio engineers select amplifiers that provide enough headroom. Headroom refers to the amount by which the signal-handling capabilities of an audio system exceed a designated nominal level. Headroom can be thought of as a safety zone allowing transient audio peaks to exceed the nominal level without damaging the system or the audio signal, e.g., via clipping. Standards bodies differ in their recommendations for nominal level and headroom. Selecting amplifiers with enough headroom helps to ensure that the signal will remain clean and undistorted. System components: Like most sound reinforcement equipment, professional power amplifiers are typically designed to be mounted within standard 19-inch racks. Rack-mounted amps are typically housed in road cases to prevent damage to the equipment during transportation. Active loudspeakers have internally mounted amplifiers that have been selected by the manufacturer to match the requirements of the loudspeaker. Some active loudspeakers also have equalization, crossover and mixing circuitry built in. System components: Since amplifiers can generate a significant amount of heat, thermal dissipation is an important factor for operators to consider when mounting amplifiers into equipment racks. Many power amplifiers feature internal fans to draw air across their heat sinks. The heat sinks can become clogged with dust, which can adversely affect the cooling capabilities of the amplifier. System components: In the 1970s and 1980s, most PAs employed heavy class AB amplifiers. In the late 1990s, power amplifiers in PA applications became lighter, smaller, more powerful, and more efficient, with the increasing use of switching power supplies and class D amplifiers, which offered significant weight and space savings as well as increased efficiency. Often installed in railroad stations, stadia, and airports, class D amplifiers can run with minimal additional cooling and at higher rack densities than older amplifiers.
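The core action of the feedback suppressor described above, dropping a narrow notch onto a detected ringing frequency, can be sketched with a standard IIR notch design. Detection is omitted here, and the frequency, Q, and sample rate are assumptions for the example:

```python
# Sketch of a feedback suppressor's core action: once a ringing frequency is
# detected, apply a narrow notch filter at that frequency. Detection itself
# is omitted; frequency, Q and sample rate are invented values.
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48_000            # sample rate (Hz)
feedback_hz = 2_500    # frequency detected as ringing (assumed)
q = 30                 # high Q keeps the notch narrow, sparing nearby program

b, a = iirnotch(feedback_hz, q, fs=fs)

t = np.arange(fs) / fs
program = np.sin(2*np.pi*440*t)                  # wanted content
ringing = 0.5*np.sin(2*np.pi*feedback_hz*t)      # building feedback tone
cleaned = lfilter(b, a, program + ringing)

# The 440 Hz content passes nearly untouched; the 2.5 kHz tone is attenuated.
print(f"RMS before notch: {np.sqrt(np.mean((program + ringing)**2)):.3f}")
print(f"RMS after  notch: {np.sqrt(np.mean(cleaned**2)):.3f}")
```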
System components: Digital loudspeaker management systems (DLMS) combine digital crossover functions, compression, limiting, and other features in a single unit and are used to process the mix from the mixing console and route it to the various amplifiers. Systems may include several loudspeakers, each with its own output optimized for a specific range of frequencies (i.e. bass, midrange, and treble). Bi-amping and tri-amping of a sound reinforcement system with the aid of a DLMS results in more efficient use of amplifier power by sending each amplifier only the frequencies appropriate for its respective loudspeaker and eliminating the losses associated with passive crossover circuits. System components: Main loudspeakers A simple and inexpensive PA loudspeaker may have a single full-range loudspeaker driver, housed in a suitable enclosure. More elaborate, professional-caliber sound reinforcement loudspeakers may incorporate separate drivers to produce low-, middle-, and high-frequency sounds. A crossover network routes the different frequencies to the appropriate drivers. In the 1960s, horn-loaded theater and PA speakers were commonly columns of multiple drivers mounted in a vertical line within a tall enclosure. System components: The 1970s to early 1980s was a period of innovation in loudspeaker design, with many sound reinforcement companies designing their own speakers using commercially available drivers. The areas of innovation were in cabinet design, durability, ease of packing and transport, and ease of setup. This period also saw the introduction of the hanging, or flying, of main loudspeakers at large concerts. During the 1980s, the large speaker manufacturers started producing standard products using the innovations of the 1970s. These were mostly smaller two-way systems with 12", 15" or double 15" woofers and a high-frequency driver attached to a high-frequency horn. The 1980s also saw the start of loudspeaker companies focused on the sound reinforcement market. System components: The 1990s saw the introduction of line arrays, in which long vertical arrays of loudspeakers in smaller cabinets are used to increase efficiency and provide even dispersion and frequency response. Trapezoid-shaped enclosures became popular, as this shape allowed many of them to be easily arrayed together. This period also saw the introduction of inexpensive molded plastic speaker enclosures mounted on tripod stands. Many feature built-in power amplifiers, which makes them practical for non-professionals to set up and operate successfully. The sound quality available from these simple powered speakers varies widely depending on the implementation. System components: Many sound reinforcement loudspeaker systems incorporate protection circuitry to prevent damage from excessive power or operator error. Resettable fuses, specialized current-limiting light bulbs, and circuit breakers were used alone or in combination to reduce driver failures. During the same period, the professional sound reinforcement industry made the Neutrik Speakon NL4 and NL8 connectors the standard speaker connectors, replacing 1/4" jacks, XLR connectors, and Cannon multipin connectors, which are all limited to a maximum of 15 amps of current. XLR connectors are still the standard input connector on active loudspeaker cabinets. System components: To help users avoid overpowering them, loudspeakers have a power rating (in watts) which indicates their maximum power capacity.
Thanks to the efforts of the Audio Engineering Society (AES) and the loudspeaker industry group ALMA in developing the EIA-426 testing standard, power-handling specifications became more trustworthy. Lightweight, portable speaker systems for small venues route the low-frequency parts of the music (electric bass, bass drum, etc.) to a powered subwoofer. Routing the low-frequency energy to a separate amplifier and subwoofer can substantially improve the bass response of the system. Clarity may also be enhanced, because low-frequency sounds can cause intermodulation and other distortion in speaker systems. Professional sound reinforcement speaker systems often include dedicated hardware for safely flying them above the stage area, to provide more even sound coverage and to maximize sightlines within performance venues. System components: Monitor loudspeakers Monitor loudspeakers, also called foldback loudspeakers, are speaker cabinets used onstage to help performers hear their singing or playing. As such, monitor speakers are pointed towards a performer or a section of the stage. They are generally sent a different mix of vocals or instruments than the mix that is sent to the main loudspeaker system. Monitor loudspeaker cabinets are often a wedge shape, directing their output upwards towards the performer when set on the floor of the stage. Simple two-way, dual-driver designs with a speaker cone and a horn are common, as monitor loudspeakers need to be smaller to save space on the stage. These loudspeakers typically require less power and volume than the main loudspeaker system, as they only need to provide sound for a few people who are in relatively close proximity to the loudspeaker. Some manufacturers have designed loudspeakers for use either as a component of a small PA system or as a monitor loudspeaker. A number of manufacturers produce powered monitor speakers, which contain an integrated amplifier. System components: Using monitor speakers instead of in-ear monitors typically results in an increase in stage volume, which can lead to more feedback issues and progressive hearing damage for the performers in front of them. The clarity of the mix for the performer on stage is also typically compromised, as they hear more extraneous noise from around them. The use of monitor loudspeakers, active (with an integrated amplifier) or passive, requires more cabling and gear on stage, resulting in a more cluttered stage. These factors, amongst others, have led to the increasing popularity of in-ear monitors. System components: In-ear monitors In-ear monitors are headphones that have been designed for use as monitors by a live performer. They are either of a universal-fit or custom-fit design. Universal-fit in-ear monitors feature rubber or foam tips that can be inserted into virtually anybody's ear. Custom-fit in-ear monitors are created from an impression of the user's ear made by an audiologist. In-ear monitors are almost always used in conjunction with a wireless transmission system, allowing the performer to move freely about the stage while receiving their monitor mix. System components: In-ear monitors offer considerable isolation for the performer using them: no on-stage sound is heard, and the monitor engineer can deliver a much more accurate and clear mix for the performer.
With in-ear monitors, each performer can be sent their own customized mix; although this was also possible with monitor speakers, the in-ear monitors of one performer cannot be heard by the other musicians. A downside of this isolation is that the performer cannot hear the crowd or the comments from other performers on stage who do not have microphones (e.g., if the bass player wishes to communicate with the drummer). This has been remedied in larger productions by setting up microphones facing the audience that can be mixed into the in-ear monitor sends. Since their introduction in the mid-1980s, in-ear monitors have grown to be the most popular monitoring choice for large touring acts. The reduction or elimination of loudspeakers other than instrument amplifiers on stage has allowed for cleaner and less problematic mixing for both the front of house and monitor engineers. Audio feedback is greatly reduced and there is less sound reflecting off the back wall of the stage into vocal mics and out to the audience, which improves the clarity of the front-of-house mix. Applications: Sound reinforcement systems are used in a broad range of different settings, each of which poses different challenges. Applications: Rental systems Audio-visual rental systems have to be able to withstand heavy use and even abuse from renters. For this reason, rental companies tend to own speaker cabinets that are heavily braced and protected with steel corners, and electronic equipment such as power amplifiers or effects is often mounted into protective road cases. Rental companies also tend to select gear with electronic protection features, such as speaker-protection circuitry and amplifier limiters. Applications: Rental systems for non-professionals need to be easy to use and set up, and they must be easy to repair and maintain for the renting company. From this perspective, speaker cabinets need to have easy-to-access horns, speakers, and crossover circuitry, so that repairs or replacements can be made. Applications: Many touring acts and large-venue corporate events will rent large sound reinforcement systems that typically include one or more audio engineers on staff with the renting company. In the case of rental systems for tours, there are typically several audio engineers and technicians from the rental company who tour with the band to set up and calibrate the equipment. The individual who mixes the band is often selected and provided by the band, as they are familiar with the various aspects of the show and understand how the band wants the show to sound. Applications: Live music clubs and dance events Setting up sound reinforcement for live music clubs and dance events often poses unique challenges, because there is such a large variety of venues that are used as clubs, ranging from former warehouses or music theaters to small restaurants or basement pubs with concrete walls. Dance events may be held in huge warehouses, aircraft hangars or outdoor spaces. In some cases, clubs are housed in multi-story venues with balconies or in L-shaped rooms, which makes it hard to get a consistent sound for all audience members. The solution is to use fill-in speakers to obtain good coverage, using a delay (as sketched in the Signal processors section above) to ensure that the audience does not hear the same reinforced sound at different times. Applications: The number of subwoofer speaker cabinets and power amplifiers dedicated to low-frequency sounds used in a club depends on the type of club, the genres of music played there, and the size of the venue.
A small coffeehouse where traditional folk, bluegrass or jazz groups are the main performers may have no subwoofers, and instead rely on the full-range main PA speakers to reproduce bass sounds. On the other hand, a club where hard rock or heavy metal music bands play or a nightclub where DJs play dance music may have multiple large subwoofers, as these genres and music styles typically use powerful, deep bass sound. Applications: A challenge with designing sound systems for clubs is that the sound system may need to be used for both prerecorded music played by DJs and live music. A club system designed for DJs needs a DJ mixer and space for record players. In contrast, a live music club needs a mixing board designed for live sound, an onstage monitor system, and a multicore snake cable running from the stage to the mixer. Clubs that feature both types of shows may face challenges providing the desired equipment and set-up for both uses. Clubs can be a hostile environment for sound gear, in that the air may be hot, humid, and smoky. In some clubs, keeping power amplifiers cool may be a challenge. Applications: Church sound Churches and similar houses of worship often pose design challenges. Speakers may need to be unobtrusive to blend in with antique woodwork and stonework. In some cases, audio designers have designed custom-painted speaker cabinets. Some facilities, such as sanctuaries or chapels are long rooms with low ceilings and additional fill-in speakers are needed throughout the room to give good coverage. Once installed, church systems are often operated by amateur volunteers from the congregation, which means that they must be easy to operate and troubleshoot. To this end, some mixing consoles designed for houses of worship have automatic mixers, which turn down unused channels to reduce noise, and automatic feedback elimination circuits which detect and notch out frequencies that are feeding back. These features may also be available in multi-function consoles used in convention facilities and multi-purpose venues. Applications: Touring systems Touring sound systems are available in many different sizes and shapes as they have to be powerful and versatile enough to cover many different halls and venues. Touring systems range from mid-sized systems for bands playing nightclub and other mid-sized venues to large systems for groups playing stadiums, arenas and outdoor festivals. Tour sound systems are often designed with substantial redundancy features, so that in the event of equipment failure or amplifier overheating, the system will continue to function. Touring systems for bands performing for crowds of a few thousand people and up are typically set up and operated by a team of technicians and engineers who travel with the performers to every show. Applications: Mainstream bands that are going to perform in mid- to large-sized venues during their tour schedule one to two weeks of technical rehearsal with the entire concert system and production staff, including audio engineers, at hand. This allows the audio and lighting engineers to become familiar with the show and establish presets on their digital equipment (e.g., digital mixers) for each part of the show, if needed. Many modern musical groups work with their front of house and monitor mixing engineers during this time to establish what their general idea is of how the show and mix should sound, both for themselves on stage and for the audience. 
Applications: This often involves programming different effects and signal processing for use on specific songs, to make the songs sound somewhat similar to the studio versions. To manage a show with a lot of effects changes, the mixing engineers for the show often choose to use a digital mixing console so that they can save and automatically recall these many settings in between each song. This time is also used by the system technicians to get familiar with the specific combination of gear that is going to be used on the tour and how it responds acoustically during the show. These technicians remain busy during the show, making sure the SR system is operating properly and that the system is tuned correctly, as the acoustic response of a room or venue changes throughout the day depending on the temperature, humidity, and number of people in the room or space. Applications: Live theater Sound for live theater, operatic theater, and other dramatic applications may pose problems similar to those of churches; theaters may be in heritage buildings where speakers and wiring are required to blend in with the architecture. The need for clear sightlines may make the use of regular speaker cabinets unacceptable; instead, slim, low-profile speakers are often used. Applications: In live theater and drama, performers move around onstage, which means that wireless microphones may be necessary. Some of the higher-budget theater shows and musicals are mixed in surround sound live, often with the show's sound operator triggering sound effects that are being mixed with music and dialogue by the show's mixing engineer. These systems are usually much more extensive to design, typically involving separate sets of speakers for different zones in the theater. Applications: Classical music and opera A subtle type of sound reinforcement called acoustic enhancement is used in some concert halls where classical music such as symphonies and opera is performed. Acoustic enhancement systems add more sound to the hall and prevent dead spots in the audience seating area by "...augment[ing] a hall's intrinsic acoustic characteristics." The systems use "...an array of microphones connected to a computer [which is] connected to an array of loudspeakers." However, as concertgoers have become aware of the use of these systems, debates have arisen, because "...purists maintain that the natural acoustic sound of [Classical] voices [or] instruments in a given hall should not be altered." Kai Harada's article Opera's Dirty Little Secret states that opera houses have begun using electronic acoustic enhancement systems "...to compensate for flaws in a venue's acoustical architecture." Despite the uproar that has arisen amongst operagoers, Harada points out that none of the opera houses using acoustic enhancement systems "...use traditional, Broadway-style sound reinforcement, in which most if not all singers are equipped with radio microphones mixed to a series of unsightly loudspeakers scattered throughout the theatre." Instead, most opera houses use the sound reinforcement system for acoustic enhancement, and for subtle boosting of offstage voices, onstage dialogue, and sound effects (e.g., church bells in Tosca or thunder in Wagnerian operas). These systems use microphones, computer processing "with delay, phase, and frequency-response changes", and then send the signal "... to a large number of loudspeakers placed in extremities of the performance venue."
Another acoustic enhancement system, VRAS, uses "...different algorithms based on microphones placed around the room." The Deutsche Staatsoper in Berlin and the Hummingbird Centre in Toronto use a LARES system. The Ahmanson Theatre in Los Angeles, the Royal National Theatre in London, and the Vivian Beaumont Theater in New York City use the SIAP system. Applications: Lecture halls and conference rooms Lecture halls and conference rooms pose the challenge of reproducing speech clearly in a large hall, which may have reflective, echo-producing surfaces. One issue with reproducing speech is that the microphone used to pick up the sound of an individual's voice may also pick up unwanted sounds, such as the rustling of papers on a podium. A more tightly directional microphone may help to reduce unwanted background noises. Applications: Another challenge with doing live sound for individuals who are speaking at a conference is that, in comparison with professional singers, individuals who are invited to speak at a forum may not be familiar with how microphones work. Some individuals may accidentally point the microphone towards a loudspeaker or monitor speaker, which may cause audio feedback. Applications: In some conferences, sound engineers have to provide microphones for a large number of people who are speaking, such as in the case of a panel conference or debate. In some cases, automatic mixers are used to control the levels of the microphones and turn off the channels for microphones that are not being spoken into, to reduce unwanted background noise and reduce the likelihood of feedback. Applications: Sports sound systems Systems for sports facilities often have to deal with substantial echo, which can make speech unintelligible. Sports and recreational sound systems often face environmental challenges as well, such as the need for weather-proof outdoor speakers in outdoor stadiums and humidity- and splash-resistant speakers in swimming pools. Another challenge with sports sound reinforcement setups is that in many arenas and stadiums, the spectators are on all four sides of the playing field. This requires 360-degree sound coverage. This is very different from the norm at music festivals and in music halls, where the musicians are on stage and the audience is seated in front of the stage. Setting up and testing: Large-scale sound reinforcement systems are designed, installed, and operated by audio engineers and audio technicians. During the design phase of a newly constructed venue, audio engineers work with architects and contractors to ensure that the proposed design will accommodate the speakers and provide an appropriate space for sound technicians and the racks of audio equipment. Audio engineers will also provide advice on which audio components would best suit the space and its intended use, and on the correct placement and installation of these components. During the installation phase, audio engineers ensure that high-power electrical components are safely installed and connected and that ceiling or wall-mounted speakers are properly mounted (or "flown") onto rigging. When the sound reinforcement components are installed, the audio engineers test and calibrate the system so that its sound production will be even across the frequency spectrum. Setting up and testing: System testing A sound reinforcement system should be able to accurately reproduce a signal from its input, through any processing, to its output without any coloration or distortion.
However, due to inconsistencies in venue sizes, shapes, building materials, and even crowd densities, this is not always possible without prior calibration of the system. This can be done in one of several ways. Setting up and testing: The oldest method of system calibration involves a set of healthy ears, test program material (i.e. music or speech), a graphic equalizer, and a familiarity with the desired frequency response. One must then listen to the program material through the system, take note of any noticeable frequency deviation or resonances, and correct them using the equalizer. Engineers typically use a familiar playlist to calibrate a new system. This by-ear process is still done by many engineers, even when analysis equipment is used, as a final check of how the system sounds with music or speech playing through the system. Another method of manual calibration requires a pair of high-quality headphones patched into the input signal before any processing. One can then use this direct signal as a reference with which to identify any differences in frequency response. Setting up and testing: Since the development of digital signal processing (DSP), there have been many pieces of equipment and computer software designed to shift the bulk of the work of system calibration from human auditory interpretation to software algorithms that run on microprocessors. One tool for calibrating a sound system using either digital or analog signal processing is a real-time analyzer (RTA). This tool is usually used by piping pink noise into the system and measuring the result with a special calibrated microphone connected to the RTA. Using this information, the system can be adjusted to help achieve the desired response. The displayed response from the RTA mic cannot be taken as a perfect representation of the room, as the analysis will be different, sometimes drastically, when the mic is placed in different positions in front of the system. Setting up and testing: More recently, sound engineers have seen the introduction of dual fast Fourier transform (FFT) based audio analysis software, which allows an engineer to view not only the frequency vs. amplitude (pitch vs. volume) information that an RTA provides, but also to see the same signals (sounds) in the time domain. This provides the engineer with much more meaningful data than an RTA alone. Also, dual FFT analysis allows one to compare the source signal with the output signal and view the difference (a sketch of this comparison follows below). This is a very fast way to calibrate a system to sound as close as possible to the original source material. As with any such measurement tool, it must always be verified using actual human ears. Some DSP system processing devices have been designed for use by non-professionals that automatically make adjustments in the system EQ based upon what is being read from the RTA mic. These are practically never used by professionals, as they almost never calibrate the system as well as a professional audio engineer can manually.
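A minimal sketch of the dual-FFT idea described above: divide the spectrum of the measured (microphone) signal by the spectrum of the reference (source) signal to estimate the system's frequency response. The function name, frame averaging scheme, and parameters here are illustrative assumptions, not a description of any particular analyzer product.

```python
# Minimal dual-FFT transfer-function sketch (illustrative only).
# `reference` is the signal fed into the system; `measured` is what
# the calibrated microphone picks up. Both are 1-D numpy arrays at
# the same sample rate.
import numpy as np

def transfer_function(reference, measured, sample_rate, n_fft=8192):
    """Estimate the magnitude (dB) of the system response by comparing
    FFTs of the measured and reference signals, averaged over frames."""
    n_frames = min(len(reference), len(measured)) // n_fft
    window = np.hanning(n_fft)
    ratios = []
    for i in range(n_frames):
        ref = np.fft.rfft(reference[i*n_fft:(i+1)*n_fft] * window)
        mea = np.fft.rfft(measured[i*n_fft:(i+1)*n_fft] * window)
        # A small constant avoids division by zero in quiet bins.
        ratios.append(np.abs(mea) / (np.abs(ref) + 1e-12))
    freqs = np.fft.rfftfreq(n_fft, d=1.0/sample_rate)
    response_db = 20 * np.log10(np.mean(ratios, axis=0) + 1e-12)
    return freqs, response_db

# Trivial self-test: measuring the reference against itself
# should give an essentially flat 0 dB response.
rate = 48000
noise = np.random.randn(rate * 2)
freqs, resp = transfer_function(noise, noise, rate)
print(np.allclose(resp, 0.0, atol=1e-6))  # True
```

Real-world analyzers also compensate for the propagation delay between the two signals and weight the estimate by coherence; this sketch omits both for brevity.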
Equipment supply stores: Professional audio stores sell microphones, speaker enclosures, monitor speakers, mixing boards, rack-mounted effects units and related equipment designed for use by audio engineers and technicians. Professional audio stores are also called "pro audio stores", "pro sound stores", "sound reinforcement" companies, "PA system companies" or "audio-visual companies", with the latter name being used when a store supplies a significant amount of video equipment for events, such as video projectors and screens. Stores often use the word "professional" or "pro" in their name or the description of their store to differentiate themselves from consumer electronics stores, which sell consumer-grade loudspeakers, home cinema equipment, and amplifiers designed for private, in-home use.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Software Tools Users Group** Software Tools Users Group: The Software Tools Users Group (STUG) was a technical organization started in 1976, in parallel with Usenix. The STUG goal was to develop a powerful and portable Unix-like system that could be implemented on top of virtually any operating system, providing the capabilities and features of Unix in a non-proprietary system. With its focus on building clean, portable, reusable code shared amongst multiple applications and runnable on any operating system, the Software Tools movement reestablished the tradition of open source and the concepts of empowering users to define, develop, control, and freely distribute their computing environment. History: In 1976, Brian Kernighan (then of Bell Labs) and P. J. Plauger published Software Tools, the first of their books on programming inspired by the recent creation of the Unix operating system by Kernighan's colleagues at Bell Labs. The "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for Fortran and Pascal. Kernighan's Ratfor (rational FORTRAN preprocessor) was eventually put in the public domain. History: Deborah K. Scherrer, Dennis E. Hall, and Joseph S. Sventek, then researchers at the Lawrence Berkeley National Laboratory, quickly picked up the Software Tools book and philosophy. They expanded the initial set of a few dozen tools from the book into an entire Virtual Operating System (VOS), providing an almost complete set of the Unix tools, a Unix-like programming library, and an operating system interface that could be implemented on top of virtually any system. They freely distributed their VOS collection worldwide. Their work generated ports of the software to over 50 operating systems and a users group of more than 2000. An LBNL research report appeared in Communications of the ACM in September 1980. Scherrer, also on the Usenix Board at the time, established and coordinated the Software Tools Users Group, aligning it with Usenix. Starting in 1979, STUG and Usenix held parallel conferences. STUG also produced a series of newsletters, coordinated with the European Unix Users Group, and spawned similar groups in other parts of the world. The Software Tools movement eventually triggered several commercial companies to port and distribute the Software Tools to microcomputer systems such as CP/M and MS-DOS. Awards: On January 24, 1996, Scherrer's, Hall's, and Sventek's work was recognized with a USENIX Lifetime Achievement Award (“The Flame”). Scherrer had previously been honored in 1993 with a “UNIX Academic Driver” award presented by Bell Labs, for “Outstanding Contributions to the UNIX community”. Her work included the Software Tools movement as well as contributions to USENIX. Other Major Contributors: The Software Tools project was the result of efforts from hundreds of people at many sites. The USENIX STUG Lifetime Achievement Award includes the names of many, but certainly not all, major contributors to the Software Tools project. Legacy: By the late 1980s, Unix was becoming more available, Microsoft had taken over the PC market, and the need for the VOS environment started to subside. The STUG group decided to disband, choosing to donate the group's financial legacy to endow a yearly USENIX “STUG Award”. This award “recognizes significant contributions to the community that reflect the spirit and character demonstrated by those who came together in the Software Tools Users Group.
Recipients of the annual STUG Award conspicuously exhibit a contribution to the reusable code base available to all and/or the provision of a significant enabling technology to users in a widely available form.”
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Embedded C++** Embedded C++: Embedded C++ (EC++) is a dialect of the C++ programming language for embedded systems. It was defined by an industry group led by major Japanese central processing unit (CPU) manufacturers, including NEC, Hitachi, Fujitsu, and Toshiba, to address the shortcomings of C++ for embedded applications. The goal of the effort is to preserve the most useful object-oriented features of the C++ language yet minimize code size while maximizing execution efficiency and making compiler construction simpler. The official website states the goal as "to provide embedded systems programmers with a subset of C++ that is easy for the average C programmer to understand and use". Differences from C++: Embedded C++ excludes some features of C++, such as multiple inheritance, templates, exceptions, runtime type information (RTTI), namespaces, and new-style casts. Some compilers, such as those from Green Hills and IAR Systems, allow certain features of ISO/ANSI C++ to be enabled in Embedded C++. IAR Systems calls this "Extended Embedded C++". Compilation: An EC++ program can be compiled with any C++ compiler, but a compiler specific to EC++ may have an easier time doing optimization. Compilers specific to EC++ are provided by companies such as IAR Systems; Freescale Semiconductor (a 2004 spin-off of Motorola, which had acquired Metrowerks in 1999); Tasking Software, part of Altium Limited; and Green Hills Software. Criticism: The language has had a poor reception among many expert C++ programmers. In particular, Bjarne Stroustrup says, "To the best of my knowledge EC++ is dead (2004), and if it isn't it ought to be." In fact, the official English EC++ website has not been updated since 2002. Nevertheless, a restricted subset of C++ (based on Embedded C++) has been adopted by Apple Inc. as the exclusive programming language to create all I/O Kit device drivers for Apple's macOS, iPadOS and iOS operating systems of the popular Macintosh, iPhone, and iPad products. Apple engineers felt the exceptions, multiple inheritance, templates, and runtime type information features of standard C++ were either insufficient or not efficient enough for use in a high-performance, multithreaded kernel.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zerumbone synthase** Zerumbone synthase: Zerumbone synthase (EC 1.1.1.326, ZSD1) is an enzyme with systematic name 10-hydroxy-alpha-humulene:NAD+ oxidoreductase. This enzyme catalyses the following chemical reaction: 10-hydroxy-alpha-humulene + NAD+ ⇌ zerumbone + NADH + H+. The enzyme was cloned from shampoo ginger, Zingiber zerumbet.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Journal of NeuroInterventional Surgery** Journal of NeuroInterventional Surgery: The Journal of NeuroInterventional Surgery is a peer-reviewed medical journal covering the field of neurointerventional surgery. It is published by the BMJ Group on behalf of the Society of NeuroInterventional Surgery. It is also the official journal of the Interventional Chapter of the Australian and New Zealand Society of Neuroradiology. It is abstracted and indexed by Current Contents, CINAHL and Index Medicus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Faroese Braille** Faroese Braille: Faroese Braille is the braille alphabet of the Faroese language. It has the same basic letter assignments as Scandinavian Braille and is quite similar to Icelandic Braille. Faroese Braille: All base letters are as in International Braille (meaning the French Braille alphabet, as that was the first one created). The letters are also the same as in the other Nordic Braille alphabets, just as they are in the normal printed Nordic alphabets. For example, å/á, ö/ø and ä/æ are treated as the same letters not only in Braille between, say, Faroese and Swedish, but are also recognized as the same characters between, for example, ink-printed Norwegian and Swedish (which form a given language uses is merely a stylistic choice). That is to say, all letter assignments in the Swedish and Icelandic Braille alphabets are the same in the Faroese one. For example, ð is the same letter in both Faroese and Icelandic ink-print characters, and in their Braille alphabets. The difference in the alphabets comes only in the Faroese diphthongs (ei being dots 26, ey dots 356, oy dots 24 – that is to say, "ei" is represented by the dots in the second row of the first column and the third row of the second column of a Braille cell). These diphthongs are also considered single sounds when spelling Faroese in general: a word would always be spelled with "ey" instead of "e-y", and the two letters cannot be separated. These assignments conveniently do not exist in the Icelandic Braille alphabet, so they are an easy way to tell whether a text is in Faroese or Icelandic Braille. Likewise, the Icelandic letter þ (which no longer exists in Faroese) is assigned to dots 1246, a character that does not already exist in the Faroese Braille alphabet. In summary, it is just as easy to read Icelandic Braille if one is a Faroese-speaker as it is to read Icelandic ink-printed text if one can read Faroese. Punctuation: The apostrophe, ⠄, is also used as the mark of abbreviations, while ⠲ is used as a period / full stop.
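The dot-number notation above follows the standard braille convention: dots 1-3 run down the left column of the 2×3 cell and dots 4-6 down the right column. The sketch below renders the Faroese-specific assignments mentioned in the article as cells; the function and the text rendering are illustrative, not part of any braille standard.

```python
# Illustrative sketch: rendering the dot-number assignments mentioned
# above as 2x3 braille cells. Standard numbering: dots 1-3 are the
# left column (top to bottom), dots 4-6 the right column.

PATTERNS = {
    "ei": {2, 6},        # Faroese diphthong
    "ey": {3, 5, 6},     # Faroese diphthong
    "oy": {2, 4},        # Faroese diphthong
    "þ":  {1, 2, 4, 6},  # Icelandic only; unused in Faroese
}

def render_cell(dots):
    """Draw a 2x3 cell, 'o' for raised dots, '.' for flat positions."""
    rows = []
    for row in range(3):
        left = "o" if (row + 1) in dots else "."
        right = "o" if (row + 4) in dots else "."
        rows.append(left + right)
    return "\n".join(rows)

for letter, dots in PATTERNS.items():
    print(letter, sorted(dots), render_cell(dots), sep="\n")
```

Running this for "ei", for example, raises exactly the two dots the article describes: second row of the first column (dot 2) and third row of the second column (dot 6).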
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trivet** Trivet: A trivet is an object placed between a serving dish or bowl and a dining table, usually to protect the table from heat damage. Whilst tri- means three, and -vet comes from -ped, meaning 'foot' / 'feet', trivets often have four 'feet', and some trivets, including many wooden trivets, have no 'feet' at all. Trivet: Trivet also refers to a tripod used to elevate pots from the coals of an open fire (the word trivet itself ultimately comes from Latin tripes, meaning "tripod"). Metal trivets are often tripod-like structures with three legs that support the trivet horizontally to hold the dish or pot above the table surface. These are often included with modern non-electric pressure cookers. A trivet may also contain a receptacle for a candle that can be lit to keep food warm. Trivet: A three-legged design can reduce wobbling on uneven surfaces. Modern trivets are made from metal, wood, ceramic, fabric, silicone or cork. Trivet: When roasting any meat in an oven, trivet racks – which typically fit into roasting pans – are often used to hold the meat joint above the direct heat of the roasting pan and allow the juices of the joint to drip into the roasting pan for the subsequent making of gravy. A trivet can also be made of freshly cut carrot, celery and onion. This not only raises the meat, it has the further advantage of providing a gravy-friendly liquid when the vegetables and juices are sieved at the end of cooking. History: Trivets have been in use since antiquity, and are sometimes referred to as "fire stands". In the tomb of the Chinese ruler Zhao Mo (2nd century BCE) were found several metal trivets that had been used by him during his lifetime, now stored at the Museum of the Mausoleum of the Nanyue King. Fire-stands were also uncovered at archaeological sites in Israel, dating back to the Philistine time-period (circa 1st millennium BCE).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Subjective visual vertical** Subjective visual vertical: Subjective Visual Vertical (SVV) is a diagnostic test of the inner ear that assesses a patient's perception of verticality and detects signs of an abnormal tilt that can cause dizziness or vertigo. It investigates the function of the utricle, one of the two otolith organs located in the vertebrate inner ear, to evaluate the perception of verticality. As its name suggests, the test is subjective and cannot directly diagnose Acute Vestibular Syndrome (AVS), Ménière's disease, vestibular migraine, vestibular neuritis or other central nervous system pathologies. Technique and usage: This test is conducted in various ways. One method involves a dark room where a patient sits and adjusts a remotely controlled laser projection line to their perceived horizontal or vertical position. Sometimes this involves a dynamic element such as a rotary chair. Another method, known as the bucket test, uses a bucket placed over the patient's head. The clinician rotates the bucket until a line at the bottom of the bucket is perceived by the patient to be vertical (a scoring sketch follows below). The Subjective Virtual Visual goggle is a trademarked method, which employs a goggle displaying a vertical line and a hand-held remote. It allows the clinician to administer the test while tilting the patient's head. The test is used for the following objectives: diagnosis of vestibular disorders; assessment of the effectiveness of vestibular rehabilitation in patients suffering from vertigo; assessment of chronic dizziness and other otolith disorders; and differentiation between peripheral and central vestibular disorders.
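A minimal sketch of how bucket-test trials might be scored: average the signed deviation between the patient's perceived vertical and true vertical over several trials. The trial values and the 2° cutoff below are illustrative assumptions for this sketch, not figures given in this article.

```python
# Illustrative SVV scoring sketch: signed tilt errors (degrees) from
# repeated bucket-test trials. Positive = perceived vertical tilted
# clockwise. The trial data and the 2-degree cutoff are assumptions
# for illustration, not values from this article.

def mean_svv_deviation(trials_deg):
    """Average signed deviation of perceived vertical from true vertical."""
    return sum(trials_deg) / len(trials_deg)

trials = [1.5, 2.0, 2.5, 1.0, 3.0]  # hypothetical trial results
mean_dev = mean_svv_deviation(trials)
CUTOFF_DEG = 2.0  # assumed illustrative cutoff, not a clinical standard

print(f"Mean SVV deviation: {mean_dev:+.1f} degrees")
if abs(mean_dev) > CUTOFF_DEG:
    print("Deviation exceeds the assumed cutoff; may warrant follow-up.")
```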
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Canon EF 28-200mm lens** Canon EF 28-200mm lens: The EF 28-200mm f/3.5-5.6 USM lens was a superzoom lens made by Canon Inc. Canon EF 28-200mm lens: The lens has an EF-type mount, which fits the Canon EOS line of cameras. Due to its large focal length range, it is useful for travel photography, although one review rated the image quality of this lens as poor. This lens covered roughly the same focal length range for 35 mm (full frame) cameras as the EF-S 18–135mm lens currently covers for cropped-sensor cameras (35 mm equivalent focal length: 29–216mm). The EF-S 18–135mm lens is one of the standard lenses, often sold bundled with Canon EF-S cameras. Currently, similar lenses for cameras with the Canon EF lens mount are available only from third-party manufacturers. The available Canon lenses with the closest focal length ranges are, e.g., the EF 28–300mm lens, which is much heavier, or the EF 70–200mm f/4L.
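The 29–216mm equivalence quoted above follows from the crop factor of Canon's APS-C sensors, which multiplies the lens's actual focal length by about 1.6 to give the 35 mm equivalent field of view. A quick check (the helper function is illustrative):

```python
# Quick check of the 35 mm equivalent focal lengths quoted above.
# Canon APS-C ("EF-S") sensors have a crop factor of about 1.6.

CANON_APSC_CROP_FACTOR = 1.6

def full_frame_equivalent(focal_mm: float, crop: float = CANON_APSC_CROP_FACTOR) -> float:
    """35 mm equivalent focal length for a lens on a cropped sensor."""
    return focal_mm * crop

for f in (18, 135):
    print(f"{f}mm on APS-C ~ {full_frame_equivalent(f):.0f}mm full-frame equivalent")
# 18mm -> 29mm and 135mm -> 216mm, matching the 29-216mm range above.
```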
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Conformal field theory** Conformal field theory: A conformal field theory (CFT) is a quantum field theory that is invariant under conformal transformations. In two dimensions, there is an infinite-dimensional algebra of local conformal transformations, and conformal field theories can sometimes be exactly solved or classified. Conformal field theory has important applications to condensed matter physics, statistical mechanics, quantum statistical mechanics, and string theory. Statistical and condensed matter systems are indeed often conformally invariant at their thermodynamic or quantum critical points. Scale invariance vs conformal invariance: In quantum field theory, scale invariance is a common and natural symmetry, because any fixed point of the renormalization group is by definition scale invariant. Conformal symmetry is stronger than scale invariance, and one needs additional assumptions to argue that it should appear in nature. The basic idea behind its plausibility is that local scale invariant theories have their currents given by Tμν ξν, where ξν is a Killing vector and Tμν is a conserved operator (the stress-tensor) of dimension exactly d. For the associated symmetries to include scale but not conformal transformations, the trace Tμμ has to be a non-zero total derivative, implying that there is a non-conserved operator of dimension exactly d−1. Under some assumptions it is possible to completely rule out this type of non-renormalization and hence prove that scale invariance implies conformal invariance in a quantum field theory, for example in unitary compact conformal field theories in two dimensions. Scale invariance vs conformal invariance: While it is possible for a quantum field theory to be scale invariant but not conformally invariant, examples are rare. For this reason, the terms are often used interchangeably in the context of quantum field theory. Two dimensions vs higher dimensions: The number of independent conformal transformations is infinite in two dimensions, and finite in higher dimensions. This makes conformal symmetry much more constraining in two dimensions. All conformal field theories share the ideas and techniques of the conformal bootstrap. But the resulting equations are more powerful in two dimensions, where they are sometimes exactly solvable (for example in the case of minimal models), in contrast to higher dimensions, where numerical approaches dominate. Two dimensions vs higher dimensions: The development of conformal field theory has been earlier and deeper in the two-dimensional case, in particular after the 1983 article by Belavin, Polyakov and Zamolodchikov. The term conformal field theory has sometimes been used with the meaning of two-dimensional conformal field theory, as in the title of a 1997 textbook. Higher-dimensional conformal field theories have become more popular with the AdS/CFT correspondence in the late 1990s, and the development of numerical conformal bootstrap techniques in the 2000s. Two dimensions vs higher dimensions: Global vs local conformal symmetry in two dimensions The global conformal group of the Riemann sphere is the group of Möbius transformations PSL2(C), which is finite-dimensional.
On the other hand, infinitesimal conformal transformations form the infinite-dimensional Witt algebra: the conformal Killing equations in two dimensions, ∂μξν + ∂νξμ = (∂⋅ξ)ημν, reduce to just the Cauchy-Riemann equations ∂_z̄ ξ(z) = 0 = ∂_z ξ̄(z̄), and the infinity of modes of arbitrary analytic coordinate transformations ξ(z) yields the infinity of Killing vector fields z^n ∂_z. Strictly speaking, it is possible for a two-dimensional conformal field theory to be local (in the sense of possessing a stress-tensor) while still only exhibiting invariance under the global PSL2(C). This turns out to be unique to non-unitary theories; an example is the biharmonic scalar. This property should be viewed as even more special than scale without conformal invariance, as it requires Tμμ to be a total second derivative. Two dimensions vs higher dimensions: Global conformal symmetry in two dimensions is a special case of conformal symmetry in higher dimensions, and is studied with the same techniques. This is done not only in theories that have global but not local conformal symmetry, but also in theories that do have local conformal symmetry, for the purpose of testing techniques or ideas from higher-dimensional CFT. In particular, numerical bootstrap techniques can be tested by applying them to minimal models, and comparing the results with the known analytic results that follow from local conformal symmetry. Two dimensions vs higher dimensions: Conformal field theories with a Virasoro symmetry algebra In a conformally invariant two-dimensional quantum theory, the Witt algebra of infinitesimal conformal transformations has to be centrally extended. The quantum symmetry algebra is therefore the Virasoro algebra, which depends on a number called the central charge. This central extension can also be understood in terms of a conformal anomaly. Two dimensions vs higher dimensions: It was shown by Alexander Zamolodchikov that there exists a function which decreases monotonically under the renormalization group flow of a two-dimensional quantum field theory, and is equal to the central charge for a two-dimensional conformal field theory. This is known as the Zamolodchikov C-theorem, and tells us that renormalization group flow in two dimensions is irreversible. In addition to being centrally extended, the symmetry algebra of a conformally invariant quantum theory has to be complexified, resulting in two copies of the Virasoro algebra. In Euclidean CFT, these copies are called holomorphic and antiholomorphic. In Lorentzian CFT, they are called left-moving and right-moving. Both copies have the same central charge. Two dimensions vs higher dimensions: The space of states of a theory is a representation of the product of the two Virasoro algebras. This space is a Hilbert space if the theory is unitary. Two dimensions vs higher dimensions: This space may contain a vacuum state, or in statistical mechanics, a thermal state. Unless the central charge vanishes, there cannot exist a state that leaves the entire infinite dimensional conformal symmetry unbroken. The best we can have is a state that is invariant under the generators Ln≥−1 of the Virasoro algebra, whose basis is (Ln)n∈Z. This contains the generators L−1, L0, L1 of the global conformal transformations. The rest of the conformal group is spontaneously broken. Conformal symmetry: Definition and Jacobian For a given spacetime and metric, a conformal transformation is a transformation that preserves angles.
We will focus on conformal transformations of the flat d-dimensional Euclidean space Rd or of the Minkowski space R1,d−1. If x→f(x) is a conformal transformation, the Jacobian J^μ_ν(x) = ∂f^μ(x)/∂x^ν is of the form J^μ_ν(x) = Ω(x) R^μ_ν(x), where Ω(x) is the scale factor, and R^μ_ν(x) is a rotation (i.e. an orthogonal matrix) or Lorentz transformation. Conformal symmetry: Conformal group The conformal group is locally isomorphic to SO(1,d+1) (Euclidean) or SO(2,d) (Minkowski). This includes translations, rotations (Euclidean) or Lorentz transformations (Minkowski), and dilations, i.e. scale transformations x^μ → λx^μ. This also includes special conformal transformations. For any translation Ta(x) = x + a, there is a special conformal transformation Sa = I∘Ta∘I, where I is the inversion such that I(x^μ) = x^μ/x². In the sphere Sd = Rd ∪ {∞}, the inversion exchanges 0 with ∞. Translations leave ∞ fixed, while special conformal transformations leave 0 fixed. Conformal algebra The commutation relations of the corresponding Lie algebra are [Pμ,Pν] = 0, [D,Kμ] = −Kμ, [D,Pμ] = Pμ, [Kμ,Kν] = 0, [Kμ,Pν] = ημνD − iMμν, where P generates translations, D generates dilations, Kμ generate special conformal transformations, and Mμν generate rotations or Lorentz transformations. The tensor ημν is the flat metric. Conformal symmetry: Global issues in Minkowski space In Minkowski space, the conformal group does not preserve causality. Observables such as correlation functions are invariant under the conformal algebra, but not under the conformal group. As shown by Lüscher and Mack, it is possible to restore the invariance under the conformal group by extending the flat Minkowski space into a Lorentzian cylinder. The original Minkowski space is conformally equivalent to a region of the cylinder called a Poincaré patch. In the cylinder, global conformal transformations do not violate causality: instead, they can move points outside the Poincaré patch. Correlation functions and conformal bootstrap: In the conformal bootstrap approach, a conformal field theory is a set of correlation functions that obey a number of axioms. Correlation functions and conformal bootstrap: The n-point correlation function ⟨O1(x1)⋯On(xn)⟩ is a function of the positions xi and other parameters of the fields O1,…,On. In the bootstrap approach, the fields themselves make sense only in the context of correlation functions, and may be viewed as efficient notations for writing axioms for correlation functions. Correlation functions depend linearly on fields, in particular ∂x1⟨O1(x1)⋯⟩ = ⟨∂x1O1(x1)⋯⟩. We focus on CFT on the Euclidean space Rd. In this case, correlation functions are Schwinger functions. They are defined for xi ≠ xj, and do not depend on the order of the fields. In Minkowski space, correlation functions are Wightman functions. They can depend on the order of the fields, as fields commute only if they are spacelike separated. A Euclidean CFT can be related to a Minkowskian CFT by Wick rotation, for example thanks to the Osterwalder-Schrader theorem. In such cases, Minkowskian correlation functions are obtained from Euclidean correlation functions by an analytic continuation that depends on the order of the fields. Correlation functions and conformal bootstrap: Behaviour under conformal transformations Any conformal transformation x→f(x) acts linearly on fields O(x)→πf(O)(x), such that f→πf is a representation of the conformal group, and correlation functions are invariant: ⟨πf(O1)(x1)⋯πf(On)(xn)⟩ = ⟨O1(x1)⋯On(xn)⟩.
Primary fields are fields that transform into themselves via πf. The behaviour of a primary field is characterized by a number Δ called its conformal dimension, and a representation ρ of the rotation or Lorentz group. For a primary field, we then have πf(O)(x) = Ω(x′)^{−Δ} ρ(R(x′)) O(x′), where x′ = f^{−1}(x). Correlation functions and conformal bootstrap: Here Ω(x) and R(x) are the scale factor and rotation that are associated to the conformal transformation f. The representation ρ is trivial in the case of scalar fields, which transform as πf(O)(x) = Ω(x′)^{−Δ} O(x′). For vector fields, the representation ρ is the fundamental representation, and we would have πf(Oμ)(x) = Ω(x′)^{−Δ} R_μ^ν(x′) O_ν(x′). A primary field that is characterized by the conformal dimension Δ and representation ρ behaves as a highest-weight vector in an induced representation of the conformal group from the subgroup generated by dilations and rotations. In particular, the conformal dimension Δ characterizes a representation of the subgroup of dilations. In two dimensions, the fact that this induced representation is a Verma module appears throughout the literature. For higher-dimensional CFTs (in which the maximally compact subalgebra is larger than the Cartan subalgebra), it has recently been appreciated that this representation is a parabolic or generalized Verma module. Derivatives (of any order) of primary fields are called descendant fields. Their behaviour under conformal transformations is more complicated. For example, if O is a primary field, then πf(∂μO)(x) = ∂μ(πf(O)(x)) is a linear combination of ∂μO and O. Correlation functions of descendant fields can be deduced from correlation functions of primary fields. However, even in the common case where all fields are either primaries or descendants thereof, descendant fields play an important role, because conformal blocks and operator product expansions involve sums over all descendant fields. Correlation functions and conformal bootstrap: The collection of all primary fields Op, characterized by their scaling dimensions Δp and the representations ρp, is called the spectrum of the theory. Dependence on field positions The invariance of correlation functions under conformal transformations severely constrains their dependence on field positions. In the case of two- and three-point functions, that dependence is determined up to finitely many constant coefficients. Higher-point functions have more freedom, and are only determined up to functions of conformally invariant combinations of the positions. The two-point function of two primary fields vanishes if their conformal dimensions differ: Δ1 ≠ Δ2 ⟹ ⟨O1(x1)O2(x2)⟩ = 0. Correlation functions and conformal bootstrap: If the dilation operator is diagonalizable (i.e. if the theory is not logarithmic), there exists a basis of primary fields such that two-point functions are diagonal, i.e. i≠j ⟹ ⟨OiOj⟩ = 0. In this case, the two-point function of a scalar primary field is ⟨O(x1)O(x2)⟩ = 1/|x1−x2|^{2Δ}, where we choose the normalization of the field such that the constant coefficient, which is not determined by conformal symmetry, is one. Similarly, two-point functions of non-scalar primary fields are determined up to a coefficient, which can be set to one. In the case of a symmetric traceless tensor of rank ℓ, the two-point function is ⟨O_{μ1⋯μℓ}(x1) O_{ν1⋯νℓ}(x2)⟩ = (I_{μ1ν1}(x12) ⋯ I_{μℓνℓ}(x12) − traces)/|x1−x2|^{2Δ}, where the tensor I_{μν}(x) is defined as I_{μν}(x) = η_{μν} − 2x_μx_ν/x². Correlation functions and conformal bootstrap: The three-point function of three scalar primary fields is ⟨O1(x1)O2(x2)O3(x3)⟩ = C_{123}/(|x12|^{Δ1+Δ2−Δ3} |x13|^{Δ1+Δ3−Δ2} |x23|^{Δ2+Δ3−Δ1}), where x_ij = x_i − x_j, and C_{123} is a three-point structure constant.
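As a quick consistency check of this formula (a standard computation, added here for illustration): under a dilation f(x) = λx one has Ω = λ and x′ = x/λ, so each scalar primary transforms as πf(O)(x) = λ^{−Δ} O(x/λ), and the invariance requirement is satisfied because the exponents in the denominator sum to Δ1+Δ2+Δ3:

```latex
% Dilation check of the scalar three-point function, f(x) = \lambda x.
% Here k denotes the index distinct from i and j, and we use
% |x_{ij}/\lambda|^{-(\Delta_i+\Delta_j-\Delta_k)}
%   = \lambda^{\Delta_i+\Delta_j-\Delta_k}\,|x_{ij}|^{-(\Delta_i+\Delta_j-\Delta_k)}.
\begin{aligned}
\langle \pi_f(O_1)(x_1)\,\pi_f(O_2)(x_2)\,\pi_f(O_3)(x_3)\rangle
  &= \lambda^{-\Delta_1-\Delta_2-\Delta_3}
     \frac{C_{123}}{\prod_{i<j} |x_{ij}/\lambda|^{\Delta_i+\Delta_j-\Delta_k}} \\
  &= \lambda^{-\Delta_1-\Delta_2-\Delta_3}\,\lambda^{\Delta_1+\Delta_2+\Delta_3}
     \frac{C_{123}}{\prod_{i<j} |x_{ij}|^{\Delta_i+\Delta_j-\Delta_k}}
   = \langle O_1(x_1)\,O_2(x_2)\,O_3(x_3)\rangle .
\end{aligned}
% The sum of the three exponents is
% (\Delta_1+\Delta_2-\Delta_3)+(\Delta_1+\Delta_3-\Delta_2)+(\Delta_2+\Delta_3-\Delta_1)
%   = \Delta_1+\Delta_2+\Delta_3.
```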
With primary fields that are not necessarily scalars, conformal symmetry allows a finite number of tensor structures, and there is a structure constant for each tensor structure. In the case of two scalar fields and a symmetric traceless tensor of rank ℓ, there is only one tensor structure, and the three-point function is ⟨O1(x1)O2(x2)O_{μ1⋯μℓ}(x3)⟩ = C_{123}(V_{μ1}⋯V_{μℓ} − traces)/(|x12|^{Δ1+Δ2−Δ3} |x13|^{Δ1+Δ3−Δ2} |x23|^{Δ2+Δ3−Δ1}), where we introduce the vector V_μ = (x13_μ x23² − x23_μ x13²)/(|x12| |x13| |x23|). Correlation functions and conformal bootstrap: Four-point functions of scalar primary fields are determined up to arbitrary functions g(u,v) of the two cross-ratios u = (x12² x34²)/(x13² x24²) and v = (x14² x23²)/(x13² x24²). The four-point function is then ⟨O1(x1)O2(x2)O3(x3)O4(x4)⟩ = ((|x24|/|x14|)^{Δ1−Δ2} (|x14|/|x13|)^{Δ3−Δ4}/(|x12|^{Δ1+Δ2} |x34|^{Δ3+Δ4})) g(u,v). Correlation functions and conformal bootstrap: Operator product expansion The operator product expansion (OPE) is more powerful in conformal field theory than in more general quantum field theories. This is because in conformal field theory, the operator product expansion's radius of convergence is finite (i.e. it is not zero). Provided the positions x1, x2 of two fields are close enough, the operator product expansion rewrites the product of these two fields as a linear combination of fields at a given point, which can be chosen as x2 for technical convenience. Correlation functions and conformal bootstrap: The operator product expansion of two fields takes the form O1(x1)O2(x2) = Σ_k c_{12}^k(x1−x2) O_k(x2), where c_{12}^k(x) is some coefficient function, and the sum in principle runs over all fields in the theory. (Equivalently, by the state-field correspondence, the sum runs over all states in the space of states.) Some fields may actually be absent, in particular due to constraints from symmetry: conformal symmetry, or extra symmetries. Correlation functions and conformal bootstrap: If all fields are primary or descendant, the sum over fields can be reduced to a sum over primaries, by rewriting the contributions of any descendant in terms of the contribution of the corresponding primary: O1(x1)O2(x2) = Σ_p C_{12}^p P_p(x1−x2, ∂_{x2}) O_p(x2), where the fields Op are all primary, and C_{12}^p is the three-point structure constant (which for this reason is also called OPE coefficient). The differential operator P_p(x1−x2, ∂_{x2}) is an infinite series in derivatives, which is determined by conformal symmetry and therefore in principle known. Correlation functions and conformal bootstrap: Viewing the OPE as a relation between correlation functions shows that the OPE must be associative. Furthermore, if the space is Euclidean, the OPE must be commutative, because correlation functions do not depend on the order of the fields, i.e. O1(x1)O2(x2) = O2(x2)O1(x1). The existence of the operator product expansion is a fundamental axiom of the conformal bootstrap. However, it is generally not necessary to compute operator product expansions and in particular the differential operators P_p(x1−x2, ∂_{x2}). Rather, it is the decomposition of correlation functions into structure constants and conformal blocks that is needed. The OPE can in principle be used for computing conformal blocks, but in practice there are more efficient methods. Correlation functions and conformal bootstrap: Conformal blocks and crossing symmetry Using the OPE O1(x1)O2(x2), a four-point function can be written as a combination of three-point structure constants and s-channel conformal blocks, ⟨O1(x1)O2(x2)O3(x3)O4(x4)⟩ = Σ_p C_{12p} C_{34p} G_p^{(s)}(xi). Correlation functions and conformal bootstrap: The conformal block G_p^{(s)}(xi) is the sum of the contributions of the primary field Op and its descendants. It depends on the fields Oi and their positions.
If the three-point functions ⟨O1O2Op⟩ or ⟨O3O4Op⟩ involve several independent tensor structures, the structure constants and conformal blocks depend on these tensor structures, and the primary field Op contributes several independent blocks. Conformal blocks are determined by conformal symmetry, and known in principle. To compute them, there are recursion relations and integrable techniques. Using the OPE O1(x1)O4(x4) or O1(x1)O3(x3), the same four-point function is written in terms of t-channel conformal blocks or u-channel conformal blocks, ⟨O1(x1)O2(x2)O3(x3)O4(x4)⟩ = Σ_p C_{14p} C_{23p} G_p^{(t)}(xi) = Σ_p C_{13p} C_{24p} G_p^{(u)}(xi). Correlation functions and conformal bootstrap: The equality of the s-, t- and u-channel decompositions is called crossing symmetry: a constraint on the spectrum of primary fields, and on the three-point structure constants. Correlation functions and conformal bootstrap: Conformal blocks obey the same conformal symmetry constraints as four-point functions. In particular, s-channel conformal blocks can be written in terms of functions g_p^{(s)}(u,v) of the cross-ratios. While the OPE O1(x1)O2(x2) only converges if |x12| < min(|x23|, |x24|), conformal blocks can be analytically continued to all (non pairwise coinciding) values of the positions. In Euclidean space, conformal blocks are single-valued real-analytic functions of the positions except when the four points xi lie on a circle but in a singly-transposed cyclic order [1324], and only in these exceptional cases does the decomposition into conformal blocks not converge. Correlation functions and conformal bootstrap: A conformal field theory in flat Euclidean space Rd is thus defined by its spectrum {(Δp,ρp)} and OPE coefficients (or three-point structure constants) {Cpp′p″}, satisfying the constraint that all four-point functions are crossing-symmetric. From the spectrum and OPE coefficients (collectively referred to as the CFT data), correlation functions of arbitrary order can be computed. Features of conformal field theories: Unitarity A conformal field theory is unitary if its space of states has a positive definite scalar product such that the dilation operator is self-adjoint. Then the scalar product endows the space of states with the structure of a Hilbert space. Features of conformal field theories: In Euclidean conformal field theories, unitarity is equivalent to reflection positivity of correlation functions: one of the Osterwalder-Schrader axioms. Unitarity implies that the conformal dimensions of primary fields are real and bounded from below. The lower bound depends on the spacetime dimension d, and on the representation of the rotation or Lorentz group in which the primary field transforms. For scalar fields, the unitarity bound is Δ ≥ (d−2)/2. Features of conformal field theories: In a unitary theory, three-point structure constants must be real, which in turn implies that four-point functions obey certain inequalities. Powerful numerical bootstrap methods are based on exploiting these inequalities. Compactness A conformal field theory is compact if it obeys three conditions: All conformal dimensions are real. Features of conformal field theories: For any Δ ∈ R there are finitely many states whose dimensions are less than Δ. There is a unique state with the dimension Δ = 0, and it is the vacuum state, i.e. the corresponding field is the identity field. (The identity field is the field whose insertion into correlation functions does not modify them, i.e. ⟨I(x)⋯⟩ = ⟨⋯⟩.)
The name comes from the fact that if a 2D conformal field theory is also a sigma model, it will satisfy these conditions if and only if its target space is compact. Features of conformal field theories: It is believed that all unitary conformal field theories are compact in dimension d>2. Without unitarity, on the other hand, it is possible to find CFTs in dimension four and in dimension 4−ϵ that have a continuous spectrum. And in dimension two, Liouville theory is unitary but not compact. Extra symmetries A conformal field theory may have extra symmetries in addition to conformal symmetry. For example, the Ising model has a Z2 symmetry, and superconformal field theories have supersymmetry. Examples: Mean field theory A generalized free field is a field whose correlation functions are deduced from its two-point function by Wick's theorem. For instance, if ϕ is a scalar primary field of dimension Δ, its four-point function reads ⟨ϕ(x1)ϕ(x2)ϕ(x3)ϕ(x4)⟩ = 1/(|x12|^{2Δ}|x34|^{2Δ}) + 1/(|x13|^{2Δ}|x24|^{2Δ}) + 1/(|x14|^{2Δ}|x23|^{2Δ}). And if ϕ1, ϕ2 are two scalar primary fields such that ⟨ϕ1ϕ2⟩ = 0 (which is the case in particular if Δ1 ≠ Δ2), we have the four-point function ⟨ϕ1(x1)ϕ1(x2)ϕ2(x3)ϕ2(x4)⟩ = 1/(|x12|^{2Δ1}|x34|^{2Δ2}). Examples: Mean field theory is a generic name for conformal field theories that are built from generalized free fields. For example, a mean field theory can be built from one scalar primary field ϕ. Then this theory contains ϕ, its descendant fields, and the fields that appear in the OPE ϕϕ. The primary fields that appear in ϕϕ can be determined by decomposing the four-point function ⟨ϕϕϕϕ⟩ in conformal blocks: their conformal dimensions belong to 2Δ+2N: in mean field theory, the conformal dimension is conserved modulo integers. Similarly, it is possible to construct mean field theories starting from a field with non-trivial Lorentz spin. For example, the 4d Maxwell theory (in the absence of charged matter fields) is a mean field theory built out of an antisymmetric tensor field Fμν with scaling dimension Δ=2. Mean field theories have a Lagrangian description in terms of a quadratic action involving the Laplacian raised to an arbitrary real power (which determines the scaling dimension of the field). For a generic scaling dimension, the power of the Laplacian is non-integer. The corresponding mean field theory is then non-local (e.g. it does not have a conserved stress tensor operator). Examples: Critical Ising model The critical Ising model is the critical point of the Ising model on a hypercubic lattice in two or three dimensions. It has a Z2 global symmetry, corresponding to flipping all spins. The two-dimensional critical Ising model includes the M(4,3) Virasoro minimal model, which can be solved exactly. There is no Ising CFT in d≥4 dimensions. Examples: Critical Potts model The critical Potts model with q = 2,3,4,⋯ colors is a unitary CFT that is invariant under the permutation group Sq. It is a generalization of the critical Ising model, which corresponds to q=2. The critical Potts model exists in a range of dimensions depending on q. The critical Potts model may be constructed as the continuum limit of the Potts model on a d-dimensional hypercubic lattice. In the Fortuin-Kasteleyn reformulation in terms of clusters, the Potts model can be defined for q∈C, but it is not unitary if q is not integer. Examples: Critical O(N) model The critical O(N) model is a CFT invariant under the orthogonal group. For any integer N, it exists as an interacting, unitary and compact CFT in d=3 dimensions (and for N=1 also in two dimensions).
It is a generalization of the critical Ising model, which corresponds to the O(N) CFT at N=1. The O(N) CFT can be constructed as the continuum limit of a lattice model with spins that are N-vectors. Examples: Alternatively, the critical O(N) model can be constructed as the ε→1 limit of the Wilson-Fisher fixed point in d=4−ε dimensions. At ε=0, the Wilson-Fisher fixed point becomes the tensor product of N free scalars with dimension Δ=1. For 0<ε<1 the model in question is non-unitary. When N is large, the O(N) model can be solved perturbatively in a 1/N expansion by means of the Hubbard–Stratonovich transformation. In particular, the N→∞ limit of the critical O(N) model is well-understood. Examples: Conformal gauge theories Some conformal field theories in three and four dimensions admit a Lagrangian description in the form of a gauge theory, either abelian or non-abelian. Examples of such CFTs are conformal QED with sufficiently many charged fields in d=3, or the Banks-Zaks fixed point in d=4. Applications: Continuous phase transitions Continuous phase transitions (critical points) of classical statistical physics systems with D spatial dimensions are often described by Euclidean conformal field theories. A necessary condition for this to happen is that the critical point should be invariant under spatial rotations and translations. However this condition is not sufficient: some exceptional critical points are described by scale invariant but not conformally invariant theories. If the classical statistical physics system is reflection positive, the corresponding Euclidean CFT describing its critical point will be unitary. Applications: Continuous quantum phase transitions in condensed matter systems with D spatial dimensions may be described by Lorentzian D+1 dimensional conformal field theories (related by Wick rotation to Euclidean CFTs in D+1 dimensions). Apart from translation and rotation invariance, an additional necessary condition for this to happen is that the dynamical critical exponent z should be equal to 1. CFTs describing such quantum phase transitions (in the absence of quenched disorder) are always unitary. Applications: String theory The world-sheet description of string theory involves a two-dimensional CFT coupled to dynamical two-dimensional quantum gravity (or supergravity, in the case of superstring theory). Consistency of string theory models imposes constraints on the central charge of this CFT, which should be c=26 in bosonic string theory and c=10 in superstring theory. Coordinates of the spacetime in which string theory lives correspond to bosonic fields of this CFT. Applications: AdS/CFT correspondence Conformal field theories play a prominent role in the AdS/CFT correspondence, in which a gravitational theory in anti-de Sitter space (AdS) is equivalent to a conformal field theory on the AdS boundary. Notable examples are d = 4, N = 4 supersymmetric Yang–Mills theory, which is dual to Type IIB string theory on AdS5 × S5, and d = 3, N = 6 super-Chern–Simons theory, which is dual to M-theory on AdS4 × S7. (The prefix "super" denotes supersymmetry, N denotes the degree of extended supersymmetry possessed by the theory, and d the number of space-time dimensions on the boundary.)
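As a small numerical illustration of the conformal invariance discussed in this article (a sketch, not drawn from its sources): the cross-ratios u and v defined earlier are unchanged when all four points are mapped by the inversion I(x) = x/x², which together with translations and rotations generates the conformal group.

```python
# Numerical check that the cross-ratios u and v are invariant
# under the inversion I(x) = x / x^2 (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def cross_ratios(x):
    """u, v for four points x[0..3] in R^d, with x_ij^2 the squared distances."""
    d2 = lambda i, j: np.sum((x[i] - x[j])**2)
    u = d2(0, 1) * d2(2, 3) / (d2(0, 2) * d2(1, 3))
    v = d2(0, 3) * d2(1, 2) / (d2(0, 2) * d2(1, 3))
    return u, v

def invert(x):
    """Conformal inversion applied to each point (rows of x)."""
    return x / np.sum(x**2, axis=1, keepdims=True)

points = rng.normal(size=(4, 3))  # four random points in R^3
print(np.allclose(cross_ratios(points), cross_ratios(invert(points))))  # True
```

The check works because under inversion each squared distance transforms as x_ij² → x_ij²/(x_i² x_j²), and these prefactors cancel between the numerators and denominators of u and v.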
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2,4 Dienoyl-CoA reductase** 2,4 Dienoyl-CoA reductase: 2,4 Dienoyl-CoA reductase, also known as DECR1, is an enzyme which in humans is encoded by the DECR1 gene, which resides on chromosome 8. This enzyme catalyzes a reaction of the form 2,4-dienoyl-CoA + NADPH + H+ → 3-trans-enoyl-CoA + NADP+. DECR1 participates in the beta oxidation and metabolism of polyunsaturated fatty enoyl-CoA esters. Specifically, it catalyzes the reduction of 2,4 dienoyl-CoA thioesters of varying length by the NADPH cofactor to 3-trans-enoyl-CoA of equivalent length. Unlike the breakdown of saturated fat, cis and trans polyunsaturated fatty acid degradation requires three additional enzymes to generate a product compatible with the standard beta oxidation pathway. DECR is the second such enzyme (the others being enoyl-CoA isomerase and dienoyl-CoA isomerase) and is the rate-limiting step in this auxiliary flow. DECR is capable of reducing both 2-trans,4-cis-dienoyl-CoA and 2-trans,4-trans-dienoyl-CoA thioesters with equal efficiency. This is unusual, since most enzymes are highly stereoselective or stereospecific. There is no clear explanation for DECR's lack of stereospecificity. Structure: Eukaryotic DECR exists in both the mitochondria (mDECR) and the peroxisome (pDECR, coded by gene DECR2). The enzymes from each organelle are homologous and part of the short-chain dehydrogenase/reductase (SDR) superfamily. mDECR is 124 kDa, consisting of 335 amino acids before post-translational modification. The secondary structure shares many of the motifs of SDR, including a Rossmann fold for strong NADPH binding. The protein exists as a homotetramer under physiological conditions, but has been shown to also form monomers and dimers in solution. Crystallization of mDECR shows the enzyme provides a network of hydrogen bonds from key residues in the active site to NADPH and the 2,4-dienoyl-CoA, which positions the hydride at 3.4 Å from the Cδ, compared with 4.0 Å from the Cβ. The enolate intermediate discussed below is stabilized by additional hydrogen bonds to Tyr166 and Asn148. Lys214 and Ser210 (conserved residues in all SDR enzymes) are thought to increase the pKa of Tyr166 and stabilize the transition state. Additionally, at one end of the active site there is a flexible loop that provides sufficient room for long carbon chains. This likely gives the enzyme flexibility to process fatty acid chains of various lengths. Substrate length for mDECR catalysis is thought to be limited to 20 carbons, beyond which a very long chain fatty acid is first partially oxidized by pDECR in the peroxisome. Enzyme mechanism: Eukaryotic DECR 2,4 Dienoyl-CoA thioester reduction by NADPH to 3-enoyl-CoA occurs by a two-step sequential mechanism via an enolate intermediate. DECR binds NADPH and the fatty acid thioester and positions them for specific hydride transfer to the Cδ on the hydrocarbon chain. The electrons from the Cγ-Cδ double bond move over to the Cβ-Cγ position, and those from the Cα-Cβ form an enolate. In the final step, a proton is abstracted from water to the Cα and the thioester is reformed, resulting in a single Cβ-Cγ trans double bond. Since the final proton comes from water, the pH has a significant effect on the catalytic rate, with the enzyme demonstrating maximal activity at pH ~6.0. A decrease in activity at pH < 6.0 can be explained by de-protonation of titratable residues that affect protein folding or substrate binding.
Mutant proteins with modifications at key acidic amino acids (E154, E227, E276, D300, D117) show order-of-magnitude increases in Km and/or decreases in Vmax. Enzyme mechanism: Prokaryotic DECR 2,4 Dienoyl-CoA reductase from Escherichia coli shares very similar kinetic properties with that of eukaryotes, but differs significantly in both structure and mechanism. In addition to NADPH, E. coli DECR requires a set of FAD, FMN and iron–sulfur cluster molecules to complete the electron transfer. A further distinction is that E. coli DECR produces the final 2-trans-enoyl-CoA without the need for enoyl-CoA isomerase. The active site contains an accurately positioned Tyr166 that donates a proton to the Cγ after hydride attack at the Cδ, completing the reduction in a single concerted step. Surprisingly, mutation of Tyr166 does not eliminate enzyme activity but instead changes the product to 3-trans-enoyl-CoA. The current explanation is that Glu164, an acidic residue in the active site, acts as a proton donor to the Cα when Tyr166 is not present. Function: DECR is one of three auxiliary enzymes involved in a rate-limiting step of unsaturated fatty acid oxidation in mitochondria. In particular, this enzyme contributes to breaking the double bonds at all even-numbered positions, and some double bonds at odd-numbered positions. The structure of the ternary complex of pDCR (peroxisomal 2,4-dienoyl-CoA reductase) with NADP and its substrate provides essential and unique insights into the mechanism of catalysis. Unlike other members belonging to the SDR family, catalysis by pDCR does not involve a tyrosine-serine pair. Instead, a catalytically critical aspartate, together with an invariant lysine, polarizes a water molecule to donate a proton for the formation of the product. Although pDCR can use 2,4-hexadienoyl-CoA as a substrate, the affinities for short chain fatty acids are lower. Analysis of the hinge movement of DCRs from the mitochondrion and peroxisomes sheds light on the reason behind the unique ability of the peroxisome to shorten very long chain fatty acids. Clinical significance: Mutations in the DECR1 gene may result in 2,4 Dienoyl-CoA reductase deficiency, a rare but lethal disorder. Clinical significance: Due to its role in fatty acid oxidation, DECR may serve as a therapeutic target for treating non-insulin-dependent diabetes mellitus (NIDDM), which features hyperglycemia due to increased fatty acid oxidation. In knockout mice studies, DECR1−/− subjects accumulate significant concentrations of mono- and polyunsaturated fatty acids in the liver during fasting (such as oleic acid, palmitoleic acid, linoleic acid, and linolenic acid). Mutant subjects were also found to have poor tolerance to cold, a decrease in diurnal activity, and an overall reduction in adaptation to metabolic stressors.
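To make the Km/Vmax comparison at the start of this section concrete, here is a minimal Michaelis-Menten sketch; the kinetic constants below are hypothetical, chosen only to show how a higher Km and lower Vmax suppress the catalytic rate.

```python
# Michaelis-Menten sketch: how an order-of-magnitude Km increase and a
# Vmax decrease (as reported for the active-site mutants) suppress the
# reaction velocity. All numbers are hypothetical.

def velocity(s, vmax, km):
    """Michaelis-Menten rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

substrate_uM = 50.0
wild_type = velocity(substrate_uM, vmax=100.0, km=20.0)  # arbitrary units
mutant = velocity(substrate_uM, vmax=40.0, km=200.0)     # 10x Km, lower Vmax

print(f"wild type: v = {wild_type:.1f}")  # ~71.4
print(f"mutant:    v = {mutant:.1f}")     # ~8.0
```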
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nurturant parent model** Nurturant parent model: The nurturant parent model also known as the "Nurturing Parent" is a metaphor used for a belief system, (built upon an underlying value system) that goes in contrast with the Stern Father (Strict Father) parenting belief system. Each system is reflects a contrasting value system in parenthood, i.e. conservative parenting and liberal parenting. The "Nurturant Parent" is one of the various parenting styles in practice in the world. A Nurturing Parent gives his/her children both "roots in the ground" and "wings to fly". The parent accomplishes this by conveying, role-modeling and enforcing boundaries which encourage the child to explore their personal freedom (trying their new wings) while practicing self-discipline as well. The Nurturant Parent model has a healthy respect for children's inherent intelligence. Thus children are allowed to explore their environment under a careful watch by their parents, who are responsible for protecting the child from serious mistakes, by offering guidance. A child will be picked up if the child cries because the parent wants the child to feel safe and supported. If a child grows up believing their needs are likely to be met, (s)he will grow out confident, ready to face challenges. Meanwhile the Nurturant Parent also encourages their children to have their roots deeply implanted in stable grounds. This is done by making the child practice appropriate amount of self-discipline and self-connection. They may be asked to do age-appropriate house chores, limit money they spend, take part in discussions of "feelings" and "thoughts" and practice setting healthy boundaries with strangers, friends and adults in general. Nurturant parent model: The above elaboration was originally expressed more simply as 'a family model where children are expected to explore their surroundings; at the same time, being protected by their parents.'Other ideas: True discipline is much more than strict, unquestioning obedience. Mutual respect and compassion are also rights Mutual respect and compassion are best taught by example The outside world is no more inherently hostile than it is inherently friendly. The world commands respect Research: This model is based on a study conducted by the Boston College Graduate Program in Human Development where researchers were investigating the parenting style preferred by parents of extraordinarily creative children. Most parenting books recommend the authoritative style. The researchers discovered another parenting style which they called "the nurturing parent" that focuses on responsibility, empathy, and creativity. The basic approach these parents used was to: Trust in their children's fairness and good judgment Respect their children's autonomy, thoughts and feelings Support their children's interests and goals Enjoy their children's company Protect their children from doing injury to self or others, not by establishing rules but by communicating values and discussing their children's behavior back with them Modeling the self-control, sensitivity and values they believe their children will need Further mentions: In his unfinished book, Caring Parents: a Guide to Successful Parenting, clinical social worker Herbert Jay Rosenfield encourages use of the acronym "RECEPEE", for "Reasonable Expectations, Clearly Expressed, Performed Everyday and by Example". "The factors that children need to develop good self-esteem … are primarily 'gifts' from us parents!" 
writes Rosenfield, who offers another acronym "UCARE": Uniqueness that is positive, achieved through praise, encouragement, and positive feedback Connectiveness to family, to extended family, and to a neighborhood that is safe, healthy and moderate Age-appropriate autonomy: responsibilities and privileges that parallel their age and capabilities Role Examples: parent models with good self-esteem and behavior, whom they can emulateReverend George Englehardt stated succinctly, in 1991, that "parental responsibility is to provide their children with a safe, loving, nurturing environment".The nurturant parent model is also discussed by George Lakoff in his books, including Moral Politics and Whose Freedom? In these books, the nurturant parent model is contrasted with the strict father model. Lakoff argues that if the metaphor of nation as family and government as parent is used, then progressive politics correspond to the nurturant parent model. For example, progressives want the government to make sure that the citizens are protected and assisted to achieve their potential. This might take the form of tough environmental regulations or healthcare assistance. Further mentions: The model is also consistent with slow parenting in that children are encouraged to explore the world for themselves. They have to learn to face the risks that nature presents. Although slow parenting might go further and reduce the level of protection offered by parents, it would not advocate withholding it entirely.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dock4-Ex49** Dock4-Ex49: Dock4-Ex49 is a splice variant of the signalling protein Dock4 (Dedicator of cytokinesis 4). It has been found in the brain, inner ear and eye. It is able to bind and activate the small G protein Rac and may regulate the organisation of the actin cytoskeleton in the Stereocilia of the inner ear.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lagrangian (field theory)** Lagrangian (field theory): Lagrangian field theory is a formalism in classical field theory. It is the field-theoretic analogue of Lagrangian mechanics. Lagrangian mechanics is used to analyze the motion of a system of discrete particles each with a finite number of degrees of freedom. Lagrangian field theory applies to continua and fields, which have an infinite number of degrees of freedom. Lagrangian (field theory): One motivation for the development of the Lagrangian formalism on fields, and more generally, for classical field theory, is to provide a clean mathematical foundation for quantum field theory, which is infamously beset by formal difficulties that make it unacceptable as a mathematical theory. The Lagrangians presented here are identical to their quantum equivalents, but, in treating the fields as classical fields, instead of being quantized, one can provide definitions and obtain solutions with properties compatible with the conventional formal approach to the mathematics of partial differential equations. This enables the formulation of solutions on spaces with well-characterized properties, such as Sobolev spaces. It enables various theorems to be provided, ranging from proofs of existence to the uniform convergence of formal series to the general settings of potential theory. In addition, insight and clarity is obtained by generalizations to Riemannian manifolds and fiber bundles, allowing the geometric structure to be clearly discerned and disentangled from the corresponding equations of motion. A clearer view of the geometric structure has in turn allowed highly abstract theorems from geometry to be used to gain insight, ranging from the Chern–Gauss–Bonnet theorem and the Riemann–Roch theorem to the Atiyah–Singer index theorem and Chern–Simons theory. Overview: In field theory, the independent variable is replaced by an event in spacetime (x, y, z, t), or more generally still by a point s on a Riemannian manifold. The dependent variables are replaced by the value of a field at that point in spacetime φ(x,y,z,t) so that the equations of motion are obtained by means of an action principle, written as: where the action, S , is a functional of the dependent variables φi(s) , their derivatives and s itself where the brackets denote {⋅∀α} and s = {sα} denotes the set of n independent variables of the system, including the time variable, and is indexed by α = 1, 2, 3, ..., n. The calligraphic typeface, L , is used to denote the density, and dns is the volume form of the field function, i.e., the measure of the domain of the field function. Overview: In mathematical formulations, it is common to express the Lagrangian as a function on a fiber bundle, wherein the Euler–Lagrange equations can be interpreted as specifying the geodesics on the fiber bundle. Abraham and Marsden's textbook provided the first comprehensive description of classical mechanics in terms of modern geometrical ideas, i.e., in terms of tangent manifolds, symplectic manifolds and contact geometry. Bleecker's textbook provided a comprehensive presentation of field theories in physics in terms of gauge invariant fiber bundles. Such formulations were known or suspected long before. Jost continues with a geometric presentation, clarifying the relation between Hamiltonian and Lagrangian forms, describing spin manifolds from first principles, etc. 
Current research focuses on non-rigid affine structures, (sometimes called "quantum structures") wherein one replaces occurrences of vector spaces by tensor algebras. This research is motivated by the breakthrough understanding of quantum groups as affine Lie algebras (Lie groups are, in a sense "rigid", as they are determined by their Lie algebra. When reformulated on a tensor algebra, they become "floppy", having infinite degrees of freedom; see e.g. Virasoro algebra.) Definitions: In Lagrangian field theory, the Lagrangian as a function of generalized coordinates is replaced by a Lagrangian density, a function of the fields in the system and their derivatives, and possibly the space and time coordinates themselves. In field theory, the independent variable t is replaced by an event in spacetime (x, y, z, t) or still more generally by a point s on a manifold. Definitions: Often, a "Lagrangian density" is simply referred to as a "Lagrangian". Scalar fields For one scalar field φ , the Lagrangian density will take the form: For many scalar fields In mathematical formulations, the scalar fields are understood to be coordinates on a fiber bundle, and the derivatives of the field are understood to be sections of the jet bundle. Vector fields, tensor fields, spinor fields The above can be generalized for vector fields, tensor fields, and spinor fields. In physics, fermions are described by spinor fields. Bosons are described by tensor fields, which include scalar and vector fields as special cases. Definitions: For example, if there are m real-valued scalar fields, φ1,…,φm , then the field manifold is Rm . If the field is a real vector field, then the field manifold is isomorphic to Rn Action The time integral of the Lagrangian is called the action denoted by S. In field theory, a distinction is occasionally made between the Lagrangian L, of which the time integral is the action and the Lagrangian density L , which one integrates over all spacetime to get the action: The spatial volume integral of the Lagrangian density is the Lagrangian; in 3D, The action is often referred to as the "action functional", in that it is a function of the fields (and their derivatives). Definitions: Volume form In the presence of gravity or when using general curvilinear coordinates, the Lagrangian density L will include a factor of {\textstyle {\sqrt {g}}} . This ensures that the action is invariant under general coordinate transformations. In mathematical literature, spacetime is taken to be a Riemannian manifold M and the integral then becomes the volume form Here, the ∧ is the wedge product and {\textstyle {\sqrt {|g|}}} is the square root of the determinant |g| of the metric tensor g on M . For flat spacetime (e.g., Minkowski spacetime), the unit volume is one, i.e. {\textstyle {\sqrt {|g|}}=1} and so it is commonly omitted, when discussing field theory in flat spacetime. Likewise, the use of the wedge-product symbols offers no additional insight over the ordinary concept of a volume in multivariate calculus, and so these are likewise dropped. Some older textbooks, e.g., Landau and Lifschitz write {\textstyle {\sqrt {-g}}} for the volume form, since the minus sign is appropriate for metric tensors with signature (+−−−) or (−+++) (since the determinant is negative, in either case). When discussing field theory on general Riemannian manifolds, the volume form is usually written in the abbreviated notation ∗(1) where ∗ is the Hodge star. 
That is, and so Not infrequently, the notation above is considered to be entirely superfluous, and is frequently seen. Do not be misled: the volume form is implicitly present in the integral above, even if it is not explicitly written. Definitions: Euler–Lagrange equations The Euler–Lagrange equations describe the geodesic flow of the field φ as a function of time. Taking the variation with respect to φ , one obtains Solving, with respect to the boundary conditions, one obtains the Euler–Lagrange equations: Examples: A large variety of physical systems have been formulated in terms of Lagrangians over fields. Below is a sampling of some of the most common ones found in physics textbooks on field theory. Examples: Newtonian gravity The Lagrangian density for Newtonian gravity is: where Φ is the gravitational potential, ρ is the mass density, and G in m3·kg−1·s−2 is the gravitational constant. The density L has units of J·m−3. Here the interaction term involves a continuous mass density ρ in kg·m−3. This is necessary because using a point source for a field would result in mathematical difficulties. Examples: This Lagrangian can be written in the form of L=T−V , with the T=−(∇Φ)2/8πG providing a kinetic term, and the interaction V=ρΦ the potential term. See also Nordström's theory of gravitation for how this could be modified to deal with changes over time. This form is reprised in the next example of a scalar field theory. The variation of the integral with respect to Φ is: After integrating by parts, discarding the total integral, and dividing out by δΦ the formula becomes: which is equivalent to: which yields Gauss's law for gravity. Examples: Scalar field theory The Lagrangian for a scalar field moving in a potential V(ϕ) can be written as It is not at all an accident that the scalar theory resembles the undergraduate textbook Lagrangian L=T−V for the kinetic term of a free point particle written as T=mv2/2 . The scalar theory is the field-theory generalization of a particle moving in a potential. When the V(ϕ) is the Mexican hat potential, the resulting fields are termed the Higgs fields. Examples: Sigma model Lagrangian The sigma model describes the motion of a scalar point particle constrained to move on a Riemannian manifold, such as a circle or a sphere. It generalizes the case of scalar and vector fields, that is, fields constrained to move on a flat manifold. The Lagrangian is commonly written in one of three equivalent forms: where the d is the differential. An equivalent expression is with gij the Riemannian metric on the manifold of the field; i.e. the fields ϕi are just local coordinates on the coordinate chart of the manifold. A third common form is with and U∈SU(N) , the Lie group SU(N). This group can be replaced by any Lie group, or, more generally, by a symmetric space. The trace is just the Killing form in hiding; the Killing form provides a quadratic form on the field manifold, the lagrangian is then just the pullback of this form. Alternately, the Lagrangian can also be seen as the pullback of the Maurer–Cartan form to the base spacetime. Examples: In general, sigma models exhibit topological soliton solutions. The most famous and well-studied of these is the Skyrmion, which serves as a model of the nucleon that has withstood the test of time. Electromagnetism in special relativity Consider a point particle, a charged particle, interacting with the electromagnetic field. 
The interaction terms are replaced by terms involving a continuous charge density ρ in A·s·m−3 and current density j in A·m−2. The resulting Lagrangian density for the electromagnetic field is: Varying this with respect to ϕ, we get which yields Gauss' law. Varying instead with respect to A , we get which yields Ampère's law. Examples: Using tensor notation, we can write all this more compactly. The term −ρϕ(x,t)+j⋅A is actually the inner product of two four-vectors. We package the charge density into the current 4-vector and the potential into the potential 4-vector. These two new vectors are We can then write the interaction term as Additionally, we can package the E and B fields into what is known as the electromagnetic tensor Fμν We define this tensor as The term we are looking out for turns out to be We have made use of the Minkowski metric to raise the indices on the EMF tensor. In this notation, Maxwell's equations are where ε is the Levi-Civita tensor. So the Lagrange density for electromagnetism in special relativity written in terms of Lorentz vectors and tensors is In this notation it is apparent that classical electromagnetism is a Lorentz-invariant theory. By the equivalence principle, it becomes simple to extend the notion of electromagnetism to curved spacetime. Examples: Electromagnetism and the Yang–Mills equations Using differential forms, the electromagnetic action S in vacuum on a (pseudo-) Riemannian manifold M can be written (using natural units, c = ε0 = 1) as Here, A stands for the electromagnetic potential 1-form, J is the current 1-form, F is the field strength 2-form and the star denotes the Hodge star operator. This is exactly the same Lagrangian as in the section above, except that the treatment here is coordinate-free; expanding the integrand into a basis yields the identical, lengthy expression. Note that with forms, an additional integration measure is not necessary because forms have coordinate differentials built in. Variation of the action leads to These are Maxwell's equations for the electromagnetic potential. Substituting F = dA immediately yields the equation for the fields, because F is an exact form. Examples: The A field can be understood to be the affine connection on a U(1)-fiber bundle. That is, classical electrodynamics, all of its effects and equations, can be completely understood in terms of a circle bundle over Minkowski spacetime. Examples: The Yang–Mills equations can be written in exactly the same form as above, by replacing the Lie group U(1) of electromagnetism by an arbitrary Lie group. In the Standard model, it is conventionally taken to be SU(3)×SU(2)×U(1) although the general case is of general interest. In all cases, there is no need for any quantization to be performed. Although the Yang–Mills equations are historically rooted in quantum field theory, the above equations are purely classical. Examples: Chern–Simons functional In the same vein as the above, one can consider the action in one dimension less, i.e. in a contact geometry setting. This gives the Chern–Simons functional. It is written as Chern–Simons theory was deeply explored in physics, as a toy model for a broad range of geometric phenomena that one might expect to find in a grand unified theory. Examples: Ginzburg–Landau Lagrangian The Lagrangian density for Ginzburg–Landau theory combines together the Lagrangian for the scalar field theory with the Lagrangian for the Yang–Mills action. It may be written as: where ψ is a section of a vector bundle with fiber Cn . 
The ψ corresponds to the order parameter in a superconductor; equivalently, it corresponds to the Higgs field, after noting that the second term is the famous "Sombrero hat" potential. The field A is the (non-Abelian) gauge field, i.e. the Yang–Mills field and F is its field-strength. The Euler–Lagrange equations for the Ginzburg–Landau functional are the Yang–Mills equations and where ⋆ is the Hodge star operator, i.e. the fully antisymmetric tensor. These equations are closely related to the Yang–Mills–Higgs equations. Another closely related Lagrangian is found in Seiberg–Witten theory. Examples: Dirac Lagrangian The Lagrangian density for a Dirac field is: where ψ is a Dirac spinor, ψ¯=ψ†γ0 is its Dirac adjoint, and ∂/ is Feynman slash notation for γσ∂σ . There is no particular need to focus on Dirac spinors in the classical theory. The Weyl spinors provide a more general foundation; they can be constructed directly from the Clifford algebra of spacetime; the construction works in any number of dimensions, and the Dirac spinors appear as a special case. Weyl spinors have the additional advantage that they can be used in a vielbein for the metric on a Riemannian manifold; this enables the concept of a spin structure, which, roughly speaking, is a way of formulating spinors consistently in a curved spacetime. Examples: Quantum electrodynamic Lagrangian The Lagrangian density for QED combines the Lagrangian for the Dirac field together with the Lagrangian for electrodynamics in a gauge-invariant way. It is: where Fμν is the electromagnetic tensor, D is the gauge covariant derivative, and D/ is Feynman notation for γσDσ with Dσ=∂σ−ieAσ where Aσ is the electromagnetic four-potential. Although the word "quantum" appears in the above, this is a historical artifact. The definition of the Dirac field requires no quantization whatsoever, it can be written as a purely classical field of anti-commuting Weyl spinors constructed from first principles from a Clifford algebra. The full gauge-invariant classical formulation is given in Bleecker. Examples: Quantum chromodynamic Lagrangian The Lagrangian density for quantum chromodynamics combines together the Lagrangian for one or more massive Dirac spinors with the Lagrangian for the Yang–Mills action, which describes the dynamics of a gauge field; the combined Lagrangian is gauge invariant. It may be written as: where D is the QCD gauge covariant derivative, n = 1, 2, ...6 counts the quark types, and Gαμν is the gluon field strength tensor. As for the electrodynamics case above, the appearance of the word "quantum" above only acknowledges its historical development. The Lagrangian and its gauge invariance can be formulated and treated in a purely classical fashion. Examples: Einstein gravity The Lagrange density for general relativity in the presence of matter fields is where Λ is the cosmological constant, R is the curvature scalar, which is the Ricci tensor contracted with the metric tensor, and the Ricci tensor is the Riemann tensor contracted with a Kronecker delta. The integral of EH is known as the Einstein–Hilbert action. The Riemann tensor is the tidal force tensor, and is constructed out of Christoffel symbols and derivatives of Christoffel symbols, which define the metric connection on spacetime. The gravitational field itself was historically ascribed to the metric tensor; the modern view is that the connection is "more fundamental". This is due to the understanding that one can write connections with non-zero torsion. 
These alter the metric without altering the geometry one bit. As to the actual "direction in which gravity points" (e.g. on the surface of the Earth, it points down), this comes from the Riemann tensor: it is the thing that describes the "gravitational force field" that moving bodies feel and react to. (This last statement must be qualified: there is no "force field" per se; moving bodies follow geodesics on the manifold described by the connection. They move in a "straight line".) The Lagrangian for general relativity can also be written in a form that makes it manifestly similar to the Yang–Mills equations. This is called the Einstein–Yang–Mills action principle. This is done by noting that most of differential geometry works "just fine" on bundles with an affine connection and arbitrary Lie group. Then, plugging in SO(3,1) for that symmetry group, i.e. for the frame fields, one obtains the equations above.Substituting this Lagrangian into the Euler–Lagrange equation and taking the metric tensor gμν as the field, we obtain the Einstein field equations Tμν is the energy momentum tensor and is defined by where g is the determinant of the metric tensor when regarded as a matrix. Generally, in general relativity, the integration measure of the action of Lagrange density is {\textstyle {\sqrt {-g}}\,d^{4}x} . This makes the integral coordinate independent, as the root of the metric determinant is equivalent to the Jacobian determinant. The minus sign is a consequence of the metric signature (the determinant by itself is negative). This is an example of the volume form, previously discussed, becoming manifest in non-flat spacetime. Examples: Electromagnetism in general relativity The Lagrange density of electromagnetism in general relativity also contains the Einstein–Hilbert action from above. The pure electromagnetic Lagrangian is precisely a matter Lagrangian matter . The Lagrangian is This Lagrangian is obtained by simply replacing the Minkowski metric in the above flat Lagrangian with a more general (possibly curved) metric gμν(x) . We can generate the Einstein Field Equations in the presence of an EM field using this lagrangian. The energy-momentum tensor is It can be shown that this energy momentum tensor is traceless, i.e. that If we take the trace of both sides of the Einstein Field Equations, we obtain So the tracelessness of the energy momentum tensor implies that the curvature scalar in an electromagnetic field vanishes. The Einstein equations are then Additionally, Maxwell's equations are where Dμ is the covariant derivative. For free space, we can set the current tensor equal to zero, jμ=0 . Solving both Einstein and Maxwell's equations around a spherically symmetric mass distribution in free space leads to the Reissner–Nordström charged black hole, with the defining line element (written in natural units and with charge Q): One possible way of unifying the electromagnetic and gravitational Lagrangians (by using a fifth dimension) is given by Kaluza–Klein theory. Effectively, one constructs an affine bundle, just as for the Yang–Mills equations given earlier, and then considers the action separately on the 4-dimensional and the 1-dimensional parts. Such factorizations, such as the fact that the 7-sphere can be written as a product of the 4-sphere and the 3-sphere, or that the 11-sphere is a product of the 4-sphere and the 7-sphere, accounted for much of the early excitement that a theory of everything had been found. 
Unfortunately, the 7-sphere proved not large enough to enclose all of the Standard model, dashing these hopes. Examples: Additional examples The BF model Lagrangian, short for "Background Field", describes a system with trivial dynamics, when written on a flat spacetime manifold. On a topologically non-trivial spacetime, the system will have non-trivial classical solutions, which may be interpreted as solitons or instantons. A variety of extensions exist, forming the foundations for topological field theories.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Arabinogalactan** Arabinogalactan: Arabinogalactan, also known as galactoarabinan, larch arabinogalactan, and larch gum, is a biopolymer consisting of arabinose and galactose monosaccharides. Two classes of arabinogalactans are found in nature: plant arabinogalactan and microbial arabinogalactan. In plants, it is a major component of many gums, including gum arabic and gum ghatti. It is often found attached to proteins, and the resulting arabinogalactan protein (AGP) functions as both an intercellular signaling molecule and a glue to seal plant wounds.The microbial arabinogalactan is a major structural component of the mycobacterial cell wall. Both the arabinose and galactose exist solely in the furanose configuration. The galactan portion of microbial arabinogalactan is linear, consisting of approximately 30 units with alternating β-(1-5) and β-(1-6) glycosidic linkages. The arabinan chain, which consists of about 30 residues, is attached at three branch points within the galactan chain, believed to be at residues 8, 10 and 12. Arabinogalactan: The arabinan portion of the polymer is a complex branched structure, usually capped with mycolic acids; the arabinan glycosidic linkages are α-(1-3), α-(1-5), and β-(1-2). The mycobacterial arabinogalactan is recognized by a putative immune lectin intelectin present in chordates. Structure of microbial arabinogalactan: The reducing end of microbial arabinogalactan consists of the terminal sequence →5)-D-Galf-(1→4)-L-Rhap-(1→3)-D-GlcNAc. A muramyl-6-P is also found within the peptidoglycan functional group. The mycolylarabinogalactan of mycobacteria is attached to the peptidoglycan by the actinomycete-specific diglycosylphosphoryl bridge, L-Rhap-(1→3)-D-GlcNAc-(1→P).Arabinogalactan contains a galactan chain, with alternating 5-linked β-D-galactofuranosyl (Galf) and 6-linked β-D-Galf residues. The arabinan chains are attached to C-5 of some of the 6-linked Galf residues. There are three major structural domains for arabinan. The first is a domain consisting of linear 5-linked α-D-Araf residues. The second is a domain with branched 3,5 linked α-D-Araf residues substituted with 5-linked α-D-Araf units at both branched positions, and the third is A terminal non-reducing domain for end arabinan consisting of a 3,5-linked α-D-Araf residue substituted at both branched positions with the disaccharide β-D Araf-(1→2)- α-D-Araf. These three arabinan chains are attached to the galactan at residues 8, 10, and 12.The non-reducing end of arabinogalactan is covalently attached to the mycolic acids of the outer membrane. The hydrophobicity of mycolic acids is a barrier to drug entry. Additionally, the mycolyl arabinogalactan peptidoglycan is responsible for aspects of disease pathogenesis and much of the antibody response in infections. The mycolyl substituents are selectively and equally distributed on the 5-hydroxyl functions of terminal- and the penultimate 2-linked Araf residues. The mycolyl residues are clustered in groups of four on the non reducing terminal pentaarabinosyl unit (β-Araf-(1→2)-α-Araf)2-3,5-α-Araf . Thus, the majority (66%) of the pentaarabinosyl units are substituted by mycolic acids, leaving the minority (33%) available for interaction with the immune system.Approximately one of the three arabinosyl chains attached to the galactan chain contains succinyl groups. Although one succinyl group is most common, up to three succinyl groups per released arabinan fragment can be found on oligo-arabinans. 
However, arabinan fragments substituted with GalNH2 are not succinylated. Importantly, in the case of M. tuberculosis, and most likely in all slow growing organisms, both positive charge (protonated GalNH2 as GalNH3+) and negative charge (succinyl) are present in the middle regions of the arabinan, specifically at O-2 of the inner 3,5-α-D-Araf units. The succinyl residues are on the non-mycolylated chain. Recently, a complete primary model of arabinogalactan has been proposed. Commercial applications: It is used as a thickener in foods, in cosmetics, and is being studied for possible medical uses.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Top of climb** Top of climb: In aviation, the top of climb, also referred to as the TOC or T/C, is the computed transition from the climb phase of a flight to the cruise phase, the point at which the planned climb to cruise altitude is completed. The top of climb is usually calculated by an on-board flight management system and is designed to provide the most economical climb to cruise altitude or to meet some other objective (fastest climb, greatest range, etc.). The top of climb may be calculated manually with considerable effort.Alternatively, when manual planning and monitoring a VFR flight, TOC is an elegant and efficient way for a pilot to eliminate all the vaguery and variability of departing any airport (the turns assigned, changes of runway the pilot cannot control). TOC establishes a "starting gate" for the en route portion of a flight to allow timing and tracking along the course. A clearly determined TOC is predicated in planning along the intended route of flight far enough from the departure so that the aircraft is at altitude, on course, trimmed up in cruise flight. The variability of departure is now over and accurate tracking and monitoring of progress are beginning. Top of climb: Once past TOC (and by definition stabilized on course) a pilot can efficiently ascertain whether the wind and weather are "as forecast" and the groundspeed attained will allow for the safe completion of the flight "as planned" or within an acceptable margin for safety. If the forecast headwind along the route is more than was forecast and planned on the ground it should immediately be obvious and an alternate fuel stop planned to accommodate the added time aloft. Unlike when driving a motor vehicle, the aircraft's progress over the earth (groundspeed) is in every instance unique and surprising. Despite planning from forecast data, it is essential to immediately determine this in flight and adjust for variability to assure a safe outcome. Similarly, changes in visibility en route from the flight planned forecasts must be accommodated and adjusted in visual flight. Top of climb: Pilots of small airplanes need to do a flight plan to compute fuel usage and time of the trip because they often don't have a flight management system. Because climbing to cruise altitude burns fuel quicker, the takeoff to cruise altitude is calculated separately. The airplane's Pilot Operating Handbook has a table of fuel burned, time, and distance to reach a given altitude from sea level. To calculate the values for airport at 900 m (3,000 ft), you subtract the values for sea level to 900 m (3,000 ft) from the sea level to cruise altitude.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**23 (number)** 23 (number): 23 (twenty-three) is the natural number following 22 and preceding 24. In mathematics: Twenty-three is the ninth prime number, the smallest odd prime that is not a twin prime. It is, however, a cousin prime with 19, and a sexy prime with 17 and 29; while also being the largest member of the first prime sextuplet (7, 11, 13, 17, 19, 23). Twenty-three is also the fifth factorial prime, the second Woodall prime, and a happy number in decimal. It is an Eisenstein prime with no imaginary part and real part of the form 1. In mathematics: It is also the fifth Sophie Germain prime and the fourth safe prime, and the next to last member of the first Cunningham chain of the first kind to have five terms (2, 5, 11, 23, 47). Since 14! + 1 is a multiple of 23, but 23 is not one more than a multiple of 14, 23 is the first Pillai prime. 23 is the smallest odd prime to be a highly cototient number, as the solution to x−ϕ(x) for the integers 95, 119, 143, and 529. The third decimal repunit prime after R2 and R19 is R23, followed by R1031. In mathematics: 23 is the second Smarandache–Wellin prime in base ten, as it is the concatenation of the decimal representations of the first two primes (2 and 3) and is itself also prime.It is the first prime p for which unique factorization of cyclotomic integers based on the pth root of unity breaks down.The sum of the first 23 primes is 874, which is divisible by 23, a property shared by few other numbers.In the list of fortunate numbers, 23 occurs twice, since adding 23 to either the fifth or eighth primorial gives a prime number (namely 2333 and 9699713).23 has the distinction of being one of two integers that cannot be expressed as the sum of fewer than 9 cubes of positive integers (the other is 239). See Waring's problem.23 is the number of trees on 8 unlabeled nodes. It is also a Wedderburn–Etherington number, which are numbers that can be used to count certain binary trees.The natural logarithms of all positive integers lower than 23 are known to have binary BBP-type formulae.23 is the smallest positive solution to Sunzi's original formulation of the Chinese remainder theorem.23 is the smallest prime p such that the largest consecutive pair of p -smooth numbers (11859210, 11859211) is the same as the largest consecutive pair of ( p−1 )-smooth numbers.According to the birthday paradox, in a group of 23 or more randomly chosen people, the probability is more than 50% that some pair of them will have the same birthday.A related coincidence is that 365 times the natural logarithm of 2, approximately 252.999, is very close to the number of pairs of 23 items and 22nd triangular number, 253.The first twenty-three odd prime numbers (between 3 and 89 inclusive), are all cluster primes p such that every even positive integer k≤p−3 can be written as the sum of two prime numbers that do not exceed p .The first Mersenne number of the form 2n−1 that does not yield a prime number when inputting a prime exponent is 2047 23 89 , with 11. In mathematics: On the other hand, the second composite Mersenne number contains an exponent n of twenty-three: 23 23 388 607 47 178 481. Further in the sequence of Mersenne numbers, the 23rd prime number (83) is an exponent to the 14th composite Mersenne number, which factorizes into two prime numbers, the largest of which is twenty-three digits long when written in base ten: 83 671 649 407 167 57 912 614 113 275 649 087 721. 23 ! 
is also twenty-three digits long in decimal, and there are only three other numbers n whose factorials generate numbers that are n digits long in base ten: 1, 22, and 24. In mathematics: In geometry The Leech lattice Λ24 is a 24-dimensional lattice through which 23 other positive definite even unimodular Niemeier lattices of rank 24 are built, and vice-versa. Λ24 represents the solution to the kissing number in 24 dimensions as the precise lattice structure for the maximum number of spheres that can fill 24-dimensional space without overlapping, equal to 196,560 spheres. These 23 Niemeier lattices are located at deep holes of radii √2 in lattice points around its automorphism group, Conway group C0 . The Leech lattice can be constructed in various ways, which include: By means of a matrix of the form (IaH/2H/2Ib) where I is the identity matrix and H is a 24 by 24 Hadamard matrix (Z/23Z ∪ ∞) with a = 2 and b = 3, and entries X(∞) = 1 and X(0) = -1 with X(n) the quadratic residue symbol mod 23 for nonzero n.Through the extended binary Golay code 24 and Witt design 24 , which produce a construction of the 196,560 minimal vectors in the Leech lattice. The extended binary Golay code is an extension of the perfect binary Golay code 23 , which has codewords of size 23. 23 has Mathieu group 23 as its automorphism group, which is the second largest member of the first generation in the happy family of sporadic groups. 23 has a minimum faithful complex representation in 22 dimensions and group-3 actions on 253 objects, with 253 equal to the number of pairs of objects in a set of 23 objects. In turn, 23 is the automorphism group of Mathieu group 24 , which works through 24 to generate 8-element octads whose individual elements occur 253 times through its entire block design.Using Niemer lattice D24 of group order 223·24! and Coxeter number 46 = 2·23, it can be made into a module over the ring of integers of quadratic field 23 ) , whereby multiplying D24 by a non-principal ideal of the ring of integers yields the Leech lattice.Conway and Sloane provided constructions of the Leech lattice from all other 23 Niemeier lattices.Twenty-three four-dimensional crystal families exist within the classification of space groups. These are accompanied by six enantiomorphic forms, which maximizes the total count to twenty-nine crystal families. In three dimensions, five cubes can be arranged to form twenty-three free pentacubes, or twenty-nine distinct one-sided pentacubes (counting reflections).There are 23 three-dimensional uniform polyhedra that are cell facets inside uniform 4-polytopes that are not part of infinite families of antiprismatic prisms and duoprisms: the five Platonic solids, the thirteen Archimedean solids, and five semiregular prisms (the triangular prism, pentagonal prism, hexagonal prism, octagonal prism, and decagonal prism). In mathematics: 23 Coxeter groups of paracompact hyperbolic honeycombs in the third dimension generate 151 unique Wythoffian constructions of paracompact honeycombs. 23 four-dimensional Euclidean honeycombs are generated from the B~4 cubic group, and 23 five-dimensional uniform polytopes are generated from the D5 demihypercubic group. In mathematics: In two-dimensional geometry, the regular 23-sided icositrigon is the first regular polygon that is not constructible with a compass and straight edge or with the aide of an angle trisector (since it is neither a Fermat prime nor a Pierpont prime), nor by neusis or a double-notched straight edge. 
It is also not constructible with origami, however it is through other traditional methods for all regular polygons. In science and technology: The atomic number of vanadium. The atomic mass number of the stable isotope of sodium. Normal human sex cells have 23 chromosomes. Other human cells have 46 chromosomes, arranged in 23 pairs. Scientific notation for the Avogadro constant is written as 6.02214076×1023. 23 is the width of the Arecibo message, sent to space in search for extraterrestrial intelligence. 23 is the TCP/IP port used for telnet and is the default for the telnet command. The earth's axis is tilted at approximately 23°. In religion: In Biblical numerology, it is associated with Psalm 23, also known as the Shepherd Psalm. It is possibly the most quoted and best known Psalm. Psalms is also the 23rd book in the Douay–Rheims Bible. In Islam, the Qur'an was revealed in a total of 23 years to Muhammed. Muslims believe the first verses of the Qur'an were revealed to the Islamic prophet Muhammad on the 23rd night of the 9th Islamic month, though, its disputed. Principia Discordia, the sacred text of Discordianism, holds that 23 (along with the discordian prime 5) is one of the sacred numbers of Eris, goddess of discord. In popular culture: Music Alfred Harth uses the number 23 in his artist name Alfred 23 Harth, or A23H, since the year 1+9+8+5 = 23. In popular culture: Twentythree is the name of Tristan Prettyman's debut album Twentythree an album by Carbon Based Lifeforms "Viginti Tres" (Latin for twenty-three) is a song by Tool on their album 10,000 Days Blink-182's song "What's My Age Again?" includes the lyrics "nobody likes you when you're 23." 23 is an album and title track by Blonde Redhead The Incubus song "Pardon Me" includes the lyrics "A decade ago, I never thought I would be, at 23, on the verge of spontaneous combustion, woe is me!" Frontman Brandon Boyd was 23 years old when he wrote the song and described himself as being "kind of obsessive about that number". In popular culture: "23" is a song by Jimmy Eat World, on their album Futures. The number also appears in the songs "Christmas Card" and "12."23".95" as well as on some items of clothing produced by the band. Four tet and Yellowcard both have songs titled "Twenty-Three". Dear 23, an album by The Posies Untitled 23, an album by The Church Noah23 has several albums which reference the number 23. "23 Minutes in Brussels", a song by Luna on their album Penthouse. In popular culture: The composer Alban Berg had a particular interest in the number 23, using it to structure several works. Various suggestions have been made as to the reason for this interest: that he took it from the Biorhythms theory of Wilhelm Fliess, in which a 23-day cycle is considered significant, or because he first suffered an asthma attack on 23rd of the month. In popular culture: "23" is a single by Mike Will Made It On the cover of The Beatles' 1969 album Yellow Submarine the number 23 is displayed on the chest of one of the Blue Meanies. Network 23 refers to members of the Spiral Tribe. Sometimes 23 used to discretely mark the spots of a freetekno rave. The number 23 is used a lot throughout the visuals and music by the band Gorillaz, who have even devoted a whole page of their autobiography Rise Of The Ogre to the 23 enigma theory. Film and television 23 is a German film about Karl Koch. 
In popular culture: In Jeepers Creepers, the Creeper appears every 23 years for 23 days to feast on human body parts In L: Change the World, the protagonist L signs his own name in the Death Note notebook and somehow knows that he has given himself 23 days to live, revealing a 23-day rule for the maximum number of days a person may live after they are added to the Japanese god of death's Death Note. In popular culture: The 1980s TV series Max Headroom was set at Network 23. In The Big Lebowski, the main characters deliberately use only lane 23 at the bowling alley. In The Matrix Reloaded, the Architect tells Neo it is of utmost importance to choose 23 people to repopulate Zion. In the TV series Lost, 23 is one of the 6 reoccurring numbers (4, 8, 15, 16, 23, 42) that appear frequently throughout the show. The Number 23 is a 2007 film starring Jim Carrey about a man who becomes obsessed with the 23 enigma. Other fields 23 skidoo (phrase) (sometimes 23 skiddoo) is an American slang phrase popularized during the early 20th century. 23 skidoo has been described as "perhaps the first truly national fad expression and one of the most popular fad expressions to appear in the U.S". The 23 enigma, proposed by William S. Burroughs plays a prominent role in the plot of the Illuminatus! Trilogy by Robert Shea and Robert Anton Wilson. The 23, in South Africa, refers to the 23 conscientious objectors who publicly refused to do military service in the Apartheid army in 1987. The following years the number increased to 143 (in 1988) and 771 (in 1989), with Apartheid being dismantled from 1990 onwards. X-23 is a character in the Marvel Universe. She is named for being the 23rd attempt to create a female genetic twin of Wolverine after attempts to create a male clone failed. 23 is the number of times Julius Caesar was stabbed in the Theatre of Pompey. In sports: Each national team competing in the FIFA World Cup or FIFA Women's World Cup is allowed a 23-player squad. This squad size has been in place since 2002 for men and 2015 for women. Nissan typically uses this number for their Motorsport manufacturer teams, as the numbers 2 and 3 are pronounced "ni" and "san" in Japanese. 23 was basketball legend Michael Jordan's jersey number prior to his first retirement, then his chosen number again when he came out of retirement after a brief stint wearing the number 45. 23 was also the jersey number of Los Angeles Lakers small forward LeBron James, however he changed it to 6 in the 2021–22 NBA season. The maximum number of players on an NHL roster.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nest** Nest: A nest is a structure built for certain animals to hold eggs or young. Although nests are most closely associated with birds, members of all classes of vertebrates and some invertebrates construct nests. They may be composed of organic material such as twigs, grass, and leaves, or may be a simple depression in the ground, or a hole in a rock, tree, or building. Human-made materials, such as string, plastic, cloth, or paper, may also be used. Nests can be found in all types of habitat. Nest: Nest building is driven by a biological urge known as the nesting instinct in birds and mammals. Generally each species has a distinctive style of nest. Nest complexity is roughly correlated with the level of parental care by adults. Nest building is considered a key adaptive advantage among birds, and they exhibit the most variation in their nests ranging from simple holes in the ground to elaborate communal nests hosting hundreds of individuals. Nests of prairie dogs and several social insects can host millions of individuals. Nest building: Purposes of nesting Structural purposes Nest building (nidification) is often driven by a biological urge in pregnant animals to protect one's offspring known as the nesting instinct. Animals build nests to protect their eggs, their offspring, or themselves from danger. The simplest nest structures are adapted to hide eggs from predators, shield them from the sun or other environmental factors, or simply keep them from being scattered in ocean currents. In some cases, nests also help provide safety in numbers for egg-laying animals. Nest building: Social purposes Many nest builders provide parental care to their young, while others simply lay their eggs and leave. Brooding (incubating eggs by sitting on them) is common among birds. In general, nest complexity increases in relation to the level of parental care provided. Nest building reinforces social behavior, allowing for larger populations in small spaces to the point of increasing the carrying capacity of an environment. Insects that exhibit the most complex nidification also exhibit the greatest social structure. Among mammals, the naked mole-rat displays a caste structure similar to the social insects while building extensive burrows that house hundreds of individuals. Nest building: Usage of environment Versatility in use of construction material may be an adaptive advantage (less energy used to gather materials) or a disadvantage (less ability to specialize construction). The available evidence suggests that natural selection more often favors specialization over flexibility in nest construction.At the most basic level, there are only two types of nest building: sculpting and assembly. Sculpting Sculpting is the process of removing material to achieve the desired outcome. Most commonly this entails burrowing into the ground or plant matter to create a nesting site. Assembly Assembly entails gathering, transporting, and arranging materials to create a novel structure. Transportation has the greatest time and energy cost so animals are usually adapted to build with materials available in their immediate environment. Building materials Plant matter is the most common construction material for nests. Other common materials include fur or feathers, perhaps from the animal itself, mud or dirt, fecal matter, and specialized secretions from the animal's body. Nest building: Effects on environment Nest building can have a substantial impact on the environment in which animals live. 
The combined digging activity of termites and mole-rats in South Africa has created a "mima prairie" landscape marked by huge areas of flat land punctuated by mounds 30 metres (98 ft) wide and 2 metres (6.6 ft) high. Similar structures exist in the United States, created by pocket gophers, and Argentina, rodents of the genus Ctenomys. Nest building: Lasting effects Nests constructed by megapode birds have been mistaken for anthropological features by professionals, due to their exceptional height (10 metres [33 ft]) and abundance (hundreds in a single location). Nest builders: Nest architecture may be as useful for distinguishing species as the animals' physical appearance. Species identified through such means are called ethospecies. This is especially common in wasps and termites, but also can apply to birds. In most animals, there is some variation in nest construction between individuals. Whether these differences are driven by genetics or learned behavior is unknown.With the exception of a few tunneling mammals, nest builders exhibit no specialized anatomy, instead making use of body parts primarily used for other purposes. This is possibly due to the sporadic nature of nest building, minimizing the selective pressures of anatomy used for nest building. Nest builders: Birds In general, birds are the most skilled nest builders, although not all species of birds build nests, some laying their eggs directly onto rock ledges or bare soil without first modifying the area. Complex nest building is considered to be one of the key adaptive advantages of birds. Nests help regulate temperature and reduce predation risks, thus increasing the chance that offspring live to adulthood.Bird nests vary from simple depressions in the ground known as scrapes to largely unstructured collections of branches to elaborately woven pendants or spheres. The megapodes, one of the few groups who do not directly brood their young, incubate their young in a mound of decomposing vegetation. One species, Macrocephalon maleo, uses volcanic sand warmed by geothermal heat to keep its eggs warm. Among the simple nest builders are falcons, owls, and many shorebirds. The weavers exhibit perhaps the most elaborate nests, complete with strands of grass tied into knots. Most bird nests lie somewhere in the middle, with the majority building cup-shaped nests using some combination of mud, twigs and leaves, and feathers. Some birds, such flamingos and swifts, use saliva to help hold their nest together. The edible-nest swiftlet uses saliva alone to construct their nests. The rufous hornero nest is composed entirely of mud and feces, which is placed on tree branches to allow the sun to harden it into a usable structure. The tailorbirds stitch together leaves to provide cover for their nest sites. Nest builders: The sociable weaver builds large communal nests in which many individual nests reside. They divide the nest using walls of grass placed atop a base of large sticks. At the entrances to the nest, sharp sticks are placed to ward off intruders. A single communal site can measure 2 metres (6.6 ft) in height and 8 metres (26 ft) in width. As many as 300 mating pairs may reside in the structure. Other birds often built their own nests on top of Weaver nest sites.Some birds build nests in trees, some (such as eagles, vultures, and many seabirds) will build them on rocky ledges, and others nest on the ground or in burrows. Each species has a characteristic nest style, but few are particular about where they build their nests. 
Most species will choose whatever site in their environment best protects their nest, taking into account the nest's style. Several species will build on a cactus whenever possible. The bushtit and Bullock's oriole will suspend their nests from the tips of slender branches. The oropendolas take hanging nests to the extreme, constructing pouches up to 1.8 metres (5.9 ft) tall using hanging vines as their base. The hanging nest is attached to thin tree branches, discouraging predation. Other species seek out crevices, using buildings or birdhouses when tree holes are not available.Typical bird nests range from 2 centimetres (0.79 in) in size (hummingbirds) to 2 metres (6.6 ft) (eagles) in diameter. The largest nest on record was made by a pair of bald eagles. It was 2.9 metres (9.5 ft) in diameter, 6 metres (20 ft) deep and was estimated to weigh more than 2 tonnes (4,400 lb). The lightest bird nests may weigh only a few grams. Incubation mounds of the mallee fowl can reach heights of 4.57 metres (15.0 ft) and widths of 10.6 metres (35 ft). It is estimated the animal uses as much as 300 tonnes (660,000 lb) of material in its construction. The extinct Sylviornis neocaledoniae may have constructed nesting mounds 50 metres (160 ft) in diameter. Nest builders: Mammals Many species of small mammals such as rodents, rabbits, and ground squirrels dig burrows into the ground to protect themselves and their young. Prairie dogs build an elaborate system of tunnels which can span large stretches of land. One such structure, called a town, spanned 25,000 square miles (65,000 km2) and held an estimated 400 million individuals. Their homes are adapted to withstand large (above-ground) temperature variation, floods, and fire. Their young are raised in the deepest chambers where the temperature is the most stable.Many mammals, including raccoons and skunks, seek natural cavities in the ground or in trees to build their nests. Raccoons, and some rodents, use leaves to build nests underground and in trees. Tree squirrels build their nests (dreys) in trees, while voles nest in tall grass. In some species, the nest serve as homes for adults while in others they are used to raise young. The duck-billed platypus and the echidna lay eggs in nests.Gorillas build fresh nests daily out of leaves and other vegetation in which they sleep at night. They sometimes also build nests during the day for resting in. The smaller species of gorilla build their nests in trees, while the larger are confined to the ground. Nests of the western gorilla, the largest species, measure about 1 metre (3.3 ft) in diameter. Nest builders: Amphibians Some species of frog build nests ranging from simple to modest complexity. Many stream-dwelling frogs lay their eggs in a gelatinous mass which they attach to underwater vegetation to prevent eggs from washing away. Nests can have other protective qualities. For example, the female Fletcher frog beats secreted mucus into a froth, creating a structure that serves as a line of defense against thermal extremes, predation, and desiccation. Nest builders: Fish Fish engage in nest building activities ranging from simply scooping out sediment to building enclosed structures out of plant matter. Male sticklebacks produce a special enzyme in their kidneys that they use to bind plants together. Nest builders: Reptiles The American alligator is known for its parenting skills. They build large nests of mud and vegetation on river banks or vegetation mats. 
The female digs a hole in the center to lay her eggs, covers them, and then guards them for two months until they hatch. When eggs start to hatch, she breaks open the nest, which has hardened over time, and leads the young to the water, where she continues to care for them for another year. Alligators are very particular about their nesting sites and will abandon a site if things go wrong. Cobras use leaves and other debris to build nests in which they lay eggs that both sexes guard. They carry the vegetation to the nest site by kinking their necks. Sea turtles dig a hole in the sand above the high tide line in which they lay their eggs. They then cover the soft eggs to protect them from the sun and predators and leave. Nest builders: Dinosaurs From the fossil record, it is known that many, or perhaps all, dinosaurs laid eggs. Paleontologists have identified a number of features that allow them to distinguish a nesting site from a random clustering of eggs. These include regular clustering patterns, the co-occurrence of whole eggs with broken eggs and/or hatchlings, and the occurrence of physical features such as evidence of excavation. Nest builders: The Oviraptor nests of Mongolia are perhaps the most famous case of dinosaur nesting. One specimen was found fossilized atop a nest in a brooding posture, proving the animal had been poorly named (Oviraptor means "egg taker"). A site known as Egg Mountain in Montana provides exceptional evidence of dinosaur nesting behavior. The site features dozens of nests, each with 20 or more eggs, belonging to Maiasaura. Juvenile teeth at the site exhibit signs of wear, while the leg bones were not developed enough for walking. This allowed scientists to conclude that the species provided extensive parental care for its young. It is likely the species covered its nests with sand and vegetation to keep them warm and nested in colonies for increased protection. Nest builders: Insects Social insects, including most species of ants, bees, termites, and wasps, are nest builders. Their often elaborate nests may be found above or below ground. Features often include ventilation systems and separate chambers for the queen, her eggs, and developing individuals. Bees and hornets often seek out natural cavities in which to construct their nests, also known as hives, in which they store food and raise their young. Other species of bee and some wasps dig holes in the ground or chew through wood. In the species Megachile rotundata, for example, females construct tubular nests in rotting wood as well as in small holes in the ground, with each cell made from circular disks cut from plant leaves using the bee's mandibles. Bee nests are founded upon the wax they secrete from their bodies, while those of wasps are dependent on their ability to turn plant matter into paper using their saliva. Nests often exhibit divided living, with eggs and food stores kept in distinct parts of the hive. Vespid wasps build complex nests from paper-like material, where they lay eggs in individual cells. When the young hatch, their parents feed them chewed-up larvae. Different species exhibit different nest structures. Paper wasp nests consist of a single tier of cells, while yellow jacket nests can be many layers thick, reaching up to 30 centimetres (0.98 ft) in diameter.
Nesting strategies can be plastic: for instance, the wasp Parischnogaster mellyi will significantly vary its nest construction based on environmental conditions, and the wasp Mischocyttarus mexicanus is known to nest in groups or alone depending on the distribution of potential nest sites in the area. Nest sizes vary dramatically, and the largest wasp nest on record measured 1.75 metres (5.7 ft) in diameter and was 3.7 metres (12 ft) tall. Found in New Zealand, it was likely built by the German wasp. Nest builders: Termites build elaborate nests that span multiple generations and may last decades. Using chewed wood, mud, and feces, they build large mounds which may extend well into the air. The largest nests, built by members of the genus Amitermes, stand nearly 7 metres (23 ft) tall with a similar circumference at the base, and host millions of individuals. Termite mounds are constructed to allow for excellent air flow, regulating the mound temperature. The mounds protect against drying and predation, allowing many species to lose ancestral traits such as hard bodies, skin pigmentation, and good eyesight. Magnetic termites construct their nests with flattened sides along the north–south axis to ensure maximum warming during the winter, while exposing minimal surface area to the harshest mid-day sunshine. Other termite species use their nests to farm fungi. Ant nests feature an elaborate colony structure that may extend 2 metres (6.6 ft) or more underground. As the structure gets further underground, individual chambers become farther and farther apart, indicating that the ants are aware of their depth. It is hypothesized that they accomplish this by sensing the level of carbon dioxide in the soil. The leaf cutter ant builds a complex nest which can house 8 million individuals. Its nests feature numerous chambers, most notably garden chambers where they farm fungus on leaves they harvest from the forest. Species such as the carpenter ant and the wasp Polistes exclamans build "satellite nests" - smaller nests near, but separate from, the main nest. These satellite nests are used as an insurance against predators and parasites; if the original nest is attacked, surviving members can move to the satellite nest. Other species, such as the black hover wasp, Parischnogaster alternata, construct nests in clusters, with a central core composed of older colonies surrounded by younger colonies. The Eastern carpenter bee, Xylocopa virginica, is unique in that individuals of that species build their nests in wood, bamboo culms, agave stalks, and other similar materials, although their preferred nesting material is pine or cedar lumber. When digging the nests, they use the wood shavings scraped from the wall to create partitions within the tunnels. The nests are usually round and have about 1–4 tunnels, each with multiple branches. Because these materials are often useful for humans in construction, X. virginica's nesting behavior presents the disadvantage of weakening wood in manmade structures. Effects on other species: The abundance of biological resources within the nest has led to a number of specialized predators. The aardvark and the anteater use long tongues to prey upon termite and ant nests. Birds such as the honey buzzard specialize on wasp and bee nests, a resource also targeted by the tropical hornet. Symbiosis, ranging from feeding on waste to obligate parasitism, is common within the nest.
Ant nests alone support symbiotes spanning six classes of arthropods, including 35 beetle families. Names of nests: A badger's nest is called a sett. A beaver's nest is called a lodge. An eagle's nest is called an eyrie. An otter's den is called a holt or a couch. A pheasant's nest is called a nide. A rabbit's nest is called a form. A squirrel's or ringtail possum's nest is called a drey. A wasp's nest is called a vespiary.
**Monkey drive** Monkey drive: A monkey drive is an operation in which large numbers of wild monkeys are rounded up and killed in order to protect agriculture such as crops, planted rice, and banana and citrus fruit trees. Monkey drives have been reported in Sierra Leone, where they were supported by the government. In 1965, Gerald Durrell organised a monkey drive in Sierra Leone during a collecting mission for Jersey Zoo (formerly the Durrell Wildlife Park). The monkey drive was held out of season, and its purpose was not to exterminate monkeys but to capture colobus monkeys. In his book on the expedition, published in 1972, he wrote that 2,000 to 3,000 monkeys are killed in monkey drives in Sierra Leone each year, including the "two species" of colobus monkeys, which do no damage to cocoa plantations and were theoretically protected by law. The taxa mentioned by Durrell are now considered genera: black-and-white colobus and red-and-black colobus.
**Fairground organ** Fairground organ: A fairground organ is a musical organ covering the wind and percussive sections of an orchestra. Originating in Paris, France, it was designed for use in commercial fairground settings to provide loud music to accompany rides and attractions, mostly merry-go-rounds. Unlike organs for indoor use, they are designed to produce a large volume of sound to be heard above the noises of crowds and fairground machinery. History: As fairgrounds became more mechanised at the end of the nineteenth century, their musical needs grew. The period of greatest activity of fairground organ manufacture and development was the late 1830s, particularly with the opening of the Limonaire Frères company of Avenue Daumesnil, Paris in 1839. Virtually all ambient fairground music continued to be produced by fairground organs and similar pneumatically operated instruments until the advent of effective electrical sound amplification in the mid-1920s. The organ chassis was typically covered with an ornate and florid decorative case façade designed to attract attention, in the tradition of most fairground equipment. Giacomo Gavioli patented the use of book music to play organs, which later became the basis of fairground organs. In 1910, Joseph and Antoine Limonaire took over the patents when Gavioli ceased production, leading to limonaire becoming the generic French name for fairground organs. History: The ornate case façades frequently had percussion instruments, such as a glockenspiel and drums, that provided visual entertainment as they played. There were often ornate human figures, such as a conductor whose arm moved in time to the music, or women whose arms struck bells. The organs were designed to mimic the musical capabilities of a typical human band. For this reason they are known as band organs in the United States. History: The motive force for a fairground organ is typically wind under pressure generated from mechanically powered bellows in the instrument's base. Without the need for a human player, the instruments are keyboard-less (except for relatively rare configurations with one or more accordions, whose keys could be seen to move). Early organs were played by a rotating barrel with the sounds triggered by metal pins, as in a music box. Later organs employed strips of cards perforated with the music data and registration (instrument) controls, called book music, or interchangeable rolls of perforated paper called music rolls, similar to those used in player pianos. History: Since the advent of computer control (from the early 1970s on), some band organs have been built or converted to be played electronically. Victory, pictured above, is a hybrid of these technologies. Its traditional pneumatic instruments can be played either from traditional perforated books, or from its integrated Yamaha MIDI interface. Owner Willem Kelders can also use the interface to link organs (Rhapsody and Locomotion, driven by Victory) to play the same music together. History: Fairground organs have been used in many entertainment settings, including fairground rides, static sideshows (such as bioscope shows), amusement parks, and skating rinks. Many can be seen exhibited at steam fairs. Manufacturers of fairground organs also typically made instruments for indoor use in dance halls, called dance organs, and smaller versions for travelling street use, called street organs.
Like all mechanical instruments, fairground organs have been made by a myriad of manufacturers, in various sizes and to various technical specifications, with various trademark characteristics. Active preservation initiatives and collectors' communities are associated with vintage instruments, and new instruments and music continue to be produced. Operation: Early organs were designed to be compact and operated by an unskilled person or mechanically. These were played via an integral pinned barrel requiring no human input apart from changing the number of the tune being played. These had a fixed repertoire and, if it was desired to change the tunes, a complete new pinned barrel was required. To offer a more flexible choice of repertoire, a system of robust interchangeable perforated cardboard book music was patented first by the Parisian manufacturer Gavioli. Their system became widely regarded as commercially advantageous and other manufacturers followed suit. Book music offered a cheaper and more readily updated alternative to barrel music. Also used by many manufacturers, including Gavioli, was operation via paper music roll. These rolls were more compact and cheaper to manufacture than book music. Technically, they were more susceptible to poor handling, but all systems experienced their own types of characteristic wear and tear during repeated playing. Both "book" and "roll" systems were manufactured with different operating actions which read the music via air pressure, under suction, or mechanically. To extend longevity, mechanically read cardboard book music was typically strengthened with an application of shellac. Music rolls were typically fortified via the use of robust moisture-resisting paper stocks. Operation: All the functions of the organ are (apart from the smallest organs) operated automatically from the music media. Larger instruments contain automatic organ stop register control and additional control tracks for operating percussion instruments, lighting effects and automaton figures. Builders: NOTE: non-exhaustive list of builders, past and present
**Oracle Discoverer** Oracle Discoverer: Oracle Discoverer is a tool-set for ad hoc querying, reporting, data analysis, and Web-publishing for the Oracle Database environment. Oracle Corporation markets it as a business intelligence product. It was originally a stand-alone product; however, it has become a component of the Oracle Fusion Middleware suite and been renamed Oracle Business Intelligence Discoverer. Components: The Discoverer product comprises:
- Discoverer Desktop – used to edit and run reports in a Windows client program
- Discoverer Plus – used to edit and run reports in a web browser
- Discoverer Viewer – used to run reports in a web browser
- Discoverer Administrator
- Discoverer Catalog
- Discoverer End-User Layer
- Discoverer Portlets
- Discoverer Portlet Provider
**Unified System for Design Documentation (Russia)** Unified System for Design Documentation (Russia): Unified System for Design Documentation (USDD, ESKD, Russian: Единая Система Конструкторской Документации, lit. 'Unified System for Engineering Documentation', ЕСКД, GOST 2.316-2013) is a subset of Russian State and Commonwealth of Independent States Standards (GOST) for technical drawings. Like many GOST standards, it is edited and issued by the Russian Federal Agency on Technical Regulating and Metrology (Rosstandart) and by the standardisation body of the Euro-Asian Interstate Council (EASC). The latest revision was published in 2013.
**Corticosterone 18-monooxygenase** Corticosterone 18-monooxygenase: In enzymology, a corticosterone 18-monooxygenase (EC 1.14.15.5) is an enzyme that catalyzes the chemical reaction: corticosterone + reduced adrenal ferredoxin + O2 ⇌ 18-hydroxycorticosterone + oxidized adrenal ferredoxin + H2O. The 3 substrates of this enzyme are corticosterone, reduced adrenal ferredoxin, and O2, whereas its 3 products are 18-hydroxycorticosterone, oxidized adrenal ferredoxin, and H2O. This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen; the oxygen incorporated need not be derived from O2. With reduced iron-sulfur protein as one donor, one atom of oxygen is incorporated into the other donor. The systematic name of this enzyme class is corticosterone,reduced-adrenal-ferredoxin:oxygen oxidoreductase (18-hydroxylating). Other names in common use include corticosterone 18-hydroxylase, and corticosterone methyl oxidase. This enzyme participates in C21-steroid hormone metabolism.
**Gold-filled jewelry** Gold-filled jewelry: Gold-filled jewelry is jewelry composed of a solid layer of gold (typically constituting at least 5% of the item's total weight) mechanically bonded to a base of either sterling silver or some base metal. The related terms "rolled gold plate" and "gold overlay" may legally be used in some contexts if the layer of gold constitutes less than 5% of the item's weight. Most high quality gold-filled pieces have the same appearance as high carat gold, and gold-filled items, even with daily wear, can last 10 to 30 years, though the layer of gold will eventually wear off, exposing the metal underneath. The layer of gold on gold-filled items is 5 to 10 times thicker than that produced by regular gold plating, and 15 to 25 times thicker than that produced by gold electroplate (sometimes stamped HGE for "high grade electroplate" or HGP for "heavy gold plate", neither of which has any legal meaning beyond indicating that the item is gold plated). Definition: In the United States, the quality of gold-filled jewelry is defined by the Federal Trade Commission (FTC). If the gold layer is 10 kt fineness, the minimum weight of the plated layer on an item stamped "GF" must equal at least 1⁄10th of the total weight of the item. If the gold layer is 12 kt or higher, the minimum layer of karat gold in an item stamped "GF" must equal at least 1⁄20th of the total weight of the item. The most common stamps found on gold-filled jewelry are 1⁄20th 12kt GF and 1⁄20th 14kt GF. Also common is 1⁄10th 10kt. These standards are for modern gold-filled items. It is not uncommon to see 1⁄8 14kt gold-filled marks, plus many other variations, on items from the 1930s, 1940s, etc., which would have to be marked "Rolled Gold Plate". The Federal Trade Commission allows the use of the terms "rolled gold plate", "R.G.P." or "gold overlay" on items with lower thicknesses of gold than are required for "gold-filled". An example would be an item stamped "1⁄40 10kt RGP", meaning that the object is plated with 10kt gold at a thickness that makes the weight of the plated layer equal to one-fortieth of the weight of the metal parts of the object. Definition: "Double clad" gold-filled sheet is produced with 1⁄2 the thickness of gold on each side. One-twentieth 14kt double clad gold-filled has a layer on each side of 1⁄40th 14kt, making the total content of gold 1⁄20. The thinner layer on each side does not wear as well as single clad gold-filled.
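Since the stamps encode simple weight fractions, the fine-gold content of an item follows by arithmetic. A minimal sketch (the function name and example weights are illustrative, not from any standard):

```python
# Hypothetical illustration: computing the fine-gold content of a
# gold-filled item from its stamp, e.g. "1/20 14kt GF".
def fine_gold_fraction(layer_weight_fraction: float, karat: float) -> float:
    """Fraction of the item's total weight that is pure (24kt) gold."""
    return layer_weight_fraction * (karat / 24.0)

# A 10 g item stamped "1/20 14kt GF": the karat-gold layer weighs 0.5 g,
# of which 14/24 is pure gold.
item_weight_g = 10.0
fraction = fine_gold_fraction(1 / 20, 14)              # ~0.0292
print(f"pure gold: {fraction * item_weight_g:.3f} g")  # ~0.292 g
```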
**Port forwarding** Port forwarding: In computer networking, port forwarding or port mapping is an application of network address translation (NAT) that redirects a communication request from one address and port number combination to another while the packets are traversing a network gateway, such as a router or firewall. This technique is most commonly used to make services on a host residing on a protected or masqueraded (internal) network available to hosts on the opposite side of the gateway (external network), by remapping the destination IP address and port number of the communication to an internal host. Purpose: Port forwarding facilitates the connection by remote computers, for example, Internet hosts, to a specific computer or service within a local area network (LAN). In a typical residential network, nodes obtain Internet access through a DSL or cable modem connected to a router or network address translator (NAT/NAPT). Hosts on the private network are connected to an Ethernet switch or communicate via a wireless LAN. The NAT device's external interface is configured with a public IP address. The computers behind the router, on the other hand, are invisible to hosts on the Internet, as they each communicate only with a private IP address. Purpose: When configuring port forwarding, the network administrator sets aside one port number on the gateway for the exclusive use of communicating with a service in the private network, located on a specific host. External hosts must know this port number and the address of the gateway to communicate with the network-internal service. Often, the port numbers of well-known Internet services, such as port number 80 for web services (HTTP), are used in port forwarding, so that common Internet services may be implemented on hosts within private networks. Purpose: Typical applications include the following:
- Running a public HTTP server within a private LAN
- Permitting Secure Shell access to a host on the private LAN from the Internet
- Permitting FTP access to a host on a private LAN from the Internet
- Running a publicly available game server within a private LAN
Administrators configure port forwarding in the gateway's operating system. In Linux kernels, this is achieved by packet filter rules in the iptables or netfilter kernel components. BSD and macOS operating systems prior to Yosemite (OS 10.10.x) implement it in the Ipfirewall (ipfw) module, while macOS operating systems beginning with Yosemite implement it in the Packet Filter (pf) module. Purpose: When used on gateway devices, a port forward may be implemented with a single rule to translate the destination address and port (on Linux kernels, this is a DNAT rule). The source address and port are, in this case, left unchanged. When used on machines that are not the default gateway of the network, the source address must be changed to be the address of the translating machine, or packets will bypass the translator and the connection will fail. Purpose: When a port forward is implemented by a proxy process (such as on application layer firewalls, SOCKS based firewalls, or via TCP circuit proxies), then no packets are actually translated, only data is proxied. This usually results in the source address (and port number) being changed to that of the proxy machine. Usually only one of the private hosts can use a specific forwarded port at one time, but configuration is sometimes possible to differentiate access by the originating host's source address.
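The proxy-style forwarding just described can be sketched in a few lines of Python. The toy relay below (addresses and ports are invented for illustration; this is not a hardened implementation) listens on one port and relays each connection to a fixed internal destination, so the destination sees the relay's address as the source, exactly as noted above:

```python
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)   # port exposed to external hosts (assumed)
DEST_ADDR = ("192.168.1.10", 80)  # internal service to forward to (assumed)

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    # Open a fresh connection to the internal host; it sees this machine's
    # address as the source, as with any proxy-style forward.
    upstream = socket.create_connection(DEST_ADDR)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

with socket.create_server(LISTEN_ADDR) as server:
    while True:
        conn, _ = server.accept()
        handle(conn)
```

A kernel-level DNAT rule, by contrast, rewrites packet headers and preserves the original source address.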
Purpose: Unix-like operating systems sometimes use port forwarding in another way: port numbers smaller than 1024 can only be bound by software running as the root user. Running with superuser privileges (in order to bind the port) may be a security risk to the host; therefore, port forwarding is used to redirect a low-numbered port to another, high-numbered port, so that application software may execute as a common operating system user with reduced privileges. Purpose: The Universal Plug and Play protocol (UPnP) provides a feature to automatically install instances of port forwarding in residential Internet gateways. UPnP defines the Internet Gateway Device Protocol (IGD), which is a network service by which an Internet gateway advertises its presence on a private network via the Simple Service Discovery Protocol (SSDP). An application that provides an Internet-based service may discover such gateways and use the UPnP IGD protocol to reserve a port number on the gateway and cause the gateway to forward packets to its listening socket. Types: Port forwarding may be distinguished by the following specific types: local, remote, and dynamic port forwarding. Types: Local port forwarding Local port forwarding is the most common type of port forwarding. It is used to let a user connect from the local computer to another server, i.e. to forward data securely from another client application running on the same computer as a Secure Shell (SSH) client. By using local port forwarding, firewalls that block certain web pages can be bypassed. Connections from an SSH client are forwarded, via an SSH server, to the intended destination server. The SSH server is configured to redirect data from a specified port (which is local to the host that runs the SSH client) through a secure tunnel to some specified destination host and port. The local port is on the same computer as the SSH client, and this port is the "forwarded port". On the same computer, any client that wants to connect to the same destination host and port can be configured to connect to the forwarded port (rather than directly to the destination host and port). After this connection is established, the SSH client listens on the forwarded port and directs all data sent by applications to that port, through a secure tunnel to the SSH server. The server decrypts the data, and then redirects it to the destination host and port. Some uses of local port forwarding: receiving mail, or connecting from a laptop to a website through an SSH tunnel. Types: Remote port forwarding This form of port forwarding enables applications on the server side of a Secure Shell (SSH) connection to access services residing on the SSH's client side. In addition to SSH, there are proprietary tunnelling schemes that utilize remote port forwarding for the same general purpose. In other words, remote port forwarding lets users connect from the server side of a tunnel, SSH or another, to a remote network service located at the tunnel's client side. Types: To use remote port forwarding, the address of the destination server (on the tunnel's client side) and two port numbers must be known. The port numbers chosen depend on which application is to be used. Types: Remote port forwarding allows other computers to access applications hosted on remote servers. Two examples: An employee of a company hosts an FTP server at their own home and wants to give access to the FTP service to employees using computers in the workplace.
In order to do this, an employee can set up remote port forwarding through SSH on the company's internal computers by including their FTP server's address and using the correct port numbers for FTP (the standard FTP port is TCP/21). Opening remote desktop sessions is another common use of remote port forwarding. Through SSH, this can be accomplished by opening the virtual network computing port (5900) and including the destination computer's address. Types: Dynamic port forwarding Dynamic port forwarding (DPF) is an on-demand method of traversing a firewall or NAT through the use of firewall pinholes. The goal is to enable clients to connect securely to a trusted server that acts as an intermediary for the purpose of sending/receiving data to one or many destination servers. DPF can be implemented by setting up a local application, such as SSH, as a SOCKS proxy server, which can be used to process data transmissions through the network or over the Internet. Programs, such as web browsers, must be configured individually to direct traffic through the proxy, which acts as a secure tunnel to another server. Once the proxy is no longer needed, the programs must be reconfigured to their original settings. Because of the manual requirements of DPF, it is not often used. Once the connection is established, DPF can be used to provide additional security for a user connected to an untrusted network. Since data must pass through the secure tunnel to another server before being forwarded to its original destination, the user is protected from packet sniffing that may occur on the LAN. DPF is a powerful tool with many uses; for example, a user connected to the Internet through a coffee shop, hotel, or otherwise minimally secure network may wish to use DPF as a way of protecting data. DPF can also be used to bypass firewalls that restrict access to outside websites, such as in corporate networks.
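The per-program configuration step described above can be sketched as follows. The snippet assumes a dynamic forward is already listening as a SOCKS5 proxy on 127.0.0.1:1080 (for example, one started by an SSH client) and uses the third-party PySocks library; the address, port, and URL are illustrative:

```python
import socket

import socks  # PySocks, a third-party SOCKS client library

# Route all new sockets through the local SOCKS5 proxy created by the
# dynamic forward (assumed to be listening on 127.0.0.1:1080).
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 1080)
socket.socket = socks.socksocket

# From here on, ordinary socket-based code is tunnelled through the proxy.
import urllib.request
print(urllib.request.urlopen("http://example.com/").status)
```

Restoring the original `socket.socket` class corresponds to the "reconfigured to their original settings" step once the proxy is no longer needed.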
**Pharyngeal jaw** Pharyngeal jaw: Pharyngeal jaws are a "second set" of jaws contained within an animal's throat, or pharynx, distinct from the primary or oral jaws. They are believed to have originated as modified gill arches, in much the same way as oral jaws. Originally hypothesized to have evolved only once, current morphological and genetic analyses suggest at least two separate points of origin. Based on connections between musculoskeletal morphology and dentition, diet has been proposed as a main driver of the evolution of the pharyngeal jaw. A study conducted on cichlids showed that the pharyngeal jaws can undergo morphological changes in less than two years in response to their diet. Fish that ate hard-shelled prey had a robust jaw with molar-like teeth fit for crushing their durable prey. Fish that ate softer prey, on the other hand, exhibited a more slender jaw with thin, curved teeth used for tearing apart fleshy prey. These rapid changes are an example of phenotypic plasticity, wherein environmental factors affect the genetic expression responsible for pharyngeal jaw development. Studies of the genetic pathways suggest that receptors in the jaw bone respond to the mechanical strain of biting hard-shelled prey, which prompts the formation of a more robust set of pharyngeal jaws. Cichlids: A notable example are fish from the family Cichlidae. Cichlid pharyngeal jaws have become very specialized in prey processing and may have helped cichlid fishes become one of the most diverse families of vertebrates. However, later studies based on Lake Victoria cichlids suggest that this trait may also become a handicap when competing with other predator species. Moray eels: Most fish species with pharyngeal teeth do not have extendable pharyngeal jaws. A particularly notable exception is the highly mobile pharyngeal jaw of the moray eels. This is possibly a response to their inability to swallow as other fishes do, by creating a negative pressure in the mouth, perhaps induced by their restricted environmental niche (burrows) or by feeding in air in the intertidal zone. Instead, when the moray bites prey, it first bites normally with its oral jaws, capturing the prey. Immediately thereafter, the pharyngeal jaws are brought forward and bite down on the prey to grip it; they then retract, pulling the prey down the moray eel's gullet, allowing it to be swallowed. Popular culture: The exceptional mobility of the moray eel's pharyngeal jaws was featured in the fictional xenomorph species from the Alien film series, which was depicted with a second set of jaws for attacking its prey. At that time, pharyngeal jaws in other fish were already known. In the game Hungry Shark Evolution, the character "Big Daddy (Dunkleosteus)" is depicted with a pharyngeal jaw. Popular culture: The final boss of Monster Hunter Rise, Narwa, as well as her male counterpart Ibushi, both possess pharyngeal jaws within their throats. In the game Poppy Playtime, Huggy Wuggy, along with his female counterpart, Kissy Missy, and the mini Huggies, possess pharyngeal jaws within their throats.
**Vitali–Hahn–Saks theorem** Vitali–Hahn–Saks theorem: In mathematics, the Vitali–Hahn–Saks theorem, introduced by Vitali (1907), Hahn (1922), and Saks (1933), proves that under some conditions a sequence of measures converging point-wise does so uniformly and the limit is also a measure. Statement of the theorem: If $(S,\mathcal{B},m)$ is a measure space with $m(S)<\infty$, and $(\lambda_n)$ is a sequence of complex measures such that each $\lambda_n$ is absolutely continuous with respect to $m$ and for all $B\in\mathcal{B}$ the finite limits $\lim_{n\to\infty}\lambda_n(B)=\lambda(B)$ exist, then the absolute continuity of the $\lambda_n$ with respect to $m$ is uniform in $n$; that is, $\lim_{m(B)\to 0}\lambda_n(B)=0$ uniformly in $n$. Also, $\lambda$ is countably additive on $\mathcal{B}$. Preliminaries: Given a measure space $(S,\mathcal{B},m)$, a distance can be constructed on $\mathcal{B}_0$, the set of measurable sets $B\in\mathcal{B}$ with $m(B)<\infty$. This is done by defining $d(B_1,B_2)=m(B_1\,\Delta\,B_2)$, where $B_1\,\Delta\,B_2=(B_1\setminus B_2)\cup(B_2\setminus B_1)$ is the symmetric difference of the sets $B_1,B_2\in\mathcal{B}_0$. This gives rise to a metric space $\widetilde{\mathcal{B}_0}$ by identifying two sets $B_1,B_2\in\mathcal{B}_0$ when $d(B_1,B_2)=0$. Thus a point $\overline{B}\in\widetilde{\mathcal{B}_0}$ with representative $B\in\mathcal{B}_0$ is the set of all $B_1\in\mathcal{B}_0$ such that $d(B,B_1)=0$. Proposition: $\widetilde{\mathcal{B}_0}$ with the metric defined above is a complete metric space. Preliminaries: Proof: Let $\chi_B$ denote the characteristic function of $B\in\mathcal{B}_0$. Then $d(B_1,B_2)=\int_S|\chi_{B_1}(x)-\chi_{B_2}(x)|\,dm$, so the metric space $\widetilde{\mathcal{B}_0}$ can be identified with a subset of the Banach space $L^1(S,\mathcal{B},m)$. Let $B_n\in\mathcal{B}_0$ be a Cauchy sequence. Then we can choose a sub-sequence $\chi_{B_{n'}}$ such that $\lim_{n'\to\infty}\chi_{B_{n'}}(x)=\chi(x)$ exists almost everywhere and $\lim_{n'\to\infty}\int_S|\chi(x)-\chi_{B_{n'}}(x)|\,dm=0$. It follows that $\chi=\chi_{B_\infty}$ for some $B_\infty\in\mathcal{B}_0$ (furthermore $\chi(x)=1$ if and only if $\chi_{B_{n'}}(x)=1$ for $n'$ large enough, so one may take $B_\infty=\liminf_{n'\to\infty}B_{n'}=\bigcup_{n'=1}^{\infty}\bigcap_{m=n'}^{\infty}B_m$, the limit inferior of the sequence), and hence $\lim_{n\to\infty}d(B_\infty,B_n)=0$. Proof of Vitali–Hahn–Saks theorem: Each $\lambda_n$ defines a function $\overline{\lambda}_n(\overline{B})$ on $\widetilde{\mathcal{B}}$ by taking $\overline{\lambda}_n(\overline{B})=\lambda_n(B)$. This function is well defined, that is, independent of the representative $B$ of the class $\overline{B}$, due to the absolute continuity of $\lambda_n$ with respect to $m$. Moreover, $\overline{\lambda}_n$ is continuous. For every $\epsilon>0$ the set $F_{k,\epsilon}=\{\overline{B}\in\widetilde{\mathcal{B}}:\sup_{n\ge 1}|\overline{\lambda}_k(\overline{B})-\overline{\lambda}_{k+n}(\overline{B})|\le\epsilon\}$ is closed in $\widetilde{\mathcal{B}}$, and by the hypothesis $\lim_{n\to\infty}\lambda_n(B)=\lambda(B)$ we have that $\widetilde{\mathcal{B}}=\bigcup_{k=1}^{\infty}F_{k,\epsilon}$. By the Baire category theorem, at least one $F_{k_0,\epsilon}$ must contain a non-empty open set of $\widetilde{\mathcal{B}}$. This means that there are $\overline{B_0}\in\widetilde{\mathcal{B}}$ and a $\delta>0$ such that $d(B,B_0)<\delta$ implies $\sup_{n\ge 1}|\overline{\lambda}_{k_0}(\overline{B})-\overline{\lambda}_{k_0+n}(\overline{B})|\le\epsilon$. On the other hand, any $B\in\mathcal{B}$ with $m(B)\le\delta$ can be represented as $B=B_1\setminus B_2$ with $d(B_1,B_0)\le\delta$ and $d(B_2,B_0)\le\delta$. This can be done, for example, by taking $B_1=B\cup B_0$ and $B_2=B_0\setminus(B\cap B_0)$. Thus, if $m(B)\le\delta$ and $k\ge k_0$, then $|\lambda_k(B)|\le|\lambda_{k_0}(B)|+|\lambda_{k_0}(B_1)-\lambda_k(B_1)|+|\lambda_{k_0}(B_2)-\lambda_k(B_2)|\le|\lambda_{k_0}(B)|+2\epsilon$. Therefore, by the absolute continuity of $\lambda_{k_0}$ with respect to $m$, and since $\epsilon$ is arbitrary, we get that $m(B)\to 0$ implies $\lambda_n(B)\to 0$ uniformly in $n$. Proof of Vitali–Hahn–Saks theorem: In particular, $m(B)\to 0$ implies $\lambda(B)\to 0$. By the additivity of the limit it follows that $\lambda$ is finitely additive. Then, since $\lim_{m(B)\to 0}\lambda(B)=0$, it follows that $\lambda$ is actually countably additive.
**Cerro de Oro Formation** Cerro de Oro Formation: The Cerro de Oro Formation is a geologic formation in Mexico. It preserves fossils dating back to the Cretaceous period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Meat extract** Meat extract: Meat extract is highly concentrated meat stock, usually made from beef or chicken. It is used to add meat flavour in cooking, and to make broth for soups and other liquid-based foods. Meat extract: Meat extract was invented by Baron Justus von Liebig, a German 19th-century organic chemist. Liebig specialised in chemistry and the classification of food, and wrote a paper on how the nutritional value of meat is lost by boiling. Liebig's view was that meat juices, as well as the fibres, contained much important nutritional value and that these were lost by boiling or cooking in unenclosed vessels. Fuelled by a desire to help feed the undernourished, in 1840 he developed a concentrated beef extract, Extractum carnis Liebig, to provide a nutritious meat substitute for those unable to afford the real thing. However, it took 30 kg of meat to produce 1 kg of extract, making the extract too expensive. Commercialisation: Liebig's Extract of Meat Company Liebig went on to co-found the Liebig's Extract of Meat Company (later Oxo) in London, whose factory, opened in 1865 in Fray Bentos, a port in Uruguay, took advantage of meat from cattle being raised for their hides, at one third the price of British meat. Before that, the firm had been Giebert et Compagnie (April 1863). Commercialisation: Bovril In the 1870s, John Lawson Johnston invented 'Johnston's Fluid Beef', later renamed Bovril. Unlike Liebig's meat extract, Bovril also contained flavourings. It was manufactured in Argentina and Uruguay, which could provide cheap cattle. Effects: Liebig and Bovril were important contributors to the beef industry in South America. Bonox: Created by Fred Walker and Company and on the market in 1919, Bonox is manufactured in Australia. When it was created, it was often offered as an alternative hot drink, it being common to offer "Coffee, tea or Bonox". Today: Meat extracts have largely been supplanted by bouillon cubes and yeast extract. Some brands of meat extract, such as Oxo and Bovril, now contain yeast extract as well as meat extract. For example, the current formulation of Bovril contains 41% beef stock, 24% yeast extract, 1% dehydrated beef and salt (388 mg sodium per 100 g), spice extracts and flavour enhancers among other ingredients. High-purity meat extract is still available from laboratory supply companies for microbiology.
**Ethanolamine-phosphate phospho-lyase** Ethanolamine-phosphate phospho-lyase: The enzyme ethanolamine-phosphate phospho-lyase (EC 4.2.3.2) catalyzes the chemical reaction: ethanolamine phosphate + H2O ⇌ acetaldehyde + NH3 + phosphate. This enzyme belongs to the family of lyases, specifically those carbon-oxygen lyases acting on phosphates. The systematic name of this enzyme class is ethanolamine-phosphate phosphate-lyase (deaminating; acetaldehyde-forming). Other names in common use include O-phosphoethanolamine-phospholyase, amino alcohol O-phosphate phospholyase, O-phosphorylethanol-amine phospho-lyase, and ethanolamine-phosphate phospho-lyase (deaminating). It employs one cofactor, pyridoxal phosphate.
**Dm-crypt** Dm-crypt: dm-crypt is a transparent block device encryption subsystem in Linux kernel versions 2.6 and later and in DragonFly BSD. It is part of the device mapper (dm) infrastructure, and uses cryptographic routines from the kernel's Crypto API. Unlike its predecessor cryptoloop, dm-crypt was designed to support advanced modes of operation, such as XTS, LRW and ESSIV (see disk encryption theory for further information), in order to avoid watermarking attacks. In addition to that, dm-crypt addresses some reliability problems of cryptoloop. dm-crypt is implemented as a device mapper target and may be stacked on top of other device mapper transformations. It can thus encrypt whole disks (including removable media), partitions, software RAID volumes, logical volumes, as well as files. It appears as a block device, which can be used to back file systems, swap or as an LVM physical volume. Dm-crypt: Some Linux distributions support the use of dm-crypt on the root file system. These distributions use initrd to prompt the user to enter a passphrase at the console, or insert a smart card prior to the normal boot process. Frontends: The dm-crypt device mapper target resides entirely in kernel space, and is only concerned with encryption of the block device – it does not interpret any data itself. It relies on user space front-ends to create and activate encrypted volumes, and manage authentication. At least two frontends are currently available: cryptsetup and cryptmount. Frontends: cryptsetup The cryptsetup command-line interface, by default, does not write any headers to the encrypted volume, and hence only provides the bare essentials: encryption settings have to be provided every time the disk is mounted (although usually employed with automated scripts), and only one key can be used per volume; the symmetric encryption key is directly derived from the supplied passphrase. Frontends: Because it lacks a "salt", using cryptsetup is less secure in this mode than is the case with Linux Unified Key Setup (LUKS). However, the simplicity of cryptsetup makes it useful when combined with third-party software, for example, with smart card authentication. cryptsetup also provides commands to deal with the LUKS on-disk format. This format provides additional features such as key management and key stretching (using PBKDF2), and remembers encrypted volume configuration across reboots. cryptmount The cryptmount interface is an alternative to the "cryptsetup" tool that allows any user to mount and unmount a dm-crypt file system when needed, without needing superuser privileges after the device has been configured by a superuser. Features: The fact that disk encryption (volume encryption) software like dm-crypt only deals with transparent encryption of abstract block devices gives it a lot of flexibility. This means that it can be used for encrypting any disk-backed file systems supported by the operating system, as well as swap space; write barriers implemented by file systems are preserved. Encrypted volumes can be stored on disk partitions, logical volumes, whole disks as well as file-backed disk images (through the use of loop devices with the losetup utility). dm-crypt can also be configured to encrypt RAID volumes and LVM physical volumes.
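As a sketch of the cryptsetup workflow described above, the following Python wrapper drives the basic LUKS life cycle. The device path and mapping name are illustrative; the commands must run as root, and the `cryptsetup` tool must be installed:

```python
import subprocess

DEVICE = "/dev/sdb1"   # block device to encrypt (illustrative)
NAME = "secret"        # device-mapper name, appears as /dev/mapper/secret

def run(*args: str) -> None:
    """Run a cryptsetup command, raising on failure."""
    subprocess.run(["cryptsetup", *args], check=True)

# One-time: write a LUKS header and set the passphrase (prompts on stdin).
run("luksFormat", DEVICE)

# Each use: open the mapping; the plaintext view can then hold a filesystem.
run("open", DEVICE, NAME)
subprocess.run(["mkfs.ext4", f"/dev/mapper/{NAME}"], check=True)

# ... mount, use, unmount ...
run("close", NAME)
```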
Features: dm-crypt can also be configured to provide pre-boot authentication through an initrd, thus encrypting all the data on a computer – except the bootloader, the kernel and the initrd image itself. When using the cipher block chaining mode of operation with predictable initialization vectors, as other disk encryption software did, the disk is vulnerable to watermarking attacks. This means that an attacker is able to detect the presence of specially crafted data on the disk. To address this problem in its predecessors, dm-crypt included provisions for more elaborate, disk encryption-specific modes of operation. Support for ESSIV (encrypted salt-sector initialization vector) was introduced in Linux kernel version 2.6.10, LRW in 2.6.20 and XTS in 2.6.24. Features: The Linux Crypto API includes support for most popular block ciphers and hash functions, which are all usable with dm-crypt. Supported encrypted-volume formats include LUKS volumes, loop-AES and, since Linux kernel 3.13, the TrueCrypt target called "tcw". Compatibility: dm-crypt and LUKS encrypted disks can be accessed and used under MS Windows using the now defunct FreeOTFE (formerly DoxBox, LibreCrypt), provided that the filesystem used is supported by Windows (e.g. FAT/FAT32/NTFS). Encrypted ext2 and ext3 filesystems are supported by using Ext2Fsd or the so-called "Ext2 Installable File System for Windows"; FreeOTFE also supports them. Cryptsetup/LUKS and the required infrastructure have also been implemented on the DragonFly BSD operating system.
**Blantyre coma scale** Blantyre coma scale: The Blantyre coma scale is a modification of the Pediatric Glasgow Coma Scale, designed to assess malarial coma in children. It was designed by Terrie Taylor and Malcolm Molyneux in 1987, and named for the Malawian city of Blantyre, site of the Blantyre Malaria Project. Using the scale: The score assigned by the Blantyre coma scale is a number from 0 to 5, determined by adding the results from three components: motor response, verbal response, and eye movement. The minimum score of 0 indicates the worst result, while the maximum of 5 indicates the best. All scores under 5 are considered abnormal. Using the scale:
Eye movement:
- 1 – Watches or follows
- 0 – Fails to watch or follow
Best motor response:
- 2 – Localizes painful stimulus (patient's ability to remove stimulus)
- 1 – Withdraws limb from painful stimulus
- 0 – No response or inappropriate response
Best verbal response:
- 2 – Cries appropriately with pain, or, if verbal, speaks
- 1 – Moan or abnormal cry with pain
- 0 – No vocal response to pain
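As an illustration of how the three components combine into the 0–5 score, here is a small scoring helper (a hypothetical sketch, not a clinical tool; the function and argument names are invented):

```python
def blantyre_score(eye: int, motor: int, verbal: int) -> int:
    """Sum the three Blantyre components into a 0-5 score.

    eye: 0-1, motor: 0-2, verbal: 0-2, per the scale above.
    """
    if eye not in (0, 1) or motor not in (0, 1, 2) or verbal not in (0, 1, 2):
        raise ValueError("component out of range")
    return eye + motor + verbal

score = blantyre_score(eye=1, motor=2, verbal=1)  # -> 4
print(score, "abnormal" if score < 5 else "normal")
```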
**Isoflupredone** Isoflupredone: Isoflupredone, also known as deltafludrocortisone and 9α-fluoroprednisolone, is a synthetic glucocorticoid corticosteroid which was never marketed. Its acetate ester, isoflupredone acetate, is used in veterinary medicine.
**Trilliant cut** Trilliant cut: A trilliant cut, sometimes called a trillion, trillian, or Trielle, is a triangular type of gemstone cut. The cut has many variations. It may have curved or uncurved sides. The shape of the top surface, or table, also varies. Creation: The trilliant cut was introduced by the Asscher brothers in Amsterdam. In the early 1960s, Leon Finker created his version of the triangular brilliant cut diamond, which he called the Trillion Cut. Mr. Finker had a large diamond cutting factory in New York, and the Henry Meyer Diamond Company, which was also cutting triangular brilliant diamonds, used the same diamond cutters, though their stones were cut slightly differently. Creation: Henry Meyer referred to his diamond cut as "Trilliant", while Mr. Finker called his cut Trillion. When Mr. Finker's son, Marvin Finker, entered the business in the early 1970s, he decided to patent the cut that his father was cutting, and he trademarked the term "Trillion®" for their now-patented triangular brilliant cut diamond. Creation: They used the term "Trillion", and "Trillion" in a stylized design, until they lost the trademark in 1986, when a federal court judge decided that the words "Trillion" and "Trilliant" were phonetically equivalent {Leon Finker, Inc. v. Schlussel, 469 F. Supp. 674 (S.D.N.Y. 1979)}. Since trilliant was a concatenation of the generic term "triangular brilliant", it could no longer be a registered trademark. Creation: Mr. Finker gave the term "trillion" to the trade in a half-page advertisement in the New York Times and announced that their patented cut would now be known as Trielle®, and TRIELLE® in a stylized design. Now that the trademark had been canceled, the term "Trillion Cut" came to be used to refer to all triangular-shaped gems, even step cut and cabochon stones. Triangular Brilliant and Triangular Modified Brilliant are the generic terms used by GIA when referring to non-branded diamonds. Straight: The cut displays a very sharp brilliance or fire if the diamond is cut to the correct depth, allowing good scintillation. It is generally cut with a 1:1 length-to-width ratio with straight edges. This straight-edged trillion cut is usually used for accent gemstones, on either side of a main, larger stone of a ring. Curved: This is a softer version of the straight-edged cut, with three soft points and curved sides. The length-to-width ratio should still be 1:1, keeping the gemstone proportioned. This cut is unusual, but can be found in pieces as a solitary gem or as an accent gem. The curved triangular brilliant is commonly known as trilliant.
**Photo booth** Photo booth: A photo booth is a vending machine or modern kiosk that contains an automated, usually coin-operated, camera and film processor. Today, the vast majority of photo booths are digital. History: The patent for the first automated photography machine was filed in 1888 by William Pope and Edward Poole of Baltimore. The first known working photographic machine was a product of the French inventor T. E. Enjalbert (March 1889). It was shown at the 1889 World's Fair in Paris. The German-born photographer Mathew Steffens of Chicago filed a patent for such a machine in May 1889. These early machines were not reliable enough to be self-sufficient. The first commercially successful automatic photographic apparatus was the "Bosco" from inventor Conrad Bernitt of Hamburg (patented July 16, 1890). All of these early machines produced ferrotypes. The first automatic photographic apparatus with a negative and positive process was invented by Carl Sasse (1896) of Germany. The modern concept of the photo booth with (later) a curtain originated with Anatol Josepho (previously Josephewitz), who had arrived in the U.S. from Russia in 1923. In 1925, the first photo booth appeared on Broadway in New York City. For 25 cents, the booth took, developed, and printed 8 photos, a process taking roughly 10 minutes. In the first six months after the booth was erected, it was used by 280,000 people. The Photomaton Company was created to place booths nationwide. On March 27, 1927, Josepho was paid $1 million and guaranteed future royalties for his invention. In the United Kingdom, entrepreneur Clarence Hatry established the Photomaton Parent Corporation, Ltd., in 1928. Operation: After money has been inserted in the machine, multiple customers can enter the booth and pose for a set number of exposures. Some common options include the ability to alter lighting and backdrops, while the newest versions offer features such as cameras from a variety of angles, fans, seats, and blue screen effects. Some establishments even offer costumes and wigs for customers to borrow. Operation: Once the pictures have been taken, the customers select the pictures that they wish to keep and customize them using a touch screen or pen-sensitive screen. The touch screen then displays a vast array of options, such as virtual stamps, pictures, clip art, colorful backdrops, borders, and pens, that can be superimposed on the photographs. Operation: Features that can be found in some sticker machines include beautifying the customers: brightening the pictures, making the eyes sparkle more, changing the hair, bringing a more reddish color to the lips, and fixing any blemishes by having them blurred. Other features include cutting out the original background and replacing it with a different background. Certain backgrounds may be chosen so that when the machine prints out the picture, the final sticker will be shiny with sparkles. Finally, the number and size of the pictures to be printed are chosen, and the pictures print out on a glossy full-color 10 × 15 cm sheet to be cut up and divided among the group of customers. Some photo booths also allow the pictures to be sent to customers' mobile phones. Other photo places have a scanner and laptop at the cashier's desk for customers to scan and copy their original picture before they cut and divide the pictures amongst their group. Types of photo booths: Passport photo booths Most photo booths are used for passport photos.
They are coin-operated automated machines designed to print a photo in a specific format that meets the passport photo requirements. Multiple copies can be printed so users can save some for future use. Types of photo booths: Traditionally, photo booths contain a seat or bench designed to seat the one or two patrons being photographed. The seat is typically surrounded by a curtain of some sort to allow for some privacy and help avoid outside interference during the photo session. Once the payment is made, the photo booth will take a series of photographs, although most modern booths may only take a single photograph and print out a series of identical pictures. Before each photograph, there will be an indication, such as a light or a buzzer, that will signal the patron to prepare their pose. Most booths will use artificial lighting, which may be flash or continuous lighting. After the last photograph in the series (typically between 3 and 8) has been taken, the photo booth begins developing the film, a process that used to take several minutes in the old "wet chemistry" booths but is now typically accomplished in about 30 seconds with digital technology. The prints are then delivered to the customer. Typical dimensions of these prints vary. The classic and most familiar arrangement from the old-style photo booths is four pictures on a strip about 40 mm wide by 205 mm long; digital prints tend to have a square arrangement of two images above two images. Types of photo booths: Both black and white and colour photo booths are common in the US; in Europe, however, the colour photo booth has almost entirely replaced black and white booths. Newer digital booths offer the customer the option of whether to print in colour or in black and white. Most modern photo booths use video or digital cameras instead of film cameras, and are under computer control. Some booths can also produce stickers, postcards, or other items with the photographs on them, rather than, or as well as, simply a strip of pictures. These often include an option of novelty decorative borders around the photos. Types of photo booths: Photo sticker booths Photo sticker booths or photo sticker machines originated in Japan (see Purikura below). They are a special type of photo booth that produces photo stickers. Still maintaining huge popularity in Japan, they have spread throughout Asia to Taiwan, South Korea, Hong Kong, Singapore, Malaysia, the Philippines, China, Vietnam, and Thailand. They have also been imported to Australia. Some have also begun appearing in the United States and Canada, although they failed to make any impression in Europe when introduced in the mid-1990s. Types of photo booths: Purikura In Japan, purikura (プリクラ) refers to a photo sticker booth or the product of such a photo booth. The name is a shortened form of the registered Atlus/Sega trademark Print Club (プリント倶楽部, Purinto Kurabu), the first purikura machine, introduced to arcades in 1995. Types of photo booths: Purikura produce what are today called selfies. Purikura is essentially a cross between a traditional license/passport photo booth and an arcade video game, with a computer which allows the manipulation of digital images. It involves users posing in front of a camera within the compact booth, having their images taken, and then printing the photos with various effects designed to look kawaii.
It presents a series of choices, such as desired backdrops, borders, insertable decorations, icons, text writing options, hair extensions, twinkling diamond tiaras, tenderized light effects, and predesigned decorative margins. Types of photo booths: History of purikura Purikura has roots in Japanese kawaii culture, which involves an obsession with beautifying self-representation in photographic forms, particularly among females. Purikura originates from the Japanese video game arcade industry. It was conceived in 1994 by Sasaki Miho, inspired by the popularity of girl photo culture and photo stickers in 1990s Japan. She worked for a Japanese game company, Atlus, where she suggested the idea, but it was initially rejected. Atlus eventually decided to pursue Miho's idea, and developed it with the help of a leading Japanese video game company, Sega, which later became the owner of Atlus. Sega and Atlus introduced Print Club, the first purikura, in February 1995, initially at game arcades, before expanding to other popular locations such as fast food shops, train stations, karaoke establishments and bowling alleys. Game Machine magazine listed Print Club as Japan's most successful arcade game in the non-video game category during early 1996, and it went on to become the overall highest-grossing arcade game of 1996 in Japan. By 1997, about 45,000 purikura machines had been sold, earning Sega an estimated ¥25 billion (£173 million), or $283,000,000 (equivalent to $516,000,000 in 2022), from purikura sales that year. Print Club went on to generate over $1 billion in sales for Atlus and Sega. The success of the original Sega-Atlus machine led other Japanese arcade game companies to produce their own purikura, including SNK's Neo Print in 1996 and Konami's Puri Puri Campus (Print Print Campus) in 1997, with Sega controlling about half of the market that year. Purikura became a popular form of entertainment among youths in Japan, then East Asia, in the 1990s. To capitalize on the purikura phenomenon, Japanese mobile phones began including a front-facing camera, which facilitated the creation of selfies, during the late 1990s to early 2000s. Photographic features in purikura were later adopted by smartphone apps such as Instagram and Snapchat, including scribbling graffiti or typing text over selfies, adding features that beautify the image, and photo editing options such as cat whiskers or bunny ears. Types of photo booths: 3D selfie photo booths A 3D selfie photo booth, such as the Fantasitron located at Madurodam, the miniature park, generates 3D selfie models from 2D pictures of customers. These selfies are often printed by dedicated 3D printing companies such as Shapeways. These models are also known as 3D portraits, 3D figurines or mini-me figurines. Cultural significance of photo booths: Purikura Purikura offer rare insight into Japanese popular culture, specifically girl culture. Purikura is a social activity, rarely done alone. It is also now an established form of entertainment, with most Japanese having tried it at least once. The wide lexicon associated with purikura also reveals that it has grown outside kawaii culture; erotic purikura, creepy purikura, and couples purikura are all genres of this popular form of self-photography. Graffiti purikura, an alternative genre of purikura, represents young females' desire to rebel against traditional gender roles.
In order to contradict stereotypical images of Japanese women as docile and meek, graffiti purikura photographers may photograph themselves in unflattering fashion or add stickers which defy cuteness, such as the poop emoji. Rather than simple conceited frivolity, purikura photography demonstrates ingenuity and creativity on the part of young Japanese women seeking forms of self-expression. Cultural significance of photo booths: Flinders Street Station photo booth At the Elizabeth Street exit of Melbourne's busiest railway station, Flinders Street Station, stands a culturally significant photo booth. The photo booth has been continuously operating at the station since 1961, with many feeling it has become an iconic and irreplaceable part of the station. It has been maintained for the entirety of its life by owner Alan Adler. During May 2018, Adler (then 86) was given 10 days' notice to remove the photo booth by Metro Trains Victoria to make way for station upgrades. Adler informed passers-by with a handwritten note explaining the news, prompting widespread backlash from the public and support for Adler and his photo booth. After a letter-writing campaign to Metro Trains, Public Transport Victoria CEO Jeroen Weimar phoned Adler to apologise and assured him a new home would be found. Days later they successfully relocated the photo booth to another location within Flinders Street Station. The photo booth shoots analogue images in black and white and joins 3 images together vertically. Cultural significance of photo booths: Photo booths for parties Photo booth rental companies allow a person to rent a photo booth for a short period of time (usually in hours) for a fee. Photo booth rentals have become popular in the United States primarily for wedding receptions, sweet sixteen parties, and Bar and Bat Mitzvah parties, along with a growing number of other public and private events. In addition to the photo booth and the printing of unlimited photo strips, rental companies usually include a photo booth attendant to service the photo booth and to help guests construct the guest book of photo strips. Online image hosting, compact discs containing the images, and related merchandise are readily available. Celebrities are frequent users of photo booths at parties. Cultural significance of photo booths: Apart from traditional photo printing, modern photo booths may also include the following new functions:
- Animated GIFs
- Flip book printing
- Virtual props, placed intelligently on the person's eyes or shoulders, etc.
- Slow-motion video
- Green-screen background removal
- Fun costume virtual dressing
- Games – mostly Kinect body-gesture-controlled games, with a printout of the person and his/her score
- Facial gesture recognition
Growth of photo booth rentals: As digital cameras, compact photo printers, and flat screen computer monitors became widely available in the early 2000s, people connected these together using a personal computer and software and created their own photo booths. Entrepreneurs began renting machines built along these lines at weddings and parties and the idea spread. From 2005 to 2012, interest in the United States for photo booth rentals grew significantly. By 2016 more people were searching for photo booth rentals than DJ rentals in 15 of North America's largest cities. In Greater Los Angeles alone, there are now more than 600 photo booth rental companies. Photo booth rentals have also become popular in other countries such as Canada, Australia, and the UK.
So far in 2016, searches for a photo booth have averaged 226,000 a month globally, a rise of 48.9% since 2015 (in the UK alone, this is nearly 20,000 searches a month).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fractional model** Fractional model: In applied statistics, fractional models are, to some extent, related to binary response models. However, instead of estimating the probability of being in one bin of a dichotomous variable, the fractional model typically deals with variables that take on all possible values in the unit interval. One can easily generalize this model to take on values on any other interval by appropriate transformations. Examples range from participation rates in 401(k) plans to television ratings of NBA games. Description: There have been two approaches to modeling this problem. Even though they both rely on an index that is linear in x_i combined with a link function, this is not strictly necessary. The first approach uses a log-odds transformation of y as a linear function of x_i, i.e., logit(y) = log(y / (1 − y)) = xβ. This approach is problematic for two distinct reasons. The y variable cannot take on the boundary values 1 and 0, and the interpretation of the coefficients is not straightforward. The second approach circumvents these issues by using the logistic function as the link function, i.e., E(y | x) = exp(xβ) / (1 + exp(xβ)). It immediately becomes clear that this set-up is very similar to the binary logit model, with the difference that the y variable can actually take on values in the unit interval. Many of the estimation techniques for the binary logit model, such as non-linear least squares and quasi-MLE, carry over in a natural way, just like heteroskedasticity adjustments and partial effects calculations. Extensions to this cross-sectional model have been provided that allow for taking into account important econometric issues, such as endogenous explanatory variables and unobserved heterogeneous effects. Under strict exogeneity assumptions, it is possible to difference out these unobserved effects using panel data techniques, although weaker exogeneity assumptions can also result in consistent estimators. Control function techniques to deal with endogeneity concerns have also been proposed.
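To make the quasi-MLE route concrete, the sketch below fits the logistic-mean model E(y | x) = exp(xβ)/(1 + exp(xβ)) by maximising the Bernoulli quasi-log-likelihood with Newton/IRLS iterations. It is a minimal illustration under stated assumptions, not a reference implementation; the simulated data, variable names, and convergence settings are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fractional outcome in [0, 1] (e.g. a participation rate),
# with an intercept and one regressor; beta_true is made up for the demo.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
mu = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = np.clip(mu + rng.normal(scale=0.1, size=n), 0.0, 1.0)  # boundary values allowed

# Quasi-MLE: maximise sum of y*log(mu) + (1-y)*log(1-mu) over beta, with
# mu = exp(X beta)/(1 + exp(X beta)). The score is X'(y - mu) and the
# expected Hessian uses weights mu*(1 - mu), giving IRLS/Newton steps.
beta = np.zeros(X.shape[1])
for _ in range(50):
    mu_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
    w = mu_hat * (1.0 - mu_hat)
    step = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu_hat))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

print("quasi-MLE estimates:", beta)  # close to beta_true for this design
```

Note that observations exactly at the boundaries y = 0 or y = 1 cause no trouble here, which is precisely the advantage over the log-odds transformation discussed above.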
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heteroclinic network** Heteroclinic network: In mathematics, a heteroclinic network is an invariant set in the phase space of a dynamical system. It can be thought of loosely as the union of more than one heteroclinic cycle. Heteroclinic networks arise naturally in a number of different types of applications, including fluid dynamics and population dynamics. The dynamics of trajectories near heteroclinic networks is intermittent: trajectories spend a long time performing one type of behaviour (often, close to equilibrium), before switching rapidly to another type of behaviour. This type of intermittent switching behaviour has led several different groups of researchers to use heteroclinic networks as a way to model and understand various types of neural dynamics.
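The intermittent switching described above can be reproduced numerically. The sketch below integrates a Guckenheimer–Holmes-type system, a standard example supporting a robust heteroclinic cycle (the building block of a network); the parameter values, initial condition, and step sizes are illustrative assumptions.

```python
import numpy as np

# Cubic system with a robust heteroclinic cycle between (1,0,0), (0,1,0),
# (0,0,1). Choosing c > 1 > b > 0 with (c - 1) > (1 - b) makes the cycle
# attracting; these particular values are assumptions for the demo.
b, c = 0.55, 1.5

def rhs(x):
    x1, x2, x3 = x
    return np.array([
        x1 * (1 - x1**2 - b * x2**2 - c * x3**2),
        x2 * (1 - x2**2 - b * x3**2 - c * x1**2),
        x3 * (1 - x3**2 - b * x1**2 - c * x2**2),
    ])

def rk4(x, dt):
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * dt * k1)
    k3 = rhs(x + 0.5 * dt * k2)
    k4 = rhs(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([0.9, 0.05, 0.02])   # start near the equilibrium (1, 0, 0)
dt, steps = 0.01, 100_000
dominant = []
for _ in range(steps):
    x = rk4(x, dt)
    dominant.append(int(np.argmax(np.abs(x))))

# Long runs of one dominant coordinate, separated by rapid switches, are
# the intermittency described above; dwell times grow at each visit.
switches = np.flatnonzero(np.diff(dominant))
print("switch times (in steps):", switches[:12])
```

In exact arithmetic the dwell times grow without bound; in floating point a trajectory eventually sticks at one equilibrium once the small coordinates underflow, so demonstration runs are kept short.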
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HD-MAC** HD-MAC: HD-MAC (High Definition Multiplexed Analogue Components) was a broadcast television standard proposed by the European Commission in 1986, as part of the Eureka 95 project. It belongs to the MAC (Multiplexed Analogue Components) standard family. It was an early attempt by the EEC to provide high-definition television (HDTV) in Europe. It is a complex mix of an analogue signal (based on the Multiplexed Analogue Components standard), multiplexed with digital sound and assistance data for decoding (DATV). The video signal (1250 lines/50 fields per second in 16:9 aspect ratio, with 1152 visible lines) was encoded with a modified D2-MAC encoder. HD-MAC: HD-MAC could be decoded by normal D2-MAC standard definition receivers, but no extra resolution was obtained and certain artifacts were visible. To decode the signal in full resolution, a specific HD-MAC tuner was required. Naming convention: The European Broadcasting Union video format description is as follows: width × height [scan type: i or p] / number of full frames per second. European standard definition digital broadcasts use 720×576i/25, meaning 25 interlaced frames per second, each 720 pixels wide and 576 pixels high: odd lines (1, 3, 5 ...) are grouped to build the odd field, which is transmitted first, followed by the even field containing lines 2, 4, 6... Thus, there are two fields in a frame, resulting in a field frequency of 25 × 2 = 50 Hz. Naming convention: The visible part of the video signal provided by an HD-MAC receiver was 1152i/25, which exactly doubles the vertical resolution of standard definition. The amount of information is multiplied by 4, considering the encoder started its operations from a 1440x1152i/25 sampling grid. Standard history: Work on the HD-MAC specification started officially in May 1986. The purpose was to react against a Japanese proposal, supported by the US, which aimed to establish the NHK-designed Hi-Vision (also known as MUSE) system as a world standard. Besides preservation of the European electronics industry, there was also a need to produce a standard that would be compatible with the 50 Hz field frequency systems (used by a large majority of countries in the world). In fact, the exactly 60 Hz field rate of the Japanese proposal also worried the US, as their NTSC M-based standard definition infrastructure used a practical frequency of 59.94 Hz, potentially leading to incompatibility problems. Standard history: In September 1988, the Japanese performed the first high definition broadcasts of the Olympic games, using their Hi-Vision system (NHK had produced material in this format since 1982). In that same month, Europe showed for the first time a credible alternative, namely a complete HD-MAC broadcasting chain, at IBC 88 in Brighton. This show included the first progressive scan HD video camera prototypes (Thomson/LER). For the Albertville 1992 Winter Olympics and Barcelona 1992 Summer Olympics, a public demonstration of HD-MAC broadcasting took place. 60 HD-MAC receivers for the Albertville games and 700 for the Barcelona games were set up in "Eurosites" to show the capabilities of the standard. 1250-line (1152 visible) CRT projectors were used to create an image a few meters wide in public spaces in Barcelona for the Olympics. There were some Thomson "Space system" 16:9 CRT TV sets as well. The project sometimes used rear-projection televisions. In addition, some 80,000 viewers with D2-MAC receivers were also able to watch the channel (though not in HD).
It is estimated that 350,000 people across Europe were able to see this demonstration of European HDTV. This project was financed by the EEC. The PAL-converted signal was used by mainstream broadcasters such as SWR, BR and 3sat. The HD-MAC standard was also demonstrated at Seville Expo '92, exclusively using equipment designed to work with the standard, such as Plumbicon and CCD cameras, direct view and rear projection CRT TVs, BCH 1000 Type B VTRs, single mode fiber optic cables, and Laserdisc players with their respective discs. Production equipment was visible to the public through windows. Because spare UHF bandwidth was very scarce, HD-MAC was usable "de facto" only by cable and satellite providers, whose bandwidth was less constrained, similarly to Hi-Vision, which was only broadcast by NHK through a dedicated satellite channel called BShi. However, the standard never became popular among broadcasters. Because of all this, analogue HDTV could not replace conventional terrestrial SDTV (PAL/SECAM), making HD-MAC sets unattractive to potential consumers. Standard history: It was required that all high-powered satellite broadcasters use MAC from 1986. However, the launch of medium-powered satellites by SES and the use of PAL allowed broadcasters to bypass HD-MAC, reducing their transmission costs. HD-MAC was still left for transcontinental satellite links, however. The HD-MAC standard was abandoned in 1993, and since then all EU and EBU efforts have focused on the DVB system (Digital Video Broadcasting), which allows both SDTV and HDTV. Standard history: A contemporary article about IFA 1993 provides a view of the project's status close to its end. It mentions "a special BBC compilation encoded in HD-MAC and replayed from a D1 Video Tape Recorder". HD-MAC development was stopped alongside the EUREKA project in 1996, because picture quality was not deemed good enough: receiving TVs did not have enough resolution, the 16:9 aspect ratio that would later become standard was seen as exotic, and receiving TVs were not large enough to exhibit the image quality of the standard, while those that were large enough were CRT TVs, which made them extremely heavy. Technical details: Transmission PAL/SECAM analogue SDTV broadcasts use 6 or 7 MHz (VHF), or 8 MHz (UHF) channels. The 819-line System E used 14 MHz wide VHF channels. For HD-MAC, the transmission medium had to guarantee a baseband bandwidth of at least 11.14 MHz. This translates to a 12 MHz channel spacing in cable networks. The specification allows for 8 MHz channels, but in this case the assistance data can no longer be correctly decoded, and it is only possible to extract a standard definition signal using a D2-MAC receiver. For satellite broadcasting, due to FM modulation spectrum expansion, an entire satellite transponder would be used, resulting in 27 to 36 MHz of bandwidth. The situation is much the same as in analogue standard definition: a given transponder can only support one analogue channel. So from this point of view, going to HD did not represent a disadvantage. Technical details: Bandwidth reduction BRE (Bandwidth Reduction Encoding) operation started with analogue HD video (even when the source was a digital recorder, it was reconverted to analogue to feed the encoder). It was specified to have a 50 Hz field frequency. It could be interlaced, with 25 frames a second (called 1250/50/2 in the recommendation), or progressively scanned with 50 full frames a second (called 1250/50/1). The interlaced version was the one used in practice.
In any case, the number of visible lines was 1152, twice the standard 576-line vertical definition. The full number of lines in a frame period, including those that cannot be displayed, was 1250. This made for a 32 µs line period. According to the ITU recommendation for HDTV standards parameters, the active part of the line was 26.67 µs long (see also the LDK 9000 camera document). Technical details: Had the modern trend for square pixels applied, this would have yielded a 2048x1152 sampling grid. There was no such requirement in the standard, though, since CRT monitors do not need any extra scaling to be able to show non-square pixels. According to the specification, the sampling rate for the interlaced input was 72 MHz, resulting in 72 × 26.67 ≈ 1920 horizontal samples. It was then reconverted to 1440 within the sampled domain. The input signal often originated from sources previously sampled at only 54 MHz, for economic reasons, and therefore already contained no more than the analogue equivalent of 1440 samples per line. Technical details: Ultimately, the starting point for BRE was a 1440x1152 sampling grid (twice the horizontal and vertical resolutions of digital SD), interlaced, at 25 fps. To improve the horizontal resolution of the D2-MAC norm, only its bandwidth had to be increased. This was easily done as, unlike PAL, the sound is not sent on a sub-carrier, but multiplexed with the picture. Increasing the vertical resolution was more complex, however, as the line frequency had to stay at 15.625 kHz to remain compatible with D2-MAC. This offered three choices: 50 frames per second with only 288 lines, for fast-moving scenes (20 ms mode); 25 frames per second with 576 lines, for normally moving scenes (40 ms mode); and 12.5 frames per second with all 1152 lines, for slow motion (80 ms mode). As none of the three modes would have been sufficient on its own, the choice during encoding was made not for the whole picture, but for little blocks of 16×16 pixels. The signal then contained hints (the DATV digital stream) that controlled which de-interlacing method the decoder should use. Technical details: The 20 ms mode offered improved temporal resolution, but the 80 ms mode was the only one that provided high spatial definition in the usual sense. The 40 ms mode threw away one of the HD fields and reconstructed it in the receiver with the assistance of motion compensation data. Some indications were also provided in the case of whole-frame movement (camera panning, etc.) to improve the quality of the reconstruction. Technical details: The encoder could work in "camera" operating mode, using all three coding modes, but also in "film" mode, where the 20 ms coding mode was not used. The 80 ms mode took advantage of its reduced 12.5 fps frame rate to spread the contents of an HD frame over two SD frames, meaning four 20 ms fields = 80 ms, hence the name. Technical details: But that was not enough, as a single HD frame contains the equivalent of 4 SD frames. This could have been "solved" by doubling the bandwidth of the D2-MAC signal, thus increasing the allowed horizontal resolution by the same factor. Instead, the standard D2-MAC channel bandwidth was preserved, and one pixel out of two was dropped from each line. This sub-sampling was done in a quincunx pattern. Assuming the pixels on each line are numbered from 1 to 1440, only pixels 1, 3, 5... were retained from the first line, pixels 2, 4, 6... from the second, pixels 1, 3, 5... again from the third, and so on.
That way, information from all the columns of the HD frame was conveyed to the receiver. Each missing pixel was surrounded by 4 transmitted ones (except at the sides) and could be interpolated from them. The resulting 720-sample horizontal resolution was further truncated to the 697 samples per line limit of the D2-HDMAC video multiplex. As a consequence of these operations, a 4:1 reduction factor was achieved, allowing the high definition video signal to be transported in a standard D2-MAC channel. The samples retained by the BRE were assembled into a valid standard definition D2-MAC vision signal and finally converted to analogue for transmission. The modulation parameters were such that the independence of the samples was preserved. To fully decode the picture, the receiver had to sample the signal again and then read from memory several times. The BRD (Bandwidth Restoration Decoder) in the receiver would then reconstruct a 1394x1152 sampling grid from it, under the control of the DATV stream, to be fed into its DAC. Technical details: The final output was a 1250-line (1152 visible), 25 fps, interlaced, analogue HD video signal with a 50 Hz field frequency. Technical details: Progressive scanning European systems are generally referred to as 50 Hz standards (field frequency). The two fields are 20 ms apart in time. The Eu95 project stated it would evolve towards 1152p/50, and this is taken into account as a possible source in the D2-HDMAC specification. In that format, a full frame is captured every 20 ms, thus preserving the quality of motion of television and topping it with solid, artifact-free frames representing only one instant in time, as is done for cinema. The 24 fps frame frequency of cinema is a bit low, though, and a generous amount of motion smear is required to allow the eye to perceive smooth motion. 50 Hz is more than twice that rate, and the motion smear can be reduced in proportion, allowing for sharper pictures. Technical details: In practice, 50P was not used very much. Some tests were even done by having film shot at 50 fps and subsequently telecined. Thomson/LER presented a progressive camera; however, it used a form of quincunx sampling and therefore had some bandwidth constraints. This requirement meant pushing the technology boundaries of the time, and would have added to the notorious lack of sensitivity of some Eu95 cameras (particularly CRT ones). This thirst for light was one of the problems that plagued the operators shooting the French film "L'affaire Seznec" (The Seznec Case) in 1250i. Technical details: Some CCD cameras were developed in the context of the project; see for example the LDK9000: 50 dB signal-to-noise ratio at 30 MHz, 1000 lux at f/4. Technical details: The Eu95 system would have provided better compatibility with cinema technology than its competitor, first because of progressive scanning, and second because of the convenience and quality of transfer between 50 Hz standards and film (no motion artifacts; one just needs to invert the usual "PAL speed-up" process by slowing down the frame rate in a 25/24 ratio). Taking one frame out of two from a 50P stream would have provided a suitable 25P video as a starting point for this operation. If the sequence is shot at 50P with a fully open shutter, it will produce the same amount of motion smear as a 25P shot with a half-open shutter, a common setting when shooting with a standard movie camera.
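As an illustration of the quincunx sub-sampling described above, here is a minimal sketch (NumPy, with toy dimensions standing in for the 1440x1152 grid) of the checkerboard drop pattern and a simple 4-neighbour interpolation at the receiver. It is only a schematic of the geometry; the real BRE/BRD also involved the per-block mode decisions and DATV assistance data described earlier.

```python
import numpy as np

def quincunx_subsample(frame):
    """Drop every other pixel in a checkerboard: keep columns 1, 3, 5...
    (1-based) on odd lines and columns 2, 4, 6... on even lines,
    marking dropped pixels as NaN."""
    out = frame.astype(float)
    for y in range(out.shape[0]):
        out[y, (y + 1) % 2::2] = np.nan
    return out

def interpolate_missing(sub):
    """Estimate each dropped pixel as the mean of its 4 transmitted
    horizontal/vertical neighbours (edges are only approximate)."""
    padded = np.pad(sub, 1, mode="edge")
    est = np.nanmean(
        np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                  padded[1:-1, :-2], padded[1:-1, 2:]]), axis=0)
    return np.where(np.isnan(sub), est, sub)

# Toy 8x8 "frame" standing in for the HD sampling grid
frame = np.arange(64).reshape(8, 8)
reconstructed = interpolate_missing(quincunx_subsample(frame))
print("max reconstruction error:", np.abs(reconstructed - frame).max())
```

On a smooth gradient like this toy frame the interior is recovered exactly, which is why the scheme preserves apparent resolution well on low-detail areas while halving the transmitted sample count.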
Technical details: In practice, Hi-Vision seems to have been more successful in that regard, having been used for films such as Giulia e Giulia (1987) and Prospero's Books (1991). Technical details: Recording Consumer A consumer tape recorder prototype was presented in 1988. It had an 80-minute recording time and used a 1.25 cm "metal" tape. Bandwidth was 10.125 MHz and the signal-to-noise ratio 42 dB. An HD-MAC videodisc prototype had been designed as well. The version presented in 1988 could record 20 min per side of a 30 cm disc. Bandwidth was 12 MHz and S/N 32 dB. This medium was used for several hours at Expo 92. Technical details: Professional equipment On the studio and production side, it was entirely different. HD-MAC bandwidth reduction techniques bring the HD pixel rate down to the level of SD. So in theory, it would have been possible to use an SD digital video recorder, assuming it provided enough room for the DATV assistance stream, which requires less than 1.1 Mbit/s. SD video using the 4:2:0 format (12 bits per pixel) needs 720x576x25x12 bits per second, which is slightly less than 125 Mbit/s, to be compared with the 270 Mbit/s available from a D-1 machine. Technical details: But there is no real reason for studio equipment to be constrained by HD-MAC, as the latter is only a transmission standard, used to convey the HD material from the transmitter to the viewers. Furthermore, technical and financial resources are available to store the HD video at better quality for editing and archiving. Technical details: So in practice, other methods were used. At the start of the Eureka 95 project, the only means of recording the HD signal from a camera was on a massive 1-inch reel-to-reel tape machine, the BTS BCH 1000, which was based on the Type B videotape format but with 8 video heads instead of the two normally used, together with a higher linear tape speed of 66 cm/s, thus accommodating the higher bandwidth requirements of HD video. Technical details: The plan within the Eureka 95 project was to develop an uncompressed, 72 MHz sampling digital recorder, dubbed the "Gigabit" recorder. It was expected to take a year to develop, so in the interim, two alternative digital recording systems were assembled, both using the standard definition D1 uncompressed digital component recorder as a starting point. Technical details: The quincunx-subsampled, or double/dual D1, system developed by Thomson used two D-1 digital recorders synchronized in a master/slave relationship. Odd fields could then be recorded on one of the D-1s and even fields on the other. Horizontally, the system recorded just half the horizontal bandwidth, with samples taken in a quincunx sampling grid. This gave the system full bandwidth performance in the diagonal direction, but halved it horizontally or vertically depending on the exact temporal-spatial characteristics of the image. Technical details: The Quadriga system was developed by the BBC in 1988 using 4 synchronised D1 recorders and 54 MHz sampling, and distributed the signal in such a way that blocks of 4 pixels were sent to each recorder in turn. Thus if a single tape was viewed, the image would appear as a fair but distorted representation of the whole image, enabling edit decisions to be taken on a single recording. A three-machine edit was possible on a single Quadriga by processing each of the four channels in turn, with identical edits subsequently made on the other three channels under the control of a programmed edit controller.
Technical details: The original D1 recorders were restricted to a parallel video interface with very bulky, short cables, but this was not a problem, since the digital signals were contained within the 5 half-height racks (4 D1s and the interface/control/interleaving rack) which made up the Quadriga, and initially all external signals were analogue components. The introduction of SDI (the 270 Mbit/s Serial Digital Interface) simplified cabling by the time the BBC constructed a second Quadriga. Technical details: Philips also constructed a Quadriga but used a slightly different format, with the HD image divided into four quadrants, each quadrant going to one of the four recorders. Excepting a slightly longer processing delay, it otherwise worked similarly to the BBC approach, and both versions of the Quadriga equipment were made to be interoperable, switchable between interleaved and quadrant modes. Technical details: In about 1993, Philips, in a joint venture with Bosch (BTS), produced a "BRR" (Bit Rate Reduction) recording system to enable the full HD signal to be recorded onto a single D1 (or D5 HD) recorder. A low-resolution version of the image could be viewed in the centre of the screen if the tape was replayed on a conventional D1 recorder, surrounded by what appeared to be noise but was in fact simply coded/compressed data, in a similar way to later MPEG digital compression techniques, with a compression rate of 5:1, starting from 72 MHz sampling. Some BRR equipment also contained Quadriga interfaces, for ease of conversion between recording formats, and was switchable between the BBC and Philips versions of the Quadriga format. By this time, Quadriga signals were being carried on four SDI cables. Technical details: Finally, with help from Toshiba, in around 2000, the Gigabit recorder, by now known as the D6 HDTV VTR "Voodoo", was produced, some years after work on the 1250-line system had ceased in favour of the Common Image Format, the HDTV system as it is known today. Hence the quality of Eureka 95 archives is higher than what viewers could see at the output of an HD-MAC decoder. Transfer to film: For the making of the HD-based movie L'affaire Seznec, the Thomson company certified that it would be able to transfer HD to 35 mm film, but none of the attempts were successful (shooting was done on dual-D1). However, another French movie shot in 1994, Du fond du coeur: Germaine et Benjamin, allegedly achieved such a transfer. It is said to have been shot in digital high definition in 1250 lines. Technical details: If so, it would arguably be the first digital high definition movie, using a film-friendly 50 Hz field rate, 7 years before Vidocq and 8 years before Star Wars: Episode II – Attack of the Clones. For a historical perspective on HD-originated movies, one can mention early attempts such as 'Harlow', shot in 1965 using a near-HD analogue 819-line process that later evolved to higher resolutions (see Electronovision). Project's afterlife: Experience was gained on important building blocks like HD digital recording, digital processing including motion compensation, and HD CCD cameras, and also on the factors driving acceptance or rejection of a new format by professionals; all of this was put to good use in the subsequent Digital Video Broadcasting project which, in contrast to HD-MAC, is a great worldwide success. Despite early claims by competitors that it could not do HD, it was soon deployed in Australia for just that purpose.
Project's afterlife: The cameras and tape recorders were reused for early experiments in digital high definition cinema. The US brought home some of the Eu95 cameras to be studied in the context of their own HDTV standard development effort. In France, a company called VTHR (Video Transmission Haute Resolution) used the Eu95 hardware for some time to retransmit cultural events to small villages (later, they switched to upscaled 15 Mbit/s MPEG2 SD). Project's afterlife: In 1993, Texas Instruments built a 2048x1152 DMD prototype. No rationale is given in the papers for choosing this specific resolution over the Japanese 1035 active lines system, or alternatively doubling the 480 lines of standard US TV to 960, but that way it could cover all resolutions expected to be present on the market, including the European one, which happened to be the highest. Some legacy of this development may be seen in "2K" and "4K" digital movie projectors using TI DLP chips, which run a slightly wider than usual 2048x1080 or 4096x2160 resolution. This gives a 1.896:1 aspect ratio without anamorphic stretching (vs the 1.778:1 of regular 16:9, with 1920 or 3840 horizontal pixels), gives a little (6.7%) more horizontal resolution with anamorphic lenses when showing 2.21:1 (or wider) movies specifically prepared for them, and gives a further enhancement (~13.78%) through reduced letterboxing if used without such lenses. Project's afterlife: As of 2010, some computer monitors with 2048x1152 resolution were available (e.g. Samsung 2343BWX 23, Dell SP2309W). This is unlikely to be a reference to Eu95, especially as the refresh rate will generally default to "60 Hz" (or 59.94 Hz); it is simply a convenient "HD+" resolution made for bragging rights over ubiquitous 1920x1080 HD panels, offering the slimmest possible actual resolution improvement whilst keeping the same 16:9 aspect ratio for video playback without cropping or letterboxing (the next nearest "convenient" higher resolution being the comparatively much larger, and so much more expensive, 2560x1600 "2.5K" used in e.g. Apple Cinema and Retina displays). It is also a "neat" power-of-2 width, twice the width of the one-time standard XGA (so, e.g., websites designed for that width can be smoothly zoomed to 200%), and happens to be 4x the size of the 1024x576 panels commonly used for cheaper netbooks and mobile tablets (much as the 2.5K standard is 4x the 1280x800 WXGA used in ultraportable laptops and midrange tablets). In this way, it can be considered a form of convergent specification evolution: although there is little chance the two standards are directly related, their particulars will have been landed on by broadly similar methods. Project's afterlife: Although the fact is now mainly of historical interest, most larger-tube CRT PC monitors had a maximum horizontal scan rate of 70 kHz or higher, which means they could have handled 2048x1152 at 60 Hz progressive if set to use a custom resolution (with slimmer vertical blanking margins than HD-MAC/Eu95 itself for those rated for less than 75 kHz). Smaller models incapable of 70 kHz but good for at least 58 kHz (preferably 62.5 kHz), and able to support the lower vertical refresh rate, could instead be set to run 50 Hz progressive, or even 100 Hz interlace to avert the flicker that 50 Hz would otherwise cause.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scriptaid** Scriptaid: Scriptaid is a drug which acts as a histone deacetylase inhibitor, and was one of the first compounds discovered via high-throughput screening that acts at this target. Scriptaid itself was never developed for medical applications, but led to the development of structurally related drugs such as vorinostat, which have been accepted into clinical use. Most early research using these compounds focused on their anti-cancer activity, but more recent research has found scriptaid to be useful in other applications such as cloning and research into regulation of metabolism.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mktemp** Mktemp: mktemp is a command available in many Unix-like operating systems that creates a temporary file or directory. Originally released in 1997 as part of OpenBSD 2.1, a separate implementation exists as part of GNU Coreutils. There used to be a similarly named C library function, which is now deprecated as unsafe and has safer alternatives.
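The deprecated C function was unsafe because it only generated a name, leaving a window in which another process could create that file first. Safer interfaces create the file or directory atomically. As a hedged illustration, the sketch below uses Python's tempfile module as a stand-in for the mkstemp(3)/mkdtemp(3) family; the prefixes and file contents are arbitrary.

```python
import os
import tempfile

# mkstemp() atomically creates and opens a unique file (like mkstemp(3)),
# avoiding the race inherent in generating a name first and opening later.
fd, path = tempfile.mkstemp(prefix="demo.", suffix=".txt")
try:
    with os.fdopen(fd, "w") as fh:
        fh.write("scratch data\n")
finally:
    os.unlink(path)

# mkdtemp() is the directory analogue, comparable to `mktemp -d` on the
# command line: the directory exists, with mode 0700, before it returns.
workdir = tempfile.mkdtemp(prefix="demo.")
os.rmdir(workdir)
```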
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Carbonaceous film (paleontology)** Carbonaceous film (paleontology): A carbonaceous film or carbon film is the outline of an organism preserved as a fossil. It is a type of fossil found in any rock where organic material has been compressed, leaving only a carbon residue or film. When an organism is buried under many layers of sediment, pressure and heat increase during diagenesis, and if the organism lacks a hard skeleton, it will leave only this thin film of carbon residue on rock surfaces. Carbonaceous film (paleontology): The soft tissues of organisms are made largely of organic carbon compounds. Sometimes, fossils contain only carbon. Fossils usually form when sediment buries a dead organism. As sediment piles up, the organism's remains are subjected to pressure and heat. These conditions force gases and liquids from the body. A thin film of carbon residue is left, forming a silhouette of the original organism, called a carbon film. Plant fossils often occur as a residue or film of carbon. The delicate fossils of the Burgess Shale include carbon film forms. Graptolites are an example of carbon film fossils.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gallic group** Gallic group: The Gallic group is a dynamical grouping of the prograde irregular satellites of Saturn following similar orbits. Their semi-major axes range between 16 and 19 Gm, their inclinations between 35° and 40°, and their eccentricities around 0.53. The International Astronomical Union (IAU) reserves names taken from Gallic mythology for these moons. Gallic group: Similar mean orbital elements led the discoverers to postulate a common origin for the group in the breakup of a larger body. The group was later found to be physically homogeneous, all satellites displaying a light-red colour (colour indices B − V = 0.91 and V − R = 0.48) and similar infrared indices. Remarkably, recent observations revealed that the largest member of the group, Albiorix, actually displays two different colours: one compatible with Erriapus and Tarvos, and another less red. Instead of a common progenitor, it was postulated that Tarvos and Erriapus could be fragments of Albiorix, leaving a large, less red crater. Such an impact would require a body with a diameter in excess of 1 km and a relative velocity close to 5 km/s, resulting in a large crater with a radius of 12 km. Numerous very large craters observed on Phoebe prove the existence of such collisions in the Saturnian system's past. Gallic group: The discovery of 20 new moons of Saturn was announced in October 2019 by a team led by Scott S. Sheppard using the Subaru Telescope at Mauna Kea. One of them, S/2004 S 24, is also prograde, but it orbits much further away from Saturn than the four known Gallic moons. This moon will nevertheless also receive a name from Gallic mythology. The members of the group are (in order of increasing distance from Saturn according to JPL mean orbital elements): Albiorix, S/2007 S 8, Bebhionn, Saturn LX, Erriapus, Tarvos, and S/2020 S 4. Three other prograde moons have inclinations similar to the Gallic group, but have orbits considerably more distant than the main Gallic group: S/2006 S 12, S/2004 S 24.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Risk (1996 video game)** Risk (1996 video game): Risk is a turn-based strategy video game based on the board game of the same name, released in 1996. History: In 1996, Hasbro Interactive released a PC version of Risk that included a new variation on the game called Ultimate Risk, which did not use dice but rather implemented the use of forts, generals, and complex battle strategies. Reception: Next Generation reported that Risk sold "exceptionally well" during 1997. In Computer Gaming World, Terry Coleman called Risk an improvement over previous computer adaptations of the board game, and wrote that "in some ways, Risk even outshines Monopoly CD-ROM on the computer." He praised its Classic Risk mode, and hailed Ultimate Risk as "a superb enhancement to a classic game." Reviewing the game for PC Zone, Chris Anderson wrote, "Hasbro have taken a classic board-game, put it on pc, and brought lots of new features to it, and I for one enjoyed it. It's addictive, highly replayable, and it looks quite smart too. If it had real-time combat we would have been talking a 90+ score". PC Gamer UK's Mark Donald criticized Risk's artificial intelligence for failing to recreate the experience of board game play, but called it "a moderately enjoyable game" overall. Next Generation reviewed the PlayStation version of the game, rating it two stars out of five, and stated that "For Risk fanatics who sometimes have trouble convening games with human opponents, this is a decently satisfying quick fix, as you can play against several computer players. But for anyone else, it's an unexciting, uninspiring, unimpressive interpretation of a classic game." The game was a finalist for Computer Gaming World's 1996 "Classic/Puzzle Game of the Year" award, which ultimately went to Baku Baku Animal.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Append-only** Append-only: Append-only is a property of computer data storage such that new data can be appended to the storage, but existing data is immutable. Access control: Many file systems' Access Control Lists implement an "append-only" permission: chattr in Linux can be used to set the append-only flag on files and directories. This corresponds to the O_APPEND flag in open(). Access control: The NTFS ACL has a control for "Create Folders / Append Data", but it does not seem to keep data immutable. Many cloud storage providers provide the ability to limit access as append-only. This feature is especially important to mitigate the risk of data loss for backup policies in the event that the computer being backed up becomes infected with ransomware capable of deleting or encrypting the computer's backups. Data structures: Many data structures and databases implement immutable objects, effectively making their data structures append-only. Implementing an append-only data structure has many benefits, such as ensuring data consistency, improving performance, and permitting rollbacks. The prototypical append-only data structure is the log file. Log-structured data structures found in log-structured file systems and databases work in a similar way: every change (transaction) that happens to the data is logged by the program, and on retrieval the program must combine the pieces of data found in this log file. Blockchains add cryptography to the logs so that every transaction is verifiable. Data structures: Append-only data structures may also be mandated by the hardware or software environment: All objects are immutable in purely functional programming languages, where every function is pure and global states do not exist. Flash storage cells can only be written to once before erasing. Erasing on a flash drive works on the level of pages, which cover many cells at once, so each page is treated as an append-only set of cells until it fills up. Data structures: Hard drives that use shingled magnetic recording cannot be written to randomly, because writing on a track would clobber a neighboring, usually later, track. As a result, each "zone" on the drive is append-only. Append-only data structures grow over time, with more and more space dedicated to "stale" data found only in the history and more time wasted on parsing this data. A number of append-only systems implement rewriting (copying garbage collection), so that a new structure is created containing only the current version and optionally a few older ones.
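A minimal sketch of the log-file pattern described above: state changes are only ever appended, and the current state is rebuilt by replaying the log. The class name, record format, and file path are illustrative assumptions, not any particular database's API.

```python
import json

class AppendOnlyLog:
    """Log-structured, append-only key-value store: writes only ever
    append records; reads replay the log to rebuild current state."""

    def __init__(self, path):
        self.path = path

    def append(self, key, value):
        # Existing bytes are never rewritten; mode "a" can only append.
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")

    def current_state(self):
        state = {}
        try:
            with open(self.path) as f:
                for line in f:            # later records supersede earlier ones
                    rec = json.loads(line)
                    state[rec["key"]] = rec["value"]
        except FileNotFoundError:
            pass                          # empty log means empty state
        return state

log = AppendOnlyLog("demo.log")
log.append("x", 1)
log.append("x", 2)    # does not overwrite; the history of "x" is preserved
assert log.current_state()["x"] == 2
```

The growing file and the replay cost on every read are exactly the "stale data" problem the copying garbage collection mentioned above is meant to address.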
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**One-compartment kinetics** One-compartment kinetics: One-compartment kinetics for a chemical compound specifies that the uptake into the compartment is proportional to the concentration outside the compartment, and the elimination is proportional to the concentration inside the compartment. Both the compartment and the environment outside the compartment are considered to be homogeneous (well mixed). The compartment typically represents some organism (e.g. a fish or a daphnid). This model is used in the simplest versions of the DEBtox method for the quantification of the effects of toxicants.
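In equation form, the description above amounts to dC/dt = k_u·C_env − k_e·C, where C is the internal concentration, C_env the external concentration, and k_u and k_e are uptake and elimination rate constants (the symbol names are assumed here for illustration). For constant exposure the solution is an exponential approach to the steady state (k_u/k_e)·C_env, as this minimal sketch shows; all numeric values are arbitrary.

```python
import math

def internal_concentration(t, c_env, k_u, k_e, c0=0.0):
    """Closed-form solution of dC/dt = k_u*c_env - k_e*C for constant
    external concentration c_env and initial internal concentration c0:
    C(t) = C_inf + (c0 - C_inf) * exp(-k_e * t), C_inf = (k_u/k_e)*c_env."""
    c_inf = (k_u / k_e) * c_env   # steady-state internal concentration
    return c_inf + (c0 - c_inf) * math.exp(-k_e * t)

# Example: a clean organism approaching steady state under constant exposure
for t in (0.0, 1.0, 5.0, 20.0):
    print(t, internal_concentration(t, c_env=2.0, k_u=0.8, k_e=0.4))
```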
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pneumococcal pneumonia** Pneumococcal pneumonia: Pneumococcal pneumonia is a type of bacterial pneumonia that is caused by Streptococcus pneumoniae (pneumococcus). It is the most common bacterial pneumonia found in adults, the most common type of community-acquired pneumonia, and one of the common types of pneumococcal infection. The estimated number of Americans with pneumococcal pneumonia is 900,000 annually, with almost 400,000 cases hospitalized and fatalities accounting for 5-7% of these cases. Symptoms: The symptoms of pneumococcal pneumonia can occur suddenly, presenting as a severe chill, followed by a severe fever, cough, shortness of breath, rapid breathing, and chest pains. Other symptoms like nausea, vomiting, headache, fatigue, and muscle aches could also accompany the initial symptoms. The coughing can occasionally produce rusty or blood-streaked sputum. In 25% of cases, a parapneumonic effusion may occur. Chest X-rays will typically show lobar consolidation or patchy infiltrates. Treatment: In most cases, once pneumococcal pneumonia has been identified, doctors will prescribe antibiotics. These antibiotics usually help alleviate and eliminate symptoms between 12 and 36 hours after the initial dose. Despite most antibiotics' effectiveness in treating the disease, sometimes the bacteria can resist the antibiotics, causing symptoms to worsen. The age and health of the infected patient can also affect the effectiveness of the antibiotics. A vaccine has been developed for the prevention of pneumococcal pneumonia, recommended for children under age five as well as adults over the age of 65. Research advancements in the field: While it has been commonly known that the influenza virus increases one's chances of contracting pneumonia or meningitis caused by the Streptococcus pneumoniae bacterium, new medical research in mice indicates that the flu is actually a necessary component for the transmission of the disease. Researcher Dimitri Diavatopoulos from the Radboud University Nijmegen Medical Centre in the Netherlands describes his observations in mice, stating that in these animals, the spread of the bacteria only occurs between animals already infected with the influenza virus, not between those without it. He says that these findings have only been conclusive in mice; however, he believes that the same could be true for humans. Mechanism of disease manifestation: Three stages can be used to categorize the infection process of pneumococcal pneumonia: transmission, colonization, and invasion. Streptococcus pneumoniae (S. pneumoniae) leave the colonized host via shedding in order to be transmissible to new hosts, and must survive in the environment until infection of a new host (unless direct transmission occurs). Animal models have allowed scientists to gain an increased understanding of these stages of infection. Mechanism of disease manifestation: Transmission In order for transmission to occur, there must be close contact with a carrier or amongst carriers. The likelihood of this increases during the colder, drier months of the year. The probability of transmission has been shown to rise in conjunction with other upper respiratory tract (URT) infections. Mechanism of disease manifestation: Animal models have allowed for an increased understanding of the transmission stage during infection. A 2010 study examining co-infection of influenza in co-housed ferret pairs found that the influenza increased both the incidence and severity of pneumococcal infection.
These findings exhibited pneumococcal strain dependence. A separate 2010 study examining intra-litter transmission, with influenza co-infection in infant mice, found that influenza co-infection is a facilitator for pneumococcal susceptibility, transmission, and disease via bacterial shedding. A third study of note, from 2016, was able to examine pneumococcal transmission without co-infection with a URT infection. This study utilized intra-litter transmission in infant mice during bacterial mono-infection with pneumococcus. The results of this study indicated higher rates of shedding for infections in younger mice. These studies, along with the animal models they utilize, have enhanced our understanding of the transmission of pneumococcus. Inflammation induced by Influenza A Virus (IAV) stimulates the flow of mucus through the expression of glycoproteins, prompts secretion, and increases shedding. Streptococcus is found in the inflammation-generated mucus layers covering the URT, and increased pneumococci are observed in nasal secretions with IAV co-infection. Levels of shedding correlate with IAV-induced URT inflammation. Pro-inflammatory effects are exhibited by the single pneumococcal toxin, pneumolysin (Ply); use of anti-Ply antibodies results in decreased inflammation. Studies have found transmissible levels of the bacterium only in young mice, showing that shedding increases with incidences of contact and proximity. Shedding is shown to decrease in the presence of agglutinating antibodies such as IgG and IgA1, unless cleavage occurs via an IgA1-specific pneumococcal protease. Transmission via the secretions of carriers can result from direct interpersonal contact or contact with a contaminated surface. Bacteria on contaminated surfaces can be easily cultured. In conditions with sufficient nutrients, pneumococci can survive for 24 hours and avoid desiccation for multiple days. Reduced transmission has been observed amongst children with pneumococcal conjugate vaccine (PCV) immunization, as acquisition of a new strain of S. pneumoniae is inhibited by pre-existing colonization. Immunoglobulin G (IgG) immunization with a high antibody concentration can also inhibit acquisition. These antibodies require the agglutinating function of the Fc fragment. For successful acquisition in a new host, pneumococcus must successfully adhere to the mucous membrane of the new host's nasopharynx. Pneumococcus is able to evade mucus-mediated clearance when there is a higher proportion of negatively charged capsules; this clearance is mediated by Immunoglobulin A1 (IgA1), which is abundant on the URT mucosal surfaces. Mechanism of disease manifestation: Colonization Transparent and opaque colony morphologies have been observed for pneumococci. Airway colonization is observed in transparent phenotypes of serotypes, while survival in the bloodstream is observed for opaque phenotypes. Colonizable strains exhibit resistance against the neutrophilic immune response. Successful colonization requires S. pneumoniae to evade detection by the nasal mucus and attach to epithelial surface receptors. Asymptomatic colonization occurs when S. pneumoniae bind to N-acetyl-glucosamine on the epithelium without inflammation. However, co-infection with a pre-existing inflammatory URT infection results in an over-expression of the epithelial receptors utilized by S. pneumoniae, thus increasing the likelihood of colonization.
Neuraminidase also increases instances of epithelial binding through its cleavage of N-acetylneuraminic acid, glycolipids, glycoproteins, and oligosaccharides. Mechanism of disease manifestation: Invasion Initial colonization of the nasopharynx is typically asymptomatic, but invasion occurs when the bacteria spread to other parts of the body, including the lungs, blood, and brain. Interactions between phosphorylcholine (ChoP) components on colonized epithelial cells allow for docking of choline binding proteins (CBPs), most notably CbpA. Colonization of the respiratory tract, and thus pneumonia, cannot occur without CbpA. The pneumococcus moves across the mucosal barrier by integrating itself with the polymeric immunoglobulin receptor (pIgR), which is used by mucosal epithelial cells to transport IgA and IgM to the apical surface. Following its cleavage at the apical surface, pIgR, and subsequently the pneumococcus, move back to the basolateral surface, allowing invasion of the upper respiratory tract. The pneumococcus then moves to invade the lower respiratory tract, evading the mucociliary escalator with the assistance of neuraminidase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Signal chain** Signal chain: Signal chain, or signal-processing chain, is a term used in signal processing and mixed-signal system design to describe a series of signal-conditioning electronic components that receive input (data acquired from sampling either real-time phenomena or stored data) sequentially, with the output of one portion of the chain supplying input to the next. Signal chains are often used in signal processing applications to gather and process data or to apply system controls based on analysis of real-time phenomena. Definition: This definition comes from common usage in the electronics industry and can be derived from definitions of its parts: Signal: "The event, phenomenon, or electrical quantity, that conveys information from one point to another". Definition: Chain: "1. Any series of items linked together. 2. Pertaining to a routine consisting of segments which are run through the computer in tandem, only one segment being within the computer at any one time and each segment using the output from the previous program as its input". The concept of a signal chain is familiar to electrical engineers, but the term has many synonyms, such as circuit topology. The goal of any signal chain is to process a variety of signals to monitor or control an analog, digital, or analog-digital system.
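A toy sketch of the idea, with the output of each stage feeding the next; the stages shown (bias removal, gain, a crude ADC quantizer) are arbitrary stand-ins for real signal-conditioning components, and all names and constants are invented for the example.

```python
from typing import Callable, Sequence

Stage = Callable[[float], float]

def make_chain(stages: Sequence[Stage]) -> Stage:
    """Compose conditioning stages; each stage's output is the next input."""
    def chain(sample: float) -> float:
        for stage in stages:
            sample = stage(sample)
        return sample
    return chain

def remove_offset(x: float) -> float:
    return x - 1.5                   # e.g. sensor bias removal

def amplify(x: float) -> float:
    return 4.0 * x                   # e.g. programmable-gain amplifier

def quantize(x: float) -> float:
    return round(x * 1024) / 1024    # crude model of an ADC with 1/1024 steps

process = make_chain([remove_offset, amplify, quantize])
print(process(2.0))   # 2.0 -> 0.5 -> 2.0 -> 2.0
```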
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paper embossing** Paper embossing: Embossing and debossing are the processes of creating either raised or recessed relief images and designs in paper and other materials. An embossed pattern is raised against the background, while a debossed pattern is sunken into the surface of the material but might protrude somewhat on the reverse side. Techniques: Often used in combination with foil stamping, embossing alters the surface of paper stock or other substrates by providing a three-dimensional or raised effect on selected areas. The procedure requires the use of two dies: one that is raised and one that is recessed. The dies fit into each other so that when the paper is pressed between them, the raised die forces the stock into the recessed die and creates the embossed impression. A specific level of pressure is applied to the dies in order to squeeze the fibers of the paper, which results in a permanently raised area in the paper. When the dies are produced, a die maker engraves the desired image into several metal plates, which are the embossing dies for use on an embossing press. A thorough understanding of the process will enable a more successful result. Generally, embossing is the process most often employed to attract attention or convey a high quality textural contrast in relation to the surrounding area of the paper stock. Techniques: "Debossing" is similar to embossing, but recesses the design rather than raising it. Rather than the paper being raised in specific areas, it is indented. The process involves applying pressure to the front side of a substrate and forcing the material down from the surface. Although it is not as common as embossing, it is occasionally used to provide a different effect or appearance that fits a particular theme. Embossing and debossing on digitally printed applications is an off-line process, which may add a significant cost to the job. Techniques: Embossing is basically used to create a distinctive effect. The greatest concern and emphasis on the client’s behalf should be placed on the outcome of the embossed effect. In order to achieve the best possible effect, it is important to understand the embossing process and the types of dies that are used for embossing. The three factors that need to be controlled during the embossing process are: Pressure: the intensity of the impact on the weight of the stock being embossed. Techniques: Heat: the ability to maintain a consistent heat level for the best impression. Techniques: Die depth: the client's artwork or the engraver's efforts will initially determine the die depth, however, if by looking at the artwork it appears that an adjustment of the die depth may be necessary, the die may need to be retooled to achieve a greater depth. Most types of paper can be embossed, and size is not normally a consideration. Embossing without ink, so that the image is raised but not colored, is called "blind embossing". Embossing used in conjunction with ink, so that the raised area is coloured, is called "colour register embossing". Embossing used in conjunction with foil stamping is called "combination stamping" or "combo stamping".Embossing involves a separate stage in the production process, after any varnishing and laminating. It requires a separate press run, and is priced accordingly. In addition to being used as a design element, embossing can be used to improve the performance of paper products like napkins, diapers, and tissue paper. 
Die materials: The metals most often used for die construction are zinc, magnesium, copper, and brass. The material used for a specific application depends upon a number of factors. Embossing types: Blind emboss Blind embossing does not include the use of ink or foil to highlight the embossed area. The change in the dimensional appearance of the material is the only noticeable difference resulting from the embossing. The blind embossing process provides a clean and distinctive or subtle image on paper stock. It is best used to create a subtle impression or low level of attention to the piece, yet provide some slight form of differentiation for the finished work. Embossing types: Registered emboss Registered embossing is a process that places the embossed image in alignment with another element created with ink, foil, punching, or with a second embossed image. Embossing types: Combination emboss Combination embossing is the process of embossing and foil stamping the same image. It involves imprinting and aligning foil over an embossed image to create a foil emboss. A sculptured die, generally made of brass, is used for this procedure. The process requires close registration that must be controlled to keep the image and foil matched precisely. The process of embossing and foil stamping is accomplished in one operation with the use of a combination die. The combination die has a cutting edge around the perimeter to cleanly break the excess foil away from the embossed area. Embossing types: Pastelling Pastelling is also referred to as tint leaf embossing. It involves the process of using a combination die to provide a subtle antique appearance to a substrate that is embossed and foil stamped. Pearl finishes, clear gloss, or similar pastel foil finishes can be selected that provide a soft two-color antique look (without scorching) to the embossed image. Lighter-colored stocks work best to provide this soft contrasting effect. Embossing types: Glazing Glazing refers to an embossed area that has a shiny or polished appearance. Most often this process is accomplished with heat that is applied with pressure in order to create a shiny impression on the stock. Dark-colored, heavyweight stocks generally work best with glazing because the polished effect is much more noticeable and the dark color of the stock helps to eliminate or soften any burned appearance that may result from the application of the heat. When used in conjunction with foil, the process can give the foil a slightly brighter appearance. Embossing types: Scorching Scorching is similar to glazing except that it is not used to polish the stock. Instead, scorching does what it implies: as the temperature of the die heating plate is increased beyond the normal temperature range, a scorched effect is created in the embossed image, which results in an antique or shaded appearance. It is best to use a lighter-colored stock for this procedure in order to provide a unique two-toned appearance. Caution should be used in requesting this effect, since it is easy to burn the stock if too much heat is used. If scorching occurs too close to the printed copy, it can interfere with the clarity of the printed copy; however, this may be the effect that is desired for a particular application.
Document authentication: A notary public may use an embossed seal to mark legal papers, either in the form of an adhesive seal or using a clamp-like embossing device, to certify a signature on a document, contract, or similar instrument. Registered professional engineers also use embossing seals to certify drawings, thereby guaranteeing to the recipient that due diligence has been exercised in the design. Government agencies use embossed seals to certify that an important document, such as a birth certificate, court order, etc., is an authentic, original copy, rather than a photocopy that could be altered in the copying process. On stamps: Embossing has been used regularly on postage and other types of stamps. The embossed paper of a letter sheet or stamped envelope is called an indicium. Notable early examples include some of the earliest stamps of Italy, Natal, and Switzerland, as well as the early high values of Great Britain (1847–54). Modern stamps still sometimes use embossing as a design element.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Protein A** Protein A: Protein A is a 42 kDa surface protein originally found in the cell wall of the bacterium Staphylococcus aureus. It is encoded by the spa gene, and its regulation is controlled by DNA topology, cellular osmolarity, and a two-component system called ArlS-ArlR. It has found use in biochemical research because of its ability to bind immunoglobulins. It is composed of five homologous Ig-binding domains that fold into a three-helix bundle. Each domain is able to bind proteins from many mammalian species, most notably IgGs. It binds the heavy chain within the Fc region of most immunoglobulins, and also within the Fab region in the case of the human VH3 family. Through these interactions in serum, where IgG molecules are bound in the wrong orientation (in relation to normal antibody function), the bacterium disrupts opsonization and phagocytosis. History: As a by-product of his work on type-specific staphylococcus antigens, Verwey reported in 1940 that a protein fraction prepared from extracts of these bacteria non-specifically precipitated rabbit antisera raised against different staphylococcus types. In 1958, Jensen confirmed Verwey's finding and showed that rabbit pre-immunization sera as well as normal human sera bound to the active component in the staphylococcus extract; he designated this component Antigen A (because it was found in fraction A of the extract) but thought it was a polysaccharide. The misclassification of the protein was the result of faulty tests, but it was not long thereafter (1962) that Löfkvist and Sjöquist corrected the error and confirmed that Antigen A was in fact a surface protein on the bacterial wall of certain strains of S. aureus. The Bergen group from Norway named the protein "Protein A" after the antigen fraction isolated by Jensen. Protein A antibody binding: It has been shown via crystallographic refinement that the primary binding site for protein A is on the Fc region, between the CH2 and CH3 domains. In addition, protein A has been shown to bind human IgG molecules containing IgG F(ab')2 fragments from the human VH3 gene family. Protein A can bind with strong affinity to the Fc portion of immunoglobulins of certain species. Other antibody binding proteins: In addition to protein A, other immunoglobulin-binding bacterial proteins such as Protein G, Protein A/G and Protein L are all commonly used to purify, immobilize or detect immunoglobulins. Role in pathogenesis: As a pathogen, Staphylococcus aureus utilizes protein A, along with a host of other proteins and surface factors, to aid its survival and virulence. To this end, protein A plays a multifaceted role: By binding the Fc portion of antibodies, protein A renders them inaccessible to the opsonins, thus impairing phagocytosis of the bacteria via immune cell attack. Protein A facilitates the adherence of S. aureus to human von Willebrand factor (vWF)-coated surfaces, thus increasing the bacteria's infectiousness at the site of skin penetration. Protein A can inflame lung tissue by binding to tumor necrosis factor receptor 1 (TNFR-1). This interaction has been shown to play a key role in the pathogenesis of staphylococcal pneumonia. Protein A has been shown to cripple humoral (antibody-mediated) immunity, which in turn means that individuals can be repeatedly infected with S. aureus, since they cannot mount a strong antibody response.
Role in pathogenesis: Protein A has been shown to promote the formation of biofilms, both when the protein is covalently linked to the bacterial cell wall and in solution. Protein A helps inhibit phagocytic engulfment and acts as an immunological disguise. Higher levels of protein A in different strains of S. aureus have been associated with nasal carriage of the bacterium. Mutants of S. aureus lacking protein A are more efficiently phagocytosed in vitro, and mutants in infection models have diminished virulence. Production: Protein A is produced and purified by industrial fermentation for use in immunology, biological research and industrial applications (see below). Natural (or native) protein A can be cultured in Staphylococcus aureus and contains the five homologous antibody-binding regions described above and a C-terminal region for cell wall attachment. Today, protein A is more commonly produced recombinantly in Escherichia coli. (Brevibacillus has also been shown to be an effective host.) Recombinant versions of protein A also contain the five homologous antibody-binding domains but may vary in other parts of the structure in order to facilitate coupling to porous substrates. Engineered versions of the protein are also available, the first of which was rProtein A, B4, C-CYS. Engineered versions are multimers (typically tetramers, pentamers or hexamers) of a single domain which has been modified to improve usability in industrial applications. Research: Protein A is often coupled to other molecules such as a fluorescent dye, enzymes, biotin, colloidal gold or radioactive iodine without affecting the antibody binding site. Examples include protein A–gold (PAG) stain used in immunogold labelling, fluorophore-coupled protein A for immunofluorescence, and DNA docking strand-coupled protein A for DNA-PAINT imaging. It is also widely utilized coupled to magnetic, latex and agarose beads. Research: Protein A is often immobilized onto a solid support and used as a reliable method for purifying total IgG from crude protein mixtures such as serum or ascites fluid, or coupled with one of the above markers to detect the presence of antibodies. The first example of protein A being coupled to a porous bead for purification of IgG was published in 1972. Immunoprecipitation studies with protein A conjugated to beads are also commonly used to purify proteins or protein complexes indirectly through antibodies against the protein or protein complex of interest. Role in industrial purification of antibodies: The first reference in the literature to a commercially available protein A chromatography resin appeared in 1976. Today, chromatographic separation using protein A immobilized on porous substrates is the most widely established method for purifying monoclonal antibodies (mAbs) from harvested cell culture supernatant. The choice of protein A as the preferred method is due to the high purity and yield which are easily and reliably achieved. This forms the basis for a general antibody purification "platform" which simplifies manufacturing operations and reduces the time and effort required to develop purification processes. Despite the long history of protein A chromatography for the production of antibodies, the process is still being improved today. Continuous chromatography, more precisely periodic counter-current chromatography, greatly increases the productivity of the purification step.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Buffalo jump** Buffalo jump: A buffalo jump, or sometimes bison jump, is a cliff formation which Indigenous peoples of North America historically used to hunt and kill plains bison in large numbers. The broader term game jump refers to a man-made jump or cliff used for hunting other game, such as reindeer. Method of the hunt: Hunters herded the bison and drove them over the cliff, breaking their legs and rendering them immobile. Tribe members waiting below closed in with spears and bows to finish the kills. The Blackfoot people called the buffalo jumps "pishkun", which loosely translates as "deep blood kettle". This type of hunting was a communal event that occurred as early as 12,000 years ago. The hunters believed that if any buffalo escaped these killings, the rest of the buffalo would learn to avoid humans, which would make hunting even harder. Buffalo jump sites are often identified by rock cairns, which were markers designating "drive lanes" by which bison would be funneled over the cliff. These drive lanes would often stretch for several miles. Method of the hunt: Buffalo jump sites yield significant archaeological evidence because processing sites and camps were always nearby. The sites yield information as to how the Native Americans used the bison for food, clothing, and shelter. Plains Indians, in particular, depended on the bison for their survival. Every part of the animal could be used in some way: hides for clothes and shelter, bones for tools, sinews for bowstrings and laces. Hooves could be ground for glue, and the brains could be used in the tanning process for the hides. The extra meat was preserved as pemmican. In one of his journals, Meriwether Lewis describes how a buffalo jump was practiced during the Lewis and Clark Expedition: one of the most active and fleet young men is selected and disguised in a robe of buffalo skin... he places himself at a distance between a herd of buffalo and a precipice proper for the purpose; the other Indians now surround the herd on the back and flanks and at a signal agreed on all show themselves at the same time moving forward towards the buffalo; the disguised Indian or decoy has taken care to place himself sufficiently near the buffalo to be noticed by them when they take to flight and running before them they follow him in full speed to the precipice; the Indian (decoy) in the mean time has taken care to secure himself in some cranny in the cliff... the part of the decoy I am informed is extremely dangerous. Method of the hunt: Despite having described a jump in detail, neither Lewis nor any other white settler is known to have personally witnessed one. Historical sites: Sites of interest include Head-Smashed-In, Bonfire Shelter, Ulm Pishkun, Madison Buffalo Jump, Dry Island, Glenrock, Big Goose Creek, Cibolo Creek, Vore, Too Close for Comfort Site (also known as Wahkpa Chu'gn Site), Olsen-Chubbuck Bison Kill Site, and Camp Disappointment of the Lewis and Clark Expedition. Historical sites: Ulm Pishkun Buffalo Jump is likely the largest buffalo jump in the world. It was used by the Native Americans in the area between 900 and 1500 AD. The cliffs themselves stretch for more than a mile, and the site below has compacted bison bones nearly 13 feet (4.0 m) deep. Ulm Pishkun Buffalo Jump is located in First Peoples Buffalo Jump State Park in Cascade County, Montana, north-northwest of the community of Ulm. Historical sites: Madison Buffalo Jump State Park is a Montana state park in Gallatin County, Montana in the United States.
The park is 638 acres (258 ha) and sits at an elevation of 4,554 feet (1,388 m). The park is named for a canyon cliff used by Native Americans as a buffalo jump, where herds of bison were stampeded over the cliff as an efficient means of slaughter. This limestone cliff was used for 2,000 years by Native Americans. Madison Buffalo Jump State Park is a day-use-only park. It is open year-round for hiking, wildlife observation, and some picnicking. Camp Disappointment, the northernmost point of the Lewis and Clark Expedition, is among the best-preserved buffalo jumps in Montana, due to its relatively inaccessible location. The creek at the bottom of the cliff periodically exposes animal bones. There is a 3-D reconstruction of Charles M. Russell's painting of a buffalo jump on display at the Helena State Capitol Museum in Helena, Montana.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Overhead microphone** Overhead microphone: Overhead microphones are those used in sound recording and live sound reproduction to pick up ambient sounds, transients and the overall blend of instruments. They are used in drum recording to achieve a stereo image of the full drum kit, as well as in orchestral recording to create a balanced stereo recording of full orchestras. Overhead positioning: There are multiple arrangements for drum overheads, which are often based on the personal preference of the musician, engineer, or producer. These include "A-B" spaced pairs (where two directional microphones are suspended above the left and right clusters of cymbals) and "X-Y" coincident pairs, where the two directional microphones are centred on the drum kit with their capsules very close together without touching, angled across each other at 90°. Coincident placement gives a wider stereo image than spaced pairs, and some engineers prefer it for this reason. Other drum overhead positions include the Recorderman technique (where the distance between both microphones and the snare drum is equal, as is the distance between both microphones and the bass drum; see the sketch below) and Glyn Johns' method (where one "overhead" is placed to the drummer's right, aiming across the floor tom to the centre of the kit). Overhead positioning: In orchestral recordings, particularly those for film score recordings, the Decca tree is often used.
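The Recorderman constraint above is purely geometric, so it can be sanity-checked numerically. The following is a minimal sketch; all coordinates and the tolerance are hypothetical values chosen for illustration, not part of any published recording method.

```python
# Sketch of checking the Recorderman constraint: both overheads should be
# equidistant from the snare and equidistant from the bass (kick) drum, so
# those two drums stay centred and phase-coherent in the stereo image.
# All coordinates (in centimetres) are invented for illustration.
import math

snare = (0.0, 0.0, 60.0)
kick = (0.0, 50.0, 30.0)
mic_left = (-40.0, 10.0, 140.0)
mic_right = (45.0, 40.0, 120.0)

for name, drum in (("snare", snare), ("kick", kick)):
    d_left = math.dist(mic_left, drum)
    d_right = math.dist(mic_right, drum)
    # A ~2% tolerance is an arbitrary practical allowance, not a spec value.
    matched = math.isclose(d_left, d_right, rel_tol=0.02)
    print(f"{name}: left {d_left:.1f} cm, right {d_right:.1f} cm, matched: {matched}")
```

With these made-up positions the check fails for both drums, signalling that the right microphone would need to be moved before the placement satisfies the technique.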
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Core-excited shape resonance** Core-excited shape resonance: A core-excited shape resonance is a shape resonance in a system with more than one degree of freedom where, after fragmentation, one of the fragments is in an excited state. It is sometimes very difficult to distinguish a core-excited shape resonance from a Feshbach resonance.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nitroxoline** Nitroxoline: Nitroxoline is an antibiotic that has been in use in Europe for about fifty years and has proven to be very effective at combating biofilm infections. Nitroxoline was shown to cause a decrease in the biofilm density of P. aeruginosa infections, which would allow the immune system to access the infection in vivo. It was shown that nitroxoline functions by chelating Fe2+ and Zn2+ ions from the biofilm matrix; when Fe2+ and Zn2+ were reintroduced into the system, biofilm formation was reconstituted. Its biofilm-degrading activity is comparable to that of EDTA, but nitroxoline has a history of human use in clinical settings and therefore a precedent for use against "slimy" biofilm infections. Anticancer activity: The chelating activities of nitroxoline have also been exploited in an anticancer setting. Nitroxoline has been shown to be more cytotoxic to the HL60, DHL-4, PANC-1, and A2780 cell lines than clioquinol and other 8-hydroxyquinoline derivatives. It also demonstrated an increase in reactive oxygen species (ROS) production over controls, especially when Cu2+ was added; ROS levels reached over 350% of controls with the addition of CuCl2. Cytotoxicity was markedly decreased with the addition of ZnCl2, indicating, based on this model, that nitroxoline does not act as a zinc chelator in this context. Because the zinc-chelating action of clioquinol has been associated with subacute myelo-optic neuropathy, the use of nitroxoline as a cytotoxic drug in the treatment of cancers should not exhibit neurotoxic effects in humans, and in vivo trials on tumour xenografts in mice have not yielded any neurodegenerative effects. Anticancer activity: Nitroxoline has been shown to inhibit the enzymatic activity of cathepsin B. Cathepsin B degrades extracellular matrix proteins around tumor cells, allowing them to proliferate more freely and metastasize throughout the body. Nitroxoline was shown to be a noncompetitive, reversible inhibitor of these actions in MCF-10A neoT cells. The Ki (dissociation constant) values it demonstrates are comparable to other reversible inhibitors of cathepsin B. This indicates that it may be a candidate for further trials as an anticancer drug, especially given its history as an antimicrobial agent and its well-known pharmacokinetic profile. The mechanism by which nitroxoline inhibits cathepsin B may also suggest that further research into noncovalent, noncompetitive inhibitors of cathepsin B is warranted. In fact, it was recently shown that a balance exists between the potency and the kinetics of a molecule, reflected in its molecular weight, which must be optimized in order to create the best drug for inhibition of a target enzyme. For example, a certain inhibitor may have a high affinity for an enzyme but prove impractical for clinical use because of its size. Anticancer activity: Nitroxoline and its analogues have also been shown to have antiangiogenic properties. For example, nitroxoline inhibits MetAP2 activity, an enzyme associated with angiogenesis, and HUVEC proliferation. This is further evidence that nitroxoline would make an effective anticancer drug. With different derivatives of nitroxoline demonstrating various levels of inhibition, nitroxoline may also prove to be a novel starting point for future research into cancer treatment.
Granulomatous amoebic encephalitis: In 2018, nitroxoline was identified via a clinical metagenomic next-generation sequencing analysis as a compound that could be repurposed as a possible amoebicidal agent against Balamuthia mandrillaris, which causes granulomatous amoebic encephalitis (GAE), a typically fatal disease. In 2021, a patient survived after treatment with nitroxoline. The man had been given the recommended drug regimen (pentamidine, sulfadiazine, azithromycin/clarithromycin, fluconazole, flucytosine, and miltefosine) but continued to deteriorate, so the regimen was complemented with nitroxoline, which required FDA permission because the drug is not approved in the United States. The cerebral lesion shrank only one week later, and the man subsequently recovered.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HomePlug** HomePlug: HomePlug is the family name for various power line communications specifications under the HomePlug designation, each with unique capabilities and compatibility with other HomePlug specifications. HomePlug: Some HomePlug specifications target broadband applications, for instance in-home distribution of low-data-rate IPTV, gaming, and Internet content, while others focus on low power, low throughput and extended operating temperatures for applications such as smart power meters and in-home communications between electric systems and appliances. All of the HomePlug specifications were developed by the HomePlug Powerline Alliance, which also owns the HomePlug trademark. HomePlug: On 18 October 2016 the HomePlug Alliance announced that all of its specifications would be put into the public domain and that other organizations would be taking on future activities relating to deployment of the existing technologies. There was no mention in the announcement of any further technology development within the HomePlug community. History: The HomePlug Powerline Alliance was formed to develop standards and technology for enabling devices to communicate with each other and the Internet over existing structure/house electrical wiring. One of the greatest technical challenges was finding a way to reduce sensitivity to the electrical noise present on power lines. HomePlug addressed this problem by raising the communication carrier frequencies so that the signal is conveyed by the neutral conductor, which is common to all phases. History: The first HomePlug specification, HomePlug 1.0, was released in June 2001. The HomePlug AV (for audio-video) specification, released in 2005, increased physical layer (PHY) peak data rates from approximately 13.0 Mbit/s to 200 Mbit/s. The HomePlug Green PHY specification was released in June 2010 and targets Smart Energy and Smart Grid applications as an interoperable "sibling" to HomePlug AV with lower cost, lower power consumption and decreased throughput. In 2010, IEEE 1901 was approved as an international standard, with HomePlug AV as the baseline technology for its FFT-OFDM PHY. The HomePlug Powerline Alliance is a certifying body for IEEE 1901 products. The three major specifications published by HomePlug (HomePlug AV, HomePlug Green PHY and HomePlug AV2) are interoperable and IEEE 1901 compliant. In 2011, the HomePlug Green PHY specification was adopted by Ford, General Motors, Audi, BMW, Daimler, Porsche, and Volkswagen as a connectivity standard for plug-in electric vehicles. As of 2017, there were at least six chip vendors shipping HomePlug AV chipsets with IEEE 1901 specification support: Broadcom, Qualcomm Atheros, Sigma Designs, Intellon, SPiDCOM, and MStar. Newer versions of HomePlug support carrying Ethernet over a bus topology via OFDM modulation, which enables several distinct data carriers to coexist on the same wire. Also, HomePlug's OFDM technology can turn off (mask) any sub-carriers that overlap previously allocated radio spectrum in a given geographic region, thus preventing interference. In North America, for instance, HomePlug AV only uses 917 of 1155 sub-carriers. Usage: Powerline networking is a network that can be set up using a building's existing electrical wiring. For electric vehicle charging, the SAE J1772 standard plug-in electric vehicle charger also requires HomePlug Green PHY to establish communications over a powerline before the vehicle can begin to draw any charging power.
All commercial HomePlug implementations meet the AES-128 encryption standard specified for advanced metering infrastructure by the US FERC. Accordingly, these devices are suitable for deployment as utility-grade meters off the shelf with appropriate software. Usage: As of late 2012, the most widely deployed HomePlug devices are "adapters", which are standalone modules that plug into wall outlets (or power strips [but not surge protectors] or extension cords) and provide one or more Ethernet ports. In a simple home network, the Internet gateway router connects via Ethernet cable to a powerline adapter, which in turn plugs into a nearby power outlet. A second adapter, plugged into any other outlet in the home, connects via Ethernet cable to any Ethernet device (e.g., computer, printer, IP phone, gaming station). Communications between the router and Ethernet devices are then conveyed over existing home electrical wiring. More complex networks can be implemented by plugging in additional adapters as needed. A powerline adapter may also be plugged into a hub or switch so that it supports multiple Ethernet devices residing in a common room. Usage: Increasingly, the functionality found in standalone adapters is being built into end devices such as power control centers, digital media adapters, and Internet security cameras. It is anticipated that powerline networking functionality will be embedded in TVs, set-top boxes, DVRs, and other consumer electronics, especially with the emergence of global powerline networking standards such as the IEEE 1901 standard, ratified in September 2010. Several manufacturers sell devices that include 802.11n, HomePlug and four ports of gigabit Ethernet connectivity for under US$100. Several devices announced for early 2013 also include 802.11ac connectivity, the combination of which with HomePlug is sold by Qualcomm Atheros as its Hy-Fi hybrid networking technology, an implementation of IEEE P1905. This permits a device to use wired Ethernet, powerline or wireless communication as available to provide redundant and reliable failover – thought to be particularly important in consumer applications, where there is typically no onsite expertise available to debug connections. Versions: HomePlug 1.0 The first HomePlug specification, HomePlug 1.0, provides a peak PHY-rate of 14 Mbit/s. It was first introduced in June 2001 and has since been replaced by HomePlug AV. On May 28, 2008, the Telecommunications Industry Association (TIA) incorporated HomePlug 1.0 powerline technology into the newly published TIA-1113 international standard. TIA-1113 defines modem operations on user-premises electrical wiring. It was the world's first multi-megabit powerline communications standard approved by an American National Standards Institute (ANSI)-accredited organization. The HomePlug 1.0 MAC layer uses channel access based on Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) to transport payloads of 46 to 1500 bytes from encapsulated IEEE 802.3 frames as MAC Service Data Units (MSDUs), so it does not support jumbo frames. Versions: HomePlug 1.0 Turbo adapters comply with the HomePlug 1.0 specification but employ a faster, proprietary mode that increases the peak PHY-rate to 85 Mbit/s. HomePlug 1.0 Turbo modems were only available from Intellon Corporation. Versions: HomePlug AV The HomePlug AV specification, which was introduced in August 2005, provides sufficient bandwidth for applications such as HDTV and VoIP.
HomePlug AV offers a peak data rate of 200 Mbit/s at the physical layer, and about 80 Mbit/s at the MAC layer. HomePlug AV devices are required to coexist, and optionally to interoperate, with HomePlug 1.0 devices. The physical layer uses OFDM carriers spaced at 24.414 kHz, with carriers from 2 to 30 MHz. Depending on the signal-to-noise ratio, the system automatically selects from BPSK, QPSK, 16 QAM, 64 QAM, 256 QAM, and 1024 QAM on a carrier-by-carrier basis (see the sketch below for an illustration of this per-carrier selection). Versions: Utilizing adaptive modulation on up to 1155 OFDM sub-carriers, turbo convolutional codes for error correction, two-level MAC framing with ARQ, and other techniques, HomePlug AV can achieve near the theoretical maximum bandwidth across a given transmission path. For security reasons, the specification includes key distribution techniques and the use of 128-bit AES encryption. Furthermore, the specification's adaptive techniques present inherent obstacles to eavesdropping and cyber attacks. Some Qualcomm Atheros-based adapters comply with the HomePlug AV specification but employ a proprietary extension that increases the PHY-rate to 500 Mbit/s, primarily by using a wider spectrum. Versions: HomePlug AV2 The HomePlug AV2 specification was introduced in January 2012. It is interoperable with HomePlug AV and HomePlug Green PHY devices and is IEEE 1901 compliant. It features a gigabit-class PHY-rate, support for MIMO PHY, repeating functionality and power-saving modes. It can additionally use the band from 30 to 86 MHz. First-generation AV2 devices are generally considered to be about 20% faster than HomePlug AV 500 and are often sold as HomePlug 600. They do not support MIMO, only single streams, due to the Atheros chipset architecture (QCA7450/AR1540). In October 2013, Qualcomm announced the QCA7500 with support for 2x2 MIMO, which was expected to double data transfer rates. In 2014, Qualcomm began production of the QCA7500. This device provided raw PHY rates of 1300 Mbit/s, with resultant data rates of 550 Mbit/s UDP and 500 Mbit/s TCP, with full MIMO. Communication takes place on both the line–neutral and line–ground power line pairs. Devolo from Germany has made proprietary improvements on the standard, using the ground wire in addition to the phase (also known as hot or live) and neutral (also known as null) wires. This technology is available worldwide, though it can only be used in territories whose building wiring regulations require a ground wire. Versions: HomePlug Green PHY The HomePlug Green PHY specification is a subset of HomePlug AV that is intended for use in the smart grid. It has peak rates of 10 Mbit/s and is designed to go into smart meters and smaller appliances such as HVAC thermostats, home appliances and plug-in electric vehicles so that data can be shared over a home network and with the power utility. High-capacity broadband is not needed for such applications; the most important requirements are low power and cost, reliable communication, and compact size. Green PHY uses up to 75% less energy than AV. The HomePlug Powerline Alliance worked with utilities and meter manufacturers to develop this 690-page specification. HomePlug Green PHY devices are required to be fully interoperable with devices based on the HomePlug AV, HomePlug AV2 and IEEE 1901 specifications, which is considered to hamper their power consumption and cost reduction. The HomePlug silicon vendor Qualcomm announced commercially available Green PHY silicon in December 2011. HomePlug Green PHY is the communication protocol used in the international electric vehicle charging standard CCS (Combined Charging System).
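As a rough illustration of the per-carrier adaptation described above under HomePlug AV, the sketch below picks a constellation for each carrier from a measured SNR. The SNR thresholds are invented placeholders, not values from the HomePlug specification (which derives its tone maps from target error rates); the carrier-spacing arithmetic, however, uses only figures quoted in the text.

```python
# Hypothetical per-carrier modulation selection of the kind a HomePlug AV
# tone map encodes. Thresholds are illustrative assumptions only.
CARRIER_SPACING_HZ = 24_414          # OFDM carrier spacing from the text
BAND_HZ = (2_000_000, 30_000_000)    # usable band, 2-30 MHz

MODULATIONS = [  # (assumed minimum SNR in dB, constellation, bits per carrier)
    (28.0, "1024-QAM", 10),
    (24.0, "256-QAM", 8),
    (18.0, "64-QAM", 6),
    (12.0, "16-QAM", 4),
    (6.0, "QPSK", 2),
    (2.0, "BPSK", 1),
]

def pick_modulation(snr_db):
    """Densest constellation whose SNR floor is met; None means the carrier is masked."""
    for floor, name, bits in MODULATIONS:
        if snr_db >= floor:
            return name, bits
    return None

# Carrier count implied by the band and spacing: ~1146, close to the 1155
# sub-carriers cited above (the spec's exact band edges differ slightly).
print((BAND_HZ[1] - BAND_HZ[0]) // CARRIER_SPACING_HZ)
print(pick_modulation(19.0))   # ('64-QAM', 6)
print(pick_modulation(-3.0))   # None: too noisy, carrier masked
```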
Versions: HomePlug Access BPL Access Broadband over Power Line (BPL) refers to a to-the-home broadband access technology. The HomePlug Alliance formed the HomePlug Access BPL Working Group, whose first charter was to develop the Market Requirements Document (MRD) for a HomePlug Access BPL specification. The Alliance made an open invitation to the BPL industry to participate in the development of, or provide input for consideration in, the MRD. After several months of collaboration between utilities, ISPs and other BPL industry groups, the MRD was completed in June 2005. HomePlug's work on Access BPL was subsequently contributed to and merged into the IEEE 1901 standard. Security: Since signals may travel outside the user's residence or business and be eavesdropped on, HomePlug includes the ability to set an encryption password. The HomePlug specification requires that all devices be set to a default out-of-box password – albeit one common to all devices. Users should change this password. If the password is not changed, an attacker can use their own HomePlug device to detect the user's signals and then use the default password to access and change settings such as the encryption key. Security: On many new powerline adapters that come as a boxed pair, a unique security key has already been established, and the user does not need to change the password except when using these with existing powerline adapters or adding new adapters to an existing network. Some systems support an authentication button, allowing adapters to be added to the network with just two button presses (one on each of the devices). Security: To simplify the process of configuring passwords on a HomePlug network, each device has a built-in master password, chosen at random by the manufacturer and hard-wired into the device, which is used only for setting the encryption passwords. A printed label on the device lists its master password. The HomePlug AV standard uses 128-bit AES, while the older versions use the less secure DES protocol. This encryption has no effect on the data the user sends or receives, and therefore higher-level protocols and systems like TLS should still be used. Security: Since HomePlug devices typically function as transparent network bridges, computers running any operating system can use them for network access. However, some manufacturers only supply the password-setup software in a Microsoft Windows version; in other words, enabling encryption requires a computer running Windows. Once the encryption password has been configured, any device supporting the Ethernet specification will work on the adapter.
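To make the password discussion above concrete: a HomePlug network's 128-bit AES key is derived from the human-chosen password by a key-derivation function. The sketch below is illustrative only; the exact KDF, salt, and iteration count mandated by the HomePlug AV specification are not reproduced here, and the values shown are assumptions.

```python
# Illustrative password-to-key stretching, NOT the HomePlug AV spec's exact KDF.
import hashlib

def derive_network_key(password: str,
                       salt: bytes = b"illustrative-salt",  # assumed value
                       iterations: int = 1000) -> bytes:    # assumed value
    """Stretch a password into a 16-byte (128-bit) key suitable for AES-128."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=16)

# Leaving the common out-of-box password in place means any attacker can
# derive the same key, which is why the text advises changing it.
print(derive_network_key("HomePlugAV").hex())
print(derive_network_key("a-private-passphrase").hex())
```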
Interoperability: HomePlug AV, Green PHY and AV2 are fully interoperable, and will also interoperate with IEEE 1901 devices. HomePlug 1.0 devices do not interoperate with HomePlug AV devices. Although it is technically possible to achieve such backward compatibility, doing so is not economically feasible because of the high cost of circuitry that would have to support different forward error correction (FEC) techniques and feature sets. HomePlug devices will not interoperate with devices that employ other powerline technologies, such as Universal Powerline Association (UPA), HD-PLC, or G.hn. In the case of G.hn, it was deemed prohibitively expensive to implement both HomePlug's turbo-coding forward error correction and G.hn's low-density parity check (LDPC) codes. However, IEEE 1901 allows co-existence within the same deployment of both HomePlug AV and HD-PLC via its Inter-System Protocol (ISP). G.hn also supports the ISP. Interoperability: HomePlug devices are not compatible with certain power strips, surge protectors, and uninterruptible power supplies incorporating filters, which block the high-frequency signal. In such cases, the installer must plug devices directly into building electrical receptacles. If a spare power point is not available, a double adapter can often be used, with the incompatible device on one side and the HomePlug device on the other. EMI concerns: One of the concerns with all powerline systems, when compared to dedicated data wiring, is that the route of the wiring is not known in advance and is generally already optimized for power transmission. This means that there will be situations where the system radiates a significant fraction of the energy as radio frequency interference, or is vulnerable to the ingress of external signals. Given that the shortwave band is used both by low-power long-range telemetry and by high-power broadcast signals, this is a potentially serious drawback. To minimize the effects of incoming interference and frequency-dependent path losses, the HomePlug standard requires each node to maintain and update 'tone maps' during operation, so the equipment 'learns' to avoid certain troublesome frequencies and to put more data onto those frequencies that exhibit a low loss. However, while this mitigates ingress, if there is sensitive receiving equipment nearby there is no easy way to tell the HomePlug apparatus to 'turn down' the radiated interference. In comparison to the received signals in radio communication equipment, the signal levels in a powerline system are quite high. Typically the power density is about 10 nW/Hz: each carrier occupies a channel of roughly 24 kHz and is injected at a level of −6.6 dBm (about 220 microwatts), making the total full-channel power about 24 dBm (250 milliwatts); a worked check of these figures appears below. Typical shortwave radio receiver sensitivities are at the −100 dBm (tenth-of-a-picowatt) level. EMI concerns: In the UK there have been suggestions that users of powerline equipment should be prosecuted under the Wireless Telegraphy Act if they cause interference to official radio systems. GCHQ has also published concerns that such interference affects its ability to monitor radio activity in the UK.
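The dBm figures in the EMI paragraph above can be checked with a few lines of arithmetic. The sketch below uses only values quoted in the text (the 1155-carrier total is the HomePlug AV figure cited earlier):

```python
# Checking the EMI power figures quoted above.
import math

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

per_carrier_mw = dbm_to_mw(-6.6)                    # ~0.22 mW, i.e. ~220 microwatts
density_nw_hz = per_carrier_mw * 1e6 / 24_414       # ~9 nW/Hz, matching the ~10 nW/Hz figure
total_dbm = 10 * math.log10(per_carrier_mw * 1155)  # ~24 dBm, i.e. ~250 mW in total
receiver_pw = dbm_to_mw(-100) * 1e9                 # 0.1 pW: typical shortwave receiver sensitivity

print(f"{per_carrier_mw:.3f} mW/carrier, {density_nw_hz:.1f} nW/Hz, "
      f"{total_dbm:.1f} dBm total, receiver floor {receiver_pw} pW")
```

The roughly 93 dB gap between the injected per-carrier level (−6.6 dBm) and a shortwave receiver's sensitivity (−100 dBm) illustrates why nearby reception is a concern.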
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Benserazide** Benserazide: Benserazide is a peripherally acting aromatic L-amino acid decarboxylase (DOPA decarboxylase) inhibitor, which is unable to cross the blood–brain barrier. It is on the World Health Organization's List of Essential Medicines. Indications: It is used in the management of Parkinson's disease in combination with L-DOPA (levodopa) as co-beneldopa (BAN), under the brand names Madopar in the UK and Prolopa in Canada, both made by Roche. Benserazide is not approved for use in the US; carbidopa is used instead for the same purpose. These combinations are also used for the treatment of restless legs syndrome. Pharmacology: Levodopa is a precursor to the neurotransmitter dopamine, and is administered to increase dopamine levels in the central nervous system. However, most levodopa is decarboxylated to dopamine before it reaches the brain, and since dopamine is unable to cross the blood–brain barrier, this translates to little therapeutic gain with strong peripheral side effects. Benserazide inhibits this peripheral decarboxylation, and since it cannot cross the blood–brain barrier itself, it allows dopamine to build up only in the brain. Adverse effects caused by peripheral dopamine, such as vasoconstriction, nausea, and arrhythmia, are minimized. However, benserazide cannot reduce the centrally mediated side effects of levodopa, particularly dyskinesia. Benserazide has little therapeutic effect on its own, and it acts synergistically in combination with levodopa. Pharmacology: The enzyme inhibited by benserazide catalyzes many different decarboxylations, so the same peripheral inhibition also confines the following conversions to the central nervous system: 5-HTP to serotonin, tryptophan to tryptamine, phenylalanine to phenethylamine, and L-tyrosine to tyramine. Centrally mediated side effects of the resulting higher levels of neurotransmitters and trace amines may worsen in combination with monoamine oxidase inhibitors.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Expansin** Expansin: Expansins are a family of closely related nonenzymatic proteins found in the plant cell wall, with important roles in plant cell growth, fruit softening, abscission, emergence of root hairs, pollen tube invasion of the stigma and style, meristem function, and other developmental processes where cell wall loosening occurs. Expansins were originally discovered as mediators of acid growth, which refers to the widespread characteristic of growing plant cell walls to expand faster at low (acidic) pH than at neutral pH. Expansins are thus linked to auxin action. They are also linked to cell enlargement and cell wall changes induced by other plant hormones such as gibberellin, cytokinin, ethylene and brassinosteroids. A subset of the β-expansins are also the major group-1 allergens of grass pollens. Families: So far, two large families of expansin genes have been discovered in plants, named alpha-expansins (given the gene symbol EXPA) and beta-expansins (EXPB). Both families of expansins have been identified in a wide range of land plants, from angiosperms and gymnosperms to ferns and mosses. The model plant Arabidopsis thaliana contains around 26 different α-expansin genes and 6 β-expansin genes. A subset of β-expansins has evolved a special role in grass pollen, where they are known as group-1 grass pollen allergens. Plants also have a small set of expansin-like genes (named EXLA and EXLB) whose function has not been established. Some proteins in bacteria and fungi are known to have distant sequence similarity to plant expansins. Strong evidence that at least some of these sequences are indeed expansins came in 2008, when the crystal structure of the YOAJ protein from the bacterium Bacillus subtilis was shown to be very similar to the structure of plant expansins, despite the low sequence similarity. This study also noted that proteins related to YOAJ were found in diverse species of plant-pathogenic bacteria, but not in related bacteria that did not attack or colonize plants, thus suggesting that these bacterial expansins have a role in plant-microbe interactions. Families: Some animals, such as Globodera rostochiensis, a plant-parasitic nematode, can produce a functional expansin, which they use to loosen cell walls when invading a host plant. To be designated as expansin or expansin-like, genes and their protein products must contain both domain I (N-terminal, catalytic, GH45-like – GH meaning glycoside hydrolase) and domain II (C-terminal, distantly related to group-2 grass pollen allergens). Families: Non-plant expansins can be designated with the symbol EXLX (expansin-like X), but they do not constitute a monophyletic group; distantly similar to plant expansins, they could have diverged prior to the origin of land plants, or else could have been acquired by horizontal transfer. Nomenclature of genes and proteins of expansins and expansin-like proteins: e.g., Arabidopsis thaliana EXPANSIN A1 is named "AtEXPA1" for the gene and "AtEXPA1" for the protein; one adds "-1" for mutant allele 1. Actions: Expansins characteristically cause wall stress relaxation and irreversible wall extension (wall creep). This process is essential for cell enlargement.
Expansins are also expressed in ripening fruit, where they function in fruit softening; in grass pollen, where they loosen stigmatic cell walls and aid pollen tube penetration of the stigma; in germinating seeds, for cell wall disassembly; in floral organs, for their patterning; in developing nitrogen-fixing nodules in legumes; in abscising leaves; in parasitic plants; and in 'resurrection' plants during their rehydration. No enzymatic activity has been found for expansin and, in particular, no glucanase activity: expansins do not hydrolyze the matrix polysaccharides; the only definitive assay for expansin activity is thus to measure wall stress relaxation or wall extension. Structure and regulation: Expansins are proteins; the two expansins initially uncovered had molecular weights of 29 kDa (kilodaltons) and 30 kDa, which corresponds to around 270 amino acids on average. Generally speaking, α- and β-expansins and expansin-like proteins are composed of approximately 300 amino acids, with a MW of ~25–28 kDa for the mature protein. The peptide sequence of an expansin consists, in particular, of: a signal peptide of around 20–30 amino acids at the N-terminal end, the putative catalytic domain, a His-Phe-Asp (HFD) motif in the central region (except in EXL), and the C-terminal putative cellulose-binding domain with conserved Trp (tryptophan) residues. Sequence analysis of expansin genes shows seven introns, named A, B, C, D, E, F, and G; sequences from different expansin genes show good correspondence, the exon/intron organization being conserved among α- and β-expansins and expansin-like genes, although the number of introns and the length of each intron differ among genes. In the N-terminal signal sequences of α-expansin genes, the general absence of an endoplasmic reticulum retention signal (KDEL or HDEL) confirms that the proteins are targeted to the cell wall. Structure and regulation: A promoter analysis of expansin genes indicates that expression of these genes may be regulated by auxin, gibberellin, cytokinin or ethylene, this being more frequent in α-expansins than in β-expansins; semi-aquatic plants such as Rumex palustris, which are induced to grow rapidly by submergence, show transcriptional induction by submergence, as in rice, where hypoxia and submergence increase α-expansin mRNA levels. Mechanism: The plant cell wall has high tensile strength and must be loosened to enable the cell to grow (enlarge irreversibly). Within the cell wall, this expansion of surface area involves slippage or movement of cellulose microfibrils, which normally is coupled to simultaneous uptake of water. In physical terms, this mode of wall expansion requires cell turgor pressure to stretch the cell wall and put the network of interlinked cellulose microfibrils under tension. By loosening the linkages between cellulose microfibrils, expansins allow the wall to yield to the tensile stresses created in the wall through turgor pressure. The molecular mechanism by which expansin loosens the cellulosic network within the cell wall is not yet established in detail. However, expansin is hypothesized to disrupt the non-covalent adhesion or entrapment of hemicellulose on the surface of cellulose microfibrils. Hemicelluloses can tether cellulose microfibrils together, forming a strong load-bearing network.
Expansin is thought to disrupt the cellulose-hemicellulose association transiently, allowing slippage or movement of cell wall polymers before the association reforms and the integrity of the cell wall network is reestablished. Turning to the function of bacterial expansins, the bacterial protein named YOAJ or BsEXLX1 binds to plant and bacterial cell walls and has weak but significant expansin activity, that is, it induces plant cell wall extension in vitro. Moreover, B. subtilis mutants lacking BsEXLX1 were defective in colonizing plant roots, suggesting that this protein facilitates plant-bacterium interactions. Allergenicity: In grass pollens, the major allergens (group-1 allergens, the main causative agents of hay fever and of seasonal asthma) are structurally linked to a sub-group of the β-expansins. These expansins appear specialized for pollination, likely in loosening the cell walls of the maternal tissues during penetration of the pollen tube into the stigma and style, as is suggested by their potent rheological effect on grass style and stigma walls, where they are abundantly released by the pollen. Expansin-like proteins are implicated in group-2 and group-3 grass allergens, which are less important than those of group 1. These three allergen groups share a carbohydrate-binding module (CBM), which could be responsible for the binding to the IgE antibody. The expansin domain II, causative of the allergenic effects, could be related to the competition between pollens for access to ovules.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lactitol** Lactitol: Lactitol is a disaccharide sugar alcohol produced from lactose. It is used as a replacement bulk sweetener for low-calorie foods, with 30–40% of the sweetness of sucrose. It is also used medically as a laxative. Production: Lactitol is produced by hydrogenation of lactose using a Raney nickel catalyst. The product can be obtained as an anhydrous form, a monohydrate, or a dihydrate. Two manufacturers, Danisco and Purac Biochem, produce about 10,000 tons per year. Applications: Lactitol is used in a variety of low-food-energy or low-fat foods. High stability makes it popular for baking. It is used in sugar-free candies, cookies (biscuits), chocolate, and ice cream, with a sweetness of 30–40% that of sucrose. Lactitol also promotes colon health as a prebiotic. Because of poor absorption, lactitol only has 2–2.5 kilocalories (8.4–10.5 kilojoules) per gram, compared to 4 kilocalories (17 kJ) per gram for typical saccharides. Hence, lactitol is about 60% as caloric as typical saccharides. Applications: Medical Lactitol is listed as an excipient in some prescription drugs. Lactitol is a laxative and is used to prevent or treat constipation, e.g., under the trade name Importal. In February 2020, lactitol was approved for use in the United States as an osmotic laxative for the treatment of chronic idiopathic constipation (CIC) in adults. Lactitol in combination with ispaghula husk is an approved combination laxative used to prevent or treat idiopathic constipation. Safety and health: Lactitol, erythritol, sorbitol, xylitol, mannitol, and maltitol are all classified as sugar alcohols (lactitol and maltitol are in fact disaccharide alcohols, since they contain one intact sugar). The U.S. Food and Drug Administration (FDA) classifies sugar alcohols as "generally recognized as safe" (GRAS). They are approved as food additives, and are recognized as not contributing to tooth decay or causing increases in blood glucose. Lactitol is also approved for use in foods in most countries around the world. Like other sugar alcohols, lactitol causes cramping, flatulence, and diarrhea in some individuals who consume it. These effects arise because humans lack a suitable beta-galactosidase in the upper gastrointestinal (GI) tract: the majority of ingested lactitol reaches the large intestine, where it becomes fermentable by gut microbes (a prebiotic effect) and can pull water into the gut by osmosis. For these reasons, medical advice is often sought before use. History: The U.S. Food and Drug Administration (FDA) approved Pizensy based on evidence from a clinical trial (Trial 1, NCT02819297) of 594 subjects with CIC conducted in the United States. The FDA also considered other supportive evidence, including data from Trial 2 (NCT02481947), which compared Pizensy to a previously approved drug for CIC (lubiprostone), and Trial 3 (NCT02819310), in which subjects used Pizensy for one year, as well as data from the published literature. The benefits and side effects of Pizensy were evaluated in a clinical trial (Trial 1) of 594 subjects with CIC. In this trial, subjects received treatment with either Pizensy or placebo once daily for 6 months. Neither the subjects nor the health care providers knew which treatment was being given until after the trials were completed. In the second trial (Trial 2), of three months' duration, improvement in complete spontaneous bowel movements (CSBMs) was used to compare Pizensy to lubiprostone, which was previously approved for CIC.
The third trial (Trial 3) was used to collect side-effect data from subjects treated with Pizensy for one year.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Clitoral index** Clitoral index: The clitoral index, defined as the product of the sagittal and transverse dimensions of the glans clitoridis, is sometimes used as a measure of virilization in women. In one study of 200 normal women, both the mean and the median clitoral index were measured as roughly 18.5 mm².
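As a worked illustration of the definition (the measurements here are hypothetical, not taken from the study cited above): a glans measuring 5.0 mm in the sagittal plane and 3.7 mm in the transverse plane would have a clitoral index of 5.0 mm × 3.7 mm = 18.5 mm², equal to the reported mean.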
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glass rod** Glass rod: A glass stirring rod, glass rod, stirring rod or stir rod is a piece of laboratory equipment used to mix chemicals. They are usually made of solid glass, about the thickness of a drinking straw and slightly longer, with rounded ends. Structure: Stir rods are generally made of borosilicate (commonly known as Pyrex) glass or polypropylene plastic. They are usually between 10 and 40 centimeters in length and about half a centimeter in diameter. Glass rods are created from a single length of thin glass that is then cut into smaller segments. The ends are generally rounded (for example, by flame polishing) to prevent scratching the surface of glassware during use, which may lead to cracks if the glassware is later heated. Other shapes are possible, such as a flat paddle which can be used to circulate sediment, a triangular paddle to imitate a rubber policeman, or a round button used to crush solids. Uses: A stirring rod is used for mixing liquids, or solids and liquids. Uses: Stir rods are used as part of proper laboratory technique when decanting supernatants, because the contact helps to negate the adhesion between the side of the glassware and the supernatant that is responsible for the liquid running down the side. Using a stir rod also grants more control over the rate of flow, which is important in cases where chemicals may react violently. The same technique is used to pour from a large-mouthed flask or beaker into a test tube. Glass rods can also be used to induce crystallization in a recrystallization procedure, when they are used to scratch the inside surface of a test tube or beaker. Uses: They can also break up an emulsion during an extraction. Applications in physics: These are two classic experiments performed using glass rods. Applications in physics: Vanishing rods experiment This experiment introduces students to the concept of the index of refraction of a liquid. Glass rods are placed in beakers of liquid, in this case oil and water. In water, the glass rods are visible because the refractive indices of water and glass are different. In the oil, however, the glass rods seem to disappear because the oil has a refractive index very similar to that of glass, so the light does not bend as it crosses the glass/oil interface (see the sketch below). Applications in physics: Electrification Glass rods can also be used to demonstrate electrification by friction. This occurs when two surfaces rub together. In this instance, rubbing a glass rod with silk transfers negative charge from the rod to the silk, leaving the rod positively charged. This effect is known as the triboelectric effect and can be performed with a variety of materials. Because glass rods and silk are relatively common, they are often chosen to demonstrate this effect.
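A short calculation shows why the rod "vanishes" in the experiment above. At normal incidence, the fraction of light reflected at a boundary is R = ((n1 − n2)/(n1 + n2))², so both reflections and refraction disappear when the two indices match. The refractive indices below are typical textbook values, not measurements from a specific experiment.

```python
# Fresnel reflectance at normal incidence for the vanishing-rods demonstration.
def fresnel_reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

n_glass, n_water, n_oil = 1.47, 1.33, 1.47  # typical values; the oil is index-matched

print(fresnel_reflectance(n_glass, n_water))  # ~0.0025: ~0.25% reflected, edges visible
print(fresnel_reflectance(n_glass, n_oil))    # 0.0: no reflection or bending, rod disappears
```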
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Adcept** Adcept: An adcept is a tool used in marketing and advertising development to test creative ideas or brand positionings. The term is an amalgamation of the words 'advertising' and 'concept', indicating its role as a halfway stage between a concept idea and an advertising execution. Traditionally, adcepts have been used to check early creative work, typically with advertising agency clients or in market research focus groups. An alternative to concept statements: Traditionally, qualitative market research has used concept statements and mood boards to test ideas with target consumers. However, Richard Woods (Journal of Consumer Behaviour, Vol. 3, No. 4, pp. 388–403) argues that concept boards are unfamiliar and lack emotional appeal. They also attempt to pull apart the rational and the emotional, which goes against how the human mind evaluates ideas. Because advertising is the popular language of brands, he claims, adcepts are easier for consumers to relate to and to evaluate. Formats: Adcepts can be highly finished and closely resemble final advertising, or they can be made deliberately conceptual. When used as research stimulus, Richard Woods argues that this rougher, conceptual format is needed: showing very polished adcepts to focus group participants leads them to pass judgment on the specific execution rather than examining the core idea being explored. Applications: At their most basic level, adcepts are useful for testing advertising ideas that are works in progress. However, when used in place of concept boards, they are ideally suited to isolating what a brand could stand for, and exploring the areas it could expand into in the future.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Commercialization of the Internet** Commercialization of the Internet: The commercialization of the Internet encompasses the creation and management of online services principally for financial gain. It typically involves the increasing monetization of network services and consumer products mediated through the varied use of Internet technologies. Common forms of Internet commercialization include e-commerce (electronic commerce), electronic money, and advanced marketing techniques including personalized and targeted advertising. The effects of the commercialization of the Internet are controversial, with benefits that simplify daily life and repercussions that challenge personal freedoms, including surveillance capitalism and data tracking. It began with the National Science Foundation funding supercomputing centers, which enabled universities to develop supercomputer sites for research and academic purposes. Commercialization of the Internet: With the growing population and demands of Internet users, startups and their investors were encouraged to start profiting from the Internet. Early history: The core idea of the web was outlined by Vannevar Bush in 1945 as interconnected networks of hyperlinked pages. The first attempt to materialize this vision was Ted Nelson's, pursued from 1965 all the way up to 1984, a decade before Netscape. Early history: In the mid-1980s, the National Science Foundation, which ran the backbone of the internet at the time, maintained that the Internet was for research and not commerce. This strict allocation of funding was known as the "acceptable use policy." The NSF firmly believed that the Internet would be thwarted and devalued if it were opened up to commercial interests. However, simultaneously, in April 1984 CompuServe's Consumer Information Service opened a new online shopping service called the Electronic Shopping Mall that allowed subscribers to buy from merchants including American Express, Sears, and over sixty other retailers. Early history: Development of public networks NSFNET, the National Science Foundation Network, was a three-layer network that acted as a backbone for much of the internet's infrastructure. Originally funded by the government, NSFNET was a big leap into the future which allowed networks to run smoothly. It allowed people to view pages without any cost to institutions. Allowing users to access websites without having to pay for them would further develop the idea of browsing for items on the internet. In 1992, entrepreneurs who believed there was money to be made from the Internet petitioned Congress to get the government out of the way of NSFNET. UUNET was the first company to sell commercial TCP/IP, first to government-approved corporations in November 1988 and then actively to the public starting in January 1990, albeit only to the NSFNET backbone with their approval. Barry Shein's The World (Software Tool & Die) was selling dial-up Internet on a legally questionable basis starting in late 1989 or early 1990, and then on an approved basis by 1992. He claims to be, and is generally recognized as, the first ever to sell dial-up Internet access. The High Performance Computing Act of 1991 put U.S. government money behind the National Information Infrastructure, the National Research and Education Network (NREN), the National Center for Supercomputing Applications, and more.
Internet becomes a true commercial medium: Although the Internet infrastructure was mostly privately owned by 1993, the lack of security in the protocols made doing business and obtaining capital for commercial projects on the Internet difficult. Additionally, the legality of Internet business was still somewhat grey, though increasingly tolerated, which kept large amounts of investment money out of the medium. This changed with the NSFNET selling its assets in 1995 and the December 1994 release of Netscape Navigator, whose HTTPS secure protocol permitted relatively safe transfer of credit and debit card information. Internet becomes a true commercial medium: This, along with the advent of user-friendly Web browsers and ISP portals such as America Online and the disbanding of the NSFNET in 1995, is what led to the corporate Internet and the dot-com boom of the late 1990s. Internet becomes a true commercial medium: 1995 was a significant year for the concept of commercial Internet service provider (ISP) markets due to two substantial events that took place: the Netscape initial public offering (IPO) and the emergence of AT&T WorldNet. The Netscape IPO brought a lot of publicity to the new technology of the Web, and the commercial opportunities for ISPs changed. At this point in time, ISPs were providing their traditional service, text-based applications such as e-mail, and trying to expand to another service, web applications. This became a time when ISPs were incentivized to expand and experiment with new types of services and business models. The entry of AT&T, on the other hand, created an Internet access service that spread nationwide and gained one million customers due to its publicity and marketing. Despite this, AT&T did not dominate the commercial ISP markets, and in fact its entry coincided with the growth and emergence of other independent ISPs such as AOL, going against the predicted trend that commercial ISP markets would be dominated by only a few national ISP services. The commercialization of the Internet went so well for four main reasons. First, the academic model could easily migrate into business operations without requiring additional providers. Second, it was feasible for entrepreneurs to learn and gain benefits without too many technical and operational challenges. Third, customized Internet access spread through many different locations, circumstances, and users. Fourth, the Internet access industry kept growing and provided many opportunities for further research and practice. Internet becomes a true commercial medium: Dot-com bubble Between 1995 and 2000, Internet start-ups encouraged investors to pour large sums of money into companies with ".com" in their business plans. When the commercialization of the Internet became more acceptable and fast-paced, Internet companies began to form rapidly, often with minimal planning, in order to get into what they thought would be easy money. The boom, fueled by enthusiasm for all of the new opportunities and profits the Internet had to offer, produced some successful companies such as Amazon.com. This was shortly followed by the dot-com crash in 2001, wherein many of the same start-up companies failed due to a lack of concrete structure in their business plans, and investors cut off funding for the unprofitable companies. This led many people to believe that the Internet was overhyped, but in reality this turning point for the Internet led to a revolutionary concept known as Web 2.0.
Internet becomes a true commercial medium: Early social media platforms The introduction of social media started in 1994, when GeoCities was created by David Bohnett and John Rezner. GeoCities, an early form of web hosting on the Internet, let users create "digital neighborhoods" in which everyone using the platform could discuss different topics in depth with others involved in the same discussions. The concept of connecting with others through the Internet in this manner, established by GeoCities, paved the way for the emergence of other social media platforms. In December 1995, Classmates, a website created by Randy Conrads, enabled its users to create their own profiles, search through large yearbook databases, and add their high school friends to their friend list. Two years later, in 1997, Six Degrees, which is widely considered to be the first social networking website, included many of the popular features of Classmates, such as creating profiles and adding friends and school affiliations. Internet becomes a true commercial medium: File-sharing computer services Before the internet, files were shared via floppy disks, tapes, CDs and other physical media. The creation of the internet and the development of new applications made file-sharing much easier. A prime example is Napster, a music file-sharing application created by Shawn Fanning in 1999. The main idea of Napster was for people to share music files with each other virtually, without the use of a physical copy. Napster pushed production and music companies to make digital forms of their media and, eventually, streaming services. Internet becomes a true commercial medium: File-sharing services became more accessible to users with applications like BitTorrent. BitTorrent brought peer-to-peer sharing, and users were able to search for media they would like to view. BitTorrent traffic made up 53% of all peer-to-peer traffic in 2004, even though it was only a file-download protocol: it had to rely on websites like The Pirate Bay for users to share their media, after which users could download that media onto their computers through BitTorrent. Web 2.0 and the rise of social media: Web 2.0 The bursting of the dot-com bubble in 2001 acted as an unforeseen gateway for the transition from Web 1.0 to Web 2.0 by 2004. A conference between Tim O'Reilly of O'Reilly Media and MediaLive International coined the term "Web 2.0". This conference became an annual "Web 2.0 Summit" in San Francisco, where the idea was developed gradually from 2004 to 2011. Web 2.0 included data that existed within Web 1.0 with improved data management and increased interaction. Web 2.0 was largely delivered by Adobe Flash, Ajax, RSS, Eclipse, JavaScript, Microsoft Silverlight, etc. Some key characteristics of Web 2.0 included: development of user-friendly advertising, i.e.
banners and pop-up ads developed by Google and Overture; a platform for the people, by the people; users able to add value to existing applications; a transition from static to dynamic HTML, serving web applications to users; user-generated content; the growth of social media; a transition from passive viewing to co-authoring; and the transition from a read-only web to a read-write web. While using the applications of Web 2.0, users unknowingly had their data aggregated: collected and linked together for what was already becoming commercial purposes. Sites used built-in APIs (Application Programming Interfaces) to connect with external sources invisible to the user. Thanks to the newfound connection between platform and user, applications could now access data and exchange it with other software. Web 2.0 catapulted the marketing industry into completely uncharted territory. With user integration encouraged, there were now enhanced retail opportunities, increased marketing visibility, and the ability for businesses to interact with customers. In November 2005, Google released Google Analytics, which allows sellers to track buyers' referrals, advertisements, search engines, and e-mail promotions. All of these characteristics of Web 2.0 combined gave way to "viral marketing": a marketing technique that was all over the internet, all the time. Businesses could now promote products or services to larger audiences wherever, and however, they chose to. Web 2.0 and the rise of social media: Personalization on the Internet The commercialization of the Internet has allowed personalized experiences for consumers, which in turn provides companies and marketers with data that helps them make inferences about the behavioral patterns and actions of the consumer. Personalization of the Internet follows a typical cycle of product purchases, which includes steps such as personalized searches, personalized recommendations, and personalized prices and promotions. Companies tailor products they think the consumer would like based on their behavior and transactions. This makes people feel like someone is listening to their needs. Web 2.0 and the rise of social media: Clickstream Clickstream technology is used to infer information about the user, mainly for online shopping sites. A clickstream is the path that a consumer takes through a website to a destination; it includes time stamps and the pages that were visited. Although clickstreams may be associated with online shopping and advertising, they can also be used simply to extend certain sites to more people. It is important to understand a consumer's pattern on a site in order to tailor information to their liking. Not only does this increase traffic flow to a specific site, but it also deepens connections with customers, which makes the site more engaging. Web 2.0 and the rise of social media: Personalized email marketing Emails can also help in the process of personalizing the Internet. Email personalization means that companies have to learn about your likes and dislikes in order to send you emails that carry weight for you, rather than sending every user the same email. Email personalization has been shown to increase interest in a site. A common form of email personalization is the message reminding a user that they still have unpurchased products in their cart.
This can remind them of something that they forgot, or it can increase their interest even further and ultimately lead them to purchase the product. Web 2.0 and the rise of social media: Advertising on the Internet Targeted advertising can take on many different forms, across numerous platforms, in order to effectively target a range of market subgroups. The first spam email was sent on May 3, 1978, over a network often seen as a precursor to the Internet, originally intended as "a highly secure medium for information flow between universities and research centers," beginning with UCLA, UC Santa Barbara, the University of Utah, and the Stanford Research Institute. The earliest widespread spam email was sent on April 13, 1994, by Martha Siegel and Laurence Canter, who were among the first to post on Usenet in order to advertise their law firm, claiming they would in turn be able to help people enter the green-card lottery. HotWired magazine then coined the term "banner advertising" in October 1994, with AT&T being one of the first companies to purchase a banner ad. Web 2.0 and the rise of social media: Search engine marketing Search engine marketing is seen every time someone searches for a product on a search engine. Sometimes referred to as SEM, search engine marketing is a marketing strategy in which ads appear above or below relevant search results. These ads are usually sponsored or paid for on a cost-per-click (CPC) basis. Commonly seen on popular search engines such as Google and Bing, these ads do not stop at search engines but extend to partner sites, such as Yahoo, YouTube, and shopping services. Related to SEM, search engine optimization (SEO) improves the visibility of a page: instead of paying for ads, site owners must appeal to search engines in order to be ranked higher in specific searches. Web 2.0 and the rise of social media: Banner ads Banner ads are slim ads that can appear vertically, horizontally, and in between text. They usually contain pictures and few words rather than all words. Usually clickable, these banners lead to another page that displays the full site. Because banners are usually placed in high-traffic areas, companies have to bid for their ad to be placed there rather than simply paying for the spot; this bidding is handled by a program, without human assistance. Unlike personalization, banner ads can reach everyone, which can spark interest in new people rather than staying within one group of consumers, though they can also be shown to those already interested in a specific product. Ultimately, banner ads tend to increase, not decrease, attraction to a site. Web 2.0 and the rise of social media: Facebook Facebook was founded by Mark Zuckerberg in 2004 and was originally only available to Harvard students as an interactive college student network, but it soon expanded to many other college campuses throughout the United States. By September 2006, Facebook had expanded beyond educational institutions and become available to any person with a registered email address. After Sean Parker, the founder of Napster, became the president of Facebook, Mark Zuckerberg was introduced to Peter Thiel, a venture capitalist and one of the founders of PayPal, which resulted in Thiel investing $500,000 in Facebook, establishing Facebook as an up-and-coming company that would interest other investors.
Web 2.0 and the rise of social media: In April 2004, Facebook's ad sales effort, led by co-founder Eduardo Saverin, was an example of an early commercial use of social media. Ad rates were as low as $1 per 1,000 impressions, and Facebook offered its services to companies who wanted to advertise themselves through the platform. Facebook enabled companies to create targeted advertisements based on a variety of consumer-related factors such as college/university, degree type, sexual orientation, age, personal interests, and political views. Companies were also provided with an up-to-the-minute ad performance tracking service, with rates that differed depending on the type of advertisement (run-of-site ads, targeted ads, etc.). Web 2.0 and the rise of social media: Twitter Twitter emerged from Odeo, a podcasting venture that was launched in 2004 and founded by Evan Williams and Noah Glass. After Apple's announcement in 2005 that it would add podcasts to iTunes, the leaders of Odeo wanted to take the company in a new direction, since they believed they could not compete with Apple on podcasts. Engineer Jack Dorsey suggested that the company could provide a short message service (SMS) that enables friends to share short blog-like updates with one another. Using Dorsey's idea, Twitter officially launched in 2006. Throughout Twitter's lifespan, there have been various examples of it being used for commercial purposes. Twitter's implementation of hashtags allows companies to go viral on the platform and gain a significant amount of attention by launching a successful hashtag. Nike's "Dream Big" campaign used the hashtag "#justdoit" to promote its message, which focused on the stories of famous athletes who became successful after not letting fear stop them from achieving their goals, in order to motivate followers to chase their own dreams. The success and traction of Nike's "Dream Big" campaign is an example of how Twitter as a platform was used to benefit companies whose goal was to increase their visibility to the large number of people using Twitter. Viral marketing is also present on Twitter, as companies use their posts to touch upon current issues that are being heavily discussed. This method of keeping brands up to date with the trends creates more opportunities for companies to interact with users and potential buyers in ways that are relevant to them. The emergence of the mobile Internet: Preceding the modern cellular form of telephony were the mobile radio telephone systems, referred to as zero generation (0G, or pre-cellular) systems, later succeeded by 1G. The world's first commercial 1G mobile network, launched in 1979 by Nippon Telegraph and Telephone (NTT), was made available to the citizens of Tokyo, Japan. Following Japan, other countries began attaining 1G coverage. Although cellphone prototypes followed the launch of 1G, it wasn't until 1983 that Motorola rolled out to the public the first commercially available cellphone. The emergence of the mobile Internet: Replacing 1G's analog technology was the 2G digital telecommunications standard. 2G cellular networks were first commercially launched on the Global System for Mobile Communications (GSM) standard in Finland in 1991.
2G made significant changes, advancing on 1G's voice calls through improved quality and download speed and introducing digitally encrypted phone calls (though not in their entirety: only the link between the mobile phone and the cellular base station was encrypted). 2G also marked the start of data services for mobile devices and enabled access to media content. The introduction of 2G's data transfer abilities, text messages (SMS) and multimedia messages (MMS), changed how people communicate. As methods of communication shifted, the 2G network prepared the way for the massive adoption of smartphones. With increasing demand for data and connectivity, the 2G network was superseded by 3G, followed by 4G, and later 5G. The emergence of the mobile Internet: 3G Technology The first commercial launch of 3G technology was deployed for the public by NTT Docomo in Japan in 2001. 3G networks implemented new technology and protocols that offered significantly faster data transfer, improving connectivity, call quality, and connection speed, with average rates (3 Mbit/s) roughly 30 times higher than those of 2G (0.1 Mbit/s). When 3G infrastructure launched more broadly in 2002, its networks continued to develop not only in quality and speed but also in range and volume, setting the early commercial foundations of the mobile Internet. Smartphones quickly gained popularity, as 3G was the first cellular communications network to broaden and improve such a range of features, kick-starting the transition to widespread usage of cellular networks. In 2002, BlackBerry launched its first mobile device (the BlackBerry 5810), offering a full keyboard, advanced security, and internet access. The company characterized it as a "breakthrough in wireless convergence," and touted the wireless handheld device for mobile services ranging from email delivery and SMS, to streaming and web browsing, to graphical interfaces and utility features. Later, the BlackBerry 5810 was replaced by more advanced models iteratively produced by both BlackBerry and its competitors. In 2007, Apple released the company's first-ever phone, the iPhone 2G (also known as the iPhone 1 or the original iPhone). At Macworld 2007, Steve Jobs presented the phone as one device combining the capabilities of "an iPod, a phone, and an internet communicator." Following the iPhone 2G, the iPhone 3G (also known as the iPhone 2) was introduced along with the company's newly developed App Store. The Apple-developed and -maintained App Store platform utilized the 3G network, providing access to mobile applications on the company's operating systems. Jointly, the introduction of the App Store marketplace and the maturation of the iPhone established the idea that an electronic device need not be functionally rigid. The device contributed to the transition to mobile as it speedily evolved into a dominant platform of the mobile web, giving the public ever-increasing access to apps and data. The emergence of the mobile Internet: 4G Technology 4G, unlike previous networks designed primarily for voice communication (2G and 3G), is the first network designed specifically for data transmission, driven by demand for quicker data and the expansion of network capabilities. The network marks the start of today's standard services, offering faster data access on mobile phones.
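To make the quoted data rates concrete, the sketch below compares rough file-transfer times across generations; the 2G and 3G figures follow the averages cited above, while the 4G rate is an illustrative assumption rather than a cited value:

```python
# Rough file-transfer times at representative cellular data rates.
# 2G and 3G follow the averages quoted in the text; 4G is an assumption.
RATES_MBIT_S = {"2G": 0.1, "3G": 3.0, "4G (assumed)": 100.0}

def transfer_seconds(file_megabytes: float, rate_mbit_s: float) -> float:
    """Seconds to move `file_megabytes` at `rate_mbit_s` (1 byte = 8 bits)."""
    return file_megabytes * 8.0 / rate_mbit_s

if __name__ == "__main__":
    size_mb = 5.0  # e.g. a short music video
    for gen, rate in RATES_MBIT_S.items():
        print(f"{gen}: {transfer_seconds(size_mb, rate):8.1f} s for {size_mb} MB")
```

At these rates the same 5 MB file takes about 400 s on 2G but roughly 13 s on 3G, which is the 30-fold difference the text describes.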
The emergence of the mobile Internet: Some improvements and applications brought about by LTE and 4G include: streaming quality (higher resolution and better audio quality); upload and download rates; video calls; more on-the-go entertainment, including influence on social media platforms and gaming services; and wearable tech products. Eliminating the previous constraint set by limited data transmission, the network contributed to advanced mobile applications and their unique features. Network users became able to share high-resolution images and videos from their mobile devices, leading social media and gaming platforms to create features that take advantage of the network's new capabilities. While progress in the 4G network helped further improve data transfer speeds, the network reached its capacity, and the rapid release of new mobile and wearable products required faster networks. The emergence of the mobile Internet: 5G Technology At the heart of the 5G network is 3GPP (3rd Generation Partnership Project), which developed the air interface and service layer design for 5G. 3GPP encompasses infrastructure vendors, device manufacturers, and network and service providers. The initial idea of the 5G network was to connect all machines, objects, and devices together virtually. 5G would be more reliable, with massive network capacity and a uniform user experience. No specific company owns 5G; a plethora of different companies contribute to bringing it to its full potential. 5G works by using OFDM (orthogonal frequency-division multiplexing), which sends digital signals across many channels to interconnect various devices. With wider bandwidth, 5G has larger capacity and lower latency than its predecessors. From 2013 to 2016, companies such as Verizon, Samsung, Google, BT, Nokia and others began to develop their own versions of 5G networks, including Google's Skybender. Global operators began launching new 5G networks in 2019. By 2020, 5G networks were integrated into the majority of mobile phones as well as in-home modems. The emergence of the mobile Internet: Some characteristics and effects of the 5G network: speeds twice as fast as 4G; the ability to download movies within seconds; higher frequencies, which allow for faster data travel with less clutter; the mobile phone becoming the dominant platform for video consumption; and the mass roll-out and quality improvement of virtual reality. Facebook has taken advantage of the prevalence of 5G networks. Facebook took over a 5G company, Inovi, and partnered with a startup company, Common Networks, to help power home use of 5G. The emergence of the mobile Internet: Facebook had already invested in Oculus, with the idea that virtual reality and 5G would innovate social media usage. Mark Zuckerberg and Facebook have already incorporated VR into applications such as VR Chat, Facebook Spaces, and Oculus Home. Users can communicate with one another through avatars and specialized 3D audio technology, play virtual games, watch content together, and visit virtual space stations. E-commerce: E-commerce stands for electronic commerce and pertains to buying and trading goods through electronic media. It enables shoppers to do their shopping at home instead of going into physical stores. With the development of, and better access to, the Internet, e-commerce has allowed larger and smaller businesses to grow at a faster rate, and it cuts down expenses when it comes to retail shopping.
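Returning to the OFDM mechanism described in the mobile-Internet discussion above, here is a minimal toy sketch in Python: digital symbols are placed on many orthogonal subcarriers, converted to a time-domain signal, and recovered with the inverse transform. The subcarrier count, cyclic-prefix length, and QPSK mapping are illustrative assumptions, not real 5G numerology:

```python
import numpy as np

# Toy OFDM link: QPSK symbols on 64 subcarriers, IFFT to the time domain,
# cyclic prefix, then the reverse at the "receiver".
N_SUBCARRIERS = 64
CP_LEN = 16  # cyclic prefix guards against multipath echoes

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SUBCARRIERS)

# Map bit pairs to QPSK constellation points, one symbol per subcarrier.
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

# Transmitter: inverse FFT turns the subcarrier symbols into a time signal.
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-CP_LEN:], time_signal])  # prepend CP

# Receiver: drop the cyclic prefix and transform back to subcarriers.
rx_symbols = np.fft.fft(tx[CP_LEN:])

assert np.allclose(rx_symbols, symbols)  # ideal channel: perfect recovery
```

In a real system the channel would distort each subcarrier, but because the subcarriers are orthogonal the distortion can be corrected per subcarrier, which is the property that makes OFDM attractive for wideband links.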
E-commerce: History and development of e-commerce The idea of e-commerce can be traced back to the 1960s with the development of Electronic Data Interchange, which enabled data exchange through digital transactions without human interaction. Early forms of e-commerce date to Michael Aldrich, who in 1979 connected a TV to a transaction-processing computer using a telephone line, calling it teleshopping. In 1992, Book Stacks Unlimited became the first store to host an e-commerce site, and additional companies like Dell followed suit, adopting related e-commerce models. The development of Secure Sockets Layer (SSL), an encryption protocol, provided better security for data transmission over the internet, giving online shoppers fewer hesitations and concerns while shopping online. E-commerce: Impact on the retail industry E-commerce was a stepping stone toward cutting down costs for customers and letting businesses bring in more profit. A traditional business selling shirts would have to go through warehouses and distributors before the product ends up in the store, all of which adds costs for the business. E-commerce: Access to the Internet allowed individuals to do their shopping on their computers or phones. The change that e-commerce brought to the shopping industry had negative impacts on the retail sector. Businesses would close down their stores to cut the cost of selling their products, which led to employees losing their jobs, since businesses found it easier and cheaper to sell their products online. With the decline in physical retail stores, customers are not able to go to a store and try a product before buying it; this creates a bigger hassle when a new item does not fit or work and has to be shipped back. E-commerce: Role of e-commerce in the 21st century Amazon.com, created by Jeff Bezos, started out as a bookstore; today, it offers many different products to users of its website. Amazon.com is known for its Prime service, with 2-day delivery, which attracted shoppers because they could get many kinds of items delivered to their doorstep. In some cases, groceries and household supplies could also be delivered through Amazon.com, making it easy for shoppers since they would not have to leave their homes. E-commerce: Brands like Nike and Adidas promote their Cyber Monday deals to encourage people to buy their products online without having to deal with the long lines of Black Friday shopping. Smaller businesses were able to sell their products online and promote them using internet advertisements. E-commerce: The creation of applications like Shopify allows users to develop their own websites to sell their products and build a reputable brand with the help of tutorials and instructions. The development of PayPal and other payment methods gave customers an easier and faster way of paying by signing into their accounts. E-commerce: 5G and online shopping Augmented reality, virtual reality, and 5G networks have given rise to revolutionary online shopping practices. By using AR to achieve a hyper-realistic virtual presentation of the physical world, online shopping stores have immersed their consumers in the digital future of trying and buying products online. Facebook has also started to test AR advertisements on its platform, and has even collaborated with businesses to advertise using AR on Facebook Messenger since 2018.
These ads are unique due to their “tap and try” feature, with companies virtually demonstrating their product or service to prospective buyers with Facebook as the middleman. E-commerce: The use of 5G allows brands to utilize big data for hyper-personalized advertising. With consumers using the internet as often as they do, high volumes of data allow brands to micro-segment their target audiences, a form of digital marketing not seen before. Internet Privacy: NSA disclosures In 2013, Edward Snowden, a former intelligence contractor for Booz Allen Hamilton in Hawaii, leaked classified documents from the National Security Agency (NSA) to journalists Glenn Greenwald and Laura Poitras. These documents revealed, among other things, that the NSA collected millions of Verizon customers' telephone records and used a program called Prism to access data from Internet companies such as Google and Facebook. As people became more aware of the mass surveillance being done by the NSA, Americans became more disapproving of the government's surveillance program, which had served as an anti-terrorism effort, and a majority of Americans came to believe that having control over who can access their personal and private information is important. Because of the information Snowden leaked, Internet privacy became an important concept for companies seeking to reassure their customers. Facebook is an example of one of these companies, as shown during Facebook's F8 developer conference in 2019, where Zuckerberg stated: "The future is private... Over time, I believe that a private social platform will be even more important to our lives than our digital town squares. So today, we’re going to start talking about what this could look like as a product, what it means to have your social experience be more intimate, and how we need to change the way we run this company in order to build this.” Internet Privacy: Facebook scandal In 2018, the Cambridge Analytica scandal brought to attention what Facebook was doing with its users' information. The scandal centered on the election and on how data harvested from Facebook users had been used for advertising. The well-known social media company was sued for the personal data breach, the case arguing that data had been taken without consent, a failure to comply with legal obligations under the Data Protection Act of 1998. Internet Privacy: The Cambridge Analytica scandal did not change Facebook so much as the way individuals see social media applications. Critics wanted Facebook to fix the privacy issues within its application, but Facebook did not take the steps they demanded. The scandal nevertheless opened the eyes of Facebook users, because they saw what was being done with the data they give to Facebook.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Affine plane (incidence geometry)** Affine plane (incidence geometry): In geometry, an affine plane is a system of points and lines that satisfy the following axioms: Any two distinct points lie on a unique line. Affine plane (incidence geometry): Given any line and any point not on that line there is a unique line which contains the point and does not meet the given line. (Playfair's axiom) There exist three non-collinear points (points not on a single line). In an affine plane, two lines are called parallel if they are equal or disjoint. Using this definition, Playfair's axiom above can be replaced by: Given a point and a line, there is a unique line which contains the point and is parallel to the line. Parallelism is an equivalence relation on the lines of an affine plane. Affine plane (incidence geometry): Since no concepts other than those involving the relationship between points and lines are involved in the axioms, an affine plane is an object of study belonging to incidence geometry. They are non-degenerate linear spaces satisfying Playfair's axiom. The familiar Euclidean plane is an affine plane. There are many finite and infinite affine planes. As well as affine planes over fields (and division rings), there are also many non-Desarguesian planes, not derived from coordinates in a division ring, satisfying these axioms. The Moulton plane is an example of one of these. Finite affine planes: If the number of points in an affine plane is finite, and one line of the plane contains n points, then: each line contains n points, each point is contained in n + 1 lines, there are n² points in all, and there is a total of n² + n lines. The number n is called the order of the affine plane. Finite affine planes: All known finite affine planes have orders that are prime or prime power integers. The smallest affine plane (of order 2) is obtained by removing a line and the three points on that line from the Fano plane. A similar construction, starting from the projective plane of order 3, produces the affine plane of order 3, sometimes called the Hesse configuration. An affine plane of order n exists if and only if a projective plane of order n exists (however, the definition of order in these two cases is not the same). Thus, there is no affine plane of order 6 or order 10, since there are no projective planes of those orders. The Bruck–Ryser–Chowla theorem provides further limitations on the order of a projective plane, and thus on the order of an affine plane. Finite affine planes: The n² + n lines of an affine plane of order n fall into n + 1 equivalence classes of n lines apiece under the equivalence relation of parallelism. These classes are called parallel classes of lines. The lines in any parallel class form a partition of the points of the affine plane. Each of the n + 1 lines that pass through a single point lies in a different parallel class. Finite affine planes: The parallel class structure of an affine plane of order n may be used to construct a set of n − 1 mutually orthogonal Latin squares. Only the incidence relations are needed for this construction. Relation with projective planes: An affine plane can be obtained from any projective plane by removing a line and all the points on it, and conversely any affine plane can be used to construct a projective plane by adding a line at infinity, each of whose points is the point at infinity where an equivalence class of parallel lines meets.
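As a consistency check on the point and line counts given above for a finite affine plane of order n, the standard counting argument can be written out; the following LaTeX fragment is a sketch, not a quotation from any particular source:

```latex
% Counting argument for a finite affine plane of order n.
% Fix a point P. Each of the n+1 lines through P contains n-1 further
% points, and every other point lies with P on exactly one line, so
% the points are counted exactly once. Each parallel class partitions
% the n^2 points into lines of n points, giving n lines per class;
% with n+1 classes this yields the line count.
\[
  |\mathcal{P}| \;=\; 1 + (n+1)(n-1) \;=\; n^{2},
  \qquad
  |\mathcal{L}| \;=\; (n+1)\,n \;=\; n^{2} + n .
\]
```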
Relation with projective planes: If the projective plane is non-Desarguesian, the removal of different lines could result in non-isomorphic affine planes. For instance, there are exactly four projective planes of order nine, and seven affine planes of order nine. There is only one affine plane corresponding to the Desarguesian plane of order nine, since the collineation group of that projective plane acts transitively on the lines of the plane. Each of the three non-Desarguesian planes of order nine has a collineation group with two orbits on the lines, producing two non-isomorphic affine planes of order nine, depending on which orbit the line to be removed is selected from. Affine translation planes: A line l in a projective plane Π is a translation line if the group of elations with axis l acts transitively on the points of the affine plane obtained by removing l from the plane Π. A projective plane with a translation line is called a translation plane, and the affine plane obtained by removing the translation line is called an affine translation plane. While in general it is often easier to work with projective planes, in this context the affine planes are preferred, and several authors simply use the term translation plane to mean affine translation plane. An alternate view of affine translation planes can be obtained as follows: Let V be a 2n-dimensional vector space over a field F. A spread of V is a set S of n-dimensional subspaces of V that partition the non-zero vectors of V. The members of S are called the components of the spread, and if Vi and Vj are distinct components then Vi ⊕ Vj = V. Let A be the incidence structure whose points are the vectors of V and whose lines are the cosets of components, that is, sets of the form v + U where v is a vector of V and U is a component of the spread S. Then: A is an affine plane, and the group of translations x → x + w for a vector w is an automorphism group acting regularly on the points of this plane. Generalization: k-nets: An incidence structure more general than a finite affine plane is a k-net of order n. This consists of n² points and nk lines such that: Parallelism (as defined in affine planes) is an equivalence relation on the set of lines. Every line has exactly n points, and every parallel class has n lines (so each parallel class of lines partitions the point set). There are k parallel classes of lines. Each point lies on exactly k lines, one from each parallel class. An (n + 1)-net of order n is precisely an affine plane of order n. A k-net of order n is equivalent to a set of k − 2 mutually orthogonal Latin squares of order n. Generalization: k-nets: Example: translation nets For an arbitrary field F, let Σ be a set of n-dimensional subspaces of the vector space F²ⁿ, any two of which intersect only in {0} (called a partial spread). The members of Σ, and their cosets in F²ⁿ, form the lines of a translation net on the points of F²ⁿ. If |Σ| = k, this is a k-net of order |Fⁿ|. Starting with an affine translation plane, any subset of the parallel classes will form a translation net. Generalization: k-nets: Given a translation net, it is not always possible to add parallel classes to the net to form an affine plane. However, if F is an infinite field, any partial spread Σ with fewer than |F| members can be extended and the translation net can be completed to an affine translation plane.
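The finite-plane counts and parallel-class structure described above are easy to verify computationally in a special case. The sketch below builds the Desarguesian plane AG(2, p) over the integers modulo a prime p; this is an illustrative choice, since non-Desarguesian planes require a different construction:

```python
def affine_plane(p: int):
    """AG(2, p) over the integers mod p: points, and lines labeled by slope."""
    points = [(x, y) for x in range(p) for y in range(p)]
    lines = []  # entries are (slope_label, frozenset_of_points)
    for m in range(p):
        for b in range(p):  # non-vertical lines y = m*x + b
            lines.append((m, frozenset((x, (m * x + b) % p) for x in range(p))))
    for c in range(p):      # vertical lines x = c
        lines.append(("inf", frozenset((c, y) for y in range(p))))
    return points, lines

p = 3  # any prime works here
points, lines = affine_plane(p)
assert len(points) == p ** 2                        # n^2 points
assert len(lines) == p ** 2 + p                     # n^2 + n lines
assert all(len(l) == p for _, l in lines)           # n points per line
assert len({slope for slope, _ in lines}) == p + 1  # n + 1 parallel classes

# Lines of the same slope are disjoint (parallel); different slopes meet once.
for (m1, l1) in lines:
    for (m2, l2) in lines:
        if l1 != l2:
            assert len(l1 & l2) == (0 if m1 == m2 else 1)
```

Here each slope value (including "inf" for the vertical lines) is exactly one parallel class, matching the k-net description: p + 1 classes of p mutually disjoint lines each.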
Geometric codes: Given the "line/point" incidence matrix M of any finite incidence structure and any field F, the row space of M over F is a linear code that we can denote by C = C_F(M). Another related code that contains information about the incidence structure is the hull of C, which is defined as Hull(C) = C ∩ C⊥, where C⊥ is the orthogonal code to C. Geometric codes: Not much can be said about these codes at this level of generality, but if the incidence structure has some "regularity" the codes produced this way can be analyzed, and information about the codes and the incidence structures can be gleaned from each other. When the incidence structure is a finite affine plane, the codes belong to a class of codes known as geometric codes. How much information the code carries about the affine plane depends in part on the choice of field. If the characteristic of the field does not divide the order of the plane, the code generated is the full space and does not carry any information. On the other hand, if π is an affine plane of order n and F is a field of characteristic p, where p divides n, then the minimum weight of the code B = Hull(C_F(π))⊥ is n, and all the minimum weight vectors are constant multiples of vectors whose entries are either zero or one. Furthermore, if π is an affine plane of order p and F is a field of characteristic p, then C = Hull(C_F(π))⊥ and the minimum weight vectors are precisely the scalar multiples of the (incidence vectors of) lines of π. When π = AG(2, q), the geometric code generated is the q-ary Reed–Muller code. Affine spaces: Affine spaces can be defined in an analogous manner to the construction of affine planes from projective planes. It is also possible to provide a system of axioms for the higher-dimensional affine spaces which does not refer to the corresponding projective space.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rectified 7-simplexes** Rectified 7-simplexes: In seven-dimensional geometry, a rectified 7-simplex is a convex uniform 7-polytope, being a rectification of the regular 7-simplex. There are four unique degrees of rectification, including the zeroth, the 7-simplex itself. Vertices of the rectified 7-simplex are located at the edge centers of the 7-simplex. Vertices of the birectified 7-simplex are located in the triangular face centers of the 7-simplex. Vertices of the trirectified 7-simplex are located in the tetrahedral cell centers of the 7-simplex. Rectified 7-simplex: The rectified 7-simplex is the edge figure of the 2₅₁ honeycomb. It is called 0₅,₁ for its branching Coxeter–Dynkin diagram. E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as S¹₇. Alternate names Rectified octaexon (acronym: roc) (Jonathan Bowers) Coordinates The vertices of the rectified 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,0,0,0,1,1). This construction is based on facets of the rectified 8-orthoplex. Birectified 7-simplex: E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as S²₇. It is also called 0₄,₂ for its branching Coxeter–Dynkin diagram. Alternate names Birectified octaexon (acronym: broc) (Jonathan Bowers) Coordinates The vertices of the birectified 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,0,0,1,1,1). This construction is based on facets of the birectified 8-orthoplex. Trirectified 7-simplex: The trirectified 7-simplex is the intersection of two regular 7-simplexes in dual configuration. E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as S³₇. This polytope is the vertex figure of the 1₃₃ honeycomb. It is called 0₃,₃ for its branching Coxeter–Dynkin diagram. Alternate names Hexadecaexon (acronym: he) (Jonathan Bowers) Coordinates The vertices of the trirectified 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,0,1,1,1,1). This construction is based on facets of the trirectified 8-orthoplex. The dual-configuration characterization yields another set of simple coordinates for the vertices of a trirectified 7-simplex in 8-space: the 70 distinct permutations of (1,1,1,1,−1,−1,−1,−1). Related polytopes: These polytopes are three of the 71 uniform 7-polytopes with A₇ symmetry.
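The permutation coordinates given above also determine the vertex counts, which a few lines of combinatorics can confirm; this is a sketch, and the counts are simply binomial coefficients:

```python
from itertools import permutations
from math import comb

# Count distinct coordinate permutations for each rectification degree.
for ones, name in [(2, "rectified"), (3, "birectified"), (4, "trirectified")]:
    coords = (0,) * (8 - ones) + (1,) * ones
    n_vertices = len(set(permutations(coords)))
    assert n_vertices == comb(8, ones)  # choose the positions of the 1s
    print(f"{name} 7-simplex: {n_vertices} vertices")

# The (1,1,1,1,-1,-1,-1,-1) form for the trirectified case gives the same 70:
assert len(set(permutations((1,) * 4 + (-1,) * 4))) == 70
```

This prints 28, 56, and 70 vertices respectively, consistent with C(8,2), C(8,3), and C(8,4).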
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hedgehog's dilemma** Hedgehog's dilemma: The hedgehog's dilemma, or sometimes the porcupine dilemma, is a metaphor about the challenges of human intimacy. It describes a situation in which a group of hedgehogs seek to move close to one another to share heat during cold weather. They must remain apart, however, as they cannot avoid hurting one another with their sharp spines. Though they all share the intention of a close reciprocal relationship, this may not occur, for reasons they cannot avoid. Hedgehog's dilemma: Arthur Schopenhauer conceived this metaphor to describe what he considers to be the state of the individual in relation to others in society. The hedgehog's dilemma suggests that despite goodwill, human intimacy cannot occur without the risk of substantial mutual harm, and what results is cautious behavior and weak relationships. With the hedgehog's dilemma, one is recommended to use moderation in affairs with others both because of self-interest, as well as out of consideration for others. The hedgehog's dilemma is used to explain self-imposed isolation. Schopenhauer: The concept originates in the following parable from the German philosopher Arthur Schopenhauer's Parerga and Paralipomena, Volume II, Chapter XXXI, Section 396:One cold winter's day, a number of porcupines huddled together quite closely in order through their mutual warmth to prevent themselves from being frozen. But they soon felt the effect of their quills on one another, which made them again move apart. Now when the need for warmth once more brought them together, the drawback of the quills was repeated so that they were tossed between two evils, until they had discovered the proper distance from which they could best tolerate one another. Thus the need for society which springs from the emptiness and monotony of men's lives, drives them together; but their many unpleasant and repulsive qualities and insufferable drawbacks once more drive them apart. The mean distance which they finally discover, and which enables them to endure being together, is politeness and good manners. Whoever does not keep to this, is told in England to 'keep his distance.' By virtue thereof, it is true that the need for mutual warmth will be only imperfectly satisfied, but on the other hand, the prick of the quills will not be felt. Yet whoever has a great deal of internal warmth of his own will prefer to keep away from society in order to avoid giving or receiving trouble or annoyance. Freud: It entered the realm of psychology after the tale was discovered and adopted by Sigmund Freud. Schopenhauer's tale was quoted by Freud in a footnote to his 1921 work Group Psychology and the Analysis of the Ego (German: Massenpsychologie und Ich-Analyse). Freud stated, of his trip to the United States in 1909: "I am going to the USA to catch sight of a wild porcupine and to give some lectures." Social psychological research: The dilemma has received empirical attention within the contemporary psychological sciences. Jon Maner and his colleagues (Nathan DeWall, Roy Baumeister, and Mark Schaller) referred to Schopenhauer's "porcupine problem" when interpreting results from experiments examining how people respond to ostracism. The study showed that participants who experienced social exclusion were more likely to seek out new social bonds with others. 
In popular culture: The parable of the hedgehog's dilemma was referenced in the anime series Neon Genesis Evangelion, especially in its fourth episode of the same name. The award-winning short film Henry is a modernist version of the hedgehog's dilemma: in this story, the hedgehog eventually finds social comfort through a turtle, that is, a fellow social creature who is invulnerable to the hedgehog's spines. In the context of the original dilemma, this can be taken to represent the need for variability in human social preferences. In popular culture: The Japanese Vocaloid song Harinezumi by Tota Kasamura is about the hedgehog's dilemma.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Compensatory tracking task** Compensatory tracking task: A compensatory tracking task is a task that assesses eye–hand coordination, in which a user operates a display that has an indicator and a zero point, using a joystick, computer mouse, trackball, or other controlling device. The user must try to keep the indicator within the zero point while the indicator is being acted upon by outside forces. Early versions of compensatory tracking tasks included a display made of a cathode ray oscilloscope with a rack and pinion connected to a knob that controlled the indicator. The zero point would be displayed on the cathode ray tube. The participant would turn the knob in order to keep the indicator within the zero point. Time within the zero point, and distance from it, are measured to determine the participant's ability to control the indicator. The early versions of this test were used to help develop better controls. Control modulators such as springs, generators, and electromagnets were used to increase the difficulty of the task. Compensatory tracking task: More recently, compensatory tracking tasks have been used to gauge alertness. This is done using a computer monitor and a simulation controlled by a mouse or trackball. Participants use the mouse to keep the indicator within a target which acts as the zero point. Time within the zero point and distance from the zero point are once again measured. Notable versions of the compensatory tracking task are COMPTRACK and the PEBL compensatory tracking task.
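As an illustration of the two measures described above (time within the zero point and distance from it), here is a minimal scoring sketch; the sample positions, sampling rate, and target radius are invented for the example, and this is not the actual scoring code of COMPTRACK or the PEBL task:

```python
import math

def tracking_scores(samples, target_radius, dt):
    """samples: (x, y) indicator positions relative to the zero point,
    taken every dt seconds. Returns (time on target, RMS distance)."""
    dists = [math.hypot(x, y) for x, y in samples]
    time_on_target = sum(dt for d in dists if d <= target_radius)
    rms_distance = math.sqrt(sum(d * d for d in dists) / len(dists))
    return time_on_target, rms_distance

# Hypothetical 1-second trial sampled at 10 Hz with a 5-pixel target radius.
trial = [(0, 1), (2, 2), (6, 0), (3, -4), (1, 0)] * 2
on_target, rms = tracking_scores(trial, target_radius=5.0, dt=0.1)
print(f"time on target: {on_target:.1f} s, RMS distance: {rms:.2f} px")
```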
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Telescoping (rail cars)** Telescoping (rail cars): In a railway accident, telescoping occurs when the underframe of one vehicle overrides that of another and smashes through the second vehicle's body. The term is derived from the resulting appearance of the two vehicle bodies: the body of one vehicle may appear to be slid inside the other like the tubes of a collapsible telescope – the body sides, roof and underframe of the latter vehicle being forced apart from each other. Telescoping often results in heavy fatalities if the cars telescoped are fully occupied. The car riding on top will often be destroyed by the structure of the car below, crushing those on board (although the physics of the incident may reverse the cars' roles). The chances of telescoping can be reduced by use of anticlimbers and other structural systems which direct crash energy and debris away from the passenger and crew areas. One such energy-absorbing system is the Green Buffer, winner of the 2023 Swedish Steel Prize, in which a collapsing steel structure in the buffers dissipates energy similarly to the crumple zones used in the automotive industry. Telescoping (rail cars): Accidents where telescoping occurred are numerous. To reduce the chance of telescoping, rail and tramway vehicles are often provided with an anticlimber: a horizontally ridged plate at the end of the chassis, which in a collision will engage with the anticlimber on the next car.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Diffraction efficiency** Diffraction efficiency: Diffraction efficiency is the performance of diffractive optical elements – especially diffraction gratings – in terms of power throughput. It is a measure of how much optical power is diffracted into a designated direction compared to the power incident onto the diffractive element or grating. If the diffracted power is designated P and the incident power P₀, the efficiency η reads η = P / P₀. Grating efficiency: In the most common case – the diffraction efficiency of optical gratings (therefore also called grating efficiency) – there are two possibilities for specifying efficiency: The absolute efficiency is defined as above and relates the power diffracted into a particular order to the incident power. The relative efficiency relates the power diffracted into a particular order to the power that would be reflected by a mirror with the same coating as the grating, thereby accounting for the inevitable reflection losses at the grating that are not caused by inefficient diffraction itself.
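A small numeric illustration of the absolute and relative definitions (all power values are invented for the example):

```python
# Hypothetical grating measurement (all powers in milliwatts).
p_incident = 100.0  # power hitting the grating
p_order1 = 62.0     # power diffracted into the first order
p_mirror = 90.0     # power a mirror with the same coating would reflect

absolute_efficiency = p_order1 / p_incident  # eta = P / P0            -> 0.62
relative_efficiency = p_order1 / p_mirror    # reflection-loss corrected -> ~0.69

print(f"absolute: {absolute_efficiency:.2f}, relative: {relative_efficiency:.2f}")
```

The relative figure is always at least as large as the absolute one, since it excludes the coating's reflection losses from the comparison.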
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Midnight sun** Midnight sun: Midnight sun is a natural phenomenon that occurs in the summer months in places north of the Arctic Circle or south of the Antarctic Circle, when the Sun remains visible at the local midnight. When midnight sun is seen in the Arctic, the Sun appears to move from left to right, but in Antarctica, the equivalent apparent motion is from right to left. This occurs at latitudes from 65°44' to 90° north or south, and does not stop exactly at the Arctic Circle or the Antarctic Circle, due to refraction. Midnight sun: The opposite phenomenon, polar night, occurs in winter, when the Sun stays below the horizon throughout the day. Details: Around the summer solstice (approximately 21 June in the Northern Hemisphere and 21 December in the Southern Hemisphere), in certain areas the Sun does not set below the horizon within a 24-hour period. Geography: Because there are no permanent human settlements south of the Antarctic Circle, apart from research stations, the countries and territories whose populations experience midnight sun are limited to those crossed by the Arctic Circle: Canada (Yukon, Nunavut, and Northwest Territories), Finland, Greenland, Iceland, Norway, Russia, Sweden, and the United States (state of Alaska). The largest city in the world north of the Arctic Circle, Murmansk, Russia, experiences midnight sun from 22 May to 22 July (62 days). Geography: A quarter of Finland's territory lies north of the Arctic Circle, and at the country's northernmost point the Sun does not set at all for 72 days during summer. In Svalbard, Norway, the northernmost inhabited region of Europe, there is no sunset from approximately 19 April to 23 August. The extreme sites are the poles, where the Sun can be continuously visible for half the year. The North Pole has midnight sun for 6 months, from late March to late September. Geography: Polar circle proximity Because of atmospheric refraction, and also because the Sun is a disc rather than a point in the sky, midnight sun may be experienced at latitudes slightly south of the Arctic Circle or north of the Antarctic Circle, though not exceeding one degree (depending on local conditions). For example, Iceland is known for its midnight sun, even though most of it (Grímsey is the exception) is slightly south of the Arctic Circle. For the same reasons, the period of sunlight at the poles is slightly longer than six months. Even the northern extremities of the United Kingdom (and places at similar latitudes, such as Saint Petersburg) experience twilight throughout the night in the northern sky at around the summer solstice. Geography: Places sufficiently close to the poles, such as Alert, Nunavut, experience times when it does not get entirely dark at night yet the Sun does not rise either, combining effects of midnight sun and polar night, reaching civil twilight during the "day" and astronomical twilight at "night". Geography: White nights Locations where the Sun remains less than 6 (or 7) degrees below the horizon – between 60° 34’ (or 59° 34’) latitude and the polar circle – experience midnight civil twilight instead of midnight sun, so that daytime activities, such as reading, are still possible without artificial light on a clear night. This happens around both the Northern Hemisphere summer solstice and the Southern Hemisphere summer solstice. The lowest latitude to experience midnight sun without a golden hour is 72°33′43″ North or South.
Geography: White Nights have become a common symbol of Saint Petersburg, Russia, where they occur from about 11 June to 1 July, and the last 10 days of June are celebrated with cultural events known as the White Nights Festival. The northernmost tip of Antarctica also experiences white nights near the Southern Hemisphere summer solstice. Explanation: Since the axial tilt of Earth is considerable (23 degrees, 26 minutes, 21.41196 seconds), at high latitudes the Sun does not set in summer; rather, it remains continuously visible for one day during the summer solstice at the polar circle, for several weeks only 100 km (62 mi) closer to the pole, and for six months at the pole. At extreme latitudes, midnight sun is usually referred to as polar day. Explanation: At the poles themselves, the Sun rises once and sets once each year, at the equinoxes. During the six months that the Sun is above the horizon, it spends the days appearing to continuously move in circles around the observer, gradually spiralling higher and reaching its highest circuit of the sky at the summer solstice. Time zones and daylight saving time: The term "midnight sun" refers to the consecutive 24-hour periods of sunlight experienced north of the Arctic Circle and south of the Antarctic Circle. Other phenomena are sometimes referred to as "midnight sun", but they are caused by time zones and the observance of daylight saving time. For instance, in Fairbanks, Alaska, which is south of the Arctic Circle, the Sun sets at 12:47 a.m. at the summer solstice. This is because Fairbanks is 51 minutes ahead of its idealized time zone (as most of the state is in one time zone) and Alaska observes daylight saving time. (Fairbanks is at about 147.72 degrees west, corresponding to UTC−9 hours 51 minutes, and is on UTC−9 in winter.) This means that solar culmination occurs at about 12:51 p.m. instead of at 12 noon. Time zones and daylight saving time: If a precise moment for the genuine "midnight sun" is required, the observer's longitude, the local civil time, and the equation of time must be taken into account. The moment of the Sun's closest approach to the horizon coincides with its passing due north at the observer's position, which in general occurs only approximately at midnight. Each degree of longitude east of the Greenwich meridian makes the vital moment exactly 4 minutes earlier than midnight as shown on the clock, while each hour that the local civil time is ahead of coordinated universal time (UTC, also known as GMT) makes the moment an hour later. These two effects must be added. Furthermore, the equation of time (which depends on the date) must be taken into account: a positive value on a given date means that the Sun is running slightly ahead of its average position, so the value must be subtracted. As an example, at the North Cape of Norway at midnight on June 21/22, the longitude of 25.8 degrees east makes the moment 103.2 minutes earlier by clock time; but the local time, 2 hours ahead of GMT in the summer, makes it 120 minutes later by clock time. The equation of time at that date is −2.0 minutes. Therefore, the Sun's lowest elevation occurs 120 − 103.2 + 2.0 minutes after midnight: at 00:19 Central European Summer Time. On other nearby dates the only thing different is the equation of time, so this remains a reasonable estimate for a considerable period. The Sun's altitude remains within half a degree of its minimum of about 5 degrees for about 45 minutes either side of this time.
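The North Cape calculation above is plain arithmetic, which the sketch below reproduces; in practice the equation-of-time value would come from an almanac or an astronomy library:

```python
def solar_midnight_clock_minutes(longitude_east_deg: float,
                                 utc_offset_hours: float,
                                 equation_of_time_min: float) -> float:
    """Minutes after local clock midnight at which the Sun is lowest.
    Longitude east shifts the moment earlier by 4 min/degree; being ahead
    of UTC shifts it later; a positive equation of time is subtracted."""
    return (utc_offset_hours * 60.0
            - 4.0 * longitude_east_deg
            - equation_of_time_min)

# North Cape around June 21/22: 25.8 deg E, UTC+2 (summer time), EoT = -2.0 min.
minutes = solar_midnight_clock_minutes(25.8, 2.0, -2.0)
print(f"lowest Sun at about 00:{round(minutes):02d}")  # -> about 00:19
```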
Time zones and daylight saving time: Because Earth's axis is tilted about 23.5 degrees with respect to its orbital plane, the Northern Hemisphere is inclined toward the Sun from May to July, and the length of the day increases at high latitudes. In the parts of Norway that lie within the Arctic region, the Sun at this time of year hardly sinks at all, and night does not fall in Norway's Hammerfest during this period. Duration: The number of days per year with potential midnight sun increases the closer one goes toward either pole. Although approximately defined by the polar circles, in practice midnight sun can be seen as much as 90 km (56 mi) outside the polar circle, as described below, and the exact latitudes of the farthest reaches of midnight sun depend on topography and vary slightly from year to year. Duration: Even though at the Arctic Circle the center of the Sun is, by definition and without refraction by the atmosphere, only visible during one summer night, some part of the midnight sun is visible at the Arctic Circle from approximately 12 June until 1 July. This period extends as one travels north: at Cape Nordkinn, Norway, the northernmost point of continental Europe, midnight sun lasts approximately from 14 May to 29 July. On the Svalbard archipelago farther north, it lasts from 20 April to 22 August. Duration: Southern and Northern poles The periods of polar day and polar night are unequal in the two polar regions because the Earth is at perihelion in early January and at aphelion in early July. As a result, the polar day is longer than the polar night in the Northern Hemisphere (at Utqiagvik, Alaska, for example, polar day lasts 84 days, while polar night lasts only 68 days), while in the Southern Hemisphere the situation is the reverse: the polar night is longer than the polar day. Duration: Observers at heights appreciably above sea level can experience extended periods of midnight sun as a result of the "dip" of the horizon viewed from altitude.
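The horizon "dip" mentioned above can be estimated with the standard navigator's approximation, dip ≈ 1.76·√h arcminutes for an eye height of h metres (typical refraction included); the observation height below is an invented example:

```python
import math

def horizon_dip_arcmin(height_m: float) -> float:
    """Approximate dip of the sea horizon, refraction included.
    Standard rule of thumb: dip ~ 1.76 * sqrt(h) arcminutes."""
    return 1.76 * math.sqrt(height_m)

# An observer on a 300 m coastal cliff sees the horizon depressed by ~30',
# roughly one solar diameter, slightly extending the midnight-sun season.
print(f"dip: {horizon_dip_arcmin(300.0):.1f} arcminutes")
```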
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Why Zebras Don't Get Ulcers** Why Zebras Don't Get Ulcers: Why Zebras Don't Get Ulcers is a 1994 (2nd ed. 1998, 3rd ed. 2004) book by Stanford University biologist Robert M. Sapolsky. The book describes itself as a "Guide to Stress, Stress-Related Diseases, and Coping" on the front cover of its third edition. Background and synopsis: The title derives from Sapolsky's premise that for animals such as zebras, stress is generally episodic (e.g., running away from a lion), while for humans, stress is often chronic (e.g., worrying about losing one's job). Therefore, many wild animals are less susceptible than humans to chronic stress-related disorders such as ulcers, hypertension, decreased neurogenesis and increased hippocampal neuronal atrophy. However, chronic stress occurs in some social primates (Sapolsky studies baboons) for individuals on the lower side of the social dominance hierarchy. Background and synopsis: Sapolsky focuses on the effects of glucocorticoids on the human body, arguing that such hormones may be useful to animals in the wild escaping their predators (see fight-or-flight response), but that when secreted in high quantities or over long periods of time, their effects on humans are much less desirable. Sapolsky relates the history of endocrinology, how the field reacted at times of discovery, and how it has changed through the years. While most of the book focuses on the biological machinery of the body, the last chapter of the book focuses on self-help. Background and synopsis: Why Zebras Don't Get Ulcers argues that social phenomena such as child abuse and the chronic stress of poverty affect biological stress, leading to increased risk of disease and disability. Reception: The book received mostly positive reviews. Kirkus Reviews called it an "entertaining explanation of how stress affects the body and what we can do to counteract its effects." Barry Keverne wrote in a review for New Scientist: "Everyone can benefit from reading Why Zebras Don't Get Ulcers and gain insights into the workings of the body and mind, and why some of us are more vulnerable than others to stress-related illness."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Yoast SEO** Yoast SEO: Yoast SEO is a search engine optimization (SEO) plug-in for WordPress. The plugin has over 5 million active installations and has been downloaded more than 350 million times, with over 25,000 five-star reviews on WordPress.org. History: Yoast created its first WordPress SEO plugin in 2007, originally named WordPress SEO; it was developed as a WordPress plugin by SEO consultant Joost de Valk. In 2012, the plug-in was renamed Yoast SEO. In 2015, Yoast held the first YoastCon conference, at the Lindenberg Nijmegen Culture House in Nijmegen, Netherlands. Also in 2015, a security consultant discovered a flaw in version 1.7.3.3 and earlier versions that could have left users of Yoast SEO open to hackers. Company: Yoast SEO can trace its origins to 2005, when Joost de Valk launched a website named "joostdevalk.nl". After moving to and eventually selling the domain "css3.info", de Valk created the Yoast platform in 2009, launched the first version of WordPress SEO in 2010 and founded the company Yoast BV in 2010. Initially, Yoast focused on SEO consultancy and developed both the Yoast SEO plugin and a Google Analytics plugin, both for WordPress. In 2012, a premium version of the plug-in was launched. In April 2016, Yoast BV sold the Google Analytics for WordPress plugin. In 2018, Yoast had a total turnover of €10 million. According to Yoast, as of September 2018 they had almost 100 employees, of which 85 were based at their HQ in Wijchen, Netherlands. In June 2020, Yoast acquired the Duplicate Post plugin, which had over 3 million users. The original developer of Duplicate Post, Enrico Battocchi, joined Yoast as a senior developer and remains one of the leading developers on the plugin. Yoast was acquired by Newfold Digital (the company that owns the hosting provider Bluehost) in August 2021. Reception: The software runs on more than 12 million sites and on 16.2% of the top 1 million sites in the world. On WordPress alone, it has amassed over five million downloads. Michael David, the author of the book WordPress Search Engine Optimization (2015), referred to it as "the granddaddy of all SEO plugins". Brian Santo, editor of EE Times, uses Yoast for estimating the ranking of articles on Google by using its analysis results (e.g. keyphrase, keyword density, links, readability), but criticizes the negative effects SEO has had on journalism and suggests Google use more human or artificial intelligence to improve search. Sponsorship: In September 2020, Yoast announced it had become the main sponsor of the professional basketball club Yoast United, which plays in the BNXT League.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eye injury** Eye injury: Physical or chemical injuries of the eye can be a serious threat to vision if not treated appropriately and in a timely fashion. The most obvious presentation of ocular (eye) injuries is redness and pain of the affected eyes. This is not, however, universally true, as tiny metallic projectiles may cause neither symptom. Tiny metallic projectiles should be suspected when a patient reports metal-on-metal contact, such as with hammering a metal surface. Corneal foreign body is one of the most common preventable occupational hazards. Intraocular foreign bodies do not cause pain because of the lack of nerve endings in the vitreous humour and retina that can transmit pain sensations. As such, general or emergency department doctors should refer cases involving the posterior segment of the eye or intraocular foreign bodies to an ophthalmologist. Ideally, ointment would not be used when referring to an ophthalmologist, since it diminishes the ability to carry out a thorough eye examination. Eye injury: Flicking sand, flying pieces of wood, metal, glass and stone are notorious for causing much of the eye trauma. Sporting projectiles such as cricket balls, lawn tennis balls, squash balls, shuttlecocks, and other high-speed flying objects can strike the eye. The eye is also susceptible to blunt trauma in a fistfight. Children's games such as bows-and-arrows, BB guns and firecrackers can lead to eye trauma. Victims of road traffic accidents (RTAs) with head and facial trauma may also have an eye injury; these are usually severe in nature, with multiple lacerations, shards of glass embedded in tissues, orbital fractures, severe hematoma and penetrating open-globe injuries with prolapse of eye contents. Other causes of intraocular trauma may arise from workplace tools or even common household implements, including bottle caps suddenly propelled with great force. About 5.3 million cases of foreign bodies in the eyes occurred in 2013. Presentation: Complications Multiple complications are known to occur following eye injury: corneal scarring, hyphema, iridodialysis, post-traumatic glaucoma, uveitis, cataract, vitreous hemorrhage and retinal detachment. The risk of complications is high with retinal tears, penetrating injuries and severe blunt trauma. Diagnosis: The goal of investigation is the assessment of the severity of the ocular injury, with an eye to implementing a management plan as soon as required. The usual eye examination should be attempted, and may require a topical anesthetic in order to be tolerable. Many topical agents cause burning upon instillation. Proxymetacaine has been found to have the best tolerance. Depending on the medical history and preliminary examination, the primary care physician should designate the eye injury as a true emergency, urgent or semi-urgent. Diagnosis: Classification Based on the injury to the eyewall (the outer fibrous coat of the eye, consisting of cornea and sclera): Closed globe injury: the eye globe is intact, but the seven rings of the eye have been classically described as affected by blunt trauma. Types include contusion and lamellar laceration. Open globe injury: there is a full-thickness injury of the eye wall (cornea and sclera). It includes: A) Globe rupture: caused by blunt trauma; an inside-out injury.
Diagnosis: B) Globe laceration: a full-thickness wound caused by sharp objects. It includes: 1) Penetrating trauma: the globe integrity is disrupted by a full-thickness entry wound and may be associated with prolapse of the internal contents of the eye. Such injuries are often referred to as a globe fracture or globe rupture, although these can be incurred by blunt trauma as well. Diagnosis: 2) Perforating trauma: the globe integrity is disrupted in two places, due to an entrance and an exit wound (a through-and-through injury). This is quite a severe type of eye injury. Other types include: Blowout fracture of the orbit: caused by blunt trauma, classically described for fist or ball injury, leading to fracture of the floor or medial wall of the orbit due to sudden increased pressure on the orbital contents. Muscular entrapment: fracture of the orbital bones can lead to muscular entrapment, limiting gaze in one direction. Emergency: An emergency must be treated within minutes. This includes chemical burns of both the conjunctiva and cornea. Urgent: An urgent case must be treated within hours. This includes penetrating globe injuries; corneal abrasions or corneal foreign bodies; hyphema (must be referred); eyelid lacerations that are deep, involve the lid margin or involve the lacrimal canaliculi; radiant energy burns such as arc eye (welder's burn) or snow blindness; or, rarely, traumatic optic neuropathy. Semi-urgent: Semi-urgent cases must be managed within 1–2 days. They include orbital fractures and subconjunctival hemorrhages. Management: Irrigation The first line of management for chemical injuries is usually copious irrigation of the eye with an isotonic saline or sterile water. In cases of chemical burns, one should not try to buffer the solution, but instead dilute it with copious flushing. Management: Patching Depending on the type of ocular injury, either a pressure patch or a shield patch should be applied. Up until circa 1987, pressure patches were the preferred method of treatment for corneal abrasions in non-contact-lens wearers; multiple controlled studies conducted by accredited organizations such as the American Academy of Ophthalmology have since shown that pressure patching is of little or no value in healing corneal abrasions and is actually detrimental to healing in some cases. A Cochrane review found that patching simple corneal abrasions may not improve healing or reduce pain. Pressure patching should never be used on an individual presenting with a corneal abrasion who has a history of contact lens wear. In this circumstance, a virulent infection caused by the bacterium Pseudomonas aeruginosa is at clearly increased risk of occurring. These infections can cause blindness within 24–48 hours, and there is a possibility that the infection can move into the peri-orbital socket, resulting in the need for evisceration of the eyeball. In rare cases, the infection can enter the brain and cause the death of the patient. Management: In cases of globe penetration, pressure patches should never be applied; instead, a shield patch should be applied that protects the eye without applying any pressure. If a shield patch is applied to one eye, the other eye should also be patched due to eye movement: if the uninjured eye moves, the injured eye will also move involuntarily, possibly causing more damage. Management: Suturing In cases of eyelid laceration, sutures may be a part of appropriate management by the primary care physician, so long as the laceration does not threaten the canaliculi, is not deep, and does not affect the lid margins. Epidemiology: A recent study estimated that from 2002 to 2003 there were 27,152 injuries in the United States related to the wearing of eyeglasses.
The same study concluded that sports-related injuries from wearing eyeglasses were more common in those under the age of 18, and that fall-related injuries from wearing eyeglasses were more common in those aged 65 and over. Although eyeglasses-related injuries do occur, prescription eyeglasses and non-prescription sunglasses have been found to "offer measurable protection which results in a lower incidence of severe eye injuries to those wearing [them]".
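The emergency/urgent/semi-urgent triage described above is essentially a lookup from clinical finding to treatment window. The sketch below restates that mapping in Python purely as an illustration of the classification; it is not clinical software, and the entries simply mirror the examples given in the text.

```python
# Triage categories and treatment windows as described in the text above.
# Purely illustrative; not a clinical decision tool.
TRIAGE = {
    "chemical burn of conjunctiva or cornea": ("emergency", "treat within minutes"),
    "penetrating globe injury": ("urgent", "treat within hours"),
    "corneal abrasion or corneal foreign body": ("urgent", "treat within hours"),
    "hyphema": ("urgent", "treat within hours; must be referred"),
    "deep or margin-involving eyelid laceration": ("urgent", "treat within hours"),
    "radiant energy burn (arc eye, snow blindness)": ("urgent", "treat within hours"),
    "traumatic optic neuropathy": ("urgent", "treat within hours"),
    "orbital fracture": ("semi-urgent", "manage within 1-2 days"),
    "subconjunctival hemorrhage": ("semi-urgent", "manage within 1-2 days"),
}

for finding, (category, window) in TRIAGE.items():
    print(f"{finding}: {category} ({window})")
```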
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heteroresistance** Heteroresistance: Heteroresistance is a phenotype in which a bacterial isolate contains sub-populations of cells with increased antibiotic resistance compared with the susceptible main population. This phenomenon is known to be highly prevalent across several antibiotic classes and bacterial species, and it is associated with treatment failure through the enrichment of low-frequency resistant subpopulations in the presence of antibiotics. Heteroresistance is known to be highly unstable, meaning that the resistant sub-population can revert to susceptibility within a limited number of generations of growth in the absence of antibiotic. Because heteroresistant subpopulations are unstable and transient, their detection often eludes conventional minimum inhibitory concentration methods. Hence, there is significant demand for clinical microbiology laboratories to adopt rapid, standardized methods for identifying heteroresistance in pathologic specimens, so that a proper antibiotic treatment can be prescribed. Mechanisms: The enrichment of resistant sub-populations can be due to the acquisition of resistance mutations that are genetically stable but carry a high fitness cost, or due to the enrichment of sub-populations with increased copy numbers of resistance-conferring tandem gene amplifications.
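The enrichment-and-reversion dynamic is easy to see in a toy population model. In the sketch below, all growth, kill, and reversion rates are arbitrary assumptions chosen for illustration, not measured values; the point is only that antibiotic exposure enriches the resistant subpopulation and drug-free growth lets it revert.

```python
# Toy model of heteroresistance: antibiotic exposure enriches a rare resistant
# subpopulation; drug-free growth reverses the enrichment because the
# resistance-conferring amplification carries a fitness cost.
# All rates are arbitrary illustrative assumptions.
def step(susceptible: float, resistant: float, antibiotic: bool):
    if antibiotic:
        susceptible *= 0.5           # susceptible cells are killed
        resistant *= 1.5             # resistant cells keep growing
    else:
        susceptible *= 2.0           # full growth rate
        resistant *= 1.8             # slower growth: fitness cost of resistance
        reverted = resistant * 0.05  # some cells lose the gene amplification
        resistant -= reverted
        susceptible += reverted
    return susceptible, resistant

s, r = 1e6, 1e3  # resistant cells start at roughly 0.1% of the population
for _ in range(5):
    s, r = step(s, r, antibiotic=True)
print(f"after 5 antibiotic steps: resistant fraction = {r / (s + r):.1%}")
for _ in range(20):
    s, r = step(s, r, antibiotic=False)
print(f"after 20 drug-free steps: resistant fraction = {r / (s + r):.2%}")
```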
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IP Systems** IP Systems: IP Systems Ltd. is a consultancy and IT company specializing in the liberalized European energy market. Its applications support the whole energy trading process, from forecasting and nomination to allocation and accounting. History: IP Systems was set up in 2008. In 2011 the company began its international expansion. International participation: Gas Balancing-IP, introduced in 2012 at the 4th Energy Trading Week, was recommended for use at the regional level; furthermore, in a unique arrangement, IP Systems was invited, alongside FGSZ Natural Gas Transmission Closed Company Limited, by ENTSOG to the working group helping to develop the European natural gas balancing model. Awards and recognition: Gas Balancing-IP won the 'ICT project of the year' award of the IVSZ ICT Association of Hungary and the National Innovation Office in 2012.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OpenFDA** OpenFDA: OpenFDA is a project that indexes and formats FDA data and makes it accessible to the public. The ultimate goal of making the data accessible is to educate people and save lives. The API currently provided for accessing the data is in beta. The project is open source and its code is available on GitHub.
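As a sense of what the beta API offers, the sketch below queries the drug adverse-event endpoint that openFDA documents publicly at api.fda.gov. Treat the endpoint path, parameters, and response fields as assumptions based on that public documentation; since the API is in beta, they may change.

```python
# Minimal sketch: fetch one drug adverse-event record from openFDA.
# Endpoint and fields follow openFDA's public documentation; being a beta
# API, the exact shape may change. Requires the third-party `requests` package.
import requests

resp = requests.get(
    "https://api.fda.gov/drug/event.json",
    params={"limit": 1},  # small anonymous queries need no API key
    timeout=10,
)
resp.raise_for_status()
data = resp.json()

print(data["meta"]["results"]["total"])  # total records matching the query
print(data["results"][0].keys())         # fields available on one record
```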
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Talking blues** Talking blues: Talking blues is a form of folk music and country music. It is characterized by rhythmic speech or near-speech in which the melody is free but the rhythm is strict. Talking blues: Christopher Allen Bouchillon, billed as "The Talking Comedian of the South", is credited with creating the "talking blues" form with the song "Talking Blues", recorded for Columbia Records in Atlanta in 1926, from which the style gets its name. The song was released in 1927, followed by a sequel, "New Talking Blues", in 1928. His song "Born in Hard Luck" is similar in style. Overview: A talking blues typically consists of a repetitive guitar line utilizing a three-chord progression which, although it is called a "blues", is not actually a twelve-bar blues. The vocals are sung in a rhythmic, flat tone, very near to a speaking voice, and take the form of rhyming couplets. At the end of each verse, consisting of two couplets, the singer continues to talk, adding a fifth line consisting of an irregular, generally unrhymed, and unspecified number of bars, often with a pause in the middle of the line, before resuming the strict chordal structure. This example, from "Talking Blues" by Woody Guthrie, a cover of "New Talking Blues" by Bouchillon, serves to explain the format: The lyrics to a talking blues are characterized by dry, rural humor, with the spoken codetta often adding a wry commentary on the subject of the verse, as in Bob Dylan's "Talkin' Bear Mountain Picnic Massacre Blues". Development of the genre: Woody Guthrie's song "Talking Hard Work" is a title tribute to Bouchillon's "Talking Blues" and "Born in Hard Luck". Development of the genre: The "Talking Blues" begins with the line: Several sources of the 1940s–1950s, including the Almanac Singers, wrongly credited Guthrie as the creator of the talking blues. By the 1940s, what had started as a comedic country music genre had become a more pronounced form of wry political protest singing. This sample lyric, from "Talking Union" by Pete Seeger, Lee Hays, and Millard Lampell, shows the development of the genre into a vehicle for political commentary: In 1958, the musician and folk music scholar John Greenway recorded an album collection called "Talking Blues" on the Folkways label. His compendium included 15 talking blues songs by Guthrie, Tom Glazer, and others, and was, according to the music historian Manfred Helfert, the "obvious source" for the many 1960s forays into the genre by Bob Dylan. Bob Dylan recorded "Talking World War III Blues" in 1963. Development of the genre: Dylan's fame and his repeated use of the talking blues form contributed to the genre becoming a widely popular vehicle for the composition of songs with political content. When the country singer Johnny Cash recorded a song that described his trip to Vietnam with his wife June Carter Cash, he chose the talking blues format to describe his dissent against the Vietnam War. Development of the genre: Talking blues is also popular as a medium for parody, as in "Like a Lamb to the Slaughter", Frank Hayes's talking-blues parody of Matty Groves: Notable examples: "Talking Blues" (1926) and "New Talking Blues" (1928) by Christopher Allen Bouchillon. "Talking Dust Bowl Blues" (1940), "Talking Fishing Blues", "Talking Centralia", "Talking Columbia", "Talking Hard Work", "Talking Sailor", and "Talking Subway" by Woody Guthrie. "Talking Union" by Pete Seeger, Lee Hays, and Millard Lampell. "Atomic Talking Blues" (a.k.a. "Talking Atom", "Old Man Atom") by Vern Partlow.
"Talking Inflation Blues" by Tom Glazer. "Talking World War III Blues" (1963), "Talking New York", "Talking Hava Negiliah Blues", "Talkin' John Birch Paranoid Blues", "I Shall Be Free No. 10", and "Talkin' Bear Mountain Picnic Massacre Blues" by Bob Dylan, all recorded during the 1960s. "Guitar Man" (1967) by Jerry Reed, made famous by Elvis Presley. "Talkin' Candy Bar Blues" by Peter, Paul & Mary on A Song Will Rise (1965). "Singing in Viet Nam Talking Blues" by Johnny Cash. "Talking Birmingham Jam" (1963), "Talking Airplane Disaster" (1963), "Talking Cuban Crisis" (1963), "Talking Vietnam (1964) by Phil Ochs. "Talking Thunderbird Blues" (1973), "Fraternity Blues" (1977) by Townes Van Zandt. "Talking New Bob Dylan" by Loudon Wainwright III on his album History (1992). “Talking Blues” by Buck Trent on popular tv show Hee Haw. (1977)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Munching square** Munching square: The Munching Square is a display hack dating back to the PDP-1 (ca. 1962, reportedly discovered by Jackson Wright), which employs a trivial computation (repeatedly plotting the graph Y = X XOR T for successive values of T) to produce an impressive display of moving and growing squares that devour the screen. The initial value of T is treated as a parameter, which, when well-chosen, can produce amazing effects. Some of these, later (re)discovered on the LISP machine, have been christened munching triangles (using bitwise AND instead of XOR, and toggling points instead of plotting them), munching w's, and munching mazes. More generally, suppose a graphics program produces an impressive and ever-changing display of some basic form, foo, on a display terminal, and does it using a relatively simple program; then the program (or the resulting display) is likely to be referred to as munching foos.
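Because the hack is a one-line computation, it is easy to reproduce. The sketch below renders one frame as ASCII in Python; the grid size and rendering are stand-ins for the PDP-1's point-plotting CRT, and stepping successive values of T reproduces the "munching" effect. Swapping AND for XOR gives the munching-triangles variant mentioned above.

```python
# One frame of munching squares: light the points (x, x XOR t).
# ASCII rendering stands in for the PDP-1's point-plotting CRT display.
SIZE = 32  # grid is SIZE x SIZE; the PDP-1's display was far larger

def munch_frame(t: int, size: int = SIZE) -> set:
    """Return the lit points (x, y) with y = x XOR t."""
    return {(x, x ^ t) for x in range(size)}

def render(points: set, size: int = SIZE) -> None:
    for y in range(size):
        print("".join("#" if (x, y) in points else "." for x in range(size)))

# Stepping t (the parameter the text describes) animates the squares;
# replacing ^ with & would give the munching-triangles variant.
render(munch_frame(5))
```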
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SLIB** SLIB: SLIB is computer software, a library for the programming language Scheme, written by Aubrey Jaffer. It uses only standard Scheme syntax and thus works on many different Scheme implementations, such as Bigloo, Chez Scheme, Extension Language Kit 3.0, Gambit 3.0, GNU Guile, JScheme, Kawa, Larceny, MacScheme, MIT/GNU Scheme, Pocket Scheme, Racket, RScheme, Scheme 48, SCM, SCM Mac, and scsh. SLIB is used by GnuCash. Other implementations can support SLIB in a unified way through Scheme Requests for Implementation (SRFI) 96. SLIB is a GNU package.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Performance-based advertising** Performance-based advertising: Performance-based advertising, also known as pay-for-performance advertising, is a form of advertising in which the purchaser pays only when there are measurable results. Performance-based advertising is becoming more common with the spread of electronic media, notably the Internet, where it is possible to measure user actions resulting from advertisements. Performance marketing is different from brand marketing, which focuses on awareness, consideration and opinions among target consumers. Pricing models: There are four common pricing models used in the online performance advertising market. CPM (cost-per-mille, or cost-per-thousand) pricing models charge advertisers for impressions, i.e. the number of times people view an advertisement. Display advertising is commonly sold on a CPM pricing model. The problem with CPM advertising is that advertisers are charged even if the target audience does not click on the advertisement. Pricing models: CPC (cost-per-click) advertising overcomes this problem by charging advertisers only when the consumer clicks on the advertisement. However, due to increased competition, search keywords have become very expensive. A 2007 DoubleClick Performics search trends report showed that there were nearly six times as many keywords with a cost per click (CPC) of more than $1 in January 2007 as in the prior year. The cost per keyword increased by 33% and the cost per click rose by as much as 55%. Pricing models: In recent times, there has been a rapid increase in online lead generation – banner and direct-response advertising that works off a CPL pricing model. In a cost-per-lead pricing model, advertisers pay only for qualified leads – irrespective of the clicks or impressions that went into generating the lead. CPL advertising is also commonly referred to as online lead generation. Pricing models: Cost-per-lead (CPL) pricing models are the most advertiser-friendly. In 2007, an IBM research study found that two-thirds of senior marketers expected 20 percent of ad revenue to move away from impression-based sales in favor of action-based models within three years. CPL models allow advertisers to pay only for qualified leads, as opposed to clicks or impressions, and are at the pinnacle of the online advertising ROI hierarchy. Pricing models: In CPA (cost-per-acquisition) advertising, advertisers pay for a specific action such as a credit card transaction (also called CPO, cost-per-order). Advertisers need to be careful when choosing between CPL and CPA pricing models. In CPL campaigns, advertisers pay for an interested lead – i.e. the contact information of a person interested in the advertiser's product or service. CPL campaigns are suitable for brand marketers and direct-response marketers looking to engage consumers at multiple touch-points – by building a newsletter list, community site, reward program or member acquisition program. In CPA campaigns, the advertiser typically pays for a completed sale involving a credit card transaction. CPA is all about 'now' – it focuses on driving consumers to buy at that exact moment. If a visitor to the website doesn't buy anything, there's no easy way to re-market to them. Pricing models: There are other important differentiators: CPL campaigns are advertiser-centric. The advertiser remains in control of their brand, selecting trusted and contextually relevant publishers to run their offers.
On the other hand, CPA and affiliate marketing campaigns are publisher-centric. Advertisers cede control over where their brand will appear, as publishers browse offers and pick which to run on their websites. Advertisers generally do not know where their offer is running. Pricing models: CPL campaigns are usually high-volume and lightweight. In CPL campaigns, consumers submit only basic contact information; the transaction can be as simple as an email address. On the other hand, CPA campaigns are usually low-volume and complex: typically, the consumer has to submit a credit card and other detailed information. CPL advertising is more appropriate for advertisers looking to deploy acquisition campaigns by re-marketing to end consumers through e-newsletters, community sites, reward programs, loyalty programs and other engagement vehicles. Metrics: Various types of measurable action may be used in charging for performance-based advertising: Many Internet sites charge for advertising on a "CPM" (cost per thousand) or cost-per-impression basis; that is, the advertiser pays only when a consumer sees their advertisement. Some would argue that this is not performance-based advertising, since there is no measurement of the user response. Metrics: Internet sites often also offer advertising on a "PPC" (pay per click) basis. Google's AdWords product and equivalent products from Millennial Media, Yahoo!, Microsoft and others support PPC advertising plans. A small but growing number of sites are starting to offer plans on a "pay per call" basis. The user can click a button to place a VoIP call, or to request a call from the advertiser. If the user requests a call, presumably they are highly likely to make a purchase. Metrics: Finally, there is considerable research into methods of linking the user's actions to the eventual purchase: the ideal form of performance measurement. Some Internet sites are markets, bringing together buyers and sellers. eBay is a prominent example of a market operating on an auction basis. Other market sites let the vendors set their price. In either model, the market mediates sales and takes a commission – a defined percentage of the sale value. The market is motivated to give a more prominent position to vendors who achieve high sales value. Markets may be seen as a form of performance-based advertising. Metrics: The use of mobile coupons also enables a whole new world of metrics for identifying campaign effects. Several providers of mobile coupon technology make it possible to provide unique coupons or barcodes to each individual person and at the same time identify the person downloading them. This makes it possible to follow these individuals during the whole process, from download to when and where the coupons are redeemed.
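To make the four pricing models concrete, the sketch below works the same hypothetical campaign through each of them. Every number (volumes, rates, prices) is an invented assumption for illustration, not an industry figure; the point is only that each model charges for a different measurable action.

```python
# One hypothetical campaign priced under the four models described above.
# All volumes, rates, and prices are invented for illustration.
impressions = 1_000_000
ctr = 0.002        # click-through rate: clicks per impression (assumed)
lead_rate = 0.10   # qualified leads per click (assumed)
sale_rate = 0.02   # completed sales per click (assumed)

clicks = impressions * ctr   # 2,000 clicks
leads = clicks * lead_rate   # 200 leads
sales = clicks * sale_rate   # 40 sales

costs = {
    "CPM ($2.00 per 1,000 impressions)": impressions / 1000 * 2.00,
    "CPC ($0.50 per click)": clicks * 0.50,
    "CPL ($5.00 per qualified lead)": leads * 5.00,
    "CPA ($40.00 per completed sale)": sales * 40.00,
}
for model, cost in costs.items():
    print(f"{model}: ${cost:,.2f}")
```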
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cannon** Cannon: A cannon is a large-caliber gun classified as a type of heavy artillery, which usually launches a projectile using an explosive chemical propellant. Gunpowder ("black powder") was the primary propellant before the invention of smokeless powder during the late 19th century. Cannons vary in gauge, effective range, mobility, rate of fire, angle of fire and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield. Cannon: The word cannon is derived from several languages, in which the original definition can usually be translated as tube, cane, or reed. In the modern era, the term cannon has fallen into decline, replaced by guns or artillery, if not a more specific term such as howitzer or mortar, except for high-caliber automatic weapons firing bigger rounds than machine guns, called autocannons. Cannon: The earliest known depiction of cannons appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannons does not appear until the 13th century. In 1288, Yuan dynasty troops are recorded to have used hand cannon in combat, and the earliest extant cannon bearing a date of production comes from the same period. By the early 14th century, possible mentions of cannon had appeared in the Middle East, and a depiction of one appeared in Europe by 1326. Recorded usage of cannon began appearing almost immediately after. They subsequently spread to India, their usage on the subcontinent being first attested in 1366. By the end of the 14th century, cannons were widespread throughout Eurasia. Cannons were used primarily as anti-infantry weapons until around 1374, when large cannons were recorded to have breached walls for the first time in Europe. Cannons featured prominently as siege weapons, and ever larger pieces appeared. In 1464 a 16,000 kg (35,000 lb) cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannons as field artillery became more important after 1453, with the introduction of the limber, which greatly improved cannon maneuverability and mobility. European cannons reached their longer, lighter, more accurate, and more efficient "classic form" around 1480. This classic European cannon design stayed relatively consistent in form, with minor changes, until the 1750s. Etymology and terminology: The word cannon is derived from the Old Italian word cannone, meaning "large tube", which came from Latin canna, in turn originating from the Greek κάννα (kanna), "reed", and then generalised to mean any hollow tube-like object; cognate with Akkadian qanu(m) and Hebrew qāneh, "tube, reed". The word has been used to refer to a gun since 1326 in Italy, and 1418 in England. Both of the plural forms cannons and cannon are correct. History: East Asia The cannon may have appeared as early as the 12th century in China, and was probably a parallel development or evolution of the fire-lance, a short-ranged anti-personnel weapon combining a gunpowder-filled tube and a polearm of some sort.
Co-viative projectiles such as iron scraps or porcelain shards were placed in fire lance barrels at some point, and eventually the paper and bamboo materials of fire lance barrels were replaced by metal. The earliest known depiction of a cannon is a sculpture from the Dazu Rock Carvings in Sichuan, dated to 1128; however, the earliest archaeological samples and textual accounts do not appear until the 13th century. The primary extant specimens of cannon from the 13th century are the Wuwei Bronze Cannon dated to 1227, the Heilongjiang hand cannon dated to 1288, and the Xanadu Gun dated to 1298. However, only the Xanadu Gun contains an inscription bearing a date of production, so it is considered the earliest confirmed extant cannon. The Xanadu Gun is 34.7 cm in length and weighs 6.2 kg. The other cannons are dated using contextual evidence. The Heilongjiang hand cannon is also often considered by some to be the oldest firearm, since it was unearthed near the area where the History of Yuan reports a battle took place involving hand cannons. According to the History of Yuan, in 1288 a Jurchen commander by the name of Li Ting led troops armed with hand cannons into battle against the rebel prince Nayan. Chen Bingying argues there were no guns before 1259, while Dang Shoushan believes the Wuwei gun and other Western Xia era samples point to the appearance of guns by 1220, and Stephen Haw goes even further by stating that guns were developed as early as 1200. Sinologist Joseph Needham and renaissance siege expert Thomas Arnold provide a more conservative estimate of around 1280 for the appearance of the "true" cannon. Whether or not any of these are correct, it seems likely that the gun was born sometime during the 13th century. References to cannons proliferated throughout China in the following centuries. Cannon featured in literary pieces: in 1341 Xian Zhang wrote a poem called The Iron Cannon Affair describing a cannonball fired from an eruptor which could "pierce the heart or belly when striking a man or horse, and even transfix several persons at once." By the 1350s the cannon was used extensively in Chinese warfare. In 1358 the Ming army failed to take a city due to its garrison's use of cannon; however, the Ming themselves would later use cannon in the thousands, during the siege of Suzhou in 1366. The Mongol invasion of Java in 1293 brought gunpowder technology to the Nusantara archipelago in the form of cannon (Chinese: Pao). During the Ming dynasty cannons were used in riverine warfare at the Battle of Lake Poyang. One shipwreck in Shandong had a cannon dated to 1377 and an anchor dated to 1372. From the 13th to 15th centuries, cannon-armed Chinese ships also travelled throughout Southeast Asia. Cannon appeared in Đại Việt by 1390 at the latest. The first Western cannon to be introduced were breech-loaders in the early 16th century, which the Chinese began producing themselves by 1523 and improved by incorporating composite metal construction. Japan did not acquire cannon until 1510, when a monk brought one back from China, and did not produce any in appreciable numbers. During the 1593 Siege of Pyongyang, 40,000 Ming troops deployed a variety of cannons against Japanese troops. Despite their defensive advantage and the use of the arquebus by Japanese soldiers, the Japanese were at a severe disadvantage due to their lack of cannon.
Throughout the Japanese invasions of Korea (1592–1598), the Ming–Joseon coalition used artillery widely in land and naval battles, including on the turtle ships of Yi Sun-sin. According to Ivan Petlin, the first Russian envoy to Beijing, in September 1619 the city was armed with large cannon with cannonballs weighing more than 30 kg (66 lb). His general observation was that the Chinese were militarily capable and had firearms: There are many merchants and military persons in the Chinese Empire. They have firearms, and the Chinese are very skillful in military affairs. They go into battle against the Yellow Mongols who fight with bows and arrows. History: Western Europe Outside of China, the earliest texts to mention gunpowder are Roger Bacon's Opus Majus (1267) and Opus Tertium, in what has been interpreted as references to firecrackers. In the early 20th century, a British artillery officer proposed that another work tentatively attributed to Bacon, Epistola de Secretis Operibus Artis et Naturae, et de Nullitate Magiae, dated to 1247, contained an encrypted formula for gunpowder hidden in the text. These claims have been disputed by historians of science. In any case, the formula itself is not useful for firearms or even firecrackers, burning slowly and producing mostly smoke. There is a record of a gun in Europe dating to 1322, discovered in the nineteenth century, but the artifact has since been lost. The earliest known European depiction of a gun appeared in 1326 in a manuscript by Walter de Milemete, although not necessarily drawn by him, known as De Nobilitatibus, sapientii et prudentiis regum (Concerning the Majesty, Wisdom, and Prudence of Kings), which displays a gun with a large arrow emerging from it and its user lowering a long stick to ignite the gun through the touch hole. In the same year, another similar illustration showed a darker gun being set off by a group of knights, which also featured in another work of de Milemete's, De secretis secretorum Aristotelis. On 11 February of that same year, the Signoria of Florence appointed two officers to obtain canones de mettallo and ammunition for the town's defense. In the following year a document from the Turin area recorded that a certain amount was paid "for the making of a certain instrument or device made by Friar Marcello for the projection of pellets of lead". A reference from 1331 describes an attack mounted by two Germanic knights on Cividale del Friuli, using man-portable gunpowder weapons of some sort. The 1320s seem to have been the takeoff point for guns in Europe according to most modern military historians. Scholars suggest that the lack of gunpowder weapons in a well-travelled Venetian's catalogue for a new crusade in 1321 implies that guns were unknown in Europe up until this point, further solidifying the 1320 mark; however, more evidence in this area may be forthcoming. The oldest extant cannon in Europe is a small bronze example unearthed in Loshult, Scania, in southern Sweden. It dates from the early to mid 14th century, and is currently in the Swedish History Museum in Stockholm. History: Early cannons in Europe often shot arrows and were known by an assortment of names such as pot-de-fer, tonnoire, ribaldis, and büszenpyle. The ribaldis, which shot large arrows and simplistic grapeshot, were first mentioned in the English Privy Wardrobe accounts during preparations for the Battle of Crécy, between 1345 and 1346.
The Florentine Giovanni Villani recounts their destructiveness, indicating that by the end of the battle, "the whole plain was covered by men struck down by arrows and cannon balls". Similar cannon were also used at the Siege of Calais (1346–47), although it was not until the 1380s that the ribaudekin clearly became mounted on wheels. History: Early use The Battle of Crécy, which pitted the English against the French in 1346, featured an early use of cannon, which helped the longbowmen repulse a large force of Genoese crossbowmen deployed by the French. The English originally intended to use the cannon against cavalry sent to attack their archers, thinking that the loud noises produced by the cannon would panic the advancing horses while the shot killed the knights atop them. Early cannons could also be used for more than simply killing men and scaring horses. English cannon were used defensively in 1346 during the Siege of Breteuil to launch fire onto an advancing siege tower. In this way cannons could be used to burn down siege equipment before it reached the fortifications. The use of cannons to shoot fire could also be offensive, as another battle involved the setting of a castle ablaze with similar methods. The particular incendiary used in these projectiles was most likely a gunpowder mixture. This is one area where early Chinese and European cannons share a similarity, as both were possibly used to shoot fire. Another aspect of early European cannons is that they were rather small, dwarfed by the bombards that would come later. In fact, it is possible that the cannons used at Crécy were capable of being moved rather quickly, as an anonymous chronicle notes the guns being used to attack the French camp, indicating that they would have been mobile enough to press the attack. These smaller cannons would eventually give way to larger, wall-breaching guns by the end of the 1300s.
Al-Hassan further claims that the earliest textual evidence of cannon is from the Middle East, based on earlier originals which report hand-held cannons being used by the Mamluks at the Battle of Ain Jalut in 1260. Such an early date is not accepted by some historians, including David Ayalon, Iqtidar Alam Khan, Joseph Needham and Tonio Andrade. Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannon only reached Mamluk Egypt in the 1370s. Needham argued that the term midfa, dated to textual sources from 1342 to 1352, did not refer to true hand-guns or bombards, and that contemporary accounts of a metal-barrel cannon in the Islamic world do not occur until 1365. Similarly, Andrade dates the textual appearance of cannons in Middle Eastern sources to the 1360s. Gábor Ágoston and David Ayalon note that the Mamluks had certainly used siege cannons by 1342 or the 1360s, respectively, but earlier uses of cannons in the Islamic world are vague, with a possible appearance in the Emirate of Granada by the 1320s and 1330s, though the evidence is inconclusive. Ibn Khaldun reported the use of cannon as siege machines by the Marinid sultan Abu Yaqub Yusuf at the siege of Sijilmasa in 1274. The passage by Ibn Khaldun on the Marinid siege of Sijilmasa in 1274 occurs as follows: "[The Sultan] installed siege engines ... and gunpowder engines ..., which project small balls of iron. These balls are ejected from a chamber ... placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator." The source is not contemporary and was written a century later, around 1382. Its interpretation has been rejected as anachronistic by some historians, including Ágoston and Peter Purton, who urge caution regarding claims of Islamic firearms use in the 1204–1324 period, as late medieval Arabic texts used the same word, naft, for gunpowder as they did for an earlier incendiary, naphtha. Needham believes Ibn Khaldun was speaking of fire lances rather than hand cannon. The Ottoman Empire made good use of cannon as siege artillery. Sixty-eight super-sized bombards were used by Mehmed the Conqueror to capture Constantinople in 1453. Jim Bradbury argues that Urban, a Hungarian cannon engineer, introduced this cannon from Central Europe to the Ottoman realm; according to Paul Hammer, however, it could have been introduced from other Islamic countries which had earlier used cannons. These cannon could fire heavy stone balls a mile, and the sound of their blast could reportedly be heard from a distance of 10 miles (16 km). The Shkodran historian Marin Barleti discusses Turkish bombards at length in his book De obsidione Scodrensi (1504), describing the 1478–79 siege of Shkodra, in which eleven bombards and two mortars were employed. The Ottomans also used cannon to control passage of ships through the Bosphorus strait. Ottoman cannons also proved effective at stopping crusaders at Varna in 1444 and Kosovo in 1448, despite the presence of European cannon in the former case. The similar Dardanelles Guns (named for the location) were created by Munir Ali in 1464 and were still in use during the Anglo-Turkish War (1807–1809). These were cast in bronze in two parts: the chase (the barrel) and the breech, which combined weighed 18.4 tonnes.
The two parts were screwed together using levers to facilitate moving it. History: Fathullah Shirazi, a Persian inhabitant of India who worked for Akbar in the Mughal Empire, developed a volley gun in the 16th century. While there is evidence of cannons in Iran as early as 1405, they were not widespread. This changed following the increased use of firearms by Shah Ismail I, and the Iranian army used 500 cannons by the 1620s, probably captured from the Ottomans or acquired from allies in Europe. By 1443, Iranians were also making some of their own cannon, as Mir Khawand wrote of a 1200 kg metal piece being made by an Iranian rikhtegar, which was most likely a cannon. Due to the difficulties of transporting cannon in mountainous terrain, their use was less common compared to their use in Europe. History: Eastern Europe Documentary evidence of cannons in Russia does not appear until 1382, and they were used only in sieges, often by the defenders. It was not until 1475, when Ivan III established the first Russian cannon foundry in Moscow, that cannons began to be produced natively. The earliest surviving cannon from Russia dates to 1485. Large cannons, known as bombards and ranging from three to five feet in length, were later used by Dubrovnik and Kotor in defence during the later 14th century. The first bombards were made of iron, but bronze became more prevalent as it was recognized as more stable and capable of propelling stones weighing as much as 45 kilograms (99 lb). Around the same period, the Byzantine Empire began to accumulate its own cannon to face the Ottoman Empire, starting with medium-sized cannon 3 feet (0.91 m) long and of 10-inch calibre. The earliest reliable recorded use of artillery in the region was against the Ottoman siege of Constantinople in 1396, forcing the Ottomans to withdraw. The Ottomans acquired their own cannon and laid siege to the Byzantine capital again in 1422. By 1453, the Ottomans used 68 Hungarian-made cannon for the 55-day bombardment of the walls of Constantinople, "hurling the pieces everywhere and killing those who happened to be nearby". The largest of their cannons was the Great Turkish Bombard, which required an operating crew of 200 men and 70 oxen, and 10,000 men to transport it. Gunpowder made the formerly devastating Greek fire obsolete, and with the final fall of Constantinople—which was protected by what were once the strongest walls in Europe—on 29 May 1453, "it was the end of an era in more ways than one". History: Southeast Asia The Javanese Majapahit Empire was arguably able to encompass much of modern-day Indonesia due to its unique mastery of bronze-smithing and use of a central arsenal fed by a large number of cottage industries within the immediate region. Cannons were introduced to Majapahit when Kublai Khan's Chinese army under the leadership of Ike Mese sought to invade Java in 1293. The History of Yuan mentions that the Mongols used a weapon called p'ao against Daha forces. This weapon is interpreted differently by researchers; it may have been a trebuchet that threw thunderclap bombs, firearms, cannons, or rockets. It is possible that the gunpowder weapons carried by the Mongol–Chinese troops amounted to more than one type. Thomas Stamford Raffles wrote in The History of Java that in 1247 saka (1325 AD), cannons were widely used in Java, especially by the Majapahit.
It is recorded that the small kingdoms in Java that sought the protection of Majapahit had to hand over their cannons to the Majapahit. Majapahit under Mahapatih (prime minister) Gajah Mada (in office 1331–1364) utilized gunpowder technology obtained from the Yuan dynasty for use in its naval fleet. Mongol–Chinese gunpowder technology of the Yuan dynasty resulted in the eastern-style cetbang, which is similar to the Chinese cannon. Swivel guns, however, only developed in the archipelago because of the close maritime relations of the Nusantara archipelago with the territory of West India after 1460 AD, which brought new types of gunpowder weapons to the archipelago, likely through Arab intermediaries. These weapons seem to have been cannon and guns of Ottoman tradition, for example the prangi, a breech-loading swivel gun. A new type of cetbang, called the western-style cetbang, was derived from the Turkish prangi. Just like the prangi, this cetbang is a breech-loading swivel gun made of bronze or iron, firing single rounds or scattershot (a large number of small bullets). Cannons derived from the western-style cetbang can be found in Nusantara, among them the lantaka and lela. Most lantakas were made of bronze, and the earliest ones were breech-loaded. There was a trend toward muzzle-loading weapons during colonial times. A pole gun (bedil tombak) was recorded as being used by Java in 1413. Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Circa 1540, the Javanese, always alert to new weapons, found the newly arrived Portuguese weaponry superior to that of the locally made variants. Majapahit-era cetbang cannon were further improved and used in the Demak Sultanate period during the Demak invasion of Portuguese Malacca. During this period, the iron for manufacturing Javanese cannon was imported from Khorasan in northern Persia. The material was known by the Javanese as wesi kurasani (Khorasan iron). When the Portuguese came to the archipelago, they referred to the weapon as berço, which was also used to refer to any breech-loading swivel gun, while the Spaniards called it verso. Duarte Barbosa, c. 1514, said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannon (cetbang or rentaka), long muskets, spingarde (arquebuses), schioppi (hand cannon), Greek fire, guns (cannon), and other fireworks. Every place was considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already locally producing large guns, some of which still survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing between 3 and 8 tons, with lengths between 3 and 6 m (9.8 and 19.7 ft). Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire.
By the end of the century firearms were also used by the Trần dynasty. Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages, and was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder was later prohibited by the colonial Dutch occupiers. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles' The History of Java (1817), the purest sulfur was supplied from a crater of a mountain near the straits of Bali. History: Africa In Africa, the Adal Sultanate and the Abyssinian Empire both deployed cannons during the Adal-Abyssinian War. With cannons imported from Arabia and the wider Islamic world, the Adalites, led by Ahmed ibn Ibrahim al-Ghazi, were the first African power to introduce cannon warfare to the African continent. Later, as the Portuguese Empire entered the war, it supplied and trained the Abyssinians with cannons, while the Ottoman Empire sent soldiers and cannon to back Adal. The conflict proved, through their use on both sides, the value of firearms such as the matchlock musket, cannon, and the arquebus over traditional weapons. History: Offensive and defensive use While previous smaller guns could burn down structures with fire, larger cannons were so effective that engineers were forced to develop stronger castle walls to prevent their keeps from falling. Nonetheless, cannons were used for purposes other than battering down walls, as fortifications began incorporating them as defensive instruments; in India, for example, the fort of Raichur had gun ports built into its walls to accommodate the use of defensive cannons. In The Art of War, Niccolò Machiavelli opined that field artillery forced an army to take up a defensive posture, which he saw as opposed to a more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars being used in India during the sixteenth century, as lack of mobility was one of the key problems with the design. In Russia the early cannons were again placed in forts as a defensive tool. Cannon were also difficult to move around in certain types of terrain, with mountains providing a great obstacle; for these reasons, offensives conducted with cannons were difficult to pull off in places such as Iran. History: Early modern period By the 16th century, cannons were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannons made during this time had barrels exceeding 10 ft (3.0 m) in length, and could weigh up to 20,000 pounds (9,100 kg). Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannons to reduce the confusion. Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. They are, from largest to smallest: the cannon royal, cannon, cannon serpentine, bastard cannon, demicannon, pedrero, culverin, basilisk, demiculverin, bastard culverin, saker, minion, falcon, falconet, serpentine, and rabinet. Better powder had been developed by this time as well. The finely ground powder used by the first bombards was replaced by a "corned" variety of coarse grains.
This coarse powder had pockets of air between the grains, allowing fire to travel through and ignite the entire charge quickly and uniformly. The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden "battery-towers" took on a similar role to siege towers in the gunpowder age—such as that used at the Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, "There is no wall, whatever its thickness, that artillery will not destroy in only a few days." Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts. These new defences became known as bastion forts, after their characteristic shape, which attempted to force any advance towards the fort directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts in England. Bastion forts soon replaced castles in Europe and, eventually, those in the Americas as well. By the end of the 15th century, several technological advancements had made cannons more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable, and began to see more widespread use, often alongside the larger cannons intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannons could be deadly. In The Art of War, Niccolò Machiavelli observed that "It is true that the arquebuses and the small artillery do much more harm than the heavy artillery." This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. Despite the increased maneuverability, however, cannon were still the slowest component of the army: a heavy English cannon required 23 horses to transport, while a culverin needed nine. Even with this many animals pulling, they still moved at a walking pace. Due to their relatively slow speed, lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe. Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. Mortars were useful for sieges, as they could hit targets behind walls or other defences. This type of cannon found more use with the Dutch, who learnt to shoot bombs filled with powder from them. Setting the bomb fuse was a problem. "Single firing" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, "double firing" was tried, where the gunner lit the fuse and then the touch hole.
This, however, required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous, as the heat of firing would light the fuse. Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12-pounder—or heavier—cannon as field artillery, preferring, instead, to use cannons that could be handled by only a few men. One obsolete type of gun, the "leatheren", was replaced by 4-pounder and 9-pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Gustavus Adolphus's army was also the first to use a cartridge that contained both powder and shot, which sped up reloading and increased the rate of fire. Finally, against infantry he pioneered the use of canister shot—essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield, but Gustavus Adolphus increased the number of cannons sixfold. Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns. At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled. In England, cannons were used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham, and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences in his 1647 book The Art of Gunnery. Believing that war was as much a science as an art, his explanations focused on triangulation, arithmetic, theoretical mathematics, and cartography, as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus, as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of A Treatise on Artificial Fire-Works). Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannons by measuring the angle of elevation, using a "gunner's quadrant". Cannons did not have sights; therefore, even with measuring tools, aiming was still largely guesswork. In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders "were notorious dunces in siegecraft". Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take.
He was also a prolific builder of bastion forts, and did much to popularize the idea of "depth in defence" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth of defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs. History: 18th and 19th centuries The lower tier of 17th-century English ships of the line was usually equipped with demi-cannons, guns that fired a 32-pound (15 kg) solid shot, and could weigh up to 3,400 pounds (1,500 kg). Demi-cannons were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak, from a distance of 90 m (300 ft), and could dismast even the largest ships at close range. Full cannon fired a 42-pound (19 kg) shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects, and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed "proof by powder"—and using pressurized water to detect leaks. The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed between a third and a quarter as much as an equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over 3 tons. The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannons than were listed. History: Cannons were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. When Napoleon arrived, he reorganised the defences but realised that without cannons the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the Major and his cavalry fought their way to the recently captured cannons, and brought them back to Napoleon. When Danican's poorly trained men attacked, on 13 Vendémiaire, Year IV (5 October 1795 in the Gregorian calendar), Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the "whiff of grapeshot". The slaughter effectively ended the threat to the new government, while, at the same time, making Bonaparte a famous—and popular—public figure. Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe.
Such tactics were successfully used by the French, for example, at the Battle of Friedland, when 66 guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties on the Russian forces, whose losses totalled over 20,000 killed and wounded. At the Battle of Waterloo—Napoleon's final battle—the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannons to bury themselves in the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannons fired at the cuirassiers and lancers when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from British cannon and musket fire. In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside. Around 1822, George Marshall wrote Marshall's Practical Marine Gunnery. The book was used by cannon operators in the United States Navy throughout the 19th century; it listed all the types of cannon and gave instructions for their operation. History: The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War. Western cannons during the 19th century became larger, more destructive, more accurate, and could fire at longer range. One example is the American 3-inch (76 mm) wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had an effective range of over 1.1 mi (1.8 km). Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively light weight, and range of 1,700 m (5,600 ft). History: The practice of rifling—cutting spiral grooves inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. One of the earliest rifled cannons was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly greater range, accuracy, and power than earlier weapons. The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it "could do everything but speak". Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before.
While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannons, Armstrong designed rifled muzzle-loading guns, which proved successful; The Times reported: "even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships." The superior cannon of the Western world brought their users tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British warships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannons. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term cannon fodder, first used by François-René de Chateaubriand in 1814; however, the concept of regarding soldiers as nothing more than "food for powder" was mentioned by William Shakespeare as early as 1598, in Henry IV, Part 1. History: 20th and 21st centuries Cannons in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few very large-calibre cannon, custom-designed, have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as "the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy". When referring to cannons, the term gun is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities, and fire indirectly, lobbing shells up and over obstacles to hit the target from above. History: By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannons proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were better suited to hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here, as it began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. This 200 mm (8 in) calibre gun was used by the Germans against Paris and could hit targets more than 122 km (76 mi) away. The Second World War sparked new developments in cannon technology.
Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific targets. The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although widely used in naval warfare and in anti-air guns, both the British and Americans feared that unexploded proximity fuses would be reverse engineered, leading them to limit their use in continental battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's "Christmas present" for the German army because of their effectiveness against German personnel in the open, where they frequently dispersed attacks. Anti-tank guns were also tremendously improved during the war: the British began the war relying primarily on the 2 pounder, later joined by the 6 pounder. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon. To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon, carried in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. In 1944, the 8.8 cm KwK 43, and its many variations, entered service with the Wehrmacht, and was used both as a tank main gun and as the PaK 43 anti-tank gun. One of the most powerful guns to see service in World War II, it was capable of destroying any Allied tank at very long ranges. History: Despite being designed to fire at trajectories with a steep angle of descent, howitzers can be fired directly, as was done by the 11th Marine Regiment at the Battle of Chosin Reservoir, during the Korean War. Two field batteries fired directly upon a battalion of Chinese infantry; the Marines were forced to brace themselves against their howitzers, as they had no time to dig them in. The Chinese infantry took heavy casualties, and were forced to retreat. The tendency to create larger calibre cannons during the World Wars has since reversed. The United States Army, for example, sought a lighter, more versatile howitzer to replace their ageing pieces. As it could be towed, the M198 was selected to be the successor to the World War II–era cannons used at the time, and entered service in 1979. Still in use today, the M198 is, in turn, being slowly replaced by the M777 Ultralightweight howitzer, which weighs nearly half as much and can be more easily moved. Although land-based artillery such as the M198 are powerful, long-ranged, and accurate, naval guns have not been neglected, despite being much smaller than in the past and, in some cases, having been replaced by cruise missiles. However, the Zumwalt-class destroyer's planned armament included the Advanced Gun System (AGS), a pair of 155 mm guns, which fire the Long Range Land-Attack Projectile.
The warhead, which weighed 24 pounds (11 kg), had a circular error probable of 50 m (160 ft), and was mounted on a rocket to increase the effective range to 100 nmi (190 km), further than that of the Paris Gun. The AGS's barrels would be water-cooled and would fire 10 rounds per minute per gun. The combined firepower from both turrets would give a Zumwalt-class destroyer firepower equivalent to 18 conventional M198 howitzers. The reason for the re-integration of cannons as a main armament in United States Navy ships was that satellite-guided munitions fired from a gun would be less expensive than a cruise missile but have a similar guidance capability. History: Autocannon Autocannons have an automatic firing mode, similar to that of a machine gun. They have mechanisms to automatically load their ammunition, and therefore have a higher rate of fire than artillery, often approaching, or, in the case of rotary autocannons, even surpassing the firing rate of a machine gun. While there is no minimum bore for autocannons, they are generally larger than machine guns, typically 20 mm or greater since World War II, and are usually capable of using explosive ammunition even if it is not always used. Machine guns, in contrast, are usually too small to use explosive ammunition; such ammunition is additionally banned in international conflict for parties to the Saint Petersburg Declaration of 1868. History: Most nations use rapid-fire cannon on light vehicles, replacing a more powerful, but heavier, tank gun. A typical autocannon is the 25 mm "Bushmaster" chain gun, mounted on the LAV-25 and M2 Bradley armoured vehicles. Autocannons may be capable of a very high rate of fire, but ammunition is heavy and bulky, limiting the amount carried. For this reason, both the 25 mm Bushmaster and the 30 mm RARDEN are deliberately designed with relatively low rates of fire. The typical rate of fire for a modern autocannon ranges from 90 to 1,800 rounds per minute. Systems with multiple barrels, such as a rotary autocannon, can have rates of fire of several thousand rounds per minute; the fastest of these is the GSh-6-23, which has a rate of fire of over 10,000 rounds per minute. Autocannons are often found in aircraft, where they replaced machine guns, and as shipboard anti-aircraft weapons, as they provide greater destructive power than machine guns. History: Aircraft use The first documented installation of a cannon on an aircraft was on the Voisin Canon in 1911, displayed at the Paris Exposition that year. By World War I, all of the major powers were experimenting with aircraft-mounted cannons; however, their low rate of fire and great size and weight precluded any of them from being anything other than experimental. The most successful (or least unsuccessful) was the SPAD 12 Ca.1, with a single 37 mm Puteaux mounted to fire between the cylinder banks and through the propeller boss of the aircraft's Hispano-Suiza 8C. The pilot (by necessity an ace) had to manually reload each round. The first autocannon were developed during World War I as anti-aircraft guns, and one of these, the Coventry Ordnance Works "COW 37 mm gun", was installed in an aircraft. However, the war ended before it could be given a field trial, and it never became standard equipment in a production aircraft. Later trials had it fixed at a steep angle upwards in both the Vickers Type 161 and the Westland C.O.W. Gun Fighter, an idea that would return later.
History: During this period, autocannons became available and several fighters of the German Luftwaffe and the Imperial Japanese Navy Air Service were fitted with 20 mm cannons. They continued to be installed as an adjunct to machine guns rather than as a replacement, as the rate of fire was still too low and the complete installation too heavy. There was some debate in the RAF as to whether the greater number of rounds fired from a machine gun or the smaller number of explosive rounds from a cannon was preferable. Improvements in rate of fire during the war allowed the cannon to displace the machine gun almost entirely. Cannons were more effective against armour, so they were increasingly used during the course of World War II, and newer fighters such as the Hawker Tempest usually carried two or four, versus the six .50 Browning machine guns of US aircraft or the eight to twelve M1919 Browning machine guns of earlier British aircraft. The Hispano-Suiza HS.404, Oerlikon 20 mm cannon, MG FF, and their numerous variants became among the most widely used autocannon in the war. Cannons, as with machine guns, were generally fixed to fire forwards (mounted in the wings, in the nose or fuselage, or in a pannier under either); or were mounted in gun turrets on heavier aircraft. Both the Germans and Japanese mounted cannons to fire upwards and forwards for use against heavy bombers, with the Germans calling guns so installed Schräge Musik. This term derives from a German colloquialism for jazz music (the German word schräg literally means "slanted", and colloquially "off-key"). History: Preceding the Vietnam War, the high speeds aircraft were attaining led to a move to remove the cannon, in the mistaken belief that it would be useless in a dogfight, but combat experience during the Vietnam War showed conclusively that, despite advances in missiles, there was still a need for it. Nearly all modern fighter aircraft are armed with an autocannon, and they are also commonly found on ground-attack aircraft. One of the most powerful examples is the 30 mm GAU-8/A Avenger Gatling-type rotary cannon, mounted exclusively on the Fairchild Republic A-10 Thunderbolt II. The Lockheed AC-130 gunship (a converted transport) can carry a 105 mm howitzer as well as a variety of autocannons ranging up to 40 mm. Both are used in the close air support role. Materials, parts, and terms: Cannons in general have the form of a truncated cone with an internal cylindrical bore for holding an explosive charge and a projectile. The thickest, strongest, and closed part of the cone is located near the explosive charge. As any explosive charge will dissipate in all directions equally, the thickest portion of the cannon is useful for containing and directing this force. The backward motion of the cannon as its projectile leaves the bore is termed its recoil, and the effectiveness of the cannon can be measured in terms of how much this response can be diminished, though obviously diminishing recoil by increasing the overall mass of the cannon means decreased mobility.
Materials, parts, and terms: Field artillery cannon in Europe and the Americas were initially made most often of bronze, though later forms were constructed of cast iron and eventually steel. Bronze has several characteristics that made it preferable as a construction material: although it is relatively expensive, does not always alloy well, and can result in a final product that is "spongy about the bore", bronze is more flexible than iron and therefore less prone to bursting when exposed to high pressure; cast-iron cannon are less expensive and generally more durable than bronze, and withstand being fired more times without deteriorating. However, cast-iron cannon have a tendency to burst without having shown any previous weakness or wear, and this makes them more dangerous to operate. Materials, parts, and terms: The older and more-stable forms of cannon were muzzle-loading as opposed to breech-loading—to be used, they had to have their ordnance packed down the bore through the muzzle rather than inserted through the breech. The following terms refer to the components or aspects of a classical western cannon (c. 1850). In what follows, the words near, close, and behind will refer to those parts towards the thick, closed end of the piece, and far, front, in front of, and before to the thinner, open end. Negative spaces Bore: The hollow cylinder bored down the centre of the cannon, including the base of the bore or bottom of the bore, the nearest end of the bore into which the ordnance (wadding, shot, etc.) gets packed. The diameter of the bore represents the cannon's calibre. Chamber: The cylindrical, conical, or spherical recess at the nearest end of the bottom of the bore into which the gunpowder is packed. Materials, parts, and terms: Vent: A thin tube on the near end of the cannon connecting the explosive charge inside with an ignition source outside, often filled with a length of fuse; always located near the breech. Sometimes called the fuse hole or the touch hole. On the top of the vent, on the outside of the cannon, is a flat circular space called the vent field, where the charge is lit. If the cannon is bronze, it will often have a vent piece made of copper screwed into the length of the vent. Materials, parts, and terms: Solid spaces The main body of a cannon consists of three basic extensions: the foremost and the longest is called the chase, the middle portion is the reinforce, and the closest and briefest portion is the cascabel or cascable. The chase: Simply the entire conical part of the cannon in front of the reinforce. It is the longest portion of the cannon, and includes the following elements: The neck: the narrowest part of the chase, always located near the foremost end of the piece. Materials, parts, and terms: The muzzle: the portion of the chase forward of the neck. It includes the following: The swell of the muzzle refers to the slight swell in the diameter of the piece at the very end of the chase. It is often chamfered on the inside to make loading the cannon easier. In some guns, this element is replaced with a wide ring and is called a muzzle band. Materials, parts, and terms: The face is the flat vertical plane at the foremost edge of the muzzle (and of the entire piece).
Materials, parts, and terms: The muzzle mouldings are the tiered rings which connect the face with the rest of the muzzle, the first of which is called the lip and the second the fillet. The muzzle astragal and fillets are a series of three narrow rings running around the outside of the chase just behind the neck, sometimes also collectively called the chase ring. Materials, parts, and terms: The chase astragal and fillets: these are a second series of such rings located at the near end of the chase. The chase girdle: this is the brief length of the chase between the chase astragal and fillets and the reinforce. Materials, parts, and terms: The reinforce: This portion of the piece is frequently divided into a first reinforce and a second reinforce, but in any case is marked as separate from the chase by the presence of a narrow circular reinforce ring or band at its foremost end. The span of the reinforce also includes the following: The trunnions are located at the foremost end of the reinforce, just behind the reinforce ring. They consist of two cylinders perpendicular to the bore and below it, which are used to mount the cannon on its carriage. Materials, parts, and terms: The rimbases are short broad rings located at the union of the trunnions and the cannon, which provide support to the carriage attachment. The reinforce band is only present if the cannon has two reinforces, and it divides the first reinforce from the second. Materials, parts, and terms: The breech refers to the mass of solid metal behind the bottom of the bore, extending to the base of the breech and including the base ring; it also generally refers to the end of the cannon opposite the muzzle, i.e., the location where the explosion of the gunpowder begins, as opposed to the opening through which the pressurized gas escapes. Materials, parts, and terms: The base ring forms a ring at the widest part of the entire cannon, at the nearest end of the reinforce just before the cascabel. Materials, parts, and terms: The cascabel: This is that portion of the cannon behind the reinforce(s) and behind the base ring. It includes the following: the knob, which is the small spherical terminus of the piece; the neck, a short, narrow piece of metal holding out the knob; and the fillet, the tiered disk connecting the neck of the cascabel to the base of the breech. Materials, parts, and terms: The base of the breech is the metal disk that forms the most forward part of the cascabel and rests against the breech itself, right next to the base ring. To pack a muzzle-loading cannon, first gunpowder is poured down the bore. This is followed by a layer of wadding (often nothing more than paper), and then the cannonball itself. A certain amount of windage allows the ball to fit down the bore, though the greater the windage the less efficient the propulsion of the ball when the gunpowder is ignited. To fire the cannon, the fuse located in the vent is lit, quickly burning down to the gunpowder, which then explodes violently, propelling wadding and ball down the bore and out of the muzzle. A small portion of exploding gas also escapes through the vent, but this does not dramatically affect the total force exerted on the ball.
Materials, parts, and terms: Any large, smoothbore, muzzle-loading gun—used before the advent of breech-loading, rifled guns—may be referred to as a cannon, though once standardised names were assigned to different-sized cannon, the term specifically referred to a gun designed to fire a 42-pound (19 kg) shot, as distinct from a demi-cannon – 32 pounds (15 kg), culverin – 18 pounds (8.2 kg), or demi-culverin – 9 pounds (4.1 kg). Gun specifically refers to a type of cannon that fires projectiles at high speeds, and usually at relatively low angles; they have been used in warships, and as field artillery. The term cannon is also used for autocannon, a modern repeating weapon firing explosive projectiles. Cannon have been used extensively in fighter aircraft since World War II. Operation: In the 1770s, cannon operation worked as follows: each cannon would be manned by two gunners, six soldiers, and four officers of artillery. The right gunner was to prime the piece and load it with powder, and the left gunner would fetch the powder from the magazine and be ready to fire the cannon at the officer's command. On each side of the cannon, three soldiers stood, to ram and sponge the cannon, and hold the ladle. The second soldier on the left was tasked with providing 50 bullets. Before loading, the cannon would be cleaned with a wet sponge to extinguish any smouldering material from the last shot, since fresh powder could be set off prematurely by lingering ignition sources. The powder was added, followed by wadding of paper or hay, and the ball was placed in and rammed down. After ramming, the cannon would be aimed, with the elevation set using a quadrant and a plummet. At 45 degrees, the ball had the utmost range: about ten times the gun's level range (a short drag-free derivation of this 45-degree rule follows this section). Any angle above a horizontal line was called random-shot. Wet sponges were used to cool the pieces every ten or twelve rounds. Operation: During the Napoleonic Wars, a British gun team consisted of five gunners: one to aim the piece; one to clean the bore with a damp sponge, quenching any remaining embers before a fresh charge was introduced; and another to load the gun with a bag of powder and then the projectile. The fourth gunner pressed his thumb on the vent hole, to prevent a draught that might fan a flame; once the charge was loaded, he would prick the bagged charge through the vent hole and fill the vent with powder. On command, the fifth gunner would fire the piece with a slow match. Friction primers replaced slow match ignition by the mid-19th century. When a cannon had to be abandoned, such as in a retreat or surrender, the touch hole of the cannon would be plugged flush with an iron spike, disabling the cannon (at least until metal boring tools could be used to remove the plug). This was called "spiking the cannon". Operation: A gun was said to be honeycombed when the surface of the bore had cavities, or holes in it, caused either by corrosion or casting defects. Operation: Legal considerations In the United States, muzzleloading cannons are not subject to any regulations at the federal level. According to the Bureau of Alcohol, Tobacco, and Firearms, muzzleloading cannons made before 1899 (and replicas) that are unable to fire fixed ammunition are considered antiques. They are not subject to the Gun Control Act (GCA) of 1968 or the National Firearms Act (NFA) of 1934. Muzzleloading cannons may be subject to state or local rules in some jurisdictions, however.
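The gunners' rule quoted in the Operation section above, that 45 degrees of elevation gives the greatest range, matches the idealised textbook result for a projectile in a vacuum. As a minimal sketch that ignores air resistance (a large simplification for real round shot), the range over level ground of a ball fired at muzzle speed v and elevation θ is:

```latex
% Drag-free range over level ground; v is muzzle speed, g is gravity.
R(\theta) = \frac{v^{2}\sin 2\theta}{g},
\qquad
\frac{dR}{d\theta} = \frac{2v^{2}\cos 2\theta}{g} = 0
\quad\Longrightarrow\quad \theta = 45^{\circ}.
```

With air resistance the optimum elevation drops below 45 degrees and the achievable range shrinks considerably, so the "ten times the level range" figure above is best read as a period rule of thumb rather than a physical constant.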
Deceptive use: Historically, logs or poles have been used as decoys to mislead the enemy as to the strength of an emplacement. The "Quaker Gun trick" was used by Colonel William Washington's Continental Army during the American Revolutionary War; in 1780, approximately 100 Loyalists surrendered to them rather than face bombardment. During the American Civil War, Quaker guns were also used by the Confederates, to compensate for their shortage of artillery. The decoy cannon were painted black at the "muzzle", and positioned behind fortifications to delay Union attacks on those positions. On occasion, real gun carriages were used to complete the deception. In popular culture: Cannon sounds have sometimes been used in classical pieces with a military theme. One of the best-known examples of such a piece is Pyotr Ilyich Tchaikovsky's 1812 Overture. The overture is to be performed using an artillery section together with the orchestra, resulting in noise levels high enough that musicians are required to wear ear protection. The cannon fire simulates Russian artillery bombardments of the Battle of Borodino, a critical battle in Napoleon's invasion of Russia, whose defeat the piece celebrates. When the overture was first performed, the cannon were fired by an electric current triggered by the conductor. However, the overture was not recorded with real cannon fire until Mercury Records and conductor Antal Doráti's 1958 recording with the Minnesota Orchestra. Cannon fire is also frequently used annually in presentations of the 1812 Overture on American Independence Day, a tradition started by Arthur Fiedler of the Boston Pops in 1974. The hard rock band AC/DC also used cannon in their song "For Those About to Rock (We Salute You)", and in live shows replica Napoleonic cannon and pyrotechnics were used to perform the piece. A recording of that song has accompanied the firing of an authentic reproduction of an M1857 12-pounder Napoleon during Columbus Blue Jackets goal celebrations at Nationwide Arena since the opening night of the 2007–08 season. The cannon is located behind the last row of section 111 and is the focal point of the team's alternate logo on its third jerseys. Cannons have been fired in touchdown celebrations by several American football teams, including the San Diego Chargers. The Pittsburgh Steelers used one during the 1962 campaign but discontinued it after Buddy Dial was startled when he inadvertently ran face-first into the cannon's smoky discharge in a 42–27 loss to the Dallas Cowboys at Forbes Field on October 21. Restoration: Cannon recovered from the sea are often extensively damaged from exposure to salt water; because of this, electrolytic reduction treatment is required to forestall the process of corrosion. The cannon is then washed in deionized water to remove the electrolyte, and is treated in tannic acid, which prevents further rust and gives the metal a bluish-black colour. After this process, cannon on display may be protected from oxygen and moisture by a wax sealant. A coat of polyurethane may also be painted over the wax sealant, to prevent the wax-coated cannon from attracting dust in outdoor displays. In 2011, archaeologists announced that six cannon recovered from a river in Panama, which may have belonged to the legendary pirate Henry Morgan, were being studied and could eventually be displayed after going through a restoration process.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ionospheric Connection Explorer** Ionospheric Connection Explorer: Ionospheric Connection Explorer (ICON) is a satellite designed to investigate changes in the ionosphere of Earth, the dynamic region high in our atmosphere where terrestrial weather from below meets space weather from above. ICON studies the interaction between Earth's weather systems and space weather driven by the Sun, and how this interaction drives turbulence in the upper atmosphere. It is hoped that a better understanding of this dynamic will mitigate its effects on communications, GPS signals, and technology in general. It is part of NASA's Explorer program and is operated by University of California, Berkeley's Space Sciences Laboratory. On 12 April 2013, NASA announced that ICON, along with Global-scale Observations of the Limb and Disk (GOLD), had been selected for development, with the cost capped at US$200 million, excluding launch costs. The principal investigator of ICON is Thomas Immel at the University of California, Berkeley. ICON was originally scheduled to launch in June 2017 but was repeatedly delayed because of problems with its Pegasus XL launch vehicle. It was next due to launch on 26 October 2018, but the launch was rescheduled to 7 November 2018, and postponed again just 28 minutes before launch. ICON was successfully launched on 11 October 2019, at 02:00 UTC. Overview: ICON will perform a two-year mission to observe conditions in both the thermosphere and ionosphere. ICON is equipped with four instruments: a Michelson interferometer, built by the United States Naval Research Laboratory (NRL), which measures the winds and temperatures in the thermosphere; an ion drift meter, built by the University of Texas at Dallas, which measures the motion of charged particles in the ionosphere; and two ultraviolet imagers, built at the University of California, Berkeley, which observe the airglow layers in the upper atmosphere in order to determine both ionospheric and thermospheric density and composition. Overview: Many low-Earth orbiting satellites, including the International Space Station (ISS), fly through the ionosphere and can be affected by its changing electric and magnetic fields. The ionosphere also acts as a conduit for many communications signals, such as radio waves and the signals that make GPS systems work. The ionosphere is where space weather manifests, creating unexpected conditions: electric currents can cause electrical charging of satellites, changing density can affect satellite orbits, and shifting magnetic fields can induce currents in power systems, causing strain, disrupting communications and navigation, or even triggering blackouts. Improved understanding of this environment can help predict such events and improve satellite safety and design. Launch planning: Upon initial completion and delivery of the ICON observatory in 2016, launch plans centered around the launch range at Kwajalein Atoll in the Pacific Ocean. ICON was originally scheduled to launch in June 2017, but was repeatedly delayed because of problems with its Pegasus XL launch vehicle. The launch vehicle was mated to its air-launch aircraft Stargazer for a launch attempt in June 2018. This launch was cancelled days beforehand because the rocket showed issues on the first leg of the ferry flight to Kwajalein. Given the availability of the launch range at Cape Canaveral, and a review of the suitability of this site, it was adopted as the ICON launch site. The October 2018 launch from Florida was scheduled after an initial review of the avionics issues.
Whereas the delays in 2017 were due to concerns with rocket-payload and fairing separation systems, the 2018 delays were due to noise in the rocket avionics systems. These issues finally resulted in the 2018 Cape Canaveral launch being scrubbed minutes before the scheduled launch; they were ultimately resolved, and ICON launched from Cape Canaveral on 11 October 2019 at 02:00 UTC. After an approximately month-long commissioning period, ICON began sending back its first science data in November 2019. Science payload: ICON carries four scientific instruments designed to image even the faintest plasma or airglow to build up a picture of the ionosphere's density, composition and structure. The complete instrument payload has a mass of 130 kg (290 lb); the instruments are the Michelson Interferometer for Global High-Resolution Thermospheric Imaging (MIGHTI); the Ion Velocity Meter (IVM), an ion drift meter; the Extreme Ultra-Violet (EUV) imager; and the Far Ultra-Violet (FUV) imager. MIGHTI was developed at the United States Naval Research Laboratory (NRL), IVM at the University of Texas, and EUV and FUV at the University of California, Berkeley. MIGHTI measures wind speed and temperature between 90 km (56 mi) and 300 km (190 mi) in altitude. The velocity measurements are gathered by observing the Doppler shift in the red and green lines of atomic oxygen; this is done with the Doppler Asymmetric Spatial Heterodyne (DASH) technique, which uses échelle gratings. The temperature measurements are made by photometric observations with a CCD. MIGHTI is designed to detect wind speeds as low as 16 km/h (9.9 mph), even though the spacecraft is traveling at over 23,000 km/h (14,000 mph) to stay in orbit (a rough illustration of the Doppler shifts involved appears at the end of this article). IVM collects in situ data about ions in the local environment around the spacecraft, whereas EUV and FUV are spectrographic imagers. EUV is a 1-dimensional limb imager designed to observe the height and density of the daytime ionosphere by detecting the glow of oxygen ions and other species at wavelengths between 55 and 85 nm. FUV is a 2-dimensional imager that observes the limb and below at 135 and 155 nm, where bright emissions of atomic oxygen and molecular nitrogen are found. The solar panel produces 780 watts, but the observatory's power consumption ranges between 209 and 265 watts when in science mode. Mission Operations: Once launched, and for the duration of its two-year science mission, the ICON observatory is controlled and operated by the Mission Operations Center (MOC) at the Space Sciences Laboratory at the University of California, Berkeley. The UCB MOC currently operates seven NASA satellites. ICON was placed into a 27.00° inclination orbit, and communications are through the Tracking and Data Relay Satellite System (TDRSS), the orbiting NASA communications network. Ground contacts with ICON are performed mainly from the Berkeley Ground Station, an 11 m (36 ft) dish, with backup contacts out of Wallops Flight Facility (WFF), Virginia, and Santiago, Chile. Loss of Contact: The NASA ICON team lost contact with the ICON spacecraft on 25 November 2022. A fail-safe system designed to reset the spacecraft computer after 8 days with no receipt of commands from the ground failed to restore communications after it elapsed on 5 December 2022.
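To make the MIGHTI figures above concrete, here is a rough, illustrative calculation (not flight software) of the Doppler shift that a 16 km/h line-of-sight wind imposes on an airglow emission line. The red and green atomic-oxygen lines are taken at their commonly cited wavelengths of roughly 630.0 nm and 557.7 nm; treat the exact values as assumptions for illustration.

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_nm(wavelength_nm: float, los_wind_m_s: float) -> float:
    """Non-relativistic Doppler shift: delta_lambda = lambda * v / c."""
    return wavelength_nm * los_wind_m_s / C

wind = 16.0 / 3.6  # the quoted 16 km/h detection threshold, in m/s
for name, wavelength in (("red O line (~630.0 nm)", 630.0),
                         ("green O line (~557.7 nm)", 557.7)):
    print(f"{name}: shift = {doppler_shift_nm(wavelength, wind):.2e} nm")
```

Both shifts come out near 1e-5 nm, about one part in a hundred million of the wavelength, which is why an interferometric approach such as DASH is required rather than a simple filter measurement.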
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OSTbeta** OSTbeta: Organic solute transporter beta, also known as OST-beta, is a protein which in humans is encoded by the OSTB gene. Function: OST-beta, together with OST-alpha, is able to transport estrone sulfate, taurocholate, digoxin, and prostaglandin E2 across cell membranes. The Ost-alpha / Ost-beta heterodimer, but not the individual subunits, stimulates sodium-independent bile acid uptake; the heterodimer is furthermore essential for intestinal bile acid transport. OST-alpha and OST-beta are highly expressed in the testis, colon, liver, small intestine, kidney, ovary, and adrenal gland.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Microbiological culture** Microbiological culture: A microbiological culture, or microbial culture, is a method of multiplying microbial organisms by letting them reproduce in a predetermined culture medium under controlled laboratory conditions. Microbial cultures are foundational diagnostic methods and basic research tools in molecular biology. The term culture can also refer to the microorganisms being grown. Microbiological culture: Microbial cultures are used to determine the type of organism, its abundance in the sample being tested, or both. It is one of the primary diagnostic methods of microbiology, used as a tool to determine the cause of infectious disease by letting the agent multiply in a predetermined medium. For example, a throat culture is taken by scraping the lining of tissue in the back of the throat and blotting the sample onto a medium to be able to screen for harmful microorganisms, such as Streptococcus pyogenes, the causative agent of strep throat. Furthermore, the term culture is more generally used informally to refer to "selectively growing" a specific kind of microorganism in the lab. Microbiological culture: It is often essential to isolate a pure culture of microorganisms. A pure (or axenic) culture is a population of cells or multicellular organisms growing in the absence of other species or types. A pure culture may originate from a single cell or single organism, in which case the cells are genetic clones of one another. For the purpose of gelling the culture medium, agar is used. Agar is a gelatinous substance derived from seaweed. A cheap substitute for agar is guar gum, which can be used for the isolation and maintenance of thermophiles. Bacterial culture: There are several types of bacterial culture methods that are selected based on the agent being cultured and the downstream use. Broth cultures One method of bacterial culture is liquid culture, in which the desired bacteria are suspended in a liquid nutrient medium, such as Luria broth, in an upright flask. This allows a scientist to grow large quantities of bacteria for a variety of downstream applications. Liquid cultures are ideal for preparation of an antimicrobial assay, in which the liquid broth is inoculated with bacteria and left to grow overnight (a ‘shaker’ may be used to mechanically mix the broth, to encourage uniform growth). Subsequently, aliquots of the sample are taken to test for the antimicrobial activity of a specific drug or protein (antimicrobial peptides). Static liquid cultures may be used as an alternative; these cultures are not shaken, and they provide the microbes with an oxygen gradient. Bacterial culture: Agar plates Microbiological cultures can be grown in petri dishes of differing sizes that have a thin layer of agar-based growth medium. Once the growth medium in the petri dish is inoculated with the desired bacteria, the plates are incubated at the optimal temperature for the growing of the selected bacteria (for example, usually at 37 degrees Celsius, or human body temperature, for cultures from humans or animals, or lower for environmental cultures). After the desired level of growth is achieved, agar plates can be stored upside down in a refrigerator for an extended period of time to keep bacteria for future experiments. Bacterial culture: There are a variety of additives that can be added to agar before it is poured into a plate and allowed to solidify.
Some types of bacteria can only grow in the presence of certain additives. This can also be used when creating engineered strains of bacteria that contain an antibiotic-resistance gene. When the selected antibiotic is added to the agar, only bacterial cells containing the gene insert conferring resistance will be able to grow. This allows the researcher to select only the colonies that were successfully transformed. Bacterial culture: Agar-based dipsticks Miniaturised versions of agar plates implemented in dipstick formats (e.g. Dip Slide, Digital Dipstick) show potential for diagnostic use at the point of care. They have advantages over agar plates, since they are cost-effective and their operation does not require expertise or a laboratory environment, enabling them to be used at the point of care. Stab cultures Stab cultures are similar to agar plates, but are formed by solid agar in a test tube. Bacteria are introduced via an inoculation needle or a pipette tip stabbed into the center of the agar. Bacteria grow in the punctured area. Stab cultures are most commonly used for short-term storage or shipment of cultures. Culture collections Microbial culture collections focus on the acquisition, authentication, production, preservation, cataloguing and distribution of viable cultures of standard reference microorganisms, cell lines and other materials for research in microbial systematics. Culture collections are also repositories of type strains. Bacterial culture: Solid plate culture of thermophilic microorganisms For solid plate cultures of thermophilic microorganisms such as Bacillus acidocaldarius, Bacillus stearothermophilus, Thermus aquaticus and Thermus thermophilus, growing at temperatures of 50 to 70 degrees C, low-acyl clarified gellan gum has proven to be the preferred gelling agent, compared to agar, for the counting and/or isolation of these thermophilic bacteria. Viral culture: Virus and phage cultures require host cells in which the virus or phage multiply. For bacteriophages, cultures are grown by infecting bacterial cells. The phage can then be isolated from the resulting plaques in a lawn of bacteria on a plate. Viral cultures are obtained from their appropriate eukaryotic host cells. Eukaryotic cell culture: Isolation of pure cultures For single-celled eukaryotes, such as yeast, the isolation of pure cultures uses the same techniques as for bacterial cultures. Pure cultures of multicellular organisms are often more easily isolated by simply picking out a single individual to initiate a culture. This is a useful technique for pure culture of fungi, multicellular algae, and small metazoa, for example. Eukaryotic cell culture: Developing pure culture techniques is crucial to the observation of the specimen in question. The most common method to isolate individual cells and produce a pure culture is to prepare a streak plate.
The streak plate method is a way to physically separate the microbial population, and is done by spreading the inoculum back and forth with an inoculating loop over the solid agar plate. Upon incubation, colonies will arise and single cells will have been isolated from the biomass. Once a microorganism has been isolated in pure culture, it is necessary to preserve it in a viable state for further study and use. Stock cultures have to be maintained such that there is no loss of their biological, immunological and cultural characteristics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Monkey's uncle** Monkey's uncle: The term monkey's uncle, most notably seen in the idiom "(Well,) I'll be a monkey's uncle", is used to express complete surprise, amazement or disbelief. It can also be used to acknowledge the impossibility of a situation, in the same way that "pigs might fly" is used. For example, one might say: "I may agree that if two plus two equals five, then I am a monkey's uncle". Monkey's uncle: The phrase was used as early as 1917, in an El Paso, Texas newspaper advertisement for a play called The Brass Monkey. It appeared in newspapers several times in the early 1920s, including several other examples in advertisements. It was originally a sarcastic remark made by creationists: the notion "that [people] were descended from apes was considered blasphemous...by Darwin's contemporaries", and it was for this reason that the sarcastic phrase came into use. Michael Quinion notes that the phrase "monkey's uncle" occurs in a parody of Henry Wadsworth Longfellow's 1855 poem The Song of Hiawatha, which was reprinted in James Parton's 1881 The Humorous Poetry of the English Language, and observes: "This may be just an accident of invention, but the date fits". The Monkey's Uncle is a 1965 Walt Disney movie, with the title song written by the Sherman Brothers and performed by Annette Funicello and the Beach Boys. On their 2003 album Reel to Real, The Selecter included a song titled "Monkey's Uncle", criticizing religious dogma that contradicts scientific evidence. I'm a Monkey's Uncle is the title of a 1948 Three Stooges short film. Monkey's uncle: In the MMORPG RuneScape, asking the merchant Zeke about purchasing a dragon scimitar will result in the line of dialogue "Seriously, you'll be a monkey's uncle before you'll ever hold a dragon scimitar." In a humorous twist, purchasing and wielding a dragon scimitar requires completing the quest "Monkey Madness", in which the player must take on the role of an actual monkey's uncle.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Palo Quemado Formation** Palo Quemado Formation: The Palo Quemado Formation is a geologic formation in Mexico. It preserves fossils dating back to the Permian period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Evaporating gaseous globule** Evaporating gaseous globule: An evaporating gas globule (EGG) is a region of hydrogen gas in outer space approximately 100 astronomical units in size, such that gas shaded by it is shielded from ionizing UV rays. Dense areas of gas shielded by an evaporating gas globule can be conducive to the birth of stars. Evaporating gas globules were first conclusively identified via photographs of the Pillars of Creation in the Eagle Nebula taken by the Hubble Space Telescope in 1995. EGGs are the likely predecessors of new protostars. Inside an EGG, the gas and dust are denser than in the surrounding dust cloud. Gravity pulls the cloud even more tightly together as the EGG continues to draw in material from its surroundings. As the cloud density builds up, the globule becomes hotter under the weight of the outer layers, and a protostar is formed inside the EGG. Evaporating gaseous globule: A protostar may have too little mass to become a star; if so, it becomes a brown dwarf. If the protostar has sufficient mass, the density reaches a critical level where the temperature exceeds 10 million kelvin at its center. At this point, a nuclear reaction starts converting hydrogen to helium, releasing large amounts of energy. The protostar then becomes a star and joins the main sequence on the HR diagram. A study of 73 EGGs in the Pillars of Creation (Eagle Nebula) with the Very Large Telescope showed that only 15% of the EGGs show signs of star formation. Star formation is not the same everywhere: the largest pillar has a small cluster of these sources at its head.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Compact Disc subcode** Compact Disc subcode: Subcode or subchannel data (called "control bytes" in the CD-ROM specification) refers to data contained in a compact disc (CD) in addition to digital audio or user data, which is used for control and playback of the CD. The original specification was defined in the Red Book standard for CD Digital Audio, though further specifications have extended their use (including the CD-ROM, CD Text and CD+G specifications). Structure: Subchannel data is multiplexed with the digital audio or user digital data. The data in a CD are arranged in frames. A frame comprises 33 bytes, of which 24 bytes are audio or user data, eight bytes are error correction (CIRC-generated), and one byte is for subcode. Frames are arranged in sectors, which contain 98 frames each. The subcode bytes of the first two frames of a sector are used as two synchronization words. The subcode bytes of the remaining 96 frames of a sector are split into eight 96-bit-long subcode channels (also called subchannels or simply channels) by putting together the nth bit of each subcode byte. Each channel has a bit rate of 7.35 kbit/s. Structure: Each subcode bit/subchannel is designated by a letter from P to W; the demultiplexing sketch at the end of this article illustrates how the channels are laid out. Channels: Both the P and Q channels on a regular audio CD are used for timing information. They assist the CD player in tracking the current location on the disc, and provide the timing information for the time display on the CD player. The rest are not used in the Red Book specification. Channels: Channel P is a simple "pause music" flag, which can be used for low-cost search systems. Many players ignore it in favor of the Q channel. It indicates the start of a new track by at least two consecutive seconds (150 sectors) of all 1s, and the last block with all 1s is the first block of the new track. Channels: Channel Q is used for control purposes in more sophisticated players. It has three different modes, but with a common structure for all of them. Control bits: The first four bits are used for control, each being a flag for a different feature: Four-channel Compact Disc digital audio flag: indicates that the track uses four-channel audio (applies only to CD-DA). This is very rarely used on Compact Discs. Channels: Data flag: Indicates that this track contains data (rather than audio). Can be used for muting in audio CD players. Not used in the original CD-DA standard, added in the CD-ROM specifications. Digital copy permission flag: Used by the Serial Copy Management System to indicate permission to digitally copy the track. Pre-emphasis flag: The audio track was recorded with pre-emphasis (applies only to CD-DA). Used very rarely on Compact Discs. Mode bits: The next four bits indicate the mode of the Q channel, which can vary from 1 to 3, and define the structure and contents of the next bits. Data bits: The next 72 bits contain Q-channel data, and their structure depends on the mode defined in the previous bits. Q Mode 1: In this mode, the data bits contain the Table of Contents of the session (if the Q channel is in the lead-in area), or timing information for the current track (if the Q channel is in the program or lead-out areas of a session). Q Mode 2: In this mode, the data bits contain the Media Catalog Number (MCN) of the disc. Q Mode 3: In this mode, the data bits contain an International Standard Recording Code (ISRC) for each track (applicable to CD-DA only).
The ISRC is used by the media industry, and contains information about the country of origin, the year of publication, and the owner of the rights, as well as a serial number. Cyclic redundancy check bits: The last 16 bits contain an error detection code computed over the previous bits of the channel. Channels: Channels R through W are unused by Red Book-compliant CDs and Yellow Book-compliant CD-ROMs, and have been used for extensions to the standard: CD-Text is an extension to the Red Book standard for audio CDs. It allows for storage of additional information (e.g. album name, song name, and artist) on the R through W subcode channels on the disc (either in the lead-in area or in the program or main area). Channels: The CD+G or "karaoke" extension also uses the R through W subcode channels to store low-resolution graphics. Several copy protection systems made use of the fact that some disc copying utilities neglect to copy subcode data, due to its obscurity. Jack on CD players: Some older CD players, such as the Pioneer PD-5010, have a socket for an eight-pin mini-DIN connector on the back labeled "Subcode Out".
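As an illustration of the layout described above, here is a minimal sketch (not a production CD-reading routine) of how one sector's worth of subcode bytes can be demultiplexed into the eight channels P through W, and how the Q channel's fields can be split out. It assumes the two sync bytes have already been stripped, and that channel P occupies the most significant bit of each subcode byte, with W in the least significant bit, which is the conventional assignment.

```python
def demux_subcode(subcode_bytes):
    """Split the 96 subcode bytes of one sector (sync words already
    removed) into eight 96-bit channels P..W. The nth bit of every
    subcode byte contributes one bit to the nth channel: P is taken
    from bit 7 of each byte, Q from bit 6, and so on down to W."""
    assert len(subcode_bytes) == 96
    channels = {}
    for i, name in enumerate("PQRSTUVW"):
        bit = 7 - i
        channels[name] = [(b >> bit) & 1 for b in subcode_bytes]
    return channels

def parse_q(q_bits):
    """Split the 96-bit Q channel into its four fields: 4 control bits,
    4 mode bits, 72 data bits, and the 16-bit error detection code."""
    def to_int(bits):
        value = 0
        for b in bits:
            value = (value << 1) | b
        return value
    control = to_int(q_bits[0:4])    # 4CH, data, copy-permit, pre-emphasis flags
    mode    = to_int(q_bits[4:8])    # 1 = TOC/timing, 2 = MCN, 3 = ISRC
    data    = q_bits[8:80]           # interpretation depends on the mode
    crc     = to_int(q_bits[80:96])  # error detection code over the prior bits
    return control, mode, data, crc
```

As a consistency check: at 75 sectors per second, one subcode bit per channel per frame across the 98 frames of each sector gives the 7.35 kbit/s per-channel rate quoted above; after excluding the two sync words, 96 of those bits per sector carry channel data, as the sketch assumes.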
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Canine reproduction** Canine reproduction: Canine reproduction is the process of sexual reproduction in domestic dogs, wolves, coyotes and other canine species. Canine sexual anatomy and development: Male reproductive system Erectile tissue As with all mammals, a dog's penis is made up of three pieces of erectile tissue. These are the two corpora cavernosa and the singular corpus spongiosum, which continues in the glans. A notable difference from the human penis is that the visible part during an erection consists entirely of the glans. The retractor muscle, a paired smooth muscle attached to the shaft of the penis, is used to retract the penis back into the sheath. Canine sexual anatomy and development: Glans A dog's glans consists of two sections: behind the lower, long part (pars longa glandis) lies the "knot" (bulbus glandis), which expands only after penetrating the vagina and causes the male dog to remain inside the bitch (the "tie") for some time after ejaculation (typically between 15 and 30 minutes). This increases the chance of fertilisation and prevents, albeit for a short time, other suitors from mating with that particular female. Canine sexual anatomy and development: Behind the knot the penis is very flexible in the horizontal direction, allowing the male to dismount while remaining tied. Shaft The shaft of a dog's penis is not visible, even during an erection; however, its path can be felt starting at the knot, passing between the hind legs, and continuing up to the anus. Baculum and urethra Inside the corpus spongiosum lies the baculum. This allows the male dog to enter the vagina before the erectile tissue is swollen. The urethra is located inside a downward-facing groove on the baculum and ends at the tip of the penis (urethral process). During an erection a small dip just above the urethral process can be seen. This is because the skin at the tip of the penis is connected via cartilage to the baculum; when the erectile tissue swells, the size of the baculum and connective tissue remains constant, pulling back the skin at the tip. Sheath The penile sheath entirely surrounds the glans while the penis is not erect. The back part is fused with the abdominal skin. The front part, almost reaching to the navel, is free. The inner sheath, just like the glans, is covered with a mucous membrane, and the outer sheath is covered with normal, hairy epidermis. Canine sexual anatomy and development: Female reproductive system Development In domestic dogs, sexual maturity (puberty) occurs between the ages of 6 and 12 months for both males and females, although this can be delayed until up to two years of age for some large breeds. Pregnancy is possible as early as the first estrus cycle, but breeding is not recommended prior to the second cycle. As with other domesticated species, domestication has selectively bred for higher libido, and earlier and more frequent breeding cycles, in dogs than in their ancestors. The female reproductive cycle: Female cycle The average length of the reproductive cycle for females is 2–4 weeks. Females reach sexual maturity (puberty) between 8 and 18 months of age. There is tremendous variability in maturation age between breeds, and even within a breed of dog. 1. Proestrus, in which eggs in the ovaries begin to mature and estrogen levels begin to rise, is the first stage of the reproductive cycle. During this stage females attract males, though they are not yet receptive. 
Initial changes include swelling of the vulva lips, which become pliable, small amounts of bloody vaginal discharge, frequent urination, and signs of restlessness. Proestrus generally lasts nine days. 2. Estrus follows, in which estrogen levels are high, mature eggs are released from both ovaries, and females become receptive both physically and mentally to copulation. Only during estrus will copulation result in pregnancy. The female reproductive cycle: During proestrus and estrus, females may have a clear, blood-tinged, or bloody discharge. Dogs during these stages are often informally referred to as being in heat. The length of these cycles varies greatly among breeds and even between individuals of the same breed; proestrus and estrus can last anywhere from 5 to 21 days. 3. Diestrus is the period following mating. Diestrus lasts approximately 56 to 60 days in a pregnant female, and 60 to 100 days in a non-pregnant female. During both of these periods, progesterone levels are high. Because the hormonal profile of a female in diestrus is the same as that of a pregnant female, a non-pregnant female will sometimes go through a period of pseudo-pregnancy. At that time she may gain weight, have mammary gland development, produce milk, and exhibit nesting behaviours. The female reproductive cycle: 4. Anestrus is the remaining period, the time of reproductive quiescence. The female has no attraction to mating. Anestrus generally lasts four to five months. Copulation: As with most tetrapods, canine copulation involves the male mounting the female from behind, a position colloquially referred to as "doggy style", though the origin of the term is not specifically known. When a male canine is interested in mounting a female, he will sniff the female's vulva. If the female is unreceptive, she may sit, lie down, snap, retreat, or otherwise be uncooperative. If the female is receptive, she will stand still and hold her tail to the side, a stance referred to as "flagging". The male will often continue examining the female's rear before mounting her from behind and attempting penetration with his penis. Unlike human sexual intercourse, where the male penis commonly becomes erect before entering the female, canine copulation involves the male first penetrating the female, after which swelling of the penis to erection occurs, usually rapidly. At the time of penetration, the canine penis is not erect, and is only able to penetrate the female because it includes a narrow bone called the "baculum", a feature of most placental mammals. When the male achieves penetration, he will usually hold the female tighter and thrust deeply. It is during this time that the male's penis expands, and it is important that the bulbus glandis is sufficiently far inside for the female to be able to trap it. Male canines are the only animals that have a locking bulbus glandis or "bulb", a spherical area of erectile tissue at the base of the penis. During copulation, and only after the male's penis is fully inside the female's vagina, the bulbus glandis becomes engorged with blood. When the female's vagina subsequently contracts, the penis becomes locked inside the female. This is known as "tying" or "knotting". 
While characteristic of mating in most canids, the copulatory tie has been reported to be absent or very brief (less than one minute) in the African wild dog, possibly due to the abundance of large predators in its environment. When the penis is locked into the vagina by the bulbus glandis (when the stud is "tied"), thrusting behavior stops and the male will usually lift a leg and swing it over the female's back while turning around. The two stand with their hind ends touching and the penis locked inside the vagina while ejaculation occurs, decreasing leakage of semen from the vagina. After some time, typically between 5 and 20 minutes (but sometimes longer), the bulbus glandis disengorges, allowing the mates to separate. Virgin dogs can become quite distressed at finding themselves unable to separate during their first copulation, and may try to pull away or run. Dog breeders often suggest it is appropriate for handlers to attempt to calm the mating dogs if they show anxiety once this stage is reached. After mating, the male usually licks his penis and prepuce. Gestation and litters: Gestation in a dog is 63 days in length, if measured from the day of ovulation. Since it is difficult to determine the exact date of ovulation, errors are often made in calculating the gestation period. Canine sperm can live for 10 to 11 days in the uterine tubes (fallopian tubes), so if a female is bred 10 days before the oocytes (eggs) can be fertilized, she will appear to have a gestation length of 70 days. If she is bred on the day the oocytes can be fertilized, her gestation length will appear to be 60 days. Gestation and litters: During gestation, many physiological changes are similar to those in other mammals, such as humans. This results in similar shifts in nutrients in the blood of dogs, especially affecting glucose, fatty acid (such as DHA) and amino acid (such as BCAA) levels. A rule of thumb is that a mammal will produce half as many offspring as the number of teats on the mother. This rule is altered in domesticated animals, since larger litters are often favoured for economic reasons, and in dogs, particularly, the great range of sizes and shapes plays a role in how many healthy puppies a female can carry. A female dog usually has 10 teats, though this does not mean she can necessarily provide sufficient nutrition for 10 puppies in one litter. An average litter consists of about five to six puppies, though this number may vary widely based on the breed of dog. Size of the breed is correlated with litter size: miniature and small breeds average three to four puppies in each litter, with a maximum litter size of about 5–8, while large and giant breeds average 7 puppies per litter but can have a maximum litter size of about 15. In one study, the Rhodesian Ridgeback had the highest average litter size with 8.9 pups per litter, while the Pomeranian and Toy Poodle had the lowest with 2.4 pups per litter. The number of puppies also varies with the mother's age: in smaller breeds, both young and old age are associated with smaller litter size; in larger breeds, only old age is associated with smaller litter size. 
Use of artificial insemination is also associated with smaller litter size, with frozen semen having a stronger effect than fresh semen. The largest litter size to date was set by a Neapolitan Mastiff in Manea, England on November 29, 2004; the litter was 24 puppies. Some breeds have been developed to emphasize certain physical traits beyond the point at which they can safely bear litters on their own. A large-scale study in Norway showed that across all breeds, about 4% of pups are stillborn and a further 4% die within the first week (early neonatal mortality); between 8 days and 8 weeks, 1% die. Litter size, breed size and the age of the female are associated with increased risk. High-risk breeds for stillbirth include the Dogue de Bordeaux (14.2%), St. Bernard (12.3%), Chow Chow (12.1%), Pembroke Welsh Corgi (11.7%) and Dalmatian (10.6%). The Basenji, Italian Greyhound, Australian Terrier, Irish Soft Coated Wheaten Terrier and the Bichon Havanais had few to no stillbirths (0–0.6%). High-risk breeds for early neonatal mortality include the Rhodesian Ridgeback (11.6%), Dogue de Bordeaux (10.4%), Dalmatian (8.8%) and Icelandic Sheepdog (8.7%), while the Basenji and Tibetan Terrier had no early neonatal mortality and the Border Terrier and Danish-Swedish Farmdog had <1% early neonatal mortality. Common causes of early neonatal mortality are bacterial infection, fetal asphyxia and fading puppy syndrome. Other causes may include elective euthanasia because of congenital defects or failure to meet breed standards. Other multi-breed studies have put stillbirth rates at 6.5–7% and early neonatal mortality at 11.5–19.8%. Gestation and litters: Inbreeding depression On the basis of an analysis of data on 42,855 dachshund litters, it was found that as the inbreeding coefficient increased, litter size decreased and the percentage of stillborn puppies increased, indicating inbreeding depression. Inbreeding depression is a reduction in progeny fitness due largely to the homozygous expression of deleterious recessive mutations. The gray wolves (Canis lupus) of Isle Royale National Park, Michigan, USA were a small, highly inbred population that was considered to be at the threshold of extinction in 2019. This wolf population had been experiencing severe inbreeding depression, largely due to the homozygous expression of strongly deleterious recessive mutations. Another highly inbred Scandinavian population of wolves (Canis lupus) also suffered from inbreeding depression, again attributed to the homozygous expression of deleterious recessive mutations. Gestation and litters: Inbreeding avoidance Because the African wild dog (Lycaon pictus) largely exists in fragmented small populations, its existence is endangered. Inbreeding avoidance via mate selection is characteristic of the species and has important potential consequences for population persistence. Inbreeding is rare within natal packs. Computer population simulations indicate that populations which continue to avoid incestuous mating will become extinct within 100 years due to the unavailability of unrelated mates; the reduced availability of suitable unrelated mates is therefore likely to have a severe demographic impact on the future viability of small wild dog populations. Gestation and litters: Red wolves primarily live in packs composed of a socially monogamous breeding pair and offspring of different ages. 
Using long-term data on red wolf individuals of known pedigree, it was found that inbreeding among first-degree relatives was rare. A likely mechanism for avoidance of inbreeding is independent dispersal trajectories from the natal pack: many of the young wolves spend time alone or in small non-breeding packs composed of unrelated individuals, and the union of two unrelated individuals in a new home range is the predominant pattern of breeding-pair formation. Among Ethiopian wolves, most females disperse from their natal pack at about two years of age, and some become "floaters" that may successfully immigrate into existing packs. Breeding pairs are most often unrelated to each other, suggesting that female-biased dispersal reduces inbreeding. Grey wolves and Arctic foxes also exhibit inbreeding avoidance. Inbreeding is ordinarily avoided because it leads to a reduction in progeny fitness (inbreeding depression), due largely to the homozygous expression of deleterious recessive alleles. Cross-fertilization between unrelated individuals ordinarily leads to the masking of deleterious recessive alleles in progeny. Clinical issues: Female dogs are at risk for endometritis and pyometra in the postpartum period and after estrus or vaginitis. Signs and symptoms include fever, lethargy, loss of appetite, excessive thirst, restlessness, a foul-smelling vaginal discharge which may or may not be bloody, and infertility, or the dog may be asymptomatic. Uterine infections should be treated expeditiously if suspected. Contrary to common belief, uterine infections can strike any intact female, whether she has been bred or not, and whether it is her first season or not, although they are more common as dogs become older. Dog breeding: Semen collection An artificial vagina is prepared, which is a conical thin latex sleeve ending in a sterile collection tube. The inside of the latex sleeve is lightly lubricated. The male is allowed to sniff a female in estrus. Experienced studs cooperate readily in the process; new studs often require encouragement in the form of manual stimulation. Generally the male will mount the female, and the collector quickly directs the male's penis into the latex sleeve. The male ejaculates and the semen is collected in the tube. The semen is then drawn up into a long thin pipette. Cross breeding: Designer-breed dogs are mixed-breed dogs intentionally bred from parents of two established breeds. Studies have shown that cross-bred dogs have a number of desirable reproductive traits. Scott and Fuller found that cross-bred dogs were superior mothers compared to purebred mothers, producing more milk and giving better care. These advantages led to decreased mortality in the offspring of cross-bred dogs; however, the qualities of cross-bred dogs are not predictable. For example, a Labrador × Poodle cross ("Labradoodle") can inherit the coat of a Labrador, a Poodle, or a mix of the two. Spaying and neutering: Spaying (females) and neutering (males) refer to the sterilization of animals, usually by castration (removal of the male's testicles) or ovariohysterectomy (removal of the female's ovaries and uterus), to eliminate the ability to procreate and reduce sex drive. Castration has also been known to reduce aggression in male dogs in some cases, but spaying has been shown to occasionally increase aggression in female dogs. Animal control agencies in the United States and the ASPCA advise that dogs not intended for further breeding should be spayed or neutered so that they do not have undesired puppies. 
Spaying and castrating can decrease the risk of hormone-driven diseases such as mammary cancer, as well as undesired hormone-driven behaviors. However, certain medical problems are more likely after neutering, such as urinary incontinence in females and prostate cancer in males. Dogs shown in the conformation ring may not be either neutered or spayed; being altered disqualifies them from being shown, as they must be intact and unaltered. Female cats and dogs are seven times more likely to develop mammary tumors if they are not spayed before their first heat cycle. Studies have shown that spaying or neutering may be associated with an increase in some serious health and behavioural problems while reducing others. The American Veterinary Medical Association (AVMA) makes no single recommendation for or against spaying or neutering, nor for any single optimal age at which to do it; rather, its position is that the decision be made on a case-by-case basis, weighing the risks of orthopedic disease, neoplasia, reproductive disease, longevity, and population control for each individual. Spaying and neutering: Altered females: Increased aggression can be seen in altered females if they displayed aggression prior to surgical alteration. In a study by O'Farrell and Peachy, female dogs less than 11 months of age that had previously shown signs of aggression were more likely to show an increase in aggression after being spayed. These increases in aggression may be due to the sudden change in hormone concentrations that results from alteration. While spaying female dogs does not "induce" aggression, it can increase aggression and facilitate indiscriminate appetite in young altered females, which can include rapidly eating meals or eating food-associated items such as trash. Altered males: In nearly two-thirds of cases involving inter-dog aggression, castration can help decrease aggression. Castration also decreases other male-typical behavioral traits such as mounting, roaming, and urine marking, although some studies have shown that these behaviors persist in some altered males, and some owners report no change after alteration. Aggression may increase, as the decrease in testosterone may make a dog more likely to react aggressively when it feels threatened. Male puppies that are neutered between 7 and 10 weeks are three times less likely to display behavioral problems than dogs neutered at 6 months or older. Most dominantly aggressive dogs are male, which leads many people to neuter their male canine companions. Removing testosterone can decrease the intensity of a canine's reaction to stimuli: testosterone does not cause a behavior to occur, but its absence may decrease the occurrence of a "bad" behavior.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sound trucks in Japan** Sound trucks in Japan: In Japan, sound trucks (街宣車, gaisensha) are vehicles equipped with a public address system. They have been used notably in political and commercial contexts, and have one or more loudspeakers which can play a recorded message or recorded music as the truck tours through neighborhoods. In the political world, they are used by parties, candidates, and groups to express their views. In the early days of Japanese post-war democracy, they were one of the most common means of conducting political campaigns, alongside radio announcements and sponsored meetings. In a commercial context, vendors also use sound trucks for selling goods, collecting recyclable materials, and other purposes. Law: The use of these sound trucks can be subject to so-called nuisance laws, although there have been instances in which police, sympathetic to the right-wing groups that use the trucks, have declined to enforce such laws.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ishin-denshin** Ishin-denshin: Ishin-denshin (以心伝心) is an idiom commonly used in East Asian cultures, such as those of Japan, Korea, and China, which denotes a form of interpersonal communication through unspoken mutual understanding. The kanji (Chinese characters) of this four-character compound, or yojijukugo, literally translate as "like minds, (are) communicating minds". Sometimes translated into English as "telepathy" or "sympathy", ishin-denshin (i-shim-chon-shim, 이심전심 in Korean) is also commonly rendered as "heart-to-heart communication" or "tacit understanding". Silent understanding is recognized as a universal human phenomenon; however, some Japanese believe it to be a unique characteristic of Japanese culture. Whereas the Japanese concept of haragei denotes a deliberate form of nonverbal communication, ishin-denshin refers to a passive form of shared understanding. Ishin-denshin is traditionally perceived by the Japanese as sincere, silent communication via the heart or belly (i.e. symbolically from the inside, uchi), as distinct from overt communication via the face and mouth (the outside, soto), which is seen as more susceptible to insincerities. The introduction of this concept to Japan (via China) is related to the traditions of Zen Buddhism, where the term ishin-denshin refers to direct mind transmission. The Zen Buddhist tradition, in turn, draws the concept of ishin-denshin from the first Dharma transmission between Gautama Buddha and Mahākāśyapa in the Flower Sermon. Ishin-denshin, or non-verbal communication, continues to influence aspects of contemporary Japanese culture and ethics, ranging from business practices to end-of-life care.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hatice Altug** Hatice Altug: Hatice Altug (Turkish: Altuğ; born 1978) is a Turkish physicist and professor in the Bioengineering Department and head of the Bio-nanophotonic Systems laboratory at École Polytechnique Fédérale de Lausanne (EPFL), in Switzerland. Her research focuses on nanophotonics for biosensing and surface-enhanced spectroscopy, integrated with microfluidics and nanofabrication, to obtain high-sensitivity, label-free characterization of biological material. She has developed a low-cost biosensor that allows the identification of viruses such as Ebola, can work in difficult settings, and is therefore particularly useful in pandemics. Altug is a recipient of the United States Presidential Early Career Award for Scientists and Engineers and The Optical Society of America's Adolph Lomb Medal. She has also received a European Research Council Consolidator Award, the Office of Naval Research Young Investigator Award, the National Science Foundation CAREER Award and Popular Science magazine's Brilliant 10 Award. She is a Fellow of the Optical Society of America. Education: Altug, who was born in the Karamanlı district of Burdur in 1978, completed her high school education in 1996 at Antalya Anatolian High School in Antalya, Turkey. She received her B.Sc. degree in physics in 2000 from Bilkent University (Ankara, Turkey), having been awarded a full scholarship there. In 2007, she was awarded a PhD in applied physics from Stanford University (California, U.S.), under the supervision of Professor Jelena Vučković. During her education at Stanford University, she worked on laser systems and optical instruments. Career: Altug completed a postdoctoral fellowship at the Center for Engineering in Medicine at Harvard Medical School. From 2007 until 2013, she was first an assistant and later an associate professor of Electrical and Computer Engineering at Boston University. Career: In 2010, she was awarded the Faculty Early Career Development (CAREER) award by the National Science Foundation. Altug disseminated her findings to the public through Boston's Museum of Science, local educational programs such as Boston Upward Bound Math and Science, and Boston University's Summer Challenge program on engineering. At the College of Engineering, she added experimental modules to courses relating to nanotechnology. She was also named one of Popular Science's "Brilliant 10," a group of researchers under 40 who made transformational contributions to their fields during 2010. In 2011, the IEEE Photonics Society named Altug as winner of its Young Investigator Award, which recognizes individuals who make outstanding technical contributions to the field of photonics prior to their 35th birthday. She was honored for her groundbreaking achievements in confining and manipulating light at the nanoscale to dramatically improve biosensing capabilities. Altug was recognized with OSA's Adolph Lomb Medal in 2012 "for breakthrough contributions on integrated optical nano-biosensor and nanospectroscopy technologies based on nanoplasmonics, nanofluidics, and novel nanofabrication." She was also named by President Obama among 94 researchers as a recipient of the 2011 Presidential Early Career Awards for Scientists and Engineers (PECASE), the highest honor bestowed by the United States government on science and engineering professionals in the early stages of their independent research careers. As well as attending the White House ceremony, awardees receive a research grant lasting up to five years. 
She was awarded for leading the development of a biosensor that uses tiny crystals to manipulate light to detect a virus, a protein, or a cancer cell in a drop of blood. In 2013, Altug joined École Polytechnique Fédérale de Lausanne, where she became a full professor in 2020. Career: In 2019, she was awarded the ERC Proof of Concept Grant by the European Research Council for her project "Portable Infrared Biochemical Sensor Enabled by Pixelated Dielectric Metasurfaces." Awards and honors:
2021 Fellow of Optica for "pioneering contributions to nano-optics, manipulation of light on-chip, the development of innovative nanobiosensors and sensing techniques, and exemplary contributions to the scientific community and Optica."
2020 European Physical Society Emmy Noether Distinction for Women in Physics
2019 ERC Proof of Concept Grant
2012 Optical Society's Adolph Lomb Medal
2011 Presidential Early Career Awards for Scientists and Engineers
2011 IEEE Photonics Society Young Investigator Award
2010 National Science Foundation Faculty Early Career Development (CAREER) award
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Poset game** Poset game: In combinatorial game theory, poset games are mathematical games of strategy, generalizing many well-known games such as Nim and Chomp. In such games, two players start with a poset (a partially ordered set), and take turns choosing one point in the poset, removing it and all points that are greater. The player who is left with no point to choose loses. Game play: Given a partially ordered set (P, <), let Px = P − {a ∣ a ≥ x} denote the poset formed by removing from P the point x together with all points above it. A poset game on P, played between two players conventionally named Alice and Bob, proceeds as follows: Alice chooses a point x ∈ P, thus replacing P with Px, and then passes the turn to Bob, who plays on Px and passes the turn back to Alice. A player loses if it is their turn and there are no points to choose. Examples: If P is a finite totally ordered set, then game play in P is exactly the same as the game play in a game of Nim with a heap of size |P|. For, in both games, it is possible to choose a move that leads to a game of the same type whose size is any number smaller than |P|. In the same way, a poset game on a disjoint union of total orders is equivalent to a game of Nim with multiple heaps, with sizes equal to the lengths of the chains in the poset. Examples: A special case of Hackenbush, in which all edges are green (able to be cut by either player) and every configuration takes the form of a forest, may be expressed similarly, as a poset game on a poset in which, for every element x, there is at most one element y that x covers. If x covers y, then y is the parent of x in the forest on which the game is played. Examples: Chomp may be expressed similarly, as a poset game on the product of total orders from which the infimum has been removed. Grundy value: Poset games are impartial games, meaning that every move available to Alice would also be available to Bob if Alice were allowed to pass, and vice versa. Therefore, by the Sprague–Grundy theorem, every position in a poset game has a Grundy value, a number describing an equivalent position in the game of Nim. The Grundy value of a poset may be calculated as the least natural number that is not the Grundy value of any Px, x ∈ P; that is, G(P) = min(ℕ ∖ {G(Px) ∣ x ∈ P}), the minimum excludant (mex) of the Grundy values of the positions reachable in one move. Grundy value: This number may be used to describe optimal play in a poset game. In particular, the Grundy value is nonzero when the player whose turn it is has a winning strategy, and zero when the current player cannot win against optimal play from his or her opponent. A winning strategy in the game consists of moving to a position whose Grundy value is zero, whenever this is possible. Strategy stealing: A strategy-stealing argument shows that the Grundy value is nonzero for every poset that has a supremum. For, let x be the supremum of a partially ordered set P. If Px has Grundy value zero, then P itself has a nonzero value, by the formula above; in this case, x is a winning move in P. If, on the other hand, Px has a nonzero Grundy value, then there must be a winning move y in Px such that the Grundy value of (Px)y is zero. But by the assumption that x is the supremum, x > y and (Px)y = Py, so the winning move y is also available in P and again P must have a nonzero Grundy value. For more trivial reasons a poset with an infimum also has a nonzero Grundy value: moving to the infimum is always a winning move. Complexity: Deciding the winner of an arbitrary finite poset game is PSPACE-complete. 
This means that unless P=PSPACE, computing the Grundy value of an arbitrary poset game is computationally difficult.
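The mex recurrence above is nevertheless easy to evaluate directly for small posets. The following is a minimal sketch in Python, with the poset given as a ground set plus a strict comparison function; both names are illustrative choices rather than an established API, and the memoized search is practical only for small examples, consistent with the PSPACE-completeness result.

```python
from functools import lru_cache

def grundy(points, less):
    """Grundy value of the poset game on `points`, where less(a, b) means a < b."""

    @lru_cache(maxsize=None)
    def g(state):  # state: frozenset of remaining points
        reachable = set()
        for x in state:
            # Px: remove x together with every point >= x.
            px = frozenset(a for a in state if a != x and not less(x, a))
            reachable.add(g(px))
        v = 0  # mex: least natural number not among the reachable values
        while v in reachable:
            v += 1
        return v

    return g(frozenset(points))

# A 3-element chain plays like a Nim heap of size 3 (Grundy value 3).
print(grundy(range(3), lambda a, b: a < b))  # -> 3
```

The first player wins exactly when the computed value is nonzero; a winning move is any x whose Px has Grundy value zero, matching the strategy described above.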
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sirius visualization software** Sirius visualization software: Sirius is a molecular modelling and analysis system developed at the San Diego Supercomputer Center. Sirius is designed to support advanced user requirements that go beyond simple display of small molecules and proteins. Sirius supports high-quality interactive 3D graphics, structure building, display of protein or DNA primary sequences, access to remote data sources, and visualization of molecular dynamics trajectories. It can be used for scientific visualization and analysis, and for chemistry and biology instruction. Sirius visualization software: This software is no longer supported as of 2011. Key features: Sirius supports a variety of applications with a set of features, including:
Building and editing chemical structures using a library of fragments
Protein structure and sequence alignment
Command line interpreter and scripting support fully compatible with extant RasMol scripts
Full support for visualizing molecular dynamics trajectories
BLAST search directly in the Protein Data Bank and UniProt databases
Ability to move parts of the loaded data while freezing the rest
Interactive calculation of hydrogen bonding, steric clashes, and Ramachandran plots
Support for all major structure and sequence formats
Bundled POV-Ray for creating photorealistic images
Integrated selection and coloring across individual visualization components
Sirius is based on molecular graphics code and data structures developed as part of the Molecular Biology Toolkit. RasMol-compatible scripting: Sirius features a command line interpreter that can be used to quickly manipulate structure appearance and orientation. The set of commands has been patterned after RasMol, so it is fully compatible with extant scripts. Commands added in Sirius provide support for manipulating multiple structures loaded at the same time, and enable more flexible selection. RasMol-compatible scripting: Extant RasMol scripts can be imported and run within Sirius to produce high-quality representations of encoded molecular scenes. Since RasMol uses a coordinate system that differs from that of Sirius, internal conversion is performed when RasMol scripts are imported, so that any orientation changes are shown correctly. Any manually entered commands, however, are executed according to the Sirius coordinate system. Sirius supports several predefined atom-residue sets and color schemes, allows editing of scripts using the Command Panel interface, and logical operators and parentheses can be used to create complex selection commands. Visualizing molecular dynamics trajectories: Sirius contains a full-featured molecular dynamics visualization component. It can read output files from AMBER and CHARMM simulations, including compressed files and AMBER out files. RMSD changes along the trajectory can be calculated using user-defined atom subsets and displayed in an interactively updated graph. In order to reduce memory requirements, large multi-file simulations may be loaded in a buffered mode. If a simulation involves changes in protein fold, Sirius can be set to track and recompute displayed secondary-structure features in real time, which provides a convenient way to observe transformations of the structure. The full trajectory or selected frames can be exported as QuickTime video or as a set of POV-Ray scene snapshots that can later be converted to a high-quality movie. 
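To make the RasMol-compatible scripting described above concrete, here is a minimal sketch that assembles a small script of standard RasMol commands from Python and writes it to a file for import; the file name, the chosen selection sets, and whether every command is honored by a given Sirius build are assumptions for illustration, not verified details.

```python
# Minimal sketch: build a RasMol-style script that could be imported
# into Sirius' Command Panel. The commands are standard RasMol
# vocabulary; support for each one in a particular Sirius build is
# assumed, not verified.

script_lines = [
    "select protein",              # predefined atom set
    "cartoon on",                  # cartoon representation for the protein
    "color structure",             # color by secondary structure
    "select hetero and not water", # logical operators in a selection
    "spacefill on",                # draw the hetero groups as spheres
    "color cpk",                   # conventional element colors
]

with open("scene.spt", "w") as f:  # .spt is the usual RasMol script suffix
    f.write("\n".join(script_lines) + "\n")
```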
Access and download: Sirius is distributed freely from the project website to individuals affiliated with academic and non-profit organizations. Native desktop application installers are available for Windows, Linux, and macOS.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Printer Command Language** Printer Command Language: Printer Command Language, more commonly referred to as PCL, is a page description language (PDL) developed by Hewlett-Packard as a printer protocol; it has become a de facto industry standard. Originally developed for early inkjet printers in 1984, PCL has been released in varying levels for thermal, matrix, and page printers. HP-GL/2 and PJL are supported by later versions of PCL. PCL is occasionally and incorrectly said to be an abbreviation for Printer Control Language, which is actually another term for a page description language. PCL levels 1 through 5 overview: PCL levels 1 through 5e/5c are command-based languages using control sequences that are processed and interpreted in the order they are received. At a consumer level, PCL data streams are generated by a print driver; PCL output can also be easily generated by custom applications. PCL 1 was introduced in 1984 on the HP ThinkJet 2225 and provides basic text and graphics printing with a maximum resolution of 150 dpi (dots per inch). PCL 1+ was released with the HP QuietJet 2227. PCL 2 added Electronic Data Processing/Transaction functionality. PCL levels 1 through 5 overview: PCL 3 was introduced in 1984 with the original HP LaserJet. It added support for bitmap fonts and increased the maximum resolution to 300 dpi. Other products with PCL 3 support were the HP DeskJet inkjet printer, HP 2932-series matrix printers and HP RuggedWriter 2235 matrix printers. PCL 3 is still in use on several impact printers which replaced the obsolete HP models. PCL levels 1 through 5 overview: PCL 3+ (mono) and PCL 3c+ (color) are used on later HP DeskJet and HP PhotoSmart products. PCL 3GUI is used in the HP DesignJet and some DeskJet series printers; it uses a compressed raster format that is not compatible with standard PCL 3. PCL 4 was introduced on the HP LaserJet Plus in 1985, adding macros, larger bitmapped fonts and graphics. PCL 4 is still popular for many applications. PCL 5 was released on the HP LaserJet III in March 1990, adding Intellifont font scaling (developed by Compugraphic, now part of Agfa), outline fonts and HP-GL/2 (vector) graphics. PCL 5e (PCL 5 enhanced) was released on the HP LaserJet 4 in October 1992 and added bi-directional communication between the printer and the PC, as well as Windows fonts. PCL 5c introduced color support on the HP PaintJet 300XL and HP Color LaserJet in 1992. PCL 6 overview: HP introduced PCL 6 around 1995 with the HP LaserJet 4000 series printers. It consists of:
PCL 6 "Enhanced": an object-oriented PDL optimized for printing from GUI interfaces such as Windows and compressed to optimize throughput. Formerly known as PCL XL or PXL.
PCL 6 Standard: equivalent to PCL 5e or PCL 5c, intended to provide backward compatibility.
Font synthesis: provides scalable fonts, font management and storage of forms and fonts.
PCL 6 overview: The PCL 6 "Enhanced" architecture was designed to be more modular and more easily modified for future HP printers; according to HP, it prints complex graphics faster, reduces network traffic, and produces higher-quality output. In early implementations, HP did not market PCL 6 well, causing some confusion in terminology. PCL XL was renamed PCL 6 Enhanced, but many third-party products still use the older term. PCL 6 overview: Some products may claim to be PCL 6 compliant but not include the PCL 5 backward compatibility. PCL 6 Enhanced is primarily generated by printer drivers under Windows and CUPS. 
Due to its structure and compression methodology, custom applications rarely use it directly. PCL 6 overview: PCL 6 Enhanced is a stack-based, object-oriented protocol, similar to PostScript. However, it is restricted to binary encoding, as opposed to PostScript, which can be sent either as binary code or as plain text. The plain-text commands and code examples shown in the PCL programming documentation are meant to be compiled with a utility like HP's JetASM before being sent to a printer. PCL 6 overview: PCL 6 Enhanced is designed to match the drawing model of Windows GDI. In this way, the Windows printer driver simply passes through GDI commands with very little modification, leading to faster return-to-application times. Microsoft has extended this concept with its next-generation XPS format, and printer implementations of XPS are being developed. This is not a new idea: it is comparable with Display PostScript and Apple's Quartz, and is in contrast to "GDI printers", where a compressed bitmap is sent to the printer. PCL 6 overview: PCL 6 class revisions
Class 1.1
Draw tools: support for drawing lines, arcs/ellipses/chords, (rounded) rectangles, polygons, Bézier paths, clipped paths, raster images, scanlines, and raster operations.
Color handling: support for 1/4/8-bit palettes and RGB/grey color spaces; supports custom halftone patterns (max 256 patterns).
Compression: supports RLE.
Units of measurement: inch, millimeter, tenth of a millimeter.
Paper handling: support for custom or predefined paper sizes, including common Letter, Legal, A4, etc. Paper can be fed from manual feed, trays, or cassettes; duplexed horizontally or vertically; and oriented in portrait, landscape, or 180-degree rotations of the former two.
Font: supports bitmap or TrueType fonts with 8- or 16-bit code points. Choosing a character set uses symbol-set codes different from those of PCL 5. When a bitmap font is used, many scaling commands are unavailable; when a TrueType font is used, variable-length descriptors and continuation blocks are not supported. Outline fonts can be rotated, scaled, or sheared.
Class 2.0
Compression: added JPEG compression. A proprietary variant of JPEG-like compression optimized for integer hardware, called JetReady, is used in a few HP Color LaserJet models (at the time of writing, three models: CLJ 3500, 3550, 3600); those models require Class 3.0 input.
Paper handling: media can be redirected to different output bins (up to 256). Added A6 and Japanese B6 preset media sizes, a third-cassette preset, and 248 external-tray media sources.
Font: text can be written vertically.
Class 2.1
Color handling: added a color-matching feature.
Compression: added Delta Row compression.
Paper handling: orientation and media size are optional when declaring a new page. Added B5, JIS 8K, JIS 16K, and JIS Exec paper sizes.
Class 2.2
Compression: added JFIF.
Class 3.0
Color handling: allows different halftone settings for vector graphics, raster graphics, and text; supports adaptive halftoning.
Protocol: supports PCL passthrough, allowing PCL 5 features to be used within PCL 6 streams; however, some PCL 6 states are not preserved when using this feature.
Font: supports PCL fonts.
JetReady printers (CLJ 3500/3550/3600) use undocumented extensions but otherwise mandate Class 3.0 input. PJL overview: PJL (Printer Job Language) was introduced on the HP LaserJet IIIsi. PJL adds job-level controls, such as printer language switching, job separation, environment commands, status feedback, device attendance and file system commands.
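Because PCL 5 is a stream of escape sequences and PJL wraps the job around it, a complete minimal job can be assembled by hand. The sketch below sends a one-page PCL 5 job inside a PJL wrapper over a raw port-9100 connection; the escape sequences shown (UEL, reset, orientation) are the standard documented ones, while the host name is a placeholder and the exact PJL line endings accepted can vary by device.

```python
# Minimal sketch: a PJL-wrapped PCL 5 job sent over a raw (JetDirect-
# style) port 9100 connection. "printer.example.com" is a placeholder.
import socket

ESC = b"\x1b"
UEL = ESC + b"%-12345X"          # Universal Exit Language: switch to PJL

job = (
    UEL + b"@PJL\r\n" +          # UEL is immediately followed by @PJL
    b'@PJL JOB NAME="demo"\r\n' +
    b"@PJL ENTER LANGUAGE=PCL\r\n" +
    ESC + b"E" +                 # PCL printer reset
    ESC + b"&l1O" +              # orientation (ESC &l#O): 1 = landscape
    b"Hello from PCL 5" +        # printable data at the current position
    ESC + b"E" +                 # reset again; ejects the page
    UEL + b"@PJL EOJ\r\n" +
    UEL                          # trailing UEL ends the job cleanly
)

with socket.create_connection(("printer.example.com", 9100)) as s:
    s.sendall(job)
```

Printing plain text this way relies on the printer's default font and cursor position; real drivers emit many more sequences (symbol set, font selection, margins) before the page data.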
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded